Dataset columns:

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-27 18:27:39 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 500 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-27 18:23:41 |
| card | string | length 11 – 1.01M |

Each record below lists these fields separated by `|`; the final `card` field contains the model card markdown.
ABDALLALSWAITI/DAVINCI | ABDALLALSWAITI | 2024-03-06T15:48:05Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-05T20:54:18Z | ---
license: creativeml-openrail-m
---
- **Year of Innovation:** Monitoring and adapting to Civitai's evolving landscape.
- **Comprehensive Model:** Combines extensive training with elite models from various sources.
- **Precision Enhancement:** Uses multiple LoRA models for detailed improvements.
- **Advanced Capabilities:** Efficiently processes text, resolves hand-depiction issues, interprets depth, and selects suitable colors for diverse art styles.
- **Streamlined Experience:** Multiple ComfyUI workflows to simplify image creation.
  - Simple prompts: a minimum of three steps.
  - Complex descriptions: more steps are required.
- **Workflow Link:** For intuitive and efficient image-creation guidance, refer to our detailed workflow.
- **Adjusting CFG and steps:** Increase the CFG by 0.1 for each additional step in the workflow, making no more than five such adjustments in total.
|
ramo6627/gemma-Code-Instruct-Finetune-test-2 | ramo6627 | 2024-03-06T15:36:49Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T15:34:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
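The card's metadata tags this as a 🤗 `transformers` Gemma model for conversational text generation, so a minimal loading sketch would plausibly look like the following (the generation settings are illustrative assumptions, and the snippet assumes the tokenizer ships a chat template):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ramo6627/gemma-Code-Instruct-Finetune-test-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# The "conversational" tag suggests chat-style prompting via the chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```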
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gohzy/singlish-toxic-bert-IA3-159000-1 | gohzy | 2024-03-06T15:27:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T15:26:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gavinqiangli/my-finetuned-embedding-model | gavinqiangli | 2024-03-06T15:24:01Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-06T15:23:39Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# GavinQiangLi/my-finetuned-embedding-model
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GavinQiangLi/my-finetuned-embedding-model')
embeddings = model.encode(sentences)
print(embeddings)
```
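Building on the snippet above, a small semantic-search sketch (the query and corpus strings are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('GavinQiangLi/my-finetuned-embedding-model')
corpus = ["Password reset instructions", "Shipping usually takes 3-5 days."]
query = "I forgot my login credentials"

# Embeddings are 384-dimensional; cosine similarity ranks the corpus against the query.
corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
print(util.cos_sim(query_emb, corpus_emb))  # shape: (1, len(corpus))
```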
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=GavinQiangLi/my-finetuned-embedding-model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 69 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 50,
"evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 13,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
mlx-community/Yi-9B-q | mlx-community | 2024-03-06T15:21:14Z | 5 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"text-generation",
"license:other",
"region:us"
] | text-generation | 2024-03-06T14:36:12Z | ---
license: other
tags:
- mlx
license_name: yi-license
license_link: LICENSE
widget:
- example_title: Yi-34B-Chat
text: hi
output:
text: ' Hello! How can I assist you today?'
- example_title: Yi-34B
text: There's a place where time stands still. A place of breath taking wonder,
but also
output:
text: ' an eerie sense that something is just not right…
Between the two worlds lies The Forgotten Kingdom - home to creatures long since
thought extinct and ancient magic so strong it defies belief! Only here can
you find what has been lost for centuries: An Elixir Of Life which will restore
youth and vitality if only those who seek its power are brave enough to face
up against all manner of dangers lurking in this mysterious land! But beware;
some say there may even exist powerful entities beyond our comprehension whose
intentions towards humanity remain unclear at best ---- they might want nothing
more than destruction itself rather then anything else from their quest after
immortality (and maybe someone should tell them about modern medicine)? In any
event though – one thing remains true regardless : whether or not success comes
easy depends entirely upon how much effort we put into conquering whatever challenges
lie ahead along with having faith deep down inside ourselves too ;) So let’s
get started now shall We?'
pipeline_tag: text-generation
---
# mlx-community/Yi-9B-q
This model was converted to MLX format from [`01-ai/Yi-9B`](https://huggingface.co/01-ai/Yi-9B).
Refer to the [original model card](https://huggingface.co/01-ai/Yi-9B) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Yi-9B-q")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
vbsowmya/ppo-LunarLander-v2 | vbsowmya | 2024-03-06T15:12:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T15:12:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.05 +/- 16.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
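Pending the author's snippet, a hedged completion using the standard `huggingface_sb3` loading pattern (the checkpoint filename inside the repo is an assumption — check the repo's file listing):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename; the actual .zip name may differ.
checkpoint = load_from_hub("vbsowmya/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```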
|
jjovalle99/mistral7bit-lora-sql | jjovalle99 | 2024-03-06T15:12:13Z | 1 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T03:53:25Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- generator
model-index:
- name: mistral7bit-lora-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7bit-lora-sql
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3640
## Model description
More information needed
## Intended uses & limitations
More information needed
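Since the repo is tagged as a PEFT adapter for `mistralai/Mistral-7B-v0.1`, one plausible way to load it is sketched below (assuming a standard LoRA adapter layout; the prompt is illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "jjovalle99/mistral7bit-lora-sql")  # attaches the adapter
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Write a SQL query that returns the ten most recent orders."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```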
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1399
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7533 | 0.06 | 20 | 0.5169 |
| 0.4806 | 0.11 | 40 | 0.4338 |
| 0.4285 | 0.17 | 60 | 0.4055 |
| 0.403 | 0.23 | 80 | 0.3944 |
| 0.3969 | 0.28 | 100 | 0.3869 |
| 0.3898 | 0.34 | 120 | 0.3813 |
| 0.3836 | 0.4 | 140 | 0.3766 |
| 0.3786 | 0.45 | 160 | 0.3726 |
| 0.3708 | 0.51 | 180 | 0.3675 |
| 0.3681 | 0.56 | 200 | 0.3643 |
| 0.3622 | 0.62 | 220 | 0.3631 |
| 0.3626 | 0.68 | 240 | 0.3640 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2 |
jjovalle99/mistral7b-ft-lora-sql-v2 | jjovalle99 | 2024-03-06T15:12:11Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T15:10:43Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
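Given the `mistral`/`text-generation` tags, a plausible quick-start via the `pipeline` API (the prompt and settings are illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jjovalle99/mistral7b-ft-lora-sql-v2",
    device_map="auto",
    torch_dtype="auto",
)
print(generator("Write a SQL query that lists all customers:", max_new_tokens=64)[0]["generated_text"])
```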
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jyesr/Reinforce-Copter | jyesr | 2024-03-06T15:11:49Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T15:11:20Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Copter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 43.10 +/- 20.88
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DisOOM/Qwen1.5-124B-Chat-Merge-gguf | DisOOM | 2024-03-06T15:10:01Z | 0 | 0 | null | [
"gguf",
"license:other",
"region:us"
] | null | 2024-03-06T10:26:38Z | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE
tags:
- gguf
--- |
ZhiguangHan/textual_inversion_cat | ZhiguangHan | 2024-03-06T14:59:54Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-03T06:45:24Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - ZhiguangHan/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
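Pending the author's snippet, a plausible sketch using `diffusers`' textual-inversion loading (the placeholder token is an assumption — check the repo's learned embedding for the actual token):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Loads the learned embedding from this repo into the pipeline's text encoder.
pipe.load_textual_inversion("ZhiguangHan/textual_inversion_cat")

# "<cat-toy>" is the conventional placeholder from the training example, not confirmed by this card.
image = pipe("A <cat-toy> sitting on a windowsill", num_inference_steps=30).images[0]
image.save("cat.png")
```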
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
neuralmagic/Nous-Hermes-2-Yi-34B-marlin | neuralmagic | 2024-03-06T14:56:26Z | 7 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"nm-vllm",
"marlin",
"int4",
"conversational",
"arxiv:2210.17323",
"base_model:NousResearch/Nous-Hermes-2-Yi-34B",
"base_model:quantized:NousResearch/Nous-Hermes-2-Yi-34B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-03-06T13:08:45Z | ---
base_model: NousResearch/Nous-Hermes-2-Yi-34B
inference: true
model_type: yi
quantized_by: robertgshaw2
tags:
- nm-vllm
- marlin
- int4
---
## Nous-Hermes-Yi-34B-marlin
This repo contains model files for [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) optimized for [nm-vllm](https://github.com/neuralmagic/nm-vllm), a high-throughput serving engine for compressed LLMs.
This model was quantized with [GPTQ](https://arxiv.org/abs/2210.17323) and saved in the Marlin format for efficient 4-bit inference. Marlin is a highly optimized inference kernel for 4-bit models.
## Inference
Install [nm-vllm](https://github.com/neuralmagic/nm-vllm) for fast inference and low memory-usage:
```bash
pip install nm-vllm[sparse]
```
Run in a Python pipeline for local inference:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_id = "neuralmagic/Nous-Hermes-2-Yi-34B-marlin"
model = LLM(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "What is synthetic data in machine learning?"},
]
formatted_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
sampling_params = SamplingParams(max_tokens=200)
outputs = model.generate(formatted_prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
"""
Synthetic data is data that has been artificially created or modified to serve the needs of machine learning and data analysis tasks. It can be generated either through title methods like stochastic simulations or through processes of data augmentation that take original data and modify/manipulate it to create new samples. Synthetic data is often used in machine learning when the available amount of real-world data is insufficient or in cases where the creation of real-world data can be dangerous, costly, or time-consuming.
"""
```
## Quantization
For details on how this model was quantized and converted to marlin format, run the `quantization/apply_gptq_save_marlin.py` script:
```bash
pip install -r quantization/requirements.txt
python3 quantization/apply_gptq_save_marlin.py --model-id NousResearch/Nous-Hermes-2-Yi-34B --save-dir ./nous-hermes-2-yi-34b-marlin
```
## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ) |
facebook/musicgen-stereo-large | facebook | 2024-03-06T14:53:14Z | 817 | 70 | transformers | [
"transformers",
"pytorch",
"safetensors",
"musicgen",
"text-to-audio",
"audiocraft",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-10-23T14:26:59Z | ---
inference: true
tags:
- musicgen
- audiocraft
library_name: transformers
license: cc-by-nc-4.0
---
# MusicGen - Stereo - Large - 3.3B
We further release a set of stereophonic-capable models. Those were fine-tuned for 200k updates starting from the mono models. The training data is otherwise identical, and capabilities and limitations are shared with the base models. The stereo models work by getting two streams of tokens from the EnCodec model, and interleaving those using the delay pattern.
Stereophonic sound, also known as stereo, is a technique used to reproduce sound with depth and direction.
It uses two separate audio channels played through speakers (or headphones), which creates the impression of sound coming from multiple directions.
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
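To make the delay pattern concrete, here is a toy illustration (a simplified sketch, not the library's implementation): codebook *k* is shifted right by *k* frames, so at step *t* the model predicts codebook 0 for frame *t*, codebook 1 for frame *t−1*, and so on.

```python
import numpy as np

num_codebooks, num_frames, PAD = 4, 6, -1
codes = np.arange(num_codebooks * num_frames).reshape(num_codebooks, num_frames)

# Shift codebook k right by k steps; PAD marks positions with no token yet.
delayed = np.full((num_codebooks, num_frames + num_codebooks - 1), PAD)
for k in range(num_codebooks):
    delayed[k, k : k + num_frames] = codes[k]
print(delayed)
```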
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
We provide a simple API and 10 pre-trained models. The pre-trained models are:
- `facebook/musicgen-small`: 300M model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-small)
- `facebook/musicgen-medium`: 1.5B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-medium)
- `facebook/musicgen-melody`: 1.5B model, text to music and text+melody to music - [🤗 Hub](https://huggingface.co/facebook/musicgen-melody)
- `facebook/musicgen-large`: 3.3B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-large)
- `facebook/musicgen-melody-large`: 3.3B model, text to music and text+melody to music - [🤗 Hub](https://huggingface.co/facebook/musicgen-melody-large)
- `facebook/musicgen-stereo-*`: All the previous models fine-tuned for stereo generation -
[small](https://huggingface.co/facebook/musicgen-stereo-small),
[medium](https://huggingface.co/facebook/musicgen-stereo-medium),
[large](https://huggingface.co/facebook/musicgen-stereo-large),
[melody](https://huggingface.co/facebook/musicgen-stereo-melody),
[melody large](https://huggingface.co/facebook/musicgen-stereo-melody-large)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen Stereo models locally with the 🤗 Transformers library from `main` onward.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and soundfile (used below to write the generated audio):
```
pip install --upgrade pip
pip install --upgrade git+https://github.com/huggingface/transformers.git soundfile
```
2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
```python
import torch
import soundfile as sf
from transformers import pipeline
synthesiser = pipeline("text-to-audio", "facebook/musicgen-stereo-small", device="cuda:0", torch_dtype=torch.float16)
music = synthesiser("lo-fi music with a soothing melody", forward_params={"max_new_tokens": 256})
sf.write("musicgen_out.wav", music["audio"][0].T, music["sampling_rate"])
```
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a 32 kHz audio waveform for more fine-grained control.
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-stereo-large")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-stereo-large").to("cuda")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
).to("cuda")
audio_values = model.generate(**inputs, max_new_tokens=256)
```
4. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].cpu().numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `soundfile`:
```python
import soundfile as sf
sampling_rate = model.config.audio_encoder.sampling_rate
audio_values = audio_values.cpu().numpy()
sf.write("musicgen_out.wav", audio_values[0].T, sampling_rate)
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("facebook/musicgen-stereo-large")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B, and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| facebook/musicgen-small | 4.88 | 1.42 | 0.27 | - |
| facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - |
| **facebook/musicgen-large** | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model to larger datasets can further improve its performance.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to the generation of samples that may be considered biased, inappropriate, or offensive. We believe that providing the code to reproduce the research and train new models will make it possible to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. |
LorMolf/Legal_Mixtral_CA | LorMolf | 2024-03-06T14:51:28Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-03-06T14:43:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
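Given the card's tags (`mixtral`, `text-generation`, `4-bit`, `bitsandbytes`), a plausible loading sketch follows; the quantization config mirrors those tags and is an assumption, not a documented recipe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "LorMolf/Legal_Mixtral_CA"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")

# Illustrative chat-style prompt; assumes the tokenizer defines a chat template.
messages = [{"role": "user", "content": "Summarize the key elements of a valid contract."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```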
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Cippppy/mobilebert_100exs_10timesteps_run0 | Cippppy | 2024-03-06T14:51:08Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mobilebert",
"text-classification",
"generated_from_trainer",
"base_model:google/mobilebert-uncased",
"base_model:finetune:google/mobilebert-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T14:49:33Z | ---
license: apache-2.0
base_model: google/mobilebert-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mobilebert_100exs_10timesteps_run0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_100exs_10timesteps_run0
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 886719.8125
- Accuracy: 0.3
## Model description
More information needed
## Intended uses & limitations
More information needed
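Absent author-provided details, basic usage would presumably follow the standard text-classification pipeline (a sketch; the label meanings are not documented on this card):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Cippppy/mobilebert_100exs_10timesteps_run0")
print(classifier("This is a test sentence."))
```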
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 3096381.5 | 0.3 |
| No log | 2.0 | 14 | 2705231.0 | 0.3 |
| No log | 3.0 | 21 | 2380465.5 | 0.3 |
| No log | 4.0 | 28 | 2136194.5 | 0.3 |
| No log | 5.0 | 35 | 1909053.75 | 0.3 |
| No log | 6.0 | 42 | 1667145.0 | 0.3 |
| No log | 7.0 | 49 | 1493787.75 | 0.3 |
| No log | 8.0 | 56 | 1344492.5 | 0.3 |
| No log | 9.0 | 63 | 1218353.25 | 0.3 |
| No log | 10.0 | 70 | 1119155.375 | 0.3 |
| No log | 11.0 | 77 | 1045936.5 | 0.3 |
| No log | 12.0 | 84 | 987271.5 | 0.3 |
| No log | 13.0 | 91 | 942506.0 | 0.3 |
| No log | 14.0 | 98 | 911861.0 | 0.3 |
| No log | 15.0 | 105 | 893725.875 | 0.3 |
| No log | 16.0 | 112 | 886719.8125 | 0.3 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2
|
mpasila/gpt3-finnish-8B-safetensors | mpasila | 2024-03-06T14:49:07Z | 6 | 2 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"fi",
"arxiv:2203.02155",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T14:13:46Z | ---
language:
- fi
pipeline_tag: text-generation
license: apache-2.0
---
Safetensors conversion of [TurkuNLP/gpt3-finnish-8B](https://huggingface.co/TurkuNLP/gpt3-finnish-8B/). This is also in float16 instead of float32.
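For reference, a conversion like this can typically be reproduced with standard `transformers` APIs (a sketch of the general recipe, not the author's actual script):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("TurkuNLP/gpt3-finnish-8B", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/gpt3-finnish-8B")

# save_pretrained with safe_serialization=True writes .safetensors shards.
model.save_pretrained("gpt3-finnish-8B-safetensors", safe_serialization=True)
tokenizer.save_pretrained("gpt3-finnish-8B-safetensors")
```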
# Original Model card:
A generative pretrained transformer with 8B parameters for Finnish.
TurkuNLP Finnish GPT-3 models are a family of pretrained monolingual GPT-style language models based on the BLOOM architecture.
Note that the models are pure language models, meaning that they are not [instruction finetuned](https://arxiv.org/abs/2203.02155) for dialogue
or answering questions.
These models are intended to be used as foundational models that can be e.g. instruction finetuned to serve as modern chat-models.
All models are trained for 300B tokens.
**Parameters**
| Model | Layers | Dim | Heads | Params |
|--------|--------|------|-------|--------|
| Small | 12 | 768 | 12 | 186M |
| Medium | 24 | 1024 | 16 | 437M |
| Large | 24 | 1536 | 16 | 881M |
| XL | 24 | 2064 | 24 | 1.5B |
| ”3B” | 32 | 2560 | 32 | 2.8B |
| ”8B” | 32 | 4096 | 32 | 7.5B |
| "13B" | 40 | 5120 | 40 | 13.3B |
**Datasets**
We used a combination of multiple Finnish resources.
* Finnish Internet Parsebank https://turkunlp.org/finnish_nlp.html
* mC4 multilingual colossal, cleaned Common Crawl https://huggingface.co/datasets/mc4
* Common Crawl Finnish https://TODO
* Finnish Wikipedia https://fi.wikipedia.org/wiki
* Lönnrot Projekti Lönnrot http://www.lonnrot.net/
* ePub National library ”epub” collection
* National library ”lehdet” collection
* Suomi24 The Suomi 24 Corpus 2001-2020 http://urn.fi/urn:nbn:fi:lb-2021101527
* Reddit r/Suomi submissions and comments https://www.reddit.com/r/Suomi
* STT Finnish News Agency Archive 1992-2018 http://urn.fi/urn:nbn:fi:lb-2019041501
* Yle Finnish News Archive 2011-2018 http://urn.fi/urn:nbn:fi:lb-2017070501
* Yle Finnish News Archive 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050401
* Yle News Archive Easy-to-read Finnish 2011-2018 http://urn.fi/urn:nbn:fi:lb-2019050901
* Yle News Archive Easy-to-read Finnish 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050701
* ROOTS TODO
**Sampling ratios**
|Dataset | Chars | Ratio | Weight | W.Ratio |
|----------|--------|---------|--------|---------|
|Parsebank | 35.0B | 16.9% | 1.5 | 22.7%|
|mC4-Fi | 46.3B | 22.4% | 1.0 | 20.0%|
|CC-Fi | 79.6B | 38.5% | 1.0 | 34.4%|
|Fiwiki | 0.8B | 0.4% | 3.0 | 1.0%|
|Lönnrot | 0.8B | 0.4% | 3.0 | 1.0%|
|Yle | 1.6B | 0.8% | 2.0 | 1.4%|
|STT | 2.2B | 1.1% | 2.0 | 1.9%|
|ePub | 13.5B | 6.5% | 1.0 | 5.8%|
|Lehdet | 5.8B | 2.8% | 1.0 | 2.5%|
|Suomi24 | 20.6B | 9.9% | 1.0 | 8.9%|
|Reddit-Fi | 0.7B | 0.4% | 1.0 | 0.3%|
|**TOTAL** | **207.0B** | **100.0%** | **N/A** | **100.0%** |
More documentation and a paper coming soon. |
peldrak/segformer-b4-ade-finetuned-coastTrain | peldrak | 2024-03-06T14:49:01Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/segformer-b4-finetuned-ade-512-512",
"base_model:finetune:nvidia/segformer-b4-finetuned-ade-512-512",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-03-06T13:38:09Z | ---
license: other
base_model: nvidia/segformer-b4-finetuned-ade-512-512
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b4-ade-finetuned-coastTrain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b4-ade-finetuned-coastTrain
This model is a fine-tuned version of [nvidia/segformer-b4-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b4-finetuned-ade-512-512) on the peldrak/coastTrain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2784
- Mean Iou: 0.7615
- Mean Accuracy: 0.8569
- Overall Accuracy: 0.9286
- Accuracy Water: 0.9717
- Accuracy Whitewater: 0.5408
- Accuracy Sediment: 0.9245
- Accuracy Other Natural Terrain: 0.8160
- Accuracy Vegetation: 0.8979
- Accuracy Development: 0.9242
- Accuracy Unknown: 0.9232
- Iou Water: 0.9253
- Iou Whitewater: 0.4607
- Iou Sediment: 0.8453
- Iou Other Natural Terrain: 0.5582
- Iou Vegetation: 0.8460
- Iou Development: 0.8152
- Iou Unknown: 0.8799
- F1 Score: 0.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
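As a hedged illustration (not from the original card), the model can be loaded for semantic segmentation roughly as follows; the input image path is hypothetical:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo_id = "peldrak/segformer-b4-ade-finetuned-coastTrain"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)

image = Image.open("coastal_scene.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# Upsample to the original resolution before taking the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # per-pixel class ids (water, sediment, ...)
```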
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Whitewater | Accuracy Sediment | Accuracy Other Natural Terrain | Accuracy Vegetation | Accuracy Development | Accuracy Unknown | Iou Water | Iou Whitewater | Iou Sediment | Iou Other Natural Terrain | Iou Vegetation | Iou Development | Iou Unknown | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:-------------------:|:-----------------:|:------------------------------:|:-------------------:|:--------------------:|:----------------:|:---------:|:--------------:|:------------:|:-------------------------:|:--------------:|:---------------:|:-----------:|:--------:|
| 1.663 | 0.16 | 20 | 1.5090 | 0.4228 | 0.5280 | 0.7404 | 0.7454 | 0.1290 | 0.6594 | 0.0007 | 0.9158 | 0.4431 | 0.8027 | 0.7085 | 0.0661 | 0.5003 | 0.0006 | 0.5240 | 0.3651 | 0.7953 | 0.7435 |
| 1.4278 | 0.31 | 40 | 1.1477 | 0.4735 | 0.5606 | 0.8162 | 0.9166 | 0.0018 | 0.7142 | 0.0000 | 0.8720 | 0.5680 | 0.8513 | 0.8291 | 0.0018 | 0.5805 | 0.0000 | 0.6259 | 0.4328 | 0.8441 | 0.8075 |
| 1.5096 | 0.47 | 60 | 0.8839 | 0.4469 | 0.5277 | 0.7917 | 0.8650 | 0.0000 | 0.5808 | 0.0 | 0.9654 | 0.4206 | 0.8618 | 0.8144 | 0.0000 | 0.5120 | 0.0 | 0.5659 | 0.3769 | 0.8589 | 0.7834 |
| 1.2289 | 0.62 | 80 | 0.7240 | 0.4889 | 0.5772 | 0.8285 | 0.9213 | 0.0010 | 0.7241 | 0.0 | 0.8451 | 0.6604 | 0.8885 | 0.8586 | 0.0010 | 0.6189 | 0.0 | 0.6392 | 0.4241 | 0.8807 | 0.8226 |
| 0.7603 | 0.78 | 100 | 0.5905 | 0.5238 | 0.6004 | 0.8575 | 0.9406 | 0.0008 | 0.7086 | 0.0 | 0.9081 | 0.7470 | 0.8978 | 0.8633 | 0.0008 | 0.6391 | 0.0 | 0.6927 | 0.5817 | 0.8892 | 0.8479 |
| 1.1379 | 0.93 | 120 | 0.5625 | 0.5427 | 0.6168 | 0.8713 | 0.9660 | 0.0028 | 0.6806 | 0.0 | 0.8917 | 0.8819 | 0.8947 | 0.8537 | 0.0028 | 0.6139 | 0.0 | 0.7520 | 0.6952 | 0.8813 | 0.8597 |
| 0.7824 | 1.09 | 140 | 0.5163 | 0.5320 | 0.6100 | 0.8653 | 0.9728 | 0.0001 | 0.6368 | 0.0 | 0.8720 | 0.8917 | 0.8963 | 0.8734 | 0.0001 | 0.6084 | 0.0 | 0.7073 | 0.6440 | 0.8911 | 0.8541 |
| 0.6537 | 1.24 | 160 | 0.4595 | 0.5526 | 0.6278 | 0.8782 | 0.9497 | 0.0001 | 0.7542 | 0.0 | 0.9018 | 0.8883 | 0.9004 | 0.8791 | 0.0001 | 0.6505 | 0.0 | 0.7401 | 0.7045 | 0.8938 | 0.8682 |
| 0.7204 | 1.4 | 180 | 0.4031 | 0.5593 | 0.6379 | 0.8834 | 0.9572 | 0.0004 | 0.8028 | 0.0 | 0.8648 | 0.9356 | 0.9048 | 0.8913 | 0.0004 | 0.7384 | 0.0 | 0.7428 | 0.6471 | 0.8950 | 0.8746 |
| 0.663 | 1.55 | 200 | 0.4097 | 0.5592 | 0.6383 | 0.8813 | 0.9777 | 0.0 | 0.8640 | 0.0 | 0.8036 | 0.9289 | 0.8937 | 0.8718 | 0.0 | 0.7381 | 0.0 | 0.7349 | 0.6829 | 0.8869 | 0.8710 |
| 0.4566 | 1.71 | 220 | 0.3912 | 0.5598 | 0.6405 | 0.8813 | 0.9515 | 0.0011 | 0.8262 | 0.0 | 0.8494 | 0.9577 | 0.8973 | 0.8743 | 0.0011 | 0.7426 | 0.0 | 0.7405 | 0.6706 | 0.8895 | 0.8722 |
| 1.4951 | 1.86 | 240 | 0.3756 | 0.5566 | 0.6419 | 0.8804 | 0.9674 | 0.0006 | 0.8683 | 0.0 | 0.7963 | 0.9635 | 0.8971 | 0.8869 | 0.0006 | 0.7625 | 0.0 | 0.7352 | 0.6198 | 0.8909 | 0.8721 |
| 0.6232 | 2.02 | 260 | 0.3842 | 0.5650 | 0.6357 | 0.8869 | 0.9528 | 0.0014 | 0.7446 | 0.0 | 0.9224 | 0.9236 | 0.9049 | 0.8897 | 0.0014 | 0.6955 | 0.0 | 0.7485 | 0.7258 | 0.8940 | 0.8767 |
| 1.0104 | 2.17 | 280 | 0.3335 | 0.5791 | 0.6489 | 0.8974 | 0.9719 | 0.0028 | 0.8548 | 0.0 | 0.8796 | 0.9284 | 0.9049 | 0.8934 | 0.0028 | 0.7673 | 0.0 | 0.7825 | 0.7165 | 0.8912 | 0.8872 |
| 0.4107 | 2.33 | 300 | 0.3663 | 0.5642 | 0.6456 | 0.8855 | 0.9677 | 0.0023 | 0.8966 | 0.0 | 0.8048 | 0.9413 | 0.9065 | 0.8765 | 0.0023 | 0.7372 | 0.0 | 0.7532 | 0.6814 | 0.8985 | 0.8758 |
| 0.3112 | 2.48 | 320 | 0.3318 | 0.5833 | 0.6553 | 0.9006 | 0.9668 | 0.0132 | 0.8693 | 0.0 | 0.8855 | 0.9475 | 0.9047 | 0.9026 | 0.0132 | 0.7776 | 0.0 | 0.7936 | 0.6974 | 0.8991 | 0.8912 |
| 0.6679 | 2.64 | 340 | 0.3357 | 0.5840 | 0.6520 | 0.8979 | 0.9620 | 0.0109 | 0.8876 | 0.0008 | 0.8768 | 0.9071 | 0.9187 | 0.8819 | 0.0109 | 0.7585 | 0.0008 | 0.8002 | 0.7742 | 0.8611 | 0.8873 |
| 0.6522 | 2.79 | 360 | 0.3201 | 0.5850 | 0.6559 | 0.9015 | 0.9703 | 0.0209 | 0.8589 | 0.0010 | 0.8874 | 0.9440 | 0.9088 | 0.9037 | 0.0208 | 0.7665 | 0.0010 | 0.8052 | 0.7186 | 0.8794 | 0.8917 |
| 0.569 | 2.95 | 380 | 0.3227 | 0.5899 | 0.6592 | 0.9000 | 0.9738 | 0.0292 | 0.8709 | 0.0294 | 0.8731 | 0.9292 | 0.9086 | 0.8907 | 0.0291 | 0.7557 | 0.0294 | 0.8057 | 0.7445 | 0.8744 | 0.8902 |
| 0.5766 | 3.1 | 400 | 0.3537 | 0.5747 | 0.6401 | 0.8907 | 0.9663 | 0.0301 | 0.7411 | 0.0043 | 0.9260 | 0.9117 | 0.9013 | 0.8742 | 0.0300 | 0.7095 | 0.0043 | 0.7776 | 0.7304 | 0.8966 | 0.8804 |
| 1.1582 | 3.26 | 420 | 0.3125 | 0.6175 | 0.6767 | 0.9030 | 0.9783 | 0.0329 | 0.8699 | 0.1886 | 0.9039 | 0.8574 | 0.9059 | 0.8795 | 0.0325 | 0.7877 | 0.1879 | 0.8158 | 0.7437 | 0.8757 | 0.8948 |
| 0.4788 | 3.41 | 440 | 0.2963 | 0.6457 | 0.7017 | 0.9145 | 0.9791 | 0.0397 | 0.9018 | 0.2594 | 0.9025 | 0.9140 | 0.9151 | 0.8995 | 0.0394 | 0.8221 | 0.2566 | 0.8277 | 0.8050 | 0.8697 | 0.9069 |
| 0.2278 | 3.57 | 460 | 0.3154 | 0.6225 | 0.6920 | 0.9049 | 0.9683 | 0.1006 | 0.8834 | 0.1576 | 0.8780 | 0.9448 | 0.9116 | 0.9053 | 0.0996 | 0.7952 | 0.1573 | 0.8066 | 0.7248 | 0.8684 | 0.8983 |
| 0.4206 | 3.72 | 480 | 0.3235 | 0.5959 | 0.6553 | 0.9007 | 0.9666 | 0.0412 | 0.8435 | 0.0371 | 0.9305 | 0.8624 | 0.9060 | 0.9002 | 0.0411 | 0.7604 | 0.0371 | 0.7808 | 0.7811 | 0.8709 | 0.8912 |
| 0.3314 | 3.88 | 500 | 0.3323 | 0.6125 | 0.6802 | 0.9019 | 0.9602 | 0.1699 | 0.8257 | 0.0432 | 0.9168 | 0.9415 | 0.9039 | 0.9100 | 0.1663 | 0.7722 | 0.0432 | 0.7877 | 0.7446 | 0.8636 | 0.8949 |
| 0.8233 | 4.03 | 520 | 0.3092 | 0.6410 | 0.7039 | 0.9085 | 0.9714 | 0.1106 | 0.8881 | 0.2342 | 0.8985 | 0.9205 | 0.9042 | 0.9026 | 0.1083 | 0.7966 | 0.2326 | 0.8114 | 0.7720 | 0.8636 | 0.9022 |
| 0.3436 | 4.19 | 540 | 0.3070 | 0.6464 | 0.7064 | 0.9100 | 0.9816 | 0.1226 | 0.9199 | 0.2470 | 0.8824 | 0.8815 | 0.9098 | 0.8936 | 0.1205 | 0.7881 | 0.2408 | 0.8151 | 0.7747 | 0.8918 | 0.9039 |
| 0.3504 | 4.34 | 560 | 0.3084 | 0.6827 | 0.7459 | 0.9151 | 0.9749 | 0.2036 | 0.9063 | 0.4067 | 0.8906 | 0.9284 | 0.9107 | 0.9071 | 0.1971 | 0.8147 | 0.3798 | 0.8225 | 0.7894 | 0.8686 | 0.9111 |
| 0.3461 | 4.5 | 580 | 0.3091 | 0.7164 | 0.8006 | 0.9183 | 0.9687 | 0.3107 | 0.8787 | 0.6910 | 0.8991 | 0.9385 | 0.9177 | 0.9184 | 0.2873 | 0.8206 | 0.5133 | 0.8230 | 0.7781 | 0.8740 | 0.9166 |
| 0.9608 | 4.65 | 600 | 0.2973 | 0.6896 | 0.7680 | 0.9130 | 0.9772 | 0.2477 | 0.8727 | 0.5473 | 0.8849 | 0.9410 | 0.9055 | 0.9072 | 0.2363 | 0.8040 | 0.4150 | 0.8202 | 0.7792 | 0.8653 | 0.9101 |
| 0.2724 | 4.81 | 620 | 0.2947 | 0.7055 | 0.7860 | 0.9169 | 0.9732 | 0.3313 | 0.9060 | 0.5587 | 0.8825 | 0.9406 | 0.9096 | 0.9133 | 0.3037 | 0.8241 | 0.4199 | 0.8221 | 0.7875 | 0.8681 | 0.9149 |
| 0.2541 | 4.96 | 640 | 0.2897 | 0.7142 | 0.7929 | 0.9184 | 0.9748 | 0.3398 | 0.8850 | 0.6015 | 0.8874 | 0.9459 | 0.9163 | 0.9156 | 0.3142 | 0.8310 | 0.4651 | 0.8228 | 0.7766 | 0.8740 | 0.9166 |
| 1.337 | 5.12 | 660 | 0.2950 | 0.7033 | 0.7721 | 0.9169 | 0.9612 | 0.3796 | 0.9062 | 0.3990 | 0.8976 | 0.9436 | 0.9178 | 0.9159 | 0.3526 | 0.8199 | 0.3732 | 0.8226 | 0.7638 | 0.8749 | 0.9148 |
| 0.3685 | 5.27 | 680 | 0.2714 | 0.7369 | 0.8200 | 0.9227 | 0.9741 | 0.3947 | 0.8746 | 0.7536 | 0.9208 | 0.9100 | 0.9123 | 0.9194 | 0.3485 | 0.8370 | 0.5641 | 0.8375 | 0.7705 | 0.8815 | 0.9216 |
| 0.2901 | 5.43 | 700 | 0.2848 | 0.7373 | 0.8149 | 0.9230 | 0.9704 | 0.4050 | 0.9080 | 0.6911 | 0.9158 | 0.9008 | 0.9131 | 0.9206 | 0.3630 | 0.8271 | 0.5450 | 0.8393 | 0.7948 | 0.8710 | 0.9217 |
| 0.4242 | 5.58 | 720 | 0.2831 | 0.7343 | 0.8311 | 0.9195 | 0.9793 | 0.4392 | 0.8635 | 0.7955 | 0.8824 | 0.9400 | 0.9181 | 0.9140 | 0.3800 | 0.8157 | 0.5413 | 0.8335 | 0.7804 | 0.8752 | 0.9187 |
| 0.2186 | 5.74 | 740 | 0.2713 | 0.7400 | 0.8010 | 0.9250 | 0.9680 | 0.4074 | 0.9073 | 0.5905 | 0.9445 | 0.8784 | 0.9107 | 0.9197 | 0.3715 | 0.8330 | 0.5342 | 0.8387 | 0.8023 | 0.8810 | 0.9234 |
| 0.3729 | 5.89 | 760 | 0.2846 | 0.7528 | 0.8290 | 0.9233 | 0.9714 | 0.4906 | 0.8906 | 0.7037 | 0.9157 | 0.9274 | 0.9039 | 0.9199 | 0.4197 | 0.8272 | 0.6058 | 0.8448 | 0.7878 | 0.8641 | 0.9225 |
| 0.29 | 6.05 | 780 | 0.2979 | 0.7411 | 0.8274 | 0.9225 | 0.9746 | 0.4241 | 0.9098 | 0.7544 | 0.8959 | 0.9247 | 0.9085 | 0.9222 | 0.3765 | 0.8288 | 0.5630 | 0.8343 | 0.7957 | 0.8672 | 0.9215 |
| 0.1211 | 6.2 | 800 | 0.2962 | 0.7553 | 0.8266 | 0.9254 | 0.9722 | 0.4567 | 0.9034 | 0.7216 | 0.9218 | 0.8951 | 0.9153 | 0.9200 | 0.4076 | 0.8367 | 0.6068 | 0.8400 | 0.8034 | 0.8727 | 0.9242 |
| 0.2875 | 6.36 | 820 | 0.3040 | 0.7576 | 0.8358 | 0.9249 | 0.9705 | 0.5281 | 0.8742 | 0.7028 | 0.9110 | 0.9454 | 0.9184 | 0.9234 | 0.4444 | 0.8312 | 0.5986 | 0.8388 | 0.7919 | 0.8746 | 0.9243 |
| 0.1761 | 6.51 | 840 | 0.2623 | 0.7577 | 0.8422 | 0.9288 | 0.9742 | 0.4691 | 0.9182 | 0.7958 | 0.9109 | 0.9022 | 0.9249 | 0.9227 | 0.4026 | 0.8441 | 0.5856 | 0.8486 | 0.8191 | 0.8815 | 0.9279 |
| 0.2962 | 6.67 | 860 | 0.2828 | 0.7498 | 0.8469 | 0.9231 | 0.9651 | 0.5313 | 0.8896 | 0.7981 | 0.9125 | 0.9153 | 0.9166 | 0.9213 | 0.4305 | 0.8172 | 0.5763 | 0.8463 | 0.7849 | 0.8724 | 0.9228 |
| 0.3504 | 6.82 | 880 | 0.2912 | 0.7437 | 0.8384 | 0.9219 | 0.9793 | 0.4609 | 0.8613 | 0.8330 | 0.9077 | 0.9174 | 0.9094 | 0.9122 | 0.3974 | 0.8246 | 0.5504 | 0.8378 | 0.8159 | 0.8678 | 0.9211 |
| 0.2496 | 6.98 | 900 | 0.2838 | 0.7476 | 0.8480 | 0.9239 | 0.9729 | 0.4970 | 0.8875 | 0.8454 | 0.9128 | 0.9111 | 0.9092 | 0.9211 | 0.4346 | 0.8324 | 0.5334 | 0.8433 | 0.8005 | 0.8678 | 0.9235 |
| 1.2185 | 7.13 | 920 | 0.3104 | 0.7466 | 0.8454 | 0.9201 | 0.9732 | 0.5344 | 0.8610 | 0.8031 | 0.8965 | 0.9391 | 0.9105 | 0.9215 | 0.4476 | 0.8078 | 0.5605 | 0.8257 | 0.7948 | 0.8687 | 0.9197 |
| 0.1779 | 7.29 | 940 | 0.3212 | 0.7591 | 0.8515 | 0.9252 | 0.9751 | 0.5615 | 0.8844 | 0.7893 | 0.8949 | 0.9314 | 0.9236 | 0.9219 | 0.4660 | 0.8250 | 0.5769 | 0.8376 | 0.8097 | 0.8769 | 0.9247 |
| 0.4705 | 7.44 | 960 | 0.2663 | 0.7504 | 0.8429 | 0.9243 | 0.9776 | 0.4934 | 0.8899 | 0.8011 | 0.8994 | 0.9268 | 0.9118 | 0.9159 | 0.4335 | 0.8278 | 0.5520 | 0.8401 | 0.7987 | 0.8846 | 0.9237 |
| 0.2637 | 7.6 | 980 | 0.2561 | 0.7639 | 0.8449 | 0.9289 | 0.9717 | 0.5314 | 0.8805 | 0.7593 | 0.9309 | 0.9236 | 0.9172 | 0.9271 | 0.4443 | 0.8284 | 0.6024 | 0.8461 | 0.8074 | 0.8916 | 0.9284 |
| 0.1961 | 7.75 | 1000 | 0.2712 | 0.7486 | 0.8598 | 0.9250 | 0.9780 | 0.5721 | 0.8804 | 0.8483 | 0.8995 | 0.9282 | 0.9119 | 0.9184 | 0.4402 | 0.8297 | 0.5164 | 0.8466 | 0.8043 | 0.8848 | 0.9252 |
| 1.0785 | 7.91 | 1020 | 0.2494 | 0.7586 | 0.8472 | 0.9308 | 0.9752 | 0.5247 | 0.9059 | 0.7655 | 0.9142 | 0.9169 | 0.9280 | 0.9263 | 0.4457 | 0.8448 | 0.5380 | 0.8490 | 0.7996 | 0.9071 | 0.9304 |
| 0.1453 | 8.06 | 1040 | 0.2792 | 0.7454 | 0.8519 | 0.9254 | 0.9704 | 0.5225 | 0.8913 | 0.8028 | 0.8936 | 0.9675 | 0.9153 | 0.9283 | 0.4321 | 0.8367 | 0.5213 | 0.8397 | 0.7497 | 0.9099 | 0.9261 |
| 0.2332 | 8.22 | 1060 | 0.2774 | 0.7452 | 0.8434 | 0.9242 | 0.9771 | 0.5126 | 0.9145 | 0.7857 | 0.8916 | 0.9063 | 0.9157 | 0.9221 | 0.4303 | 0.8100 | 0.5271 | 0.8311 | 0.7858 | 0.9097 | 0.9240 |
| 0.1902 | 8.37 | 1080 | 0.2779 | 0.7382 | 0.8500 | 0.9228 | 0.9830 | 0.5081 | 0.8918 | 0.8470 | 0.8739 | 0.9259 | 0.9205 | 0.9178 | 0.4154 | 0.8383 | 0.4896 | 0.8345 | 0.7778 | 0.8937 | 0.9230 |
| 0.2892 | 8.53 | 1100 | 0.2735 | 0.7535 | 0.8527 | 0.9275 | 0.9726 | 0.5405 | 0.8697 | 0.8180 | 0.9147 | 0.9236 | 0.9300 | 0.9293 | 0.4539 | 0.8253 | 0.5226 | 0.8400 | 0.8145 | 0.8886 | 0.9273 |
| 0.2251 | 8.68 | 1120 | 0.2626 | 0.7627 | 0.8536 | 0.9295 | 0.9781 | 0.5566 | 0.9106 | 0.7753 | 0.8922 | 0.9349 | 0.9273 | 0.9239 | 0.4522 | 0.8465 | 0.5699 | 0.8455 | 0.8062 | 0.8949 | 0.9291 |
| 0.1345 | 8.84 | 1140 | 0.2558 | 0.7676 | 0.8542 | 0.9296 | 0.9784 | 0.5649 | 0.9180 | 0.7797 | 0.9048 | 0.9185 | 0.9151 | 0.9205 | 0.4585 | 0.8451 | 0.5930 | 0.8489 | 0.8174 | 0.8900 | 0.9292 |
| 0.18 | 8.99 | 1160 | 0.2737 | 0.7628 | 0.8566 | 0.9288 | 0.9781 | 0.5295 | 0.9059 | 0.8418 | 0.9066 | 0.9190 | 0.9150 | 0.9207 | 0.4404 | 0.8533 | 0.5818 | 0.8542 | 0.8169 | 0.8726 | 0.9283 |
| 0.282 | 9.15 | 1180 | 0.2705 | 0.7734 | 0.8487 | 0.9310 | 0.9717 | 0.5354 | 0.9220 | 0.7588 | 0.9230 | 0.9135 | 0.9165 | 0.9265 | 0.4527 | 0.8522 | 0.6342 | 0.8563 | 0.8178 | 0.8736 | 0.9304 |
| 0.1688 | 9.3 | 1200 | 0.2770 | 0.7695 | 0.8663 | 0.9317 | 0.9705 | 0.5896 | 0.9098 | 0.8282 | 0.9125 | 0.9241 | 0.9295 | 0.9315 | 0.4659 | 0.8490 | 0.5848 | 0.8541 | 0.8173 | 0.8840 | 0.9316 |
| 0.1043 | 9.46 | 1220 | 0.2784 | 0.7615 | 0.8569 | 0.9286 | 0.9717 | 0.5408 | 0.9245 | 0.8160 | 0.8979 | 0.9242 | 0.9232 | 0.9253 | 0.4607 | 0.8453 | 0.5582 | 0.8460 | 0.8152 | 0.8799 | 0.9283 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Tanat05/korcen | Tanat05 | 2024-03-06T14:48:21Z | 0 | 0 | null | [
"ko",
"license:apache-2.0",
"region:us"
] | null | 2024-03-06T14:44:22Z | ---
license: apache-2.0
language:
- ko
---
<div align="center">
<h1>Korcen</h1>
</div>

korcen-ml is a project that uses deep learning to push accuracy further, overcoming the weakness that the existing keyword-based korcen is easy to bypass.
Only some models are released publicly; the model files are available [here](https://github.com/KR-korcen/korcen-ml/tree/main/model).
If you would like to download more model files and training data, please contact us.
| | Training sentences |
|------|------|
| VDCNN(23.4.30) | 200,000 |
| VDCNN_KOGPT2(23.5.28) | 2,000,000 |
| VDCNN_LLAMA2(23.9.30) | 5,000,000 |
| VDCNN_LLAMA2_V2(24.1.29) | 10,000,000 |
Existing keyword-based libraries: [py version](https://github.com/KR-korcen/korcen), [ts version](https://github.com/KR-korcen/korcen.ts)
[Support Discord server](https://discord.gg/wyTU3ZQBPE)
## Model validation
Please note that each dataset uses a different standard for profanity, so some error is to be expected.
| | [korean-malicious-comments-dataset](https://github.com/ZIZUN/korean-malicious-comments-dataset) | [Curse-detection-data](https://github.com/2runo/Curse-detection-data) | [kmhas_korean_hate_speech](https://huggingface.co/datasets/jeanlee/kmhas_korean_hate_speech) | [Korean Extremist Website Womad Hate Speech Data](https://www.kaggle.com/datasets/captainnemo9292/korean-extremist-website-womad-hate-speech-data/data) |
|------|------|------|------|------|
| [korcen(v0.3.5)](https://github.com/KR-korcen/korcen) | 0.7121 | **0.8415** | 0.6800 | 0.6305 |
| VDCNN(23.4.30) | 0.6900 | 0.4885 | | 0.4885 |
| VDCNN_KOGPT2(23.6.15) | 0.7545 | 0.7824 | | 0.7055 |
| VDCNN_LLAMA2(23.9.30) | 0.7762 | 0.8104 | 0.7296 | replaced by V2 |
| VDCNN_LLAMA2_V2(24.1.29) | **0.8322** | 0.8410 | **0.7837** | **0.7120** |
| [badword_check](https://github.com/Nam-SW/badword_check)(23.10.1) | 0.5829 | 0.6761 | | |
| [CurseDetector](https://github.com/mangto/CurseDetector)(24.1.10) | 0.5679 | not tested (too time-consuming) | | 0.5785 |
## example
```py
#py: 3.10, tf: 2.10
import tensorflow as tf
import pickle
from tensorflow.keras.preprocessing.sequence import pad_sequences

maxlen = 1000
model_path = 'vdcnn_model.h5'
tokenizer_path = "tokenizer.pickle"

# Load the trained VDCNN model and the tokenizer it was trained with.
model = tf.keras.models.load_model(model_path)
with open(tokenizer_path, "rb") as f:
    tokenizer = pickle.load(f)

def preprocess_text(text):
    text = text.lower()
    return text

def predict_text(text):
    sentence = preprocess_text(text)
    # Tokenize, then pad/truncate to the fixed input length the model expects.
    encoded_sentence = tokenizer.encode_plus(sentence,
                                             max_length=maxlen,
                                             padding="max_length",
                                             truncation=True)['input_ids']
    sentence_seq = pad_sequences([encoded_sentence], maxlen=maxlen, truncating="post")
    prediction = model.predict(sentence_seq)[0][0]
    return prediction

while True:
    text = input("Enter the sentence you want to test: ")
    result = predict_text(text)
    if result >= 0.5:  # probability that the sentence is abusive
        print("This sentence contains abusive language.")
    else:
        print("It's a normal sentence.")
```
## Maker
>Tanat
```
github: Tanat05
discord: Tanat05
email: [email protected]
``` |
facebook/musicgen-stereo-medium | facebook | 2024-03-06T14:47:27Z | 649 | 29 | transformers | [
"transformers",
"pytorch",
"safetensors",
"musicgen",
"text-to-audio",
"audiocraft",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-10-23T14:21:12Z | ---
inference: true
tags:
- musicgen
- audiocraft
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
library_name: transformers
widget:
- text: a funky house with 80s hip hop vibes
example_title: Prompt 1
- text: a chill song with influences from lofi, chillstep and downtempo
example_title: Prompt 2
- text: a catchy beat for a podcast intro
example_title: Prompt 3
---
# MusicGen - Stereo - Medium - 1.5B
We further release a set of stereophonic-capable models. Those were fine-tuned for 200k updates starting
from the mono models. The training data is otherwise identical, and capabilities and limitations are shared with the base models. The stereo models work by getting 2 streams of tokens from the EnCodec model, and interleaving those using
the delay pattern.
Stereophonic sound, also known as stereo, is a technique used to reproduce sound with depth and direction.
It uses two separate audio channels played through speakers (or headphones), which creates the impression of sound coming from multiple directions.
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
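To make the delay pattern concrete, here is a small illustrative sketch (ours, not Audiocraft code) of how delaying codebook k by k steps lets all four codebooks advance with one autoregressive step per audio frame:
```python
# Illustrative delay-pattern schedule: codebook k emits frame t at step t + k.
num_codebooks, num_frames = 4, 6
steps = [[(t - k) if 0 <= t - k < num_frames else None
          for t in range(num_frames + num_codebooks - 1)]
         for k in range(num_codebooks)]
for k, row in enumerate(steps):
    print(f"codebook {k}:", ["-" if f is None else f"f{f}" for f in row])
# Each column is one autoregressive step; the staircase shows the per-codebook delay.
```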
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
We provide a simple API and 10 pre-trained models. The pre-trained models are:
- `facebook/musicgen-small`: 300M model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-small)
- `facebook/musicgen-medium`: 1.5B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-medium)
- `facebook/musicgen-melody`: 1.5B model, text to music and text+melody to music - [🤗 Hub](https://huggingface.co/facebook/musicgen-melody)
- `facebook/musicgen-large`: 3.3B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-large)
- `facebook/musicgen-melody-large`: 3.3B model, text to music and text+melody to music - [🤗 Hub](https://huggingface.co/facebook/musicgen-melody-large)
- `facebook/musicgen-stereo-*`: All the previous models fine-tuned for stereo generation -
[small](https://huggingface.co/facebook/musicgen-stereo-small),
[medium](https://huggingface.co/facebook/musicgen-stereo-medium),
[large](https://huggingface.co/facebook/musicgen-stereo-large),
[melody](https://huggingface.co/facebook/musicgen-stereo-melody),
[melody large](https://huggingface.co/facebook/musicgen-stereo-melody-large)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen Stereo models locally with the 🤗 Transformers library from `main` onward.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:
```
pip install --upgrade pip
pip install --upgrade git+https://github.com/huggingface/transformers.git scipy
```
2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
```python
import torch
import soundfile as sf
from transformers import pipeline
synthesiser = pipeline("text-to-audio", "facebook/musicgen-stereo-medium", device="cuda:0", torch_dtype=torch.float16)
music = synthesiser("lo-fi music with a soothing melody", forward_params={"max_new_tokens": 256})
sf.write("musicgen_out.wav", music["audio"][0].T, music["sampling_rate"])
```
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a stereo 32 kHz audio waveform for more fine-grained control.
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-stereo-medium")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-stereo-medium").to("cuda")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
).to("cuda")
audio_values = model.generate(**inputs, max_new_tokens=256)
```
4. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].cpu().numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `soundfile`:
```python
import soundfile as sf
sampling_rate = model.config.audio_encoder.sampling_rate
audio_values = audio_values.cpu().numpy()
sf.write("musicgen_out.wav", audio_values[0].T, sampling_rate)
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("facebook/musicgen-stereo-medium")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| facebook/musicgen-small | 4.88 | 1.42 | 0.27 | - |
| **facebook/musicgen-medium** | 5.14 | 1.38 | 0.28 | - |
| facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model to larger datasets can further improve its performance.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. |
nicocolas/Sepsistral-7B-v1.0 | nicocolas | 2024-03-06T14:47:02Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-05T15:44:26Z | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Sepsistral-7B-v1.0 is a medical Large Language Model (LLM) finetuned from Mistral-7B-v0.1. The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Sepsistral was trained on more than 10,000 PubMed articles about sepsis. Our model outperforms Mistral-7B-v0.1 on medical data in our tests.
<details>
<summary>Advisory Notice</summary>
While Sepsistral is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints. We recommend against deploying Sepsistral in medical applications without extensive use-case alignment, as well as additional testing, specifically including randomized controlled trials in real-world practice settings.
</details>
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Sepsistral-7B is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and broaden access to an LLM for healthcare use on sepsis. Potential use cases may include, but are not limited to:
- Medical exam question answering
- Supporting differential diagnosis
- Disease information (symptoms, cause, treatment) query
- General health information query about sepsis
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
It is possible to use this model for question answering about sepsis, which is useful for experimentation and for understanding its capabilities. It should not be used directly in production or for work that may impact people.
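As a hedged illustration (the card does not specify a prompt format, and the question/answer template below is an assumption), the model can be queried with the standard 🤗 Transformers API:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "nicocolas/Sepsistral-7B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

# Hypothetical prompt template; adjust to whatever format works best in practice.
prompt = "Question: What are the early clinical signs of sepsis?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```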
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Sepsistral was trained on a question/answer/context dataset generated with GPT-3.5-turbo from more than 10,000 PubMed abstracts on sepsis.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We used the Axolotl project (https://github.com/OpenAccess-AI-Collective/axolotl) to train our model on an NVIDIA A100 (40GB) GPU on the Modal serverless platform (https://modal.com).
## Model Card Authors
This project was conducted as a tutored project by DataScale master's students from the University of Versailles - Paris-Saclay University: Nicola Ferrara, Quentin Gruchet, Souha Samoouda, Amal Boushaba in collaboration with the HephIA start-up team (Kamel Mesbahi, Anthony Coutant). It was supervised by members from the DAVID lab/UVSQ/Paris Saclay University (Mustapha Lebbah) and the LIPN/USPN (Bilal Faye, Hanane Azzag).
|
Cippppy/mobilebert_run1 | Cippppy | 2024-03-06T14:42:52Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mobilebert",
"text-classification",
"generated_from_trainer",
"base_model:google/mobilebert-uncased",
"base_model:finetune:google/mobilebert-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T14:41:37Z | ---
license: apache-2.0
base_model: google/mobilebert-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mobilebert_run1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_run1
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7541494.5
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
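For reference, these settings correspond roughly to the following 🤗 Transformers `TrainingArguments` (a sketch, not the actual training script; the output directory name is hypothetical):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mobilebert_run1",      # hypothetical
    learning_rate=2e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=16,               # Adam betas/epsilon are the defaults listed above
)
```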
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 11089919.0 | 0.0 |
| No log | 2.0 | 10 | 10502712.0 | 0.0 |
| No log | 3.0 | 15 | 10016729.0 | 0.0 |
| No log | 4.0 | 20 | 9591635.0 | 0.0 |
| No log | 5.0 | 25 | 9214009.0 | 0.0 |
| No log | 6.0 | 30 | 8899205.0 | 0.0 |
| No log | 7.0 | 35 | 8633617.0 | 0.0 |
| No log | 8.0 | 40 | 8408112.0 | 0.0 |
| No log | 9.0 | 45 | 8206867.0 | 0.0 |
| No log | 10.0 | 50 | 8033762.5 | 0.0 |
| No log | 11.0 | 55 | 7887077.0 | 0.0 |
| No log | 12.0 | 60 | 7766791.0 | 0.0 |
| No log | 13.0 | 65 | 7670611.0 | 0.0 |
| No log | 14.0 | 70 | 7600864.0 | 0.0 |
| No log | 15.0 | 75 | 7557492.0 | 0.0 |
| No log | 16.0 | 80 | 7541494.5 | 0.0 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Violet24K/ref_vanilla_model3 | Violet24K | 2024-03-06T14:39:54Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T14:37:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MarkKisker/roberta-base | MarkKisker | 2024-03-06T14:36:02Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T14:36:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vidhi0206/setfit-paraphrase-mpnet-emotionv | vidhi0206 | 2024-03-06T14:34:38Z | 4 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | 2024-03-06T14:34:16Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: i honestly thought impossible at this point i feel pretty
- text: i feel convinced that im going to shy away from whatever is really good for
me
- text: i feel guilt that i should be more caring and im not
- text: i found myself feeling nostalgic as i thought about the temporarily abandoned
little bishop chronicles
- text: i am feeling very indecisive and spontaneous
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.621
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 6 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'i don t feel so self assured i need to compete or to justify why i m so clearly not doing as well as someone else'</li><li>'i should do but i think it means that i should always be open to opportunities of inviting and involving others in ministries and that i should be creative in finding ways for others to participate in and feel welcomed into such ministries'</li><li>'i feel like im going to be way more successful a writer because of it'</li></ul> |
| 4 | <ul><li>'i feel so weird and scattered with all wonders about a million different things'</li><li>'i mean already as a parent from the moment the iolani left my body i can tell you i feel like im constantly fearful for something horrible happening to her thats out of my control'</li><li>'i think i was feeling vulnerable due to the stress of having to buy a new sewing machine and printer'</li></ul> |
| 5 | <ul><li>'i feel like this inside theres one thing i wanna know whats so funny bout peace love and understanding'</li><li>'i feel like itd be strange at the least and possibly offensive to tell a gay friend id like to experiment or something like that'</li><li>'i am not sure why in that moment that i thought i would be able to feel it hellip but it was pretty funny'</li></ul> |
| 2 | <ul><li>'i can feel that gentle rhythm imprinted on my skin i vibrates up my arm my stomach clenches my legs squeeze i forget his own leg has somehow ended up between mine'</li><li>'i feel specially fond of'</li><li>'i just feel like i dont like supporting walmart because maceys has such good family values and is closed on sundays and isnt trying to take over mom and pop stores but i have to be a smart consumer too'</li></ul> |
| 3 | <ul><li>'i am sure the vast majority of decent working class people feel insulted about being derided as unable to be respectful towards referees and are the parents who watch their child s match shouting abuse and swearing etc'</li><li>'im feeling irritated by her friggin name'</li><li>'i feel heartless now feeling bored and not believe in love anymore'</li></ul> |
| 0 | <ul><li>'i had just begun to feel like teaching was my metier but am now resigned to the fact that i likely wont teach at university ever again'</li><li>'i think the most common one that everyone has experienced is that doom and gloom feeling where you just feel like something tragic just happened'</li><li>'i feel a bit foolish now because in the last years they havent come back to my home town and i have had to travel to england to see them'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.621 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-emotionv")
# Run inference
preds = model("i am feeling very indecisive and spontaneous")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 5 | 20.4375 | 47 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
| 2 | 8 |
| 3 | 8 |
| 4 | 8 |
| 5 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
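A sketch of how these hyperparameters map onto the SetFit 1.0 training API (an illustration; the two-example `train_ds` below is a toy stand-in for the real few-shot data):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy few-shot data; the actual run used 8 labeled examples per class.
train_ds = Dataset.from_dict({
    "text": ["i feel wonderful today", "i feel so irritated by everything"],
    "label": [1, 3],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(batch_size=8, num_epochs=1, num_iterations=20,
                         body_learning_rate=2e-5, head_learning_rate=2e-5, seed=42)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```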
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.2804 | - |
| 0.2083 | 50 | 0.0724 | - |
| 0.4167 | 100 | 0.0512 | - |
| 0.625 | 150 | 0.0108 | - |
| 0.8333 | 200 | 0.0027 | - |
### Framework Versions
- Python: 3.8.10
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
h1t/TCD-SD21-base-LoRA | h1t | 2024-03-06T14:26:18Z | 35 | 5 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:mit",
"region:us"
] | text-to-image | 2024-03-06T14:17:18Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-2-1-base
license: mit
library_name: diffusers
---
# Model description
Official SD21(base) Model of the paper [Trajectory Consistency Distillation](https://arxiv.org/abs/2402.19159).
For more usage examples, please refer to the [Project Page](https://huggingface.co/h1t/TCD-SDXL-LoRA/).
Here is a simple example:
```python
import torch
from diffusers import StableDiffusionPipeline, TCDScheduler
device = "cuda"
base_model_id = "stabilityai/stable-diffusion-2-1-base"
tcd_lora_id = "h1t/TCD-SD21-base-LoRA"
pipe = StableDiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()
prompt = "Beautiful woman, bubblegum pink, lemon yellow, minty blue, futuristic, high-detail, epic composition, watercolor."
image = pipe(
prompt=prompt,
num_inference_steps=4,
guidance_scale=0,
# Eta (referred to as `gamma` in the paper) is used to control the stochasticity in every step.
# A value of 0.3 often yields good results.
# We recommend using a higher eta when increasing the number of inference steps.
eta=0.3,
generator=torch.Generator(device=device).manual_seed(0),
).images[0]
```

|
shabnamn/my-xsn-dog | shabnamn | 2024-03-06T14:23:54Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-06T14:16:51Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-xsn-Dog Dreambooth model trained by shabnamn following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: TCEC139
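The concept can be loaded with the standard `StableDiffusionPipeline`. A minimal sketch (the instance prompt below is a guess inferred from the model name and is not documented in this card):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "shabnamn/my-xsn-dog", torch_dtype=torch.float16
).to("cuda")

# The "xsn dog" instance token is an assumption based on the model name
image = pipe("a photo of xsn dog sitting on a beach").images[0]
image.save("xsn_dog.png")
```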
Sample pictures of this concept:
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
|
JoseLuis95/finetuned_model_sentiment_analysis_yelp | JoseLuis95 | 2024-03-06T14:20:25Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"simplification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-05T19:13:38Z | ---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- simplification
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: finetuned_model_sentiment_analysis_yelp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_model_sentiment_analysis_yelp
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the [yelp_review_full](https://huggingface.co/datasets/yelp_review_full) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8933
- Precision: 0.6404
- Recall: 0.6409
- F1: 0.6405
## Model description
More information needed
## Intended uses & limitations
More information needed
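As a quick-start sketch, the model can be loaded with the standard `text-classification` pipeline (the label-to-star mapping follows the yelp_review_full convention, where label 0 is one star and label 4 is five stars):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="JoseLuis95/finetuned_model_sentiment_analysis_yelp",
)

# Labels 0-4 correspond to 1-5 star reviews in yelp_review_full
print(classifier("The food was great but the service was painfully slow."))
```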
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|
| 0.8691 | 1.0 | 3657 | 0.8801 | 0.6224 | 0.6201 | 0.6149 |
| 0.7506 | 2.0 | 7314 | 0.8469 | 0.6458 | 0.6421 | 0.6428 |
| 0.6087 | 3.0 | 10971 | 0.8933 | 0.6404 | 0.6409 | 0.6405 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
makhataei/qa-persian-distilbert-fa-zwnj-base | makhataei | 2024-03-06T14:19:16Z | 25 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:makhataei/qa-persian-distilbert-fa-zwnj-base",
"base_model:finetune:makhataei/qa-persian-distilbert-fa-zwnj-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-11-28T06:57:13Z | ---
license: apache-2.0
base_model: makhataei/qa-persian-distilbert-fa-zwnj-base
tags:
- generated_from_trainer
model-index:
- name: qa-persian-distilbert-fa-zwnj-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa-persian-distilbert-fa-zwnj-base
This model is a fine-tuned version of [makhataei/qa-persian-distilbert-fa-zwnj-base](https://huggingface.co/makhataei/qa-persian-distilbert-fa-zwnj-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3843
## Model description
More information needed
## Intended uses & limitations
More information needed
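As a minimal inference sketch, the model can be used with the standard `question-answering` pipeline (the Persian question and context below are illustrative only):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="makhataei/qa-persian-distilbert-fa-zwnj-base",
)

# Illustrative example: "Where is the capital of Iran?" / "Tehran is the capital of Iran."
result = qa(question="پایتخت ایران کجاست؟", context="تهران پایتخت ایران است.")
print(result["answer"], result["score"])
```
Note that the validation loss reported below stays flat at 5.3843 across all epochs, so predictions from this checkpoint should be treated with caution.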
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.25e-09
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.4975 | 1.0 | 9 | 5.3843 |
| 5.6974 | 2.0 | 18 | 5.3843 |
| 5.681 | 3.0 | 27 | 5.3843 |
| 5.7298 | 4.0 | 36 | 5.3843 |
| 5.7675 | 5.0 | 45 | 5.3843 |
| 5.7265 | 6.0 | 54 | 5.3843 |
| 5.6502 | 7.0 | 63 | 5.3843 |
| 5.6803 | 8.0 | 72 | 5.3843 |
| 5.6433 | 9.0 | 81 | 5.3843 |
| 5.6107 | 10.0 | 90 | 5.3843 |
| 5.5624 | 11.0 | 99 | 5.3843 |
| 5.6151 | 12.0 | 108 | 5.3843 |
| 5.6815 | 13.0 | 117 | 5.3843 |
| 5.6993 | 14.0 | 126 | 5.3843 |
| 5.6933 | 15.0 | 135 | 5.3843 |
| 5.7421 | 16.0 | 144 | 5.3843 |
| 5.7573 | 17.0 | 153 | 5.3843 |
| 5.7137 | 18.0 | 162 | 5.3843 |
| 5.7891 | 19.0 | 171 | 5.3843 |
| 5.7035 | 20.0 | 180 | 5.3843 |
| 5.6504 | 21.0 | 189 | 5.3843 |
| 5.7166 | 22.0 | 198 | 5.3843 |
| 5.6868 | 23.0 | 207 | 5.3843 |
| 5.7905 | 24.0 | 216 | 5.3843 |
| 5.7363 | 25.0 | 225 | 5.3843 |
| 5.7459 | 26.0 | 234 | 5.3843 |
| 5.7354 | 27.0 | 243 | 5.3843 |
| 5.7545 | 28.0 | 252 | 5.3843 |
| 5.6522 | 29.0 | 261 | 5.3843 |
| 5.6467 | 30.0 | 270 | 5.3843 |
| 5.7483 | 31.0 | 279 | 5.3843 |
| 5.7255 | 32.0 | 288 | 5.3843 |
| 5.6064 | 33.0 | 297 | 5.3843 |
| 5.6728 | 34.0 | 306 | 5.3843 |
| 5.6922 | 35.0 | 315 | 5.3843 |
| 5.6817 | 36.0 | 324 | 5.3843 |
| 5.6892 | 37.0 | 333 | 5.3843 |
| 5.609 | 38.0 | 342 | 5.3843 |
| 5.6179 | 39.0 | 351 | 5.3843 |
| 5.6384 | 40.0 | 360 | 5.3843 |
| 5.6311 | 41.0 | 369 | 5.3843 |
| 5.5614 | 42.0 | 378 | 5.3843 |
| 5.4875 | 43.0 | 387 | 5.3843 |
| 5.5113 | 44.0 | 396 | 5.3843 |
| 5.4597 | 45.0 | 405 | 5.3843 |
| 5.7105 | 46.0 | 414 | 5.3843 |
| 5.5722 | 47.0 | 423 | 5.3843 |
| 5.4466 | 48.0 | 432 | 5.3843 |
| 5.3902 | 49.0 | 441 | 5.3843 |
| 5.5197 | 50.0 | 450 | 5.3843 |
| 5.4349 | 51.0 | 459 | 5.3843 |
| 5.4746 | 52.0 | 468 | 5.3843 |
| 5.5058 | 53.0 | 477 | 5.3843 |
| 5.5615 | 54.0 | 486 | 5.3843 |
| 5.5838 | 55.0 | 495 | 5.3843 |
| 5.6564 | 56.0 | 504 | 5.3843 |
| 5.6402 | 57.0 | 513 | 5.3843 |
| 5.6022 | 58.0 | 522 | 5.3843 |
| 5.6428 | 59.0 | 531 | 5.3843 |
| 5.6259 | 60.0 | 540 | 5.3843 |
| 5.6678 | 61.0 | 549 | 5.3843 |
| 5.6119 | 62.0 | 558 | 5.3843 |
| 5.614 | 63.0 | 567 | 5.3843 |
| 5.6349 | 64.0 | 576 | 5.3843 |
| 5.5935 | 65.0 | 585 | 5.3843 |
| 5.7087 | 66.0 | 594 | 5.3843 |
| 5.6243 | 67.0 | 603 | 5.3843 |
| 5.6718 | 68.0 | 612 | 5.3843 |
| 5.5945 | 69.0 | 621 | 5.3843 |
| 5.6609 | 70.0 | 630 | 5.3843 |
| 5.7069 | 71.0 | 639 | 5.3843 |
| 5.6578 | 72.0 | 648 | 5.3843 |
| 5.706 | 73.0 | 657 | 5.3843 |
| 5.7486 | 74.0 | 666 | 5.3843 |
| 5.5958 | 75.0 | 675 | 5.3843 |
| 5.6005 | 76.0 | 684 | 5.3843 |
| 5.6954 | 77.0 | 693 | 5.3843 |
| 5.6576 | 78.0 | 702 | 5.3843 |
| 5.6537 | 79.0 | 711 | 5.3843 |
| 5.6949 | 80.0 | 720 | 5.3843 |
| 5.7134 | 81.0 | 729 | 5.3843 |
| 5.7391 | 82.0 | 738 | 5.3843 |
| 5.5262 | 83.0 | 747 | 5.3843 |
| 5.7075 | 84.0 | 756 | 5.3843 |
| 5.6827 | 85.0 | 765 | 5.3843 |
| 5.6573 | 86.0 | 774 | 5.3843 |
| 5.738 | 87.0 | 783 | 5.3843 |
| 5.7347 | 88.0 | 792 | 5.3843 |
| 5.6938 | 89.0 | 801 | 5.3843 |
| 5.7081 | 90.0 | 810 | 5.3843 |
| 5.7208 | 91.0 | 819 | 5.3843 |
| 5.7367 | 92.0 | 828 | 5.3843 |
| 5.7761 | 93.0 | 837 | 5.3843 |
| 5.7187 | 94.0 | 846 | 5.3843 |
| 5.7559 | 95.0 | 855 | 5.3843 |
| 5.7001 | 96.0 | 864 | 5.3843 |
| 5.7402 | 97.0 | 873 | 5.3843 |
| 5.6641 | 98.0 | 882 | 5.3843 |
| 5.7209 | 99.0 | 891 | 5.3843 |
| 5.7791 | 100.0 | 900 | 5.3843 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
snehayadav123/my-pet-dog | snehayadav123 | 2024-03-06T14:18:53Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-06T14:14:46Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by snehayadav123 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 0206CS221203
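The concept can be loaded with the standard `StableDiffusionPipeline`. A minimal sketch (the instance prompt is a guess inferred from the model name and is not documented in this card):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "snehayadav123/my-pet-dog", torch_dtype=torch.float16
).to("cuda")

# The instance prompt is an assumption based on the model name
image = pipe("a photo of my pet dog playing in a park").images[0]
image.save("my_pet_dog.png")
```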
Sample pictures of this concept:
|
sanbongazin/willgpt-open-calm-1b | sanbongazin | 2024-03-06T14:09:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T11:01:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
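In the absence of author-provided code, a generic causal-LM sketch (the OpenCALM-style Japanese base model is assumed from the name `willgpt-open-calm-1b`):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sanbongazin/willgpt-open-calm-1b"  # architecture assumed from the name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("こんにちは、", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```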
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ljcnju/CodeLLaMA7bForAuthorship-Attribution-LoRA-Weights | ljcnju | 2024-03-06T13:59:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T13:56:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
```Python
import torch
from transformers import LlamaForSequenceClassification, CodeLlamaTokenizer
from peft import PeftModelForSequenceClassification

adapter_model = "ljcnju/CodeLLaMA7bForAuthorship-Attribution-LoRA-Weights"
base_model = "codellama/CodeLlama-7b-hf"

# Tokenizer from the adapter repo; "<|pad|>" is the padding token added during fine-tuning
tokenizer = CodeLlamaTokenizer.from_pretrained(adapter_model, model_max_length=1024, pad_token="<|pad|>")

# 66-way authorship classification head on top of the 8-bit base model
model = LlamaForSequenceClassification.from_pretrained(
    base_model,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    num_labels=66,
    device_map="auto",
)
model.config.pad_token_id = 32016
model = PeftModelForSequenceClassification.from_pretrained(model, adapter_model)
model.resize_token_embeddings(len(tokenizer))

code = "your python code"
inputs = tokenizer(code, padding="max_length", truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs)
```
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
macarious/torgo_xlsr_finetune_F04 | macarious | 2024-03-06T13:57:22Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-06T07:30:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: torgo_xlsr_finetune_F04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_xlsr_finetune_F04
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4132
- Wer: 0.2275
## Model description
More information needed
## Intended uses & limitations
More information needed
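As a minimal inference sketch, the checkpoint can be used with the `automatic-speech-recognition` pipeline (the audio path is a placeholder; XLSR-based wav2vec2 models expect 16 kHz mono audio):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="macarious/torgo_xlsr_finetune_F04",
)

# Placeholder path; supply a 16 kHz mono recording
print(asr("speech_sample.wav")["text"])
```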
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4699 | 0.85 | 1000 | 3.2861 | 1.0 |
| 2.1971 | 1.69 | 2000 | 2.0008 | 0.8514 |
| 0.9545 | 2.54 | 3000 | 1.4512 | 0.6358 |
| 0.6665 | 3.39 | 4000 | 1.4047 | 0.5008 |
| 0.5094 | 4.24 | 5000 | 1.3973 | 0.4457 |
| 0.4719 | 5.08 | 6000 | 1.4290 | 0.4066 |
| 0.4183 | 5.93 | 7000 | 1.4807 | 0.3761 |
| 0.3525 | 6.78 | 8000 | 1.5710 | 0.3667 |
| 0.3112 | 7.63 | 9000 | 1.4555 | 0.3268 |
| 0.2876 | 8.47 | 10000 | 1.4537 | 0.2988 |
| 0.2321 | 9.32 | 11000 | 1.6268 | 0.3200 |
| 0.2456 | 10.17 | 12000 | 1.3804 | 0.2852 |
| 0.2376 | 11.02 | 13000 | 1.6112 | 0.3141 |
| 0.2169 | 11.86 | 14000 | 1.4480 | 0.2988 |
| 0.2106 | 12.71 | 15000 | 1.6790 | 0.2929 |
| 0.2055 | 13.56 | 16000 | 1.5383 | 0.2963 |
| 0.1601 | 14.41 | 17000 | 1.4142 | 0.2555 |
| 0.1631 | 15.25 | 18000 | 1.5318 | 0.2470 |
| 0.1481 | 16.1 | 19000 | 1.6078 | 0.2453 |
| 0.1374 | 16.95 | 20000 | 1.3588 | 0.2360 |
| 0.1349 | 17.8 | 21000 | 1.3788 | 0.2309 |
| 0.1284 | 18.64 | 22000 | 1.4818 | 0.2326 |
| 0.1328 | 19.49 | 23000 | 1.4132 | 0.2275 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.13.3
|
peldrak/segformer-b4-cityscapes-finetuned-coastTrain | peldrak | 2024-03-06T13:57:14Z | 173 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/segformer-b4-finetuned-cityscapes-1024-1024",
"base_model:finetune:nvidia/segformer-b4-finetuned-cityscapes-1024-1024",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-03-02T11:43:20Z | ---
license: other
base_model: nvidia/segformer-b4-finetuned-cityscapes-1024-1024
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b4-cityscapes-finetuned-coastTrain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b4-cityscapes-finetuned-coastTrain
This model is a fine-tuned version of [nvidia/segformer-b4-finetuned-cityscapes-1024-1024](https://huggingface.co/nvidia/segformer-b4-finetuned-cityscapes-1024-1024) on the peldrak/coastTrain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2345
- Mean Iou: 0.7920
- Mean Accuracy: 0.8609
- Overall Accuracy: 0.9360
- Accuracy Water: 0.9603
- Accuracy Whitewater: 0.6503
- Accuracy Sediment: 0.8872
- Accuracy Other Natural Terrain: 0.6902
- Accuracy Vegetation: 0.9383
- Accuracy Development: 0.9340
- Accuracy Unknown: 0.9659
- Iou Water: 0.9194
- Iou Whitewater: 0.5196
- Iou Sediment: 0.8231
- Iou Other Natural Terrain: 0.6344
- Iou Vegetation: 0.8728
- Iou Development: 0.8571
- Iou Unknown: 0.9179
- F1 Score: 0.9354
## Model description
More information needed
## Intended uses & limitations
More information needed
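As a minimal inference sketch (the image path is a placeholder; the upsampling step follows the usual SegFormer recipe):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "peldrak/segformer-b4-cityscapes-finetuned-coastTrain"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("coast.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, height/4, width/4)

# Upsample to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # per-pixel class indices
```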
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Whitewater | Accuracy Sediment | Accuracy Other Natural Terrain | Accuracy Vegetation | Accuracy Development | Accuracy Unknown | Iou Water | Iou Whitewater | Iou Sediment | Iou Other Natural Terrain | Iou Vegetation | Iou Development | Iou Unknown | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:-------------------:|:-----------------:|:------------------------------:|:-------------------:|:--------------------:|:----------------:|:---------:|:--------------:|:------------:|:-------------------------:|:--------------:|:---------------:|:-----------:|:--------:|
| 1.6229 | 0.16 | 20 | 1.5080 | 0.3461 | 0.4464 | 0.6629 | 0.7064 | 0.0117 | 0.3677 | 0.0094 | 0.9033 | 0.4311 | 0.6952 | 0.6022 | 0.0115 | 0.2505 | 0.0054 | 0.5058 | 0.3657 | 0.6818 | 0.6578 |
| 1.6972 | 0.31 | 40 | 1.1157 | 0.4168 | 0.5175 | 0.7412 | 0.7367 | 0.0 | 0.5983 | 0.0000 | 0.9738 | 0.5606 | 0.7528 | 0.6767 | 0.0 | 0.4343 | 0.0000 | 0.5731 | 0.4817 | 0.7519 | 0.7355 |
| 1.1969 | 0.47 | 60 | 0.8521 | 0.4777 | 0.5606 | 0.8157 | 0.8778 | 0.0001 | 0.5850 | 0.0000 | 0.9674 | 0.5962 | 0.8978 | 0.7865 | 0.0001 | 0.4970 | 0.0000 | 0.6699 | 0.4942 | 0.8960 | 0.8025 |
| 1.2209 | 0.62 | 80 | 0.6990 | 0.5024 | 0.6009 | 0.8277 | 0.8306 | 0.0 | 0.7406 | 0.0 | 0.9174 | 0.7970 | 0.9208 | 0.7942 | 0.0 | 0.5641 | 0.0 | 0.6800 | 0.5612 | 0.9175 | 0.8201 |
| 0.793 | 0.78 | 100 | 0.5543 | 0.5256 | 0.6016 | 0.8590 | 0.9409 | 0.0 | 0.6744 | 0.0 | 0.9365 | 0.7280 | 0.9312 | 0.8480 | 0.0 | 0.5903 | 0.0 | 0.7405 | 0.5737 | 0.9264 | 0.8458 |
| 1.1597 | 0.93 | 120 | 0.4957 | 0.5524 | 0.6231 | 0.8720 | 0.9201 | 0.0 | 0.7323 | 0.0 | 0.9560 | 0.8134 | 0.9401 | 0.8588 | 0.0 | 0.6428 | 0.0 | 0.7344 | 0.6976 | 0.9335 | 0.8602 |
| 0.689 | 1.09 | 140 | 0.4376 | 0.5692 | 0.6374 | 0.8844 | 0.9221 | 0.0 | 0.7930 | 0.0000 | 0.9586 | 0.8469 | 0.9411 | 0.8748 | 0.0 | 0.6899 | 0.0000 | 0.7542 | 0.7310 | 0.9348 | 0.8730 |
| 0.9423 | 1.24 | 160 | 0.3968 | 0.5715 | 0.6433 | 0.8882 | 0.9328 | 0.0000 | 0.8273 | 0.0 | 0.9201 | 0.8664 | 0.9564 | 0.8794 | 0.0000 | 0.6963 | 0.0 | 0.7687 | 0.7135 | 0.9424 | 0.8766 |
| 1.2176 | 1.4 | 180 | 0.3838 | 0.5673 | 0.6397 | 0.8811 | 0.9107 | 0.0000 | 0.7816 | 0.0 | 0.9461 | 0.8832 | 0.9561 | 0.8578 | 0.0000 | 0.6427 | 0.0 | 0.7728 | 0.7550 | 0.9429 | 0.8695 |
| 0.4714 | 1.55 | 200 | 0.4459 | 0.5380 | 0.6092 | 0.8570 | 0.9730 | 0.0 | 0.5500 | 0.0 | 0.8806 | 0.9121 | 0.9490 | 0.7998 | 0.0 | 0.4883 | 0.0 | 0.7722 | 0.7618 | 0.9442 | 0.8398 |
| 0.5087 | 1.71 | 220 | 0.4062 | 0.5677 | 0.6359 | 0.8827 | 0.9365 | 0.0008 | 0.7216 | 0.0 | 0.9511 | 0.8918 | 0.9499 | 0.8844 | 0.0008 | 0.6722 | 0.0 | 0.7302 | 0.7463 | 0.9402 | 0.8709 |
| 0.484 | 1.86 | 240 | 0.3121 | 0.5926 | 0.6518 | 0.9017 | 0.9688 | 0.0001 | 0.7858 | 0.0 | 0.9341 | 0.9243 | 0.9498 | 0.8788 | 0.0001 | 0.7236 | 0.0 | 0.8069 | 0.7972 | 0.9420 | 0.8886 |
| 0.443 | 2.02 | 260 | 0.3554 | 0.5811 | 0.6575 | 0.8904 | 0.8939 | 0.0001 | 0.8842 | 0.0 | 0.9316 | 0.9332 | 0.9599 | 0.8541 | 0.0001 | 0.6697 | 0.0 | 0.8205 | 0.7724 | 0.9512 | 0.8798 |
| 0.466 | 2.17 | 280 | 0.3265 | 0.5830 | 0.6553 | 0.8954 | 0.9347 | 0.0021 | 0.8131 | 0.0 | 0.9256 | 0.9487 | 0.9627 | 0.8786 | 0.0021 | 0.6993 | 0.0 | 0.7950 | 0.7566 | 0.9494 | 0.8833 |
| 0.7117 | 2.33 | 300 | 0.4096 | 0.5634 | 0.6494 | 0.8672 | 0.8367 | 0.0076 | 0.9133 | 0.0 | 0.9048 | 0.9233 | 0.9600 | 0.8000 | 0.0076 | 0.5985 | 0.0 | 0.8080 | 0.7798 | 0.9503 | 0.8595 |
| 0.3095 | 2.48 | 320 | 0.3111 | 0.5858 | 0.6494 | 0.8986 | 0.9655 | 0.0070 | 0.7517 | 0.0 | 0.9363 | 0.9210 | 0.9644 | 0.8850 | 0.0070 | 0.6808 | 0.0 | 0.8089 | 0.7790 | 0.9399 | 0.8851 |
| 0.3843 | 2.64 | 340 | 0.3076 | 0.6033 | 0.6589 | 0.9050 | 0.9648 | 0.0448 | 0.7872 | 0.0 | 0.9478 | 0.9052 | 0.9628 | 0.8786 | 0.0443 | 0.6973 | 0.0 | 0.8322 | 0.8187 | 0.9516 | 0.8924 |
| 0.4158 | 2.79 | 360 | 0.2985 | 0.5965 | 0.6553 | 0.9026 | 0.9557 | 0.0266 | 0.8130 | 0.0 | 0.9561 | 0.8939 | 0.9415 | 0.8820 | 0.0265 | 0.7245 | 0.0 | 0.8147 | 0.7892 | 0.9386 | 0.8903 |
| 0.3492 | 2.95 | 380 | 0.2709 | 0.6251 | 0.6863 | 0.9126 | 0.9524 | 0.1545 | 0.8818 | 0.0 | 0.9347 | 0.9218 | 0.9587 | 0.9003 | 0.1514 | 0.7515 | 0.0 | 0.8313 | 0.7895 | 0.9520 | 0.9026 |
| 0.2384 | 3.1 | 400 | 0.2531 | 0.6348 | 0.6950 | 0.9169 | 0.9541 | 0.1876 | 0.8731 | 0.0015 | 0.9446 | 0.9407 | 0.9632 | 0.9061 | 0.1830 | 0.7663 | 0.0015 | 0.8395 | 0.7944 | 0.9524 | 0.9071 |
| 0.2227 | 3.26 | 420 | 0.2772 | 0.6388 | 0.6939 | 0.9186 | 0.9648 | 0.1767 | 0.9028 | 0.0013 | 0.9230 | 0.9247 | 0.9637 | 0.9015 | 0.1735 | 0.7597 | 0.0013 | 0.8503 | 0.8312 | 0.9544 | 0.9087 |
| 0.6677 | 3.41 | 440 | 0.2861 | 0.6306 | 0.6874 | 0.9091 | 0.9640 | 0.1876 | 0.7780 | 0.0559 | 0.9635 | 0.9022 | 0.9610 | 0.8965 | 0.1838 | 0.7228 | 0.0558 | 0.8215 | 0.7825 | 0.9509 | 0.8996 |
| 0.3552 | 3.57 | 460 | 0.2795 | 0.6408 | 0.6945 | 0.9130 | 0.9641 | 0.2606 | 0.8234 | 0.0001 | 0.9487 | 0.8958 | 0.9690 | 0.8932 | 0.2536 | 0.7201 | 0.0001 | 0.8427 | 0.8204 | 0.9553 | 0.9032 |
| 0.3258 | 3.72 | 480 | 0.3075 | 0.6306 | 0.6891 | 0.9042 | 0.9730 | 0.2962 | 0.7091 | 0.0068 | 0.9628 | 0.9131 | 0.9625 | 0.8945 | 0.2617 | 0.6816 | 0.0068 | 0.8003 | 0.8150 | 0.9546 | 0.8939 |
| 0.4778 | 3.88 | 500 | 0.2449 | 0.6570 | 0.7137 | 0.9188 | 0.9583 | 0.2928 | 0.8807 | 0.0347 | 0.9383 | 0.9280 | 0.9633 | 0.9028 | 0.2768 | 0.7677 | 0.0347 | 0.8407 | 0.8223 | 0.9541 | 0.9106 |
| 0.4817 | 4.03 | 520 | 0.2365 | 0.6790 | 0.7381 | 0.9235 | 0.9611 | 0.4327 | 0.8871 | 0.0484 | 0.9393 | 0.9328 | 0.9651 | 0.9121 | 0.3866 | 0.7842 | 0.0483 | 0.8459 | 0.8230 | 0.9529 | 0.9165 |
| 0.3363 | 4.19 | 540 | 0.2273 | 0.6783 | 0.7315 | 0.9243 | 0.9635 | 0.4529 | 0.8805 | 0.0058 | 0.9546 | 0.8945 | 0.9685 | 0.9131 | 0.4101 | 0.7847 | 0.0058 | 0.8451 | 0.8339 | 0.9554 | 0.9163 |
| 0.4825 | 4.34 | 560 | 0.2615 | 0.6791 | 0.7482 | 0.9180 | 0.9406 | 0.5008 | 0.9124 | 0.0441 | 0.9206 | 0.9512 | 0.9675 | 0.8996 | 0.4385 | 0.7400 | 0.0441 | 0.8552 | 0.8198 | 0.9562 | 0.9119 |
| 0.3482 | 4.5 | 580 | 0.2336 | 0.6965 | 0.7537 | 0.9276 | 0.9695 | 0.4845 | 0.8611 | 0.1015 | 0.9492 | 0.9455 | 0.9648 | 0.9193 | 0.4258 | 0.7901 | 0.1006 | 0.8498 | 0.8349 | 0.9550 | 0.9215 |
| 0.5311 | 4.65 | 600 | 0.2592 | 0.6858 | 0.7484 | 0.9200 | 0.9477 | 0.4867 | 0.8797 | 0.0974 | 0.9463 | 0.9109 | 0.9703 | 0.9136 | 0.4329 | 0.7687 | 0.0970 | 0.8438 | 0.8232 | 0.9215 | 0.9142 |
| 0.3754 | 4.81 | 620 | 0.2345 | 0.7039 | 0.7629 | 0.9265 | 0.9641 | 0.5201 | 0.8692 | 0.1376 | 0.9387 | 0.9338 | 0.9769 | 0.9233 | 0.4557 | 0.7998 | 0.1369 | 0.8395 | 0.8404 | 0.9315 | 0.9211 |
| 0.236 | 4.96 | 640 | 0.2342 | 0.7061 | 0.7604 | 0.9268 | 0.9669 | 0.4754 | 0.8856 | 0.1790 | 0.9329 | 0.9039 | 0.9790 | 0.9218 | 0.4330 | 0.8088 | 0.1763 | 0.8374 | 0.8329 | 0.9327 | 0.9218 |
| 0.3496 | 5.12 | 660 | 0.2061 | 0.7264 | 0.7870 | 0.9313 | 0.9622 | 0.5779 | 0.9263 | 0.2075 | 0.9189 | 0.9404 | 0.9757 | 0.9243 | 0.5092 | 0.8247 | 0.2026 | 0.8525 | 0.8402 | 0.9315 | 0.9273 |
| 0.1729 | 5.27 | 680 | 0.2289 | 0.7086 | 0.7793 | 0.9245 | 0.9541 | 0.6032 | 0.8708 | 0.1744 | 0.9351 | 0.9392 | 0.9781 | 0.9202 | 0.4789 | 0.7981 | 0.1706 | 0.8377 | 0.8265 | 0.9282 | 0.9203 |
| 0.2636 | 5.43 | 700 | 0.2071 | 0.7739 | 0.8389 | 0.9335 | 0.9623 | 0.6448 | 0.8970 | 0.5348 | 0.9262 | 0.9396 | 0.9676 | 0.9245 | 0.5537 | 0.8125 | 0.4941 | 0.8551 | 0.8473 | 0.9304 | 0.9324 |
| 0.1594 | 5.58 | 720 | 0.2175 | 0.7447 | 0.8114 | 0.9284 | 0.9632 | 0.6275 | 0.8570 | 0.3850 | 0.9306 | 0.9383 | 0.9783 | 0.9172 | 0.5130 | 0.7999 | 0.3765 | 0.8467 | 0.8207 | 0.9389 | 0.9262 |
| 0.6799 | 5.74 | 740 | 0.1965 | 0.7704 | 0.8330 | 0.9379 | 0.9650 | 0.6576 | 0.9113 | 0.4469 | 0.9289 | 0.9430 | 0.9783 | 0.9303 | 0.5574 | 0.8353 | 0.4280 | 0.8695 | 0.8384 | 0.9338 | 0.9364 |
| 0.4955 | 5.89 | 760 | 0.2184 | 0.7480 | 0.8065 | 0.9318 | 0.9569 | 0.5099 | 0.9085 | 0.4344 | 0.9266 | 0.9255 | 0.9841 | 0.9146 | 0.4472 | 0.8109 | 0.4160 | 0.8654 | 0.8514 | 0.9308 | 0.9297 |
| 0.218 | 6.05 | 780 | 0.2025 | 0.7870 | 0.8508 | 0.9364 | 0.9678 | 0.5859 | 0.8690 | 0.6870 | 0.9368 | 0.9360 | 0.9732 | 0.9282 | 0.4918 | 0.8198 | 0.6437 | 0.8651 | 0.8324 | 0.9282 | 0.9355 |
| 0.4344 | 6.2 | 800 | 0.2128 | 0.7816 | 0.8399 | 0.9361 | 0.9620 | 0.6012 | 0.8908 | 0.5972 | 0.9446 | 0.9092 | 0.9743 | 0.9243 | 0.5193 | 0.8213 | 0.5626 | 0.8684 | 0.8520 | 0.9233 | 0.9350 |
| 0.5841 | 6.36 | 820 | 0.2412 | 0.7965 | 0.8614 | 0.9378 | 0.9587 | 0.6847 | 0.8888 | 0.6398 | 0.9360 | 0.9408 | 0.9808 | 0.9254 | 0.5838 | 0.8259 | 0.6002 | 0.8758 | 0.8440 | 0.9204 | 0.9371 |
| 0.3048 | 6.51 | 840 | 0.2336 | 0.7869 | 0.8580 | 0.9344 | 0.9576 | 0.6990 | 0.8928 | 0.6175 | 0.9281 | 0.9381 | 0.9728 | 0.9160 | 0.5612 | 0.8190 | 0.5724 | 0.8720 | 0.8484 | 0.9193 | 0.9337 |
| 0.2002 | 6.67 | 860 | 0.2373 | 0.7929 | 0.8605 | 0.9343 | 0.9512 | 0.6492 | 0.8691 | 0.6968 | 0.9424 | 0.9290 | 0.9861 | 0.9216 | 0.5520 | 0.8087 | 0.6401 | 0.8685 | 0.8437 | 0.9155 | 0.9337 |
| 0.2093 | 6.82 | 880 | 0.2335 | 0.7918 | 0.8528 | 0.9364 | 0.9649 | 0.6226 | 0.8682 | 0.6653 | 0.9414 | 0.9311 | 0.9758 | 0.9210 | 0.5272 | 0.8166 | 0.6269 | 0.8720 | 0.8550 | 0.9235 | 0.9355 |
| 0.1581 | 6.98 | 900 | 0.2279 | 0.7995 | 0.8786 | 0.9368 | 0.9509 | 0.7124 | 0.8959 | 0.7476 | 0.9347 | 0.9273 | 0.9812 | 0.9159 | 0.5266 | 0.8244 | 0.6672 | 0.8800 | 0.8627 | 0.9198 | 0.9367 |
| 0.1209 | 7.13 | 920 | 0.2432 | 0.7832 | 0.8475 | 0.9327 | 0.9548 | 0.5902 | 0.8897 | 0.6726 | 0.9357 | 0.9173 | 0.9725 | 0.9104 | 0.5047 | 0.8096 | 0.6187 | 0.8725 | 0.8463 | 0.9201 | 0.9319 |
| 0.1492 | 7.29 | 940 | 0.2345 | 0.7920 | 0.8609 | 0.9360 | 0.9603 | 0.6503 | 0.8872 | 0.6902 | 0.9383 | 0.9340 | 0.9659 | 0.9194 | 0.5196 | 0.8231 | 0.6344 | 0.8728 | 0.8571 | 0.9179 | 0.9354 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.1
|
com3dian/Bart-large-paper2slides-expander | com3dian | 2024-03-06T13:52:03Z | 43 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"arxiv:1711.00043",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-13T15:35:11Z | ---
language:
- en
widget:
- text: >
Bag-of-feature representations can be described by analogy to bag-of-words
representations.
- text: >
Self-attention is an attention mechanism relating different positions of a
single sequence in order to compute a representation of the sequence.
license:
- mit
pipeline_tag: text2text-generation
datasets:
- cnn_dailymail
---
# Bart-Large Expansion Model

This repository contains the **Bart-Large-paper2slides-expander Model**, which has been pre-trained on the CNN/DailyMail dataset and fine-tuned on the [Automatic Slide Generation from Scientific Papers dataset](https://www.kaggle.com/datasets/andrewmvd/automatic-slide-generation-from-scientific-papers) with unsupervised learning techniques, using an algorithm from the paper '[Unsupervised Machine Translation Using Monolingual Corpora Only](https://arxiv.org/abs/1711.00043)'.
Its primary focus is to expand **scientific text**, producing alternative, expanded versions with improved clarity and accuracy. It is trained in parallel with the [**Bart-Large-paper2slides-summarizer Model**](https://huggingface.co/h1t/TCD-SDXL-LoRA/) from the same contributor.
## Model Details
- **Model Architecture**: Bart-Large
- **Fine-tuning Dataset**: [Automatic Slide Generation from Scientific Papers](https://www.kaggle.com/datasets/andrewmvd/automatic-slide-generation-from-scientific-papers)
- **Fine-tuning Method**: Unsupervised Learning
[Bart](https://huggingface.co/transformers/model_doc/bart.html) (Bidirectional and Auto-Regressive Transformers) is a sequence-to-sequence (seq2seq) model developed by Facebook AI Research. It has shown exceptional performance in various natural language processing (NLP) tasks such as text summarization, text generation, and machine translation.
This particular model, Bart-Large, is the larger version of the Bart model. It consists of 12 encoder and decoder layers and has a total of 400 million parameters.
## Usage
To use this model, you can leverage the Hugging Face [Transformers](https://huggingface.co/transformers/) library. Here's an example of how to use it in Python:
```python
from transformers import BartTokenizer, BartForConditionalGeneration, pipeline
# Load the model and tokenizer
model_name = "com3dian/Bart-large-paper2slides-expander"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
# Generate summary from input text
input_text = "Your input text here..."
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids)
# Decode generated summaries
expanded_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(expanded_text)
# Or using the pipeline API
expander = pipeline("text2text-generation", model=model_name)
expanded_text = expander(input_text, max_length=50, min_length=30, do_sample=False)
print(expanded_text)
```
Ensure you have the `transformers` library installed before running the code. You can install it using `pip`:
```
pip install transformers
```
## Model Fine-tuning Details
The fine-tuning process for this model involved training on the slide generation dataset using unsupervised learning techniques. Unsupervised learning refers to training a model without explicit human-labeled targets. Instead, the model learns to back-expand summaries produced by the summarization model into the original texts.
The specific hyperparameters and training details used for fine-tuning this model are as follows:
- Batch Size: 4
- Learning Rate: 2e-6
- Training Steps: 3*7
- Optimizer: AdamW
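Conceptually, one round-trip training step looks roughly like the sketch below. This is a heavily simplified illustration of the back-expansion idea, not the actual training script, and the checkpoint names are stand-ins:
```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
expander = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
summarizer = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
optimizer = torch.optim.AdamW(expander.parameters(), lr=2e-6)

original = "One paragraph of a scientific paper..."

# 1) Summarize the original text with the (frozen) summarization model
enc = tokenizer(original, return_tensors="pt", truncation=True)
with torch.no_grad():
    summary_ids = summarizer.generate(**enc, max_length=64)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# 2) Train the expander to reconstruct the original text from its summary
batch = tokenizer(summary, text_target=original, return_tensors="pt", truncation=True)
loss = expander(**batch).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```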
## Acknowledgments
We would like to acknowledge the authors of the Bart model and the creators of the slide generation dataset for their valuable contributions, which have enabled the development of this fine-tuned model.
If you use this model or find it helpful in your work, please consider citing the original Bart model, the slide generation dataset, and [this paper](https://studenttheses.uu.nl/handle/20.500.12932/45939) to provide proper credit to the respective authors.
## License
This model and the associated code are released under the [MIT license](https://opensource.org/license/mit/). |
com3dian/Bart-large-paper2slides-summarizer | com3dian | 2024-03-06T13:50:56Z | 329 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"en",
"arxiv:1711.00043",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-07-10T11:03:25Z | ---
language:
- en
tags:
- summarization
widget:
- text: |
We here recount the main elements of a classic bag-of-features model before introducing the simpler DNN-based BagNets in the next paragraph. Bag-of-feature representations can be described by analogy to bag-of-words representations. With bag-of-words, one counts the number of occurrences of words from a vocabulary in a document. This vocabulary contains important words (but not common ones like "and" or "the") and word clusters (i.e. semantically similar words like "gigantic" and "enormous" are subsumed). The counts of each word in the vocabulary are assembled as one long term vector. This is called the bag-of-words document representation because all ordering of the words is lost. Likewise, bag-of-feature representations are based on a vocabulary of visual words which represent clusters of local image features. The term vector for an image is then simply the number of occurrences of each visual word in the vocabulary. This term vector is used as an input to a classifier (e.g. SVM or MLP). Many successful image classification models have been based on this pipeline (Csurka et al., 2004; Jurie & Triggs, 2005; Zhang et al., 2007; Lazebnik et al., 2006), see O’Hara & Draper (2011) for an up-to-date overview.
- text: |
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2.
Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 27, 28, 22].
End-to-end memory networks are based on a recurrent attention mechanism instead of sequencealigned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [34].
To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequencealigned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [17, 18] and [9].
license:
- mit
pipeline_tag: summarization
---
# Bart-Large Summarization Model

This repository contains the **Bart-Large-paper2slides-summarizer Model**, which has been fine-tuned on the [Automatic Slide Generation from Scientific Papers dataset](https://www.kaggle.com/datasets/andrewmvd/automatic-slide-generation-from-scientific-papers) with unsupervised learning techniques, using an algorithm from the paper '[Unsupervised Machine Translation Using Monolingual Corpora Only](https://arxiv.org/abs/1711.00043)'.
Its primary focus is to summarize **scientific texts** with precision and accuracy. It is trained in parallel with the [**Bart-large-paper2slides-expander**](https://huggingface.co/com3dian/Bart-large-paper2slides-expander) from the same contributor.
## Model Details
- **Model Architecture**: Bart-Large
- **Fine-tuning Dataset**: [Automatic Slide Generation from Scientific Papers](https://www.kaggle.com/datasets/andrewmvd/automatic-slide-generation-from-scientific-papers)
- **Fine-tuning Method**: Unsupervised Learning
[Bart](https://huggingface.co/transformers/model_doc/bart.html) (Bidirectional and Auto-Regressive Transformers) is a sequence-to-sequence (seq2seq) model developed by Facebook AI Research. It has shown exceptional performance in various natural language processing (NLP) tasks such as text summarization, text generation, and machine translation.
This particular model, Bart-Large, is the larger version of the Bart model. It consists of 12 encoder and decoder layers and has a total of 400 million parameters.
## Usage
To use this model, you can leverage the Hugging Face [Transformers](https://huggingface.co/transformers/) library. Here's an example of how to use it in Python:
```python
from transformers import BartTokenizer, BartForConditionalGeneration, pipeline
# Load the model and tokenizer
model_name = "com3dian/Bart-large-paper2slides-summarizer"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
# Generate summary from input text
input_text = "Your input text here..."
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids)
# Decode generated summaries
summary = tokenizer.decode(output[0], skip_special_tokens=True)
print(summary)
# Or using the pipeline API
summarizer = pipeline("summarization", model=model_name)
summary = summarizer(input_text, max_length=50, min_length=30, do_sample=False)
print(summary)
```
Ensure you have the `transformers` library installed before running the code. You can install it using `pip`:
```
pip install transformers
```
## Model Fine-tuning Details
The fine-tuning process for this model involved training on the slide generation dataset using unsupervised learning techniques. Unsupervised learning refers to training a model without explicit human-labeled targets. Instead, the model learns to back-summarize expanded texts produced by the expansion model into the original texts.
The specific hyperparameters and training details used for fine-tuning this model are as follows:
- Batch Size: 4
- Learning Rate: 2e-6
- Training Steps: 3*7
- Optimizer: AdamW
## Model Performance
The Bart-Large Slide Generation Model has undergone thorough human evaluation in a wide range of scientific domains, including AI, mathematics, statistics, history, geography, and climate science, to compare its performance with the [Bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) model.
## Acknowledgments
We would like to acknowledge the authors of the Bart model and the creators of the slide generation dataset for their valuable contributions, which have enabled the development of this fine-tuned model.
If you use this model or find it helpful in your work, please consider citing the original Bart model, the slide generation dataset, and [this paper](https://studenttheses.uu.nl/handle/20.500.12932/45939) to provide proper credit to the respective authors.
## License
This model and the associated code are released under the [MIT license](https://opensource.org/license/mit/). |
ZainAli60/mine_modeles | ZainAli60 | 2024-03-06T13:48:36Z | 40 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T12:36:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
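Pending details from the authors, a generic text-generation sketch (the prompt is illustrative; no chat template is documented in this card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZainAli60/mine_modeles"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```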
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ljcnju/DeepSeek7bForAuthorship-Attribution-LoRA-Weights | ljcnju | 2024-03-06T13:46:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T13:15:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
```Python
import torch
from peft import PeftModelForSequenceClassification
from transformers import AutoTokenizer, AutoModelForSequenceClassification

basemodel = "deepseek-ai/deepseek-coder-6.7b-base"
adapter_model = "ljcnju/DeepSeek7bForAuthorship-Attribution-LoRA-Weights"

# 66-way authorship classification head on top of the 8-bit base model
model = AutoModelForSequenceClassification.from_pretrained(
    basemodel,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    num_labels=66,
    device_map="auto",
)
# Sequence-classification PEFT wrapper, matching the task (rather than the causal-LM wrapper)
model = PeftModelForSequenceClassification.from_pretrained(model, adapter_model)
tokenizer = AutoTokenizer.from_pretrained(adapter_model)

code = "your python code"
inputs = tokenizer(code, padding="max_length", truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs)
```
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AmineSaidi-ISTIC/phi-2-finetuned-gsm8k | AmineSaidi-ISTIC | 2024-03-06T13:46:14Z | 49 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-05T12:27:31Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-finetuned-gsm8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned-gsm8k
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unspecified dataset.
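Since the card has not been completed yet, here is a minimal loading sketch. It is an assumption based only on the PEFT library tag and the base model named above, and has not been verified against this adapter:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the phi-2 base model, then attach the adapter from this repository
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
model = PeftModel.from_pretrained(base, "AmineSaidi-ISTIC/phi-2-finetuned-gsm8k")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# GSM8K-style math word problem (hypothetical example prompt)
prompt = "Question: A train travels 60 km in 1.5 hours. What is its average speed?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```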
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2 |
ljcnju/CodeBertForAuthorship-Attribution | ljcnju | 2024-03-06T13:45:58Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T13:04:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
```Python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

basemodel = "ljcnju/CodeBertForAuthorship-Attribution"
tokenizer = RobertaTokenizer.from_pretrained(basemodel)
model = RobertaForSequenceClassification.from_pretrained(basemodel, num_labels=66)

code = "your python code"
inputs = tokenizer(code, padding="max_length", truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs)  # logits over the 66 author classes
```
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QEQ1996/wrt | QEQ1996 | 2024-03-06T13:45:37Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-06T13:45:37Z | ---
license: creativeml-openrail-m
---
|
vgkienzler/mistral7binstruct_summarize | vgkienzler | 2024-03-06T13:45:27Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-02-29T20:33:36Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral7binstruct_summarize
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7binstruct_summarize
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4695
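As a starting point, a minimal inference sketch (the adapter repo id is taken from this card; the `[INST]` wrapper is the standard Mistral-Instruct format, and the summarization prompt is an assumption):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", device_map="auto")
model = PeftModel.from_pretrained(base, "vgkienzler/mistral7binstruct_summarize")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Hypothetical summarization prompt in the Mistral-Instruct chat format
prompt = "[INST] Summarize the following text:\n\n<your text here> [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```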
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6531 | 0.22 | 25 | 1.5586 |
| 1.5528 | 0.43 | 50 | 1.4695 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
biololab/symptom_extraction | biololab | 2024-03-06T13:34:46Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:vgaraujov/t5-base-spanish",
"base_model:finetune:vgaraujov/t5-base-spanish",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-05T21:40:46Z | ---
license: apache-2.0
base_model: vgaraujov/t5-base-spanish
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: symptom_extraction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# symptom_extraction
This model is a fine-tuned version of [vgaraujov/t5-base-spanish](https://huggingface.co/vgaraujov/t5-base-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1424
- Rouge1: 0.3849
- Rouge2: 0.3231
- Rougel: 0.3816
- Rougelsum: 0.3814
- Gen Len: 19.0
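For reference, a minimal inference sketch (the Spanish input sentence is hypothetical, and the absence of a task prefix is an assumption, since the training data is not documented):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("biololab/symptom_extraction")
model = AutoModelForSeq2SeqLM.from_pretrained("biololab/symptom_extraction")

# Hypothetical Spanish clinical note to extract symptoms from
text = "El paciente refiere fiebre alta y tos seca desde hace tres días."
inputs = tokenizer(text, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)  # Gen Len above is ~19 tokens
print(tokenizer.decode(out[0], skip_special_tokens=True))
```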
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.2061 | 1.0 | 640 | 0.1686 | 0.3824 | 0.3166 | 0.3784 | 0.3782 | 19.0 |
| 0.195 | 2.0 | 1280 | 0.1515 | 0.3841 | 0.3204 | 0.3803 | 0.3801 | 19.0 |
| 0.2035 | 3.0 | 1920 | 0.1448 | 0.3851 | 0.3226 | 0.3817 | 0.3815 | 19.0 |
| 0.1784 | 4.0 | 2560 | 0.1424 | 0.3849 | 0.3231 | 0.3816 | 0.3814 | 19.0 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
otterpupp/spin-diffusion-v3-sd15 | otterpupp | 2024-03-06T13:29:44Z | 0 | 1 | null | [
"text-to-image",
"en",
"dataset:yuvalkirstain/pickapic_v2",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-03-04T12:30:23Z | ---
license: apache-2.0
datasets:
- yuvalkirstain/pickapic_v2
language:
- en
pipeline_tag: text-to-image
--- |
khursani8/zzzz | khursani8 | 2024-03-06T13:28:42Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T13:27:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rdp99/deberta-v3-small-finetuned-rte | rdp99 | 2024-03-06T13:25:46Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-small",
"base_model:finetune:microsoft/deberta-v3-small",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T13:24:58Z | ---
license: mit
base_model: microsoft/deberta-v3-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-small-finetuned-rte
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-finetuned-rte
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5536
- Accuracy: 0.7762
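A minimal inference sketch (RTE is a two-sentence entailment task; the example pair is hypothetical, and the meaning of the predicted index should be checked against `model.config.id2label`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "rdp99/deberta-v3-small-finetuned-rte"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# RTE input: a premise and a hypothesis
enc = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(-1).item()
print(pred)  # consult model.config.id2label for the label name
```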
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.5754 | 0.7040 |
| No log | 2.0 | 312 | 0.5536 | 0.7762 |
| No log | 3.0 | 468 | 0.6493 | 0.7473 |
| 0.4688 | 4.0 | 624 | 0.9047 | 0.7545 |
| 0.4688 | 5.0 | 780 | 0.9528 | 0.7581 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
heyoiamanishaaa/my-soft-toy | heyoiamanishaaa | 2024-03-06T13:23:23Z | 5 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-06T13:19:21Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Soft-Toy Dreambooth model trained by heyoiamanishaaa following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
.jpg)
.jpg)
.jpg)
|
Lambent/CosmoAlpacaLight-1b | Lambent | 2024-03-06T13:22:47Z | 1 | 0 | peft | [
"peft",
"pytorch",
"llama",
"generated_from_trainer",
"dataset:vicgalle/alpaca-gpt4",
"base_model:HuggingFaceTB/cosmo-1b",
"base_model:adapter:HuggingFaceTB/cosmo-1b",
"license:cc",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-02-27T15:27:15Z | ---
license: cc
library_name: peft
tags:
- generated_from_trainer
base_model: HuggingFaceTB/cosmo-1b
model-index:
- name: lora-out
results: []
datasets:
- vicgalle/alpaca-gpt4
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: HuggingFaceTB/cosmo-1b
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: vicgalle/alpaca-gpt4
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./lora-out
sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 64
lora_alpha: 16
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: CosmoAlpacaLight-1b-v0.1
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# lora-out
This model is a fine-tuned version of [HuggingFaceTB/cosmo-1b](https://huggingface.co/HuggingFaceTB/cosmo-1b) on the alpaca-gpt4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0717
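A minimal inference sketch, assuming this repository hosts the LoRA adapter produced by the run above (the Alpaca prompt template matches the `type: alpaca` dataset setting in the config):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/cosmo-1b")
model = PeftModel.from_pretrained(base, "Lambent/CosmoAlpacaLight-1b")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/cosmo-1b")

# Alpaca-style prompt, matching the dataset format used for training
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```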
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0447 | 1.0 | 662 | 1.0717 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.17.1
- Tokenizers 0.15.0 |
Lambent/cosmo-rag-1b-v0.1 | Lambent | 2024-03-06T13:19:48Z | 1 | 0 | peft | [
"peft",
"pytorch",
"llama",
"generated_from_trainer",
"base_model:HuggingFaceTB/cosmo-1b",
"base_model:adapter:HuggingFaceTB/cosmo-1b",
"license:apache-2.0",
"region:us"
] | null | 2024-02-29T15:32:50Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: HuggingFaceTB/cosmo-1b
model-index:
- name: rag-lora-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: HuggingFaceTB/cosmo-1b
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: neural-bridge/rag-dataset-12000
type: context_qa.load_v2
- path: neural-bridge/rag-hallucination-dataset-1000
type: context_qa.load_v2
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./rag-lora-out
sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: Cosmo-1b-RAG-v0.1
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 8
eval_batch_size: 8
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# rag-lora-out
This model is a fine-tuned version of [HuggingFaceTB/cosmo-1b](https://huggingface.co/HuggingFaceTB/cosmo-1b) on the neural-bridge RAG and RAG-hallucination datasets listed in the config above.
It achieves the following results on the evaluation set:
- Loss: 0.6086
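A minimal inference sketch, assuming this repository hosts the LoRA adapter from the run above (the Context/Question/Answer prompt shape is an assumption based on the `context_qa` dataset type in the config):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/cosmo-1b")
model = PeftModel.from_pretrained(base, "Lambent/cosmo-rag-1b-v0.1")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/cosmo-1b")

# Hypothetical retrieval-augmented prompt
prompt = "Context: The Eiffel Tower is in Paris.\nQuestion: Where is the Eiffel Tower?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```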
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5873 | 1.02 | 148 | 0.6392 |
| 0.4513 | 2.02 | 296 | 0.6006 |
| 0.422 | 2.95 | 435 | 0.6086 |
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.39.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0 |
Divyanshu04/LLM3 | Divyanshu04 | 2024-03-06T13:17:20Z | 9 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T13:09:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
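Until the authors provide official instructions, here is a minimal, unverified loading sketch inferred only from the repository tags (`mistral`, `text-generation`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Divyanshu04/LLM3", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Divyanshu04/LLM3")

# Hypothetical prompt; the expected prompt format for this checkpoint is unknown
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```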
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tjmooney98/725_tm-setfit-paraphrase-mpnet-base-v2 | tjmooney98 | 2024-03-06T13:17:16Z | 4 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | 2024-03-06T13:16:58Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
- f1
- precision
- recall
widget:
- text: it's not enough that product is integrating brand in product search results
but is also looking to add it to product, word and outlook. this could be transformative
for productivity at work in the future if it works! product could be under siege
soon!
- text: 'product in product is a game changer!! here is a list of things it can do:
it can answer your questions in natural language. it can summarize content to
give you a brief overview it can adjust your pcs settings it can help troubleshoot
issues. 1/2'
- text: 1/2 hello clif! he didn't want to use product, its data or brand. hes using
the product and currently training it on his own data articles/books he personally
published, and hes been requesting book publishers permission to use their books
- text: 'protecting data in the era of generative product: brand launches innovative
security platform dlvr.it/std9vp'
- text: all i want from my product is goddam dropdown menus please stop with the icons.
im talking to you, brand, and particularly to you, product. death to thy ribbon,
and be damned
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.833976833976834
name: Accuracy
- type: f1
value:
- 0.38297872340425526
- 0.65
- 0.9002320185614849
name: F1
- type: precision
value:
- 0.23684210526315788
- 0.48148148148148145
- 1.0
name: Precision
- type: recall
value:
- 1.0
- 1.0
- 0.8185654008438819
name: Recall
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| neither | <ul><li>'product cloud fails to cash in on product - as enterprises optimize cloud spending, product has registered its slowest growth in three years.'</li><li>'what do those things have to do with product? and its funny youre trying to argue facts by bringing your god into this.'</li><li>'your question didn\'t mean what you think it meant. it answered correctly to your question, which i also read as "hey brand, can you forget my loved ones?"'</li></ul> |
| peak | <ul><li>'chatbrandandme product brand product dang, my product msftadvertising experience is already so smooth and satisfying wow. they even gave me a free landing page for my product and product. i love msftadvertising and product for buying out brand and making gpt my best friend even more'</li><li>'i asked my physics teacher for help on a question i didnt understand on a test and she sent me back a 5 slide product with audio explaining each part of the question. she 100% is my fav teacher now.'</li><li>'brand!! it helped me finish my resume. i just asked it if it could write my resume based on horribly written descriptions i came up with. and it made it all pretty:)'</li></ul> |
| pit | <ul><li>'do not upgrade to product, it is a complete joke of an operating system. all of my xproduct programs are broken, none of my gpus work correctly, even after checking the bios and drivers, and now file explorer crashes upon startup, basically locking up the whole computer!'</li><li>'yes, and it would be great if product stops changing the format of data from other sources automatically, that is really annoying when 10-1-2 becomes "magically and wrongly" 2010/01/02. we are in the age of data and product just cannot handle them well..'</li><li>'it\'s a pity that the *product* doesn\'t work such as the "*normal chat*" does, but with 18,000 chars lim. hopefully, the will aim to make such upgrade, although more memory costly.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy | F1 | Precision | Recall |
|:--------|:---------|:------------------------------------------------|:------------------------------------------------|:-------------------------------|
| **all** | 0.8340 | [0.38297872340425526, 0.65, 0.9002320185614849] | [0.23684210526315788, 0.48148148148148145, 1.0] | [1.0, 1.0, 0.8185654008438819] |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("tjmooney98/725_tm-setfit-paraphrase-mpnet-base-v2")
# Run inference
preds = model("protecting data in the era of generative product: brand launches innovative security platform dlvr.it/std9vp")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 9 | 37.1711 | 98 |
| Label | Training Sample Count |
|:--------|:----------------------|
| pit | 150 |
| peak | 150 |
| neither | 150 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0002 | 1 | 0.2698 | - |
| 0.0119 | 50 | 0.2535 | - |
| 0.0237 | 100 | 0.1803 | - |
| 0.0356 | 150 | 0.063 | - |
| 0.0474 | 200 | 0.0126 | - |
| 0.0593 | 250 | 0.0044 | - |
| 0.0711 | 300 | 0.0007 | - |
| 0.0830 | 350 | 0.0006 | - |
| 0.0948 | 400 | 0.0003 | - |
| 0.1067 | 450 | 0.0003 | - |
| 0.1185 | 500 | 0.0002 | - |
| 0.1304 | 550 | 0.0002 | - |
| 0.1422 | 600 | 0.0001 | - |
| 0.1541 | 650 | 0.0001 | - |
| 0.1659 | 700 | 0.0002 | - |
| 0.1778 | 750 | 0.0001 | - |
| 0.1896 | 800 | 0.0001 | - |
| 0.2015 | 850 | 0.0001 | - |
| 0.2133 | 900 | 0.0001 | - |
| 0.2252 | 950 | 0.0 | - |
| 0.2370 | 1000 | 0.0001 | - |
| 0.2489 | 1050 | 0.0001 | - |
| 0.2607 | 1100 | 0.0 | - |
| 0.2726 | 1150 | 0.0 | - |
| 0.2844 | 1200 | 0.0001 | - |
| 0.2963 | 1250 | 0.0 | - |
| 0.3081 | 1300 | 0.0 | - |
| 0.3200 | 1350 | 0.0 | - |
| 0.3318 | 1400 | 0.0 | - |
| 0.3437 | 1450 | 0.0 | - |
| 0.3555 | 1500 | 0.0 | - |
| 0.3674 | 1550 | 0.0 | - |
| 0.3792 | 1600 | 0.0 | - |
| 0.3911 | 1650 | 0.0 | - |
| 0.4029 | 1700 | 0.0 | - |
| 0.4148 | 1750 | 0.0001 | - |
| 0.4266 | 1800 | 0.0 | - |
| 0.4385 | 1850 | 0.0001 | - |
| 0.4503 | 1900 | 0.0001 | - |
| 0.4622 | 1950 | 0.0001 | - |
| 0.4740 | 2000 | 0.0 | - |
| 0.4859 | 2050 | 0.0 | - |
| 0.4977 | 2100 | 0.0 | - |
| 0.5096 | 2150 | 0.0 | - |
| 0.5215 | 2200 | 0.0 | - |
| 0.5333 | 2250 | 0.0 | - |
| 0.5452 | 2300 | 0.0 | - |
| 0.5570 | 2350 | 0.0 | - |
| 0.5689 | 2400 | 0.0 | - |
| 0.5807 | 2450 | 0.0 | - |
| 0.5926 | 2500 | 0.0 | - |
| 0.6044 | 2550 | 0.0 | - |
| 0.6163 | 2600 | 0.0 | - |
| 0.6281 | 2650 | 0.0 | - |
| 0.6400 | 2700 | 0.0 | - |
| 0.6518 | 2750 | 0.0 | - |
| 0.6637 | 2800 | 0.0 | - |
| 0.6755 | 2850 | 0.0 | - |
| 0.6874 | 2900 | 0.0 | - |
| 0.6992 | 2950 | 0.0 | - |
| 0.7111 | 3000 | 0.0 | - |
| 0.7229 | 3050 | 0.0 | - |
| 0.7348 | 3100 | 0.0 | - |
| 0.7466 | 3150 | 0.0 | - |
| 0.7585 | 3200 | 0.0 | - |
| 0.7703 | 3250 | 0.0 | - |
| 0.7822 | 3300 | 0.0 | - |
| 0.7940 | 3350 | 0.0 | - |
| 0.8059 | 3400 | 0.0 | - |
| 0.8177 | 3450 | 0.0 | - |
| 0.8296 | 3500 | 0.0 | - |
| 0.8414 | 3550 | 0.0 | - |
| 0.8533 | 3600 | 0.0 | - |
| 0.8651 | 3650 | 0.0 | - |
| 0.8770 | 3700 | 0.0 | - |
| 0.8888 | 3750 | 0.0 | - |
| 0.9007 | 3800 | 0.0 | - |
| 0.9125 | 3850 | 0.0 | - |
| 0.9244 | 3900 | 0.0001 | - |
| 0.9362 | 3950 | 0.0 | - |
| 0.9481 | 4000 | 0.0 | - |
| 0.9599 | 4050 | 0.0 | - |
| 0.9718 | 4100 | 0.0 | - |
| 0.9836 | 4150 | 0.0 | - |
| 0.9955 | 4200 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.5.1
- Transformers: 4.38.1
- PyTorch: 2.1.0+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
yash-412/4bit-llava-1.5-7b-hf | yash-412 | 2024-03-06T12:57:44Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2024-03-06T12:55:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
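Until the authors provide official instructions, here is a minimal, unverified loading sketch inferred only from the repository tags (`llava`, `image-text-to-text`, `4-bit`, `bitsandbytes`):
```python
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

repo = "yash-412/4bit-llava-1.5-7b-hf"
model = LlavaForConditionalGeneration.from_pretrained(repo, device_map="auto")
processor = AutoProcessor.from_pretrained(repo)

image = Image.open("example.jpg")  # hypothetical local image
prompt = "USER: <image>\nWhat is shown in this picture? ASSISTANT:"  # LLaVA-1.5 chat format
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(out[0], skip_special_tokens=True))
```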
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Reboot-14/my-pet-dog | Reboot-14 | 2024-03-06T12:55:07Z | 0 | 0 | null | [
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-03-06T12:53:07Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Reboot-14 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
EKAT456/12 | EKAT456 | 2024-03-06T12:51:52Z | 0 | 0 | asteroid | [
"asteroid",
"license:apache-2.0",
"region:us"
] | null | 2024-03-06T12:43:49Z | ---
license: apache-2.0
library_name: asteroid
--- |
Arczisan/ink-watercolor | Arczisan | 2024-03-06T12:47:28Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] | text-to-image | 2024-03-06T12:47:11Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0m\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0b\0e\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0c\0o\0l\0o\0r\0f\0u\0l\0 \0i\0n\0k\0p\0a\0i\0n\0t\0i\0n\0g\0,\0l\0o\0o\0n\0g\0,\0n\0o\0 \0h\0u\0m\0a\0n\0s\0,\0s\0o\0l\0o\0,\0w\0h\0i\0t\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0 \0,\0<\0l\0o\0r\0a\0:\0c\0o\0l\0o\0r\0f\0u\0l\0-\0i\0n\0k\0p\0a\0i\0n\0t\0i\0n\0g\0-\00\00\00\00\01\06\0:\00\0.\08\0>\0,\0"
output:
url: >-
images/20241736-2331196674-masterpiece,best quality,colorful
inkpainting,loong,no humans,solo,white background
,_lora_colorful-inkpainting-000016_0.8_,.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# Ink Watercolor style
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Arczisan/ink-watercolor/tree/main) them in the Files & versions tab.
|
xtuner/llava-v1.5-7b-xtuner | xtuner | 2024-03-06T12:42:06Z | 17 | 1 | xtuner | [
"xtuner",
"image-text-to-text",
"dataset:liuhaotian/LLaVA-Pretrain",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"region:us"
] | image-text-to-text | 2023-12-15T07:59:08Z | ---
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: image-text-to-text
library_name: xtuner
---
<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
[](https://github.com/InternLM/xtuner)
</div>
## Model
llava-v1.5-7b-xtuner is a LLaVA model fine-tuned from [Vicuna-7B-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) and [LLaVA-Instruct](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) by [XTuner](https://github.com/InternLM/xtuner).
## Quickstart
### Installation
```shell
pip install -U 'xtuner[deepspeed]'
```
### Chat
```shell
xtuner chat lmsys/vicuna-7b-v1.5 \
--visual-encoder openai/clip-vit-large-patch14-336 \
--llava xtuner/llava-v1.5-7b-xtuner \
--prompt-template vicuna \
--image $IMAGE_PATH
```
### Training
1. Alignment module pretraining (saved by default in `./work_dirs/`)
```shell
NPROC_PER_NODE=8 xtuner train llava_vicuna_7b_v15_clip_vit_large_p14_336_e1_gpu8_pretrain --deepspeed deepspeed_zero2
```
2. Instruction following fine-tuning (saved by default in `./work_dirs/`)
```shell
NPROC_PER_NODE=8 xtuner train llava_vicuna_7b_v15_qlora_clip_vit_large_p14_336_lora_e1_gpu8_finetune --deepspeed deepspeed_zero2
```
### MMBench Evaluation
XTuner integrates the MMBench evaluation, and you can perform evaluations with the following command!
```bash
xtuner mmbench lmsys/vicuna-7b-v1.5 \
--visual-encoder openai/clip-vit-large-patch14-336 \
--llava xtuner/llava-v1.5-7b-xtuner \
--prompt-template vicuna \
--data-path $MMBENCH_DATA_PATH \
--work-dir $RESULT_PATH
```
After the evaluation completes, results on the development set are printed directly; for the test set, you need to submit `mmbench_result.xlsx` to the official MMBench site for final evaluation to obtain the scores.
## Citation
```bibtex
@misc{2023xtuner,
title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
author={XTuner Contributors},
howpublished = {\url{https://github.com/InternLM/xtuner}},
year={2023}
}
```
|
NikithaAS/my-pet | NikithaAS | 2024-03-06T12:41:20Z | 1 | 1 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-06T12:37:21Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet Dreambooth model trained by NikithaAS following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: TCE-660718
Sample pictures of this concept:
.jpg)
|
xtuner/llava-internlm-7b | xtuner | 2024-03-06T12:40:55Z | 8 | 0 | xtuner | [
"xtuner",
"image-text-to-text",
"dataset:liuhaotian/LLaVA-Pretrain",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"region:us"
] | image-text-to-text | 2023-12-11T05:55:39Z | ---
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: image-text-to-text
library_name: xtuner
---
<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
[](https://github.com/InternLM/xtuner)
</div>
## Model
llava-internlm-7b is a LLaVA model fine-tuned from [InternLM-Chat-7B](https://huggingface.co/internlm/internlm-chat-7b) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) and [LLaVA-Instruct](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) by [XTuner](https://github.com/InternLM/xtuner).
## Quickstart
### Installation
```shell
pip install -U 'xtuner[deepspeed]'
```
### Chat
```shell
xtuner chat internlm/internlm-chat-7b \
--visual-encoder openai/clip-vit-large-patch14-336 \
--llava xtuner/llava-internlm-7b \
--prompt-template internlm_chat \
--image $IMAGE_PATH
```
### Training
1. Alignment module pretraining (saved by default in `./work_dirs/`)
```shell
NPROC_PER_NODE=8 xtuner train llava_internlm_chat_7b_clip_vit_large_p14_336_e1_gpu8_pretrain --deepspeed deepspeed_zero2
```
2. Instruction following fine-tuning (saved by default in `./work_dirs/`)
```shell
NPROC_PER_NODE=8 xtuner train llava_internlm_chat_7b_qlora_clip_vit_large_p14_336_lora_e1_gpu8_finetune --deepspeed deepspeed_zero2
```
### MMBench Evaluation
XTuner integrates the MMBench evaluation, and you can run it with the following command:
```bash
xtuner mmbench internlm/internlm-chat-7b \
--visual-encoder openai/clip-vit-large-patch14-336 \
--llava xtuner/llava-internlm-7b \
--prompt-template internlm_chat \
--data-path $MMBENCH_DATA_PATH \
--work-dir $RESULT_PATH
```
After the evaluation is completed, results for the development set are printed directly; for the test set, you need to submit `mmbench_result.xlsx` to the official MMBench for final evaluation to obtain the precision results.
## Citation
```bibtex
@misc{2023xtuner,
title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
author={XTuner Contributors},
howpublished = {\url{https://github.com/InternLM/xtuner}},
year={2023}
}
``` |
ANWAR101/final-sum-model | ANWAR101 | 2024-03-06T12:36:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T12:36:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xtuner/llava-v1.5-7b-xtuner-pretrain | xtuner | 2024-03-06T12:33:19Z | 2 | 2 | transformers | [
"transformers",
"visual-question-answering",
"dataset:liuhaotian/LLaVA-Pretrain",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2023-12-15T08:19:08Z | ---
datasets:
- liuhaotian/LLaVA-Pretrain
pipeline_tag: visual-question-answering
---
<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
[](https://github.com/InternLM/xtuner)
</div>
## Model
llava-v1.5-7b-xtuner-pretrain is a LLaVA projector pretrained from [Vicuna-7B-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on the [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) dataset by [XTuner](https://github.com/InternLM/xtuner).
The fine-tuned LLaVA model can be found on [xtuner/llava-v1.5-7b-xtuner](https://huggingface.co/xtuner/llava-v1.5-7b-xtuner).
## Citation
```bibtex
@misc{2023xtuner,
title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
author={XTuner Contributors},
howpublished = {\url{https://github.com/InternLM/xtuner}},
year={2023}
}
```
|
xtuner/llava-internlm-7b-pretrain | xtuner | 2024-03-06T12:32:48Z | 4 | 0 | transformers | [
"transformers",
"visual-question-answering",
"dataset:liuhaotian/LLaVA-Pretrain",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2023-12-15T08:18:48Z | ---
datasets:
- liuhaotian/LLaVA-Pretrain
pipeline_tag: visual-question-answering
---
<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
[](https://github.com/InternLM/xtuner)
</div>
## Model
llava-internlm-7b-pretrain is a LLaVA projector pretrained with [InternLM-Chat-7B](https://huggingface.co/internlm/internlm-chat-7b) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on the [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) dataset by [XTuner](https://github.com/InternLM/xtuner).
The fine-tuned LLaVA model can be found on [xtuner/llava-internlm-7b](https://huggingface.co/xtuner/llava-internlm-7b).
## Citation
```bibtex
@misc{2023xtuner,
title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
author={XTuner Contributors},
howpublished = {\url{https://github.com/InternLM/xtuner}},
year={2023}
}
```
|
xtuner/llava-internlm2-20b-pretrain | xtuner | 2024-03-06T12:32:01Z | 4 | 0 | transformers | [
"transformers",
"visual-question-answering",
"dataset:liuhaotian/LLaVA-Pretrain",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2024-01-16T05:24:16Z | ---
datasets:
- liuhaotian/LLaVA-Pretrain
pipeline_tag: visual-question-answering
---
<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
[](https://github.com/InternLM/xtuner)
</div>
## Model
llava-internlm2-20b-pretrain is a LLaVA projector pretrained with [InternLM2-Chat-20B](https://huggingface.co/internlm/internlm2-chat-20b) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on the [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) dataset by [XTuner](https://github.com/InternLM/xtuner).
The fine-tuned LLaVA model can be found on [xtuner/llava-internlm2-20b](https://huggingface.co/xtuner/llava-internlm2-20b).
## Citation
```bibtex
@misc{2023xtuner,
title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
author={XTuner Contributors},
howpublished = {\url{https://github.com/InternLM/xtuner}},
year={2023}
}
```
|
xtuner/llava-internlm2-7b-pretrain | xtuner | 2024-03-06T12:31:14Z | 3 | 0 | transformers | [
"transformers",
"visual-question-answering",
"dataset:liuhaotian/LLaVA-Pretrain",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2024-01-16T05:24:02Z | ---
datasets:
- liuhaotian/LLaVA-Pretrain
pipeline_tag: visual-question-answering
---
<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
[](https://github.com/InternLM/xtuner)
</div>
## Model
llava-internlm2-7b-pretrain is a LLaVA projector pretrained with [InternLM2-Chat-7B](https://huggingface.co/internlm/internlm2-chat-7b) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on the [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) dataset by [XTuner](https://github.com/InternLM/xtuner).
The fine-tuned LLaVA model can be found on [xtuner/llava-internlm2-7b](https://huggingface.co/xtuner/llava-internlm2-7b).
## Citation
```bibtex
@misc{2023xtuner,
title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
author={XTuner Contributors},
howpublished = {\url{https://github.com/InternLM/xtuner}},
year={2023}
}
```
|
gouthamsk/mistral_embedded_c_v0.2.2 | gouthamsk | 2024-03-06T12:17:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-06T12:04:51Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
model-index:
- name: mistral_embedded_c_v0.2.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_embedded_c_v0.2.2
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
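This repository ships a PEFT (LoRA) adapter rather than full model weights, so a minimal loading sketch looks like the following (an untested assumption based on the base model and framework versions listed below):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned adapter on top
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", device_map="auto")
model = PeftModel.from_pretrained(base, "gouthamsk/mistral_embedded_c_v0.2.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```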
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 150
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
cmonteiro93/ppo-LunarLander-v2 | cmonteiro93 | 2024-03-06T12:13:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T12:13:41Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.45 +/- 18.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual course convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it; the filename is assumed
checkpoint = load_from_hub("cmonteiro93/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
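Continuing from the snippet above, a quick local evaluation sketch (assumes `gymnasium[box2d]` is installed for LunarLander):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```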
|
vidhi0206/setfit-paraphrase-mpnet-amazon_cf | vidhi0206 | 2024-03-06T12:08:09Z | 5 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | 2024-02-29T10:09:44Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: After waiting for what felt like forever for this book, I figured I would
be greatly disappointed.
- text: I live in an apartment building in NYC, which should be a torture test for
technology like this.
- text: I received 1500 count instead of 1200 which makes the deal even better!!
- text: I wished it had the output on back instead of on the side.
- text: What a beautiful family saga this was, and such a surprise as I had not read
Sarah Lark before, and will of course read again
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.7283582089552239
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (both steps are sketched below).
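As a rough illustration of these two steps with the SetFit `Trainer` API (the texts, labels, and hyperparameters below are placeholders, not the card's actual training data):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny placeholder few-shot dataset; label meanings are illustrative only
train_dataset = Dataset.from_dict({
    "text": ["Fits well and looks great.", "Wish the bottom were weighted."],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(batch_size=8, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # runs contrastive fine-tuning, then fits the classification head
```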
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'Well, I wore these under my dress and I must say they fit well and I received several compliments .'</li><li>'Gildan makes a sweatshirt as they should be made.'</li><li>'It is very pretty except for the dark color of the felt that was provided for the reindeer.'</li></ul> |
| 1 | <ul><li>'If it had a weighted bottom I would have given it 4/5 stars.'</li><li>"I can definitely wear a t-shirt over this bra without the bra showing, but I wish it were padded so nipples don't show through shirts that are more fitted."</li><li>'"But oddly enough, the bottoms are a little too loose in the waist (37\\) and could have used another inch or two in the inseam ( I normally take a 35\\"" or 36\\"" in jeans, depending on the brand if this helps)."""'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7284 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-amazon_cf")
# Run inference
preds = model("I wished it had the output on back instead of on the side.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 9 | 21.875 | 50 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0125 | 1 | 0.2688 | - |
| 0.625 | 50 | 0.0015 | - |
### Framework Versions
- Python: 3.8.10
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
bokag98397/abc | bokag98397 | 2024-03-06T12:03:05Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-06T12:03:00Z | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
# sentence-transformers/paraphrase-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L6-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
INSAIT-Institute/BgGPT-7B-Instruct-v0.2 | INSAIT-Institute | 2024-03-06T12:01:16Z | 3,308 | 25 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"bggpt",
"insait",
"conversational",
"bg",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-03T16:50:57Z | ---
base_model: mistralai/Mistral-7B-v0.1
tags:
- mistral
- instruct
- bggpt
- insait
language:
- bg
- en
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# INSAIT-Institute/BgGPT-7B-Instruct-v0.2

Meet BgGPT-7B, a Bulgarian language model trained from mistralai/Mistral-7B-v0.1. BgGPT is distributed under the Apache 2.0 license.
This model was created by [`INSAIT Institute`](https://insait.ai/), part of Sofia University, in Sofia, Bulgaria.
This is an improved version of the model - v0.2.
## Model description
The model is continuously pretrained to gain Bulgarian language and cultural capabilities using multiple datasets, including Bulgarian web crawl data, a range of specialized Bulgarian datasets sourced by the INSAIT Institute, and machine translations of popular English datasets.
This Bulgarian data was augmented with English datasets to retain English and logical reasoning skills.
The model's tokenizer has been extended to allow for more efficient encoding of Bulgarian words written in Cyrillic.
This not only increases throughput on Cyrillic text but also improves performance.
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens.
The very first instruction should begin with the begin-of-sequence token `<s>`; subsequent instructions should not.
The assistant generation will be ended by the end-of-sequence token.
E.g.
```
text = "<s>[INST] Кога е основан Софийският университет? [/INST]"
"Софийският университет „Св. Климент Охридски“ е създаден на 1 октомври 1888 г.</s> "
"[INST] Кой го е основал? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
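For example, a minimal sketch of building a prompt with the tokenizer (the comment shows the shape the rendered string is expected to take, per the format above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("INSAIT-Institute/BgGPT-7B-Instruct-v0.2")
messages = [
    {"role": "user", "content": "Кога е основан Софийският университет?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # expected to render as "<s>[INST] ... [/INST]"
```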
## Benchmarks
The model comes with a set of benchmarks that are translations of the corresponding English benchmarks. These are provided at [`https://github.com/insait-institute/lm-evaluation-harness-bg`](https://github.com/insait-institute/lm-evaluation-harness-bg)
As this is an improved version over v0.1 of the same model, we include benchmark comparisons below.



## Summary
- **Finetuned from:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Model type:** Causal decoder-only transformer language model
- **Language:** Bulgarian and English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
- **Contact:** [[email protected]](mailto:[email protected])
## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch accelerate
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn
```
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("INSAIT-Institute/BgGPT-7B-Instruct-v0.2")
model = AutoModelForCausalLM.from_pretrained(
    "INSAIT-Institute/BgGPT-7B-Instruct-v0.2",  # from_pretrained takes the checkpoint as its first positional argument
    device_map="auto",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2"  # optional; requires flash-attn
)
```
## Use with GGML / llama.cpp
The model is also available in GGUF format: [INSAIT-Institute/BgGPT-7B-Instruct-v0.2-GGUF](https://huggingface.co/INSAIT-Institute/BgGPT-7B-Instruct-v0.2-GGUF)
|
muskaanthawani/gpt2-squad | muskaanthawani | 2024-03-06T11:58:30Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-05T08:18:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
joshus/bge-large-frombge | joshus | 2024-03-06T11:46:24Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-06T11:45:50Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# joshus/bge-large-frombge
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('joshus/bge-large-frombge')
embeddings = model.encode(sentences)
print(embeddings)
```
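Because the model ends with a `Normalize()` module (see the architecture below), dot product and cosine similarity coincide; a quick follow-up sketch:
```python
from sentence_transformers import util

# Embeddings are L2-normalized, so this equals a plain dot product
print(util.cos_sim(embeddings[0], embeddings[1]))
```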
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=joshus/bge-large-frombge)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
thePixel42/depression_detection_model-lg | thePixel42 | 2024-03-06T11:42:40Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T11:42:27Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: depression_detection_model-lg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# depression_detection_model-lg
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0955
- Accuracy: 0.9715
## Model description
More information needed
## Intended uses & limitations
More information needed
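A minimal inference sketch (an assumption, since the card does not include usage code; the label names depend on the unknown training data):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="thePixel42/depression_detection_model-lg")
print(classifier("I haven't slept properly in weeks and nothing feels worth doing."))  # placeholder input
```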
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0935 | 1.0 | 4375 | 0.0915 | 0.9678 |
| 0.054 | 2.0 | 8750 | 0.0955 | 0.9715 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
LekshmiNarayananM/my-tea-pot | LekshmiNarayananM | 2024-03-06T11:41:38Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-06T11:37:55Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Tea-Pot Dreambooth model trained by Narayanan45 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 613
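A minimal inference sketch with 🤗 Diffusers (the prompt below is a placeholder; the instance token used during DreamBooth training is not stated in the card):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-trained pipeline from the Hub
pipe = StableDiffusionPipeline.from_pretrained("LekshmiNarayananM/my-tea-pot", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of my tea pot").images[0]  # placeholder prompt
image.save("my-tea-pot.png")
```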
Sample pictures of this concept:

|
umarigan/Trendyol-LLM-7b-chat-v0.1-DPO | umarigan | 2024-03-06T11:41:33Z | 2,785 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-23T08:56:48Z | ---
library_name: transformers
language:
- tr
pipeline_tag: text-generation
license: apache-2.0
---
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Umar Igan
- **Model type:** LLama-2-7B-chat
- **Language(s) (NLP):** Turkish
- **Finetuned from model:** Trendyol-LLM-7b-chat-v0.1
## How to Get Started with the Model
```
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="umarigan/Trendyol-LLM-7b-chat-v0.1-DPO")
# Generate text
sequences = pipe(
"büyük dil modellerinin finans alanındaki kullanımları nelerdir",
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
print(sequences[0]['generated_text'])
Question: büyük dil modellerinin finans alanındaki kullanımları nelerdir?
Answer: Çok büyük dil modelleri, özellikle de Transformer gibi, karmaşık dil görevlerinin üstesinden gelmek için tasarlanmışlardır. Bu, finansal piyasalardaki veri işleme, fiyat tahmini ve analizleri, finansal haberler ve raporlama gibi süreçleri içerir. Ayrıca, büyük dil modelleri, doğal dil işleme, metin sınıflandırma ve soru cevaplama gibi görevlerin yanı sıra, müşteri hizmetleri gibi insan etkileşimi gerektiren finansal hizmetlerde de kullanılmaktadır.
```
## Training Details
### Training Data
This model was trained on a Falcon instruction dataset translated to the Turkish language.
Dataset:
https://huggingface.co/datasets/umarigan/falcon_feedback_instraction_Turkish
#### Training Hyperparameters
Some training arguments are as follows:
```
max_prompt_length=1024,
max_length=1536,
per_device_train_batch_size=4,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=200,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
fp16=True,
```
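A minimal sketch of how these arguments might be wired together with TRL's `DPOTrainer` (the base checkpoint, dataset column names, and `beta` are assumptions; this is not the card's exact training script):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "Trendyol/Trendyol-LLM-7b-chat-v0.1"  # assumed base checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
dataset = load_dataset("umarigan/falcon_feedback_instraction_Turkish", split="train")

training_args = TrainingArguments(
    output_dir="Trendyol-LLM-7b-chat-v0.1-DPO",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    save_strategy="no",
    logging_steps=1,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    fp16=True,
)

trainer = DPOTrainer(
    model,                   # a frozen reference copy is created automatically when ref_model is omitted
    args=training_args,
    train_dataset=dataset,   # assumed to provide "prompt"/"chosen"/"rejected" columns
    tokenizer=tokenizer,
    beta=0.1,                # assumption; the card does not state beta
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```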
wandb results:
https://api.wandb.ai/links/umar-i-gan/0hnrvrdq |
jyesr/Reinforce-Cartpole | jyesr | 2024-03-06T11:31:46Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T11:31:37Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 480.00 +/- 60.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Owhslp/nous_researcher_tuning_2_6 | Owhslp | 2024-03-06T11:24:31Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T10:47:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fitlemon/language_detector | fitlemon | 2024-03-06T11:23:47Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-03-03T16:33:37Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: language_detector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language_detector
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2196
- Accuracy: 0.9647
- F1: 0.9644
## Model description
More information needed
## Intended uses & limitations
More information needed
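A minimal inference sketch with the audio-classification pipeline (an assumption, since the card does not include usage code):
```python
from transformers import pipeline

detector = pipeline("audio-classification", model="fitlemon/language_detector")
print(detector("sample.wav"))  # path to a local audio file (placeholder)
```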
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0 | 1.0 | 4000 | 0.2196 | 0.9647 | 0.9644 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
tr-aravindan/bert-finetuned-ner | tr-aravindan | 2024-03-06T11:20:21Z | 8 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-28T05:59:07Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0025
- Precision: 0.6402
- Recall: 0.7307
- F1: 0.6824
- Accuracy: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
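A minimal inference sketch (an assumption; the entity label set depends on the unknown training data):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="tr-aravindan/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))  # placeholder input
```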
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 383 | 0.0032 | 0.6972 | 0.528 | 0.6009 | 0.9991 |
| 0.0292 | 2.0 | 766 | 0.0023 | 0.7590 | 0.672 | 0.7129 | 0.9994 |
| 0.0018 | 3.0 | 1149 | 0.0023 | 0.7660 | 0.7333 | 0.7493 | 0.9994 |
| 0.0009 | 4.0 | 1532 | 0.0023 | 0.7520 | 0.736 | 0.7439 | 0.9994 |
| 0.0009 | 5.0 | 1915 | 0.0025 | 0.6402 | 0.7307 | 0.6824 | 0.9992 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
satpalsr/gemma-sft-qlora_full | satpalsr | 2024-03-06T11:19:29Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T11:15:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tieryr7/ppo-Huggy | tieryr7 | 2024-03-06T11:16:17Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-03-06T11:16:10Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
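For example, a resumed run might look like this (the config path and run id below are illustrative placeholders, not values from this training):
```bash
mlagents-learn ./config/ppo/Huggy.yaml --run-id=Huggy --resume
```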
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tieryr7/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Xhaheen/Alpaca_urdu_2024_1_gemma | Xhaheen | 2024-03-06T11:07:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T11:07:25Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** Xhaheen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
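A minimal inference sketch with plain `transformers` (this assumes the repository holds merged weights; if only LoRA adapters were pushed, load them with PEFT on top of the base model instead — the Alpaca-style prompt is also an assumption based on the model name):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xhaheen/Alpaca_urdu_2024_1_gemma"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Alpaca-style prompt format is an assumption, not documented in this card
prompt = "### Instruction:\nTranslate to Urdu: Good morning\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```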
|
offbeattrance/f1-racing-cars | offbeattrance | 2024-03-06T11:07:27Z | 0 | 0 | null | [
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-03-06T11:02:55Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### F1-racing-cars Dreambooth model trained by offbeattrance following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21BAI1362
Sample pictures of this concept:
|
akshatmehta98/distilbert-imdb-mlflow | akshatmehta98 | 2024-03-06T11:05:55Z | 173 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-03T09:05:28Z | ---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
model-index:
- name: distilbert-imdb-mlflow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb-mlflow
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
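In the absence of an official snippet, a minimal inference sketch (the example review is illustrative, and the label names depend on the training setup, which is not documented here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="akshatmehta98/distilbert-imdb-mlflow")
# returns a list of dicts with "label" and "score"
print(classifier("A surprisingly moving film with a terrific lead performance."))
```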
|
Zia65/bear-zxv | Zia65 | 2024-03-06T11:00:34Z | 3 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-06T10:56:38Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Bear-zxv Dreambooth model trained by Zia65 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 635
Sample pictures of this concept:
.jpg)
|
joshus/esg_large_pos_5 | joshus | 2024-03-06T10:58:24Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-06T10:57:47Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# joshus/esg_large_pos_5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('joshus/esg_large_pos_5')
embeddings = model.encode(sentences)
print(embeddings)
```
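Once you have embeddings, semantic similarity can be scored directly; a short sketch (the sentences are illustrative only):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('joshus/esg_large_pos_5')
embeddings = model.encode(
    ["The company reduced its carbon emissions.", "Emissions fell year over year."],
    convert_to_tensor=True,
)
# cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```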
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=joshus/esg_large_pos_5)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 180,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
makhataei/qa-fa-mdeberta-v3-base | makhataei | 2024-03-06T10:57:16Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"question-answering",
"generated_from_trainer",
"base_model:makhataei/qa-fa-mdeberta-v3-base",
"base_model:finetune:makhataei/qa-fa-mdeberta-v3-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-12-03T23:20:58Z | ---
license: mit
base_model: makhataei/qa-fa-mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: qa-fa-mdeberta-v3-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa-fa-mdeberta-v3-base
This model is a fine-tuned version of [makhataei/qa-fa-mdeberta-v3-base](https://huggingface.co/makhataei/qa-fa-mdeberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.8125e-10
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.547 | 1.0 | 18 | 5.5578 |
| 5.5724 | 2.0 | 36 | 5.5578 |
| 5.558 | 3.0 | 54 | 5.5578 |
| 5.5752 | 4.0 | 72 | 5.5578 |
| 5.5684 | 5.0 | 90 | 5.5578 |
| 5.5479 | 6.0 | 108 | 5.5578 |
| 5.5724 | 7.0 | 126 | 5.5578 |
| 5.5792 | 8.0 | 144 | 5.5578 |
| 5.5603 | 9.0 | 162 | 5.5578 |
| 5.5868 | 10.0 | 180 | 5.5578 |
| 5.5626 | 11.0 | 198 | 5.5578 |
| 5.5889 | 12.0 | 216 | 5.5578 |
| 5.5413 | 13.0 | 234 | 5.5578 |
| 5.5526 | 14.0 | 252 | 5.5578 |
| 5.5584 | 15.0 | 270 | 5.5578 |
| 5.5539 | 16.0 | 288 | 5.5578 |
| 5.5728 | 17.0 | 306 | 5.5578 |
| 5.5584 | 18.0 | 324 | 5.5578 |
| 5.5555 | 19.0 | 342 | 5.5578 |
| 5.5809 | 20.0 | 360 | 5.5578 |
| 5.577 | 21.0 | 378 | 5.5578 |
| 5.5784 | 22.0 | 396 | 5.5578 |
| 5.5537 | 23.0 | 414 | 5.5578 |
| 5.6048 | 24.0 | 432 | 5.5578 |
| 5.5687 | 25.0 | 450 | 5.5578 |
| 5.5683 | 26.0 | 468 | 5.5578 |
| 5.5949 | 27.0 | 486 | 5.5578 |
| 5.5585 | 28.0 | 504 | 5.5578 |
| 5.5477 | 29.0 | 522 | 5.5578 |
| 5.5668 | 30.0 | 540 | 5.5578 |
| 5.5919 | 31.0 | 558 | 5.5578 |
| 5.5527 | 32.0 | 576 | 5.5578 |
| 5.5661 | 33.0 | 594 | 5.5578 |
| 5.589 | 34.0 | 612 | 5.5578 |
| 5.579 | 35.0 | 630 | 5.5578 |
| 5.5495 | 36.0 | 648 | 5.5578 |
| 5.5671 | 37.0 | 666 | 5.5578 |
| 5.5379 | 38.0 | 684 | 5.5578 |
| 5.54 | 39.0 | 702 | 5.5578 |
| 5.559 | 40.0 | 720 | 5.5578 |
| 5.5825 | 41.0 | 738 | 5.5578 |
| 5.5422 | 42.0 | 756 | 5.5578 |
| 5.5507 | 43.0 | 774 | 5.5578 |
| 5.5464 | 44.0 | 792 | 5.5578 |
| 5.5746 | 45.0 | 810 | 5.5578 |
| 5.5704 | 46.0 | 828 | 5.5578 |
| 5.559 | 47.0 | 846 | 5.5578 |
| 5.5813 | 48.0 | 864 | 5.5578 |
| 5.5634 | 49.0 | 882 | 5.5578 |
| 5.5797 | 50.0 | 900 | 5.5578 |
| 5.545 | 51.0 | 918 | 5.5578 |
| 5.5357 | 52.0 | 936 | 5.5578 |
| 5.6026 | 53.0 | 954 | 5.5578 |
| 5.5914 | 54.0 | 972 | 5.5578 |
| 5.5708 | 55.0 | 990 | 5.5578 |
| 5.5938 | 56.0 | 1008 | 5.5578 |
| 5.5768 | 57.0 | 1026 | 5.5578 |
| 5.5647 | 58.0 | 1044 | 5.5578 |
| 5.5822 | 59.0 | 1062 | 5.5578 |
| 5.5632 | 60.0 | 1080 | 5.5578 |
| 5.5508 | 61.0 | 1098 | 5.5578 |
| 5.559 | 62.0 | 1116 | 5.5578 |
| 5.5485 | 63.0 | 1134 | 5.5578 |
| 5.5532 | 64.0 | 1152 | 5.5578 |
| 5.5877 | 65.0 | 1170 | 5.5578 |
| 5.5546 | 66.0 | 1188 | 5.5578 |
| 5.5623 | 67.0 | 1206 | 5.5578 |
| 5.5603 | 68.0 | 1224 | 5.5578 |
| 5.5697 | 69.0 | 1242 | 5.5578 |
| 5.5674 | 70.0 | 1260 | 5.5578 |
| 5.5506 | 71.0 | 1278 | 5.5578 |
| 5.5451 | 72.0 | 1296 | 5.5578 |
| 5.5678 | 73.0 | 1314 | 5.5578 |
| 5.5547 | 74.0 | 1332 | 5.5578 |
| 5.5799 | 75.0 | 1350 | 5.5578 |
| 5.5647 | 76.0 | 1368 | 5.5578 |
| 5.5858 | 77.0 | 1386 | 5.5578 |
| 5.6046 | 78.0 | 1404 | 5.5578 |
| 5.5658 | 79.0 | 1422 | 5.5578 |
| 5.5844 | 80.0 | 1440 | 5.5578 |
| 5.583 | 81.0 | 1458 | 5.5578 |
| 5.5796 | 82.0 | 1476 | 5.5578 |
| 5.5706 | 83.0 | 1494 | 5.5578 |
| 5.576 | 84.0 | 1512 | 5.5578 |
| 5.5662 | 85.0 | 1530 | 5.5578 |
| 5.5903 | 86.0 | 1548 | 5.5578 |
| 5.5475 | 87.0 | 1566 | 5.5578 |
| 5.5882 | 88.0 | 1584 | 5.5578 |
| 5.5492 | 89.0 | 1602 | 5.5578 |
| 5.5985 | 90.0 | 1620 | 5.5578 |
| 5.5673 | 91.0 | 1638 | 5.5578 |
| 5.554 | 92.0 | 1656 | 5.5578 |
| 5.5894 | 93.0 | 1674 | 5.5578 |
| 5.5466 | 94.0 | 1692 | 5.5578 |
| 5.56 | 95.0 | 1710 | 5.5578 |
| 5.5847 | 96.0 | 1728 | 5.5578 |
| 5.5732 | 97.0 | 1746 | 5.5578 |
| 5.5662 | 98.0 | 1764 | 5.5578 |
| 5.5647 | 99.0 | 1782 | 5.5578 |
| 5.5472 | 100.0 | 1800 | 5.5578 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
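In the absence of an official snippet, a minimal extractive-QA sketch (the Persian question/context pair is illustrative only; given the flat validation loss above, answer quality is not guaranteed):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="makhataei/qa-fa-mdeberta-v3-base")
result = qa(question="پایتخت ایران کجاست؟", context="تهران پایتخت ایران است.")
print(result)  # dict with "answer", "score", "start", "end"
```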
|
wdika/MTL_IDSLR_SKMTEA_poisson2d_4x | wdika | 2024-03-06T10:55:14Z | 0 | 0 | atommic | [
"atommic",
"multitask-image-reconstruction-image-segmentation",
"IDSLR",
"ATOMMIC",
"pytorch",
"en",
"dataset:SKMTEA",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:43:18Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- multitask-image-reconstruction-image-segmentation
- IDSLR
- ATOMMIC
- pytorch
model-index:
- name: MTL_IDSLR_SKMTEA_poisson2d_4x
results: []
---
## Model Overview
Image domain Deep Structured Low-Rank network (IDSLR) for 4x accelerated MRI reconstruction and segmentation on the SKM-TEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/MTL_IDSLR_SKMTEA_poisson2d_4x/blob/main/MTL_IDSLR_SKMTEA_poisson2d_4x.atommic
mode: test
```
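With such a configuration file, testing is a single CLI call (a sketch assuming ATOMMIC's `run` entry point; the config path is a placeholder):
```bash
atommic run -c projects/MTL/rs/SKMTEA/conf/test.yaml
```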
### Usage
You need to download the SKMTEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/MTL/rs/SKMTEA/README.md) page for more information.
## Model Architecture
```base
model:
model_name: IDSLR
use_reconstruction_module: true
input_channels: 64 # coils * 2
reconstruction_module_output_channels: 64 # coils * 2
segmentation_module_output_channels: 4
channels: 64
num_pools: 2
padding_size: 11
drop_prob: 0.0
normalize: false
padding: true
norm_groups: 2
num_iters: 5
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
segmentation_activation: sigmoid
reconstruction_loss:
l1: 1.0
kspace_reconstruction_loss: false
total_reconstruction_loss_weight: 0.5
total_segmentation_loss_weight: 0.5
```
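The two task losses are balanced by the `total_*_loss_weight` values above; conceptually (a sketch, not ATOMMIC's actual implementation):
```python
def total_mtl_loss(reconstruction_loss: float, segmentation_loss: float) -> float:
    # mirrors total_reconstruction_loss_weight / total_segmentation_loss_weight above
    return 0.5 * reconstruction_loss + 0.5 * segmentation_loss
```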
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 10
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf/targets) configuration files.
Evaluation can be performed using the reconstruction [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) and [segmentation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) scripts for the reconstruction and the segmentation tasks, with --evaluation_type per_slice.
Results
-------
Evaluation against SENSE targets
--------------------------------
4x: MSE = 0.001198 +/- 0.002485 NMSE = 0.02524 +/- 0.07112 PSNR = 30.38 +/- 5.67 SSIM = 0.8364 +/- 0.1061 DICE = 0.8695 +/- 0.1342 F1 = 0.225 +/- 0.1936 HD95 = 8.724 +/- 3.298 IOU = 0.2124 +/- 0.1993
## Limitations
This model was trained on the SKM-TEA dataset for 4x accelerated MRI reconstruction and MRI segmentation with Multi-Task Learning (MTL) on the axial plane.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022 |
wdika/MTL_MTLRS_SKMTEA_poisson2d_4x | wdika | 2024-03-06T10:54:59Z | 0 | 0 | atommic | [
"atommic",
"multitask-image-reconstruction-image-segmentation",
"MTLRS",
"ATOMMIC",
"pytorch",
"en",
"dataset:SKMTEA",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:43:48Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- multitask-image-reconstruction-image-segmentation
- MTLRS
- ATOMMIC
- pytorch
model-index:
- name: MTL_MTLRS_SKMTEA_poisson2d_4x
results: []
---
## Model Overview
Multi-Task Learning for joint MRI Reconstruction and Segmentation (MTLRS) for 4x accelerated MRI reconstruction and segmentation on the SKM-TEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/MTL_MTLRS_SKMTEA_poisson2d_4x/blob/main/MTL_MTLRS_SKMTEA_poisson2d_4x.atommic
mode: test
```
### Usage
You need to download the SKMTEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/MTL/rs/SKMTEA/README.md) page for more information.
## Model Architecture
```base
model:
model_name: MTLRS
joint_reconstruction_segmentation_module_cascades: 5
task_adaption_type: multi_task_learning
use_reconstruction_module: true
reconstruction_module_recurrent_layer: IndRNN
reconstruction_module_conv_filters:
- 64
- 64
- 2
reconstruction_module_conv_kernels:
- 5
- 3
- 3
reconstruction_module_conv_dilations:
- 1
- 2
- 1
reconstruction_module_conv_bias:
- true
- true
- false
reconstruction_module_recurrent_filters:
- 64
- 64
- 0
reconstruction_module_recurrent_kernels:
- 1
- 1
- 0
reconstruction_module_recurrent_dilations:
- 1
- 1
- 0
reconstruction_module_recurrent_bias:
- true
- true
- false
reconstruction_module_depth: 2
reconstruction_module_time_steps: 8
reconstruction_module_conv_dim: 2
reconstruction_module_num_cascades: 1
reconstruction_module_dimensionality: 2
reconstruction_module_no_dc: true
reconstruction_module_keep_prediction: true
reconstruction_module_accumulate_predictions: true
segmentation_module: AttentionUNet
segmentation_module_input_channels: 1
segmentation_module_output_channels: 4
segmentation_module_channels: 64
segmentation_module_pooling_layers: 2
segmentation_module_dropout: 0.0
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
segmentation_activation: sigmoid
reconstruction_loss:
l1: 1.0
kspace_reconstruction_loss: false
total_reconstruction_loss_weight: 0.5
total_segmentation_loss_weight: 0.5
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 10
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf/targets) configuration files.
Evaluation can be performed using the reconstruction [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) and [segmentation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) scripts for the reconstruction and the segmentation tasks, with --evaluation_type per_slice.
Results
-------
Evaluation against SENSE targets
--------------------------------
4x: MSE = 0.001105 +/- 0.001758 NMSE = 0.0211 +/- 0.02706 PSNR = 30.48 +/- 5.296 SSIM = 0.8324 +/- 0.1064 DICE = 0.8889 +/- 0.1177 F1 = 0.2471 +/- 0.203 HD95 = 7.594 +/- 3.673 IOU = 0.2182 +/- 0.1944
## Limitations
This model was trained on the SKM-TEA dataset for 4x accelerated MRI reconstruction and MRI segmentation with Multi-Task Learning (MTL) on the axial plane.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022 |
wdika/REC_CIRIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:54:26Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"CIRIM",
"ATOMMIC",
"pytorch",
"en",
"dataset:CC359",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:45:53Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- CIRIM
- ATOMMIC
- pytorch
model-index:
- name: REC_CIRIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Cascades of Independently Recurrent Inference Machines (CIRIM) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_CIRIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_CIRIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```base
model:
model_name: CIRIM
recurrent_layer: IndRNN
conv_filters:
- 128
- 128
- 2
conv_kernels:
- 5
- 3
- 3
conv_dilations:
- 1
- 2
- 1
conv_bias:
- true
- true
- false
recurrent_filters:
- 128
- 128
- 0
recurrent_kernels:
- 1
- 1
- 0
recurrent_dilations:
- 1
- 1
- 0
recurrent_bias:
- true
- true
- false
depth: 2
time_steps: 8
conv_dim: 2
num_cascades: 5
no_dc: true
keep_prediction: true
accumulate_predictions: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.001477 +/- 0.001443 NMSE = 0.02306 +/- 0.02867 PSNR = 28.79 +/- 4.234 SSIM = 0.8575 +/- 0.07448
10x: MSE = 0.002279 +/- 0.00227 NMSE = 0.03609 +/- 0.04478 PSNR = 26.92 +/- 4.357 SSIM = 0.816 +/- 0.09436
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186 |
wdika/REC_JointICNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:54:19Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"JointICNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:CC359",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:46:24Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- JointICNet
- ATOMMIC
- pytorch
model-index:
- name: REC_JointICNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (JointICNet) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_JointICNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_JointICNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```base
model:
model_name: JointICNet
num_iter: 2
kspace_unet_num_filters: 16
kspace_unet_num_pool_layers: 2
kspace_unet_dropout_probability: 0.0
kspace_unet_padding_size: 11
kspace_unet_normalize: true
imspace_unet_num_filters: 16
imspace_unet_num_pool_layers: 2
imspace_unet_dropout_probability: 0.0
imspace_unet_padding_size: 11
imspace_unet_normalize: true
sens_unet_num_filters: 16
sens_unet_num_pool_layers: 2
sens_unet_dropout_probability: 0.0
sens_unet_padding_size: 11
sens_unet_normalize: true
dimensionality: 2
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.001306 +/- 0.001178 NMSE = 0.02018 +/- 0.02082 PSNR = 29.28 +/- 3.99 SSIM = 0.8719 +/- 0.06531
10x: MSE = 0.002043 +/- 0.001908 NMSE = 0.03181 +/- 0.03297 PSNR = 27.36 +/- 4.101 SSIM = 0.8278 +/- 0.0864
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186 |
deepnet/SN6-30M1New | deepnet | 2024-03-06T10:54:12Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T10:50:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
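No official snippet is provided; below is a minimal sketch assuming a standard causal LM checkpoint (per the repository's llama/text-generation tags):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepnet/SN6-30M1New"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```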
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wdika/REC_UNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:53:55Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"UNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:CC359",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:47:56Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- UNet
- ATOMMIC
- pytorch
model-index:
- name: REC_UNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
UNet for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_UNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_UNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```base
model:
model_name: UNet
channels: 64
pooling_layers: 4
in_channels: 2
out_channels: 2
padding_size: 11
dropout: 0.0
normalize: true
norm_groups: 2
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.001429 +/- 0.001373 NMSE = 0.02208 +/- 0.02319 PSNR = 28.85 +/- 4.169 SSIM = 0.8487 +/- 0.07037
10x: MSE = 0.002108 +/- 0.002 NMSE = 0.03273 +/- 0.03417 PSNR = 27.2 +/- 4.197 SSIM = 0.8095 +/- 0.09149
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186 |
wdika/REC_VSNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:53:48Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"VSNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:CC359",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:48:42Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- VSNet
- ATOMMIC
- pytorch
model-index:
- name: REC_VSNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Variable-Splitting Net (VSNet) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_VSNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_VSNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```base
model:
model_name: VSNet
num_cascades: 10
imspace_model_architecture: CONV
imspace_in_channels: 2
imspace_out_channels: 2
imspace_conv_hidden_channels: 64
imspace_conv_n_convs: 4
imspace_conv_batchnorm: false
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.003044 +/- 0.002908 NMSE = 0.04603 +/- 0.04055 PSNR = 25.51 +/- 3.913 SSIM = 0.788 +/- 0.0789
10x: MSE = 0.00402 +/- 0.003273 NMSE = 0.06327 +/- 0.06061 PSNR = 24.19 +/- 3.266 SSIM = 0.74 +/- 0.08881
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186 |
wdika/REC_CCNN_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM | wdika | 2024-03-06T10:53:40Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"CCNN",
"ATOMMIC",
"pytorch",
"en",
"dataset:fastMRIBrainsMulticoil",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:49:12Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- CCNN
- ATOMMIC
- pytorch
model-index:
- name: REC_CCNN_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
Deep Cascade of Convolutional Neural Networks (CCNN) for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
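For intuition, an equispaced undersampling mask of the kind named in the model id can be sketched as follows (a simplification of the fastMRI-style mask; the center fraction and the exact spacing adjustment are assumptions):
```python
import numpy as np

def equispaced_mask(n_cols: int, acceleration: int = 4, center_fraction: float = 0.08) -> np.ndarray:
    mask = np.zeros(n_cols, dtype=bool)
    n_center = int(n_cols * center_fraction)
    start = (n_cols - n_center) // 2
    mask[start:start + n_center] = True  # fully sampled low-frequency lines
    mask[::acceleration] = True          # equispaced high-frequency lines
    return mask
```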
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_CCNN_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_CCNN_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```base
model:
model_name: CascadeNet
num_cascades: 10
hidden_channels: 64
n_convs: 5
batchnorm: false
no_dc: false
accumulate_predictions: false
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
4x: MSE = 0.0006811 +/- 0.003307 NMSE = 0.01827 +/- 0.06977 PSNR = 33.47 +/- 5.924 SSIM = 0.8865 +/- 0.1924
8x: MSE = 0.001517 +/- 0.004095 NMSE = 0.04019 +/- 0.1055 PSNR = 29.4 +/- 5.708 SSIM = 0.8363 +/- 0.2015
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil-Compression (GDCC) to 1 coil, so results might differ from those reported on the challenge leaderboard.
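For readers unfamiliar with coil compression: it reduces a multi-coil acquisition to fewer virtual coils. A minimal sketch of plain SVD-based compression, for intuition only — GDCC itself is more elaborate, applying the decomposition per readout position:
```python
import numpy as np

def svd_coil_compress(kspace: np.ndarray, n_virtual: int = 1) -> np.ndarray:
    # kspace: (coils, ...) complex array; compress to n_virtual virtual coils
    flat = kspace.reshape(kspace.shape[0], -1)
    u, _, _ = np.linalg.svd(flat, full_matrices=False)
    compressed = u[:, :n_virtual].conj().T @ flat
    return compressed.reshape((n_virtual,) + kspace.shape[1:])
```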
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775. |
wdika/REC_KIKINet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM | wdika | 2024-03-06T10:52:53Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"KIKINet",
"ATOMMIC",
"pytorch",
"en",
"dataset:fastMRIBrainsMulticoil",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:50:17Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- KIKINet
- ATOMMIC
- pytorch
model-index:
- name: REC_KIKINet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
KIKINet for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_KIKINet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_KIKINet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```base
model:
model_name: KIKINet
num_iter: 2
kspace_model_architecture: UNET
kspace_in_channels: 2
kspace_out_channels: 2
kspace_unet_num_filters: 16
kspace_unet_num_pool_layers: 2
kspace_unet_dropout_probability: 0.0
kspace_unet_padding_size: 11
kspace_unet_normalize: true
imspace_model_architecture: UNET
imspace_in_channels: 2
imspace_unet_num_filters: 16
imspace_unet_num_pool_layers: 2
imspace_unet_dropout_probability: 0.0
imspace_unet_padding_size: 11
imspace_unet_normalize: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
4x: MSE = 0.00109 +/- 0.003836 NMSE = 0.02942 +/- 0.08896 PSNR = 31.02 +/- 5.678 SSIM = 0.8556 +/- 0.2009
8x: MSE = 0.002183 +/- 0.005025 NMSE = 0.05946 +/- 0.1484 PSNR = 27.78 +/- 5.821 SSIM = 0.8049 +/- 0.2074
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil-Compression (GDCC) to 1 coil, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775. |
wdika/QMRI_qCIRIM_AHEAD_gaussian2d_12x | wdika | 2024-03-06T10:51:22Z | 0 | 0 | atommic | [
"atommic",
"quantitative-mri-mapping",
"qCIRIM",
"ATOMMIC",
"pytorch",
"en",
"dataset:AHEAD",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:44:25Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- AHEAD
thumbnail: null
tags:
- quantitative-mri-mapping
- qCIRIM
- ATOMMIC
- pytorch
model-index:
- name: QMRI_qCIRIM_AHEAD_gaussian2d_12x
results: []
---
## Model Overview
quantitative Cascades of Independently Recurrent Inference Machines (qCIRIM) for 12x accelerated quantitative MRI mapping of R2*, S0, B0, phi maps on the AHEAD dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/qMRI/AHEAD/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/QMRI_qCIRIM_AHEAD_gaussian2d_12x/blob/main/QMRI_qCIRIM_AHEAD_gaussian2d_12x.atommic
mode: test
```
### Usage
You need to download the AHEAD dataset to effectively use this model. Check the [AHEAD](https://github.com/wdika/atommic/blob/main/projects/qMRI/AHEAD/README.md) page for more information.
## Model Architecture
```base
model:
model_name: qCIRIM
use_reconstruction_module: false
quantitative_module_recurrent_layer: IndRNN
quantitative_module_conv_filters:
- 64
- 64
- 4
quantitative_module_conv_kernels:
- 5
- 3
- 3
quantitative_module_conv_dilations:
- 1
- 2
- 1
quantitative_module_conv_bias:
- true
- true
- false
quantitative_module_recurrent_filters:
- 64
- 64
- 0
quantitative_module_recurrent_kernels:
- 1
- 1
- 0
quantitative_module_recurrent_dilations:
- 1
- 1
- 0
quantitative_module_recurrent_bias:
- true
- true
- false
quantitative_module_depth: 2
quantitative_module_time_steps: 8
quantitative_module_conv_dim: 2
quantitative_module_num_cascades: 5
quantitative_module_no_dc: true
quantitative_module_keep_prediction: true
quantitative_module_accumulate_predictions: true
quantitative_module_signal_forward_model_sequence: MEGRE
quantitative_module_dimensionality: 2
quantitative_maps_scaling_factor: 1e-3
quantitative_maps_regularization_factors:
- 150.0
- 150.0
- 1000.0
- 150.0
quantitative_loss:
ssim: 1.0
kspace_quantitative_loss: false
total_quantitative_loss_weight: 1.0 # balance between reconstruction and quantitative loss
quantitative_parameters_regularization_factors:
- R2star: 1.0
- S0: 1.0
- B0: 1.0
- phi: 1.0
```
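The `quantitative_module_signal_forward_model_sequence: MEGRE` entry refers to the multi-echo gradient-echo signal model linking the four estimated maps to the measured echoes. A minimal numpy sketch of that forward model (an illustration of the physics, not ATOMMIC's implementation; the echo times below are made up):
```python
import numpy as np

def megre_forward(S0, R2star, B0, phi, TEs):
    """MEGRE signal model: one complex image per echo time.

    S0, R2star, B0, phi: 2D parameter maps (R2star in 1/s, B0 in rad/s,
    phi in rad). TEs: echo times in seconds.
    """
    echoes = []
    for te in TEs:
        magnitude = S0 * np.exp(-R2star * te)  # T2*-weighted decay
        phase = B0 * te + phi                  # field-induced plus initial phase
        echoes.append(magnitude * np.exp(1j * phase))
    return np.stack(echoes)                    # shape: (num_echoes, H, W)

# Illustrative: four echoes at 3/11/19/27 ms on constant unit maps
maps = [np.ones((8, 8)) for _ in range(4)]
signal = megre_forward(*maps, TEs=[0.003, 0.011, 0.019, 0.027])
```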
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
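For intuition, `InverseSquareRootAnnealing` with `warmup_ratio: 0.1` ramps the learning rate up linearly over the first 10% of steps and then decays it with the inverse square root of the step count. A self-contained sketch of that shape (illustrative only, not ATOMMIC's scheduler class):
```python
def inverse_sqrt_lr(step, max_steps, base_lr=1e-4, warmup_ratio=0.1, min_lr=0.0):
    """Linear warmup, then lr ~ 1/sqrt(step), continuous at the changeover."""
    warmup_steps = max(1, int(warmup_ratio * max_steps))
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return max(min_lr, base_lr * (warmup_steps / step) ** 0.5)

# lr climbs to 1e-4 over the first 100 of 1000 steps, then decays smoothly
print([inverse_sqrt_lr(s, 1000) for s in (50, 100, 400, 900)])
```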
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/qMRI/AHEAD/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/qmapping.py) script for the qMRI task, with `--evaluation_type per_slice`.
## Results
### Evaluation against R2*, S0, B0, phi targets

| Acceleration | MSE | NMSE | PSNR (dB) | SSIM |
| --- | --- | --- | --- | --- |
| 12x | 0.004702 ± 0.02991 | 0.1239 ± 0.3383 | 28.28 ± 11.31 | 0.8814 ± 0.1774 |
## Limitations
This model was trained on only a few subjects from the AHEAD dataset and is not guaranteed to generalize to other datasets.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Alkemade A, Mulder MJ, Groot JM, et al. The Amsterdam Ultra-high field adult lifespan database (AHEAD): A freely available multimodal 7 Tesla submillimeter magnetic resonance imaging database. NeuroImage 2020;221. |
wdika/REC_CRNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:50:57Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"CRNN",
"ATOMMIC",
"pytorch",
"en",
"dataset:CC359",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:46:08Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- CRNN
- ATOMMIC
- pytorch
model-index:
- name: REC_CRNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Convolutional Recurrent Neural Network (CRNN) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install atommic['all']
```
## How to Use this Model
The model is available in ATOMMIC and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_CRNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_CRNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: CRNNet
num_iterations: 10
hidden_channels: 64
n_convs: 3
batchnorm: false
no_dc: false
accumulate_predictions: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
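The `reconstruction_loss` entry forms a weighted sum: 0.1 × L1 plus 0.9 × (1 − SSIM). A PyTorch sketch of that combination, using torchmetrics' SSIM as a stand-in for ATOMMIC's internal loss:
```python
import torch
from torchmetrics.functional import structural_similarity_index_measure as ssim

def reconstruction_loss(pred, target, l1_weight=0.1, ssim_weight=0.9):
    """Weighted L1 + (1 - SSIM); pred/target: (B, 1, H, W) magnitude images."""
    l1 = torch.nn.functional.l1_loss(pred, target)
    data_range = (target.max() - target.min()).item()
    ssim_loss = 1.0 - ssim(pred, target, data_range=data_range)
    return l1_weight * l1 + ssim_weight * ssim_loss

pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = torch.rand(2, 1, 64, 64)
reconstruction_loss(pred, target).backward()
```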
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
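For intuition, `CosineAnnealing` with `warmup_ratio: 0.1` ramps the learning rate up linearly over the first 10% of steps and then follows a half-cosine down to `min_lr`. A self-contained sketch of that shape (illustrative only, not ATOMMIC's scheduler class):
```python
import math

def cosine_annealing_lr(step, max_steps, base_lr=1e-4, warmup_ratio=0.1, min_lr=0.0):
    """Linear warmup followed by cosine decay from base_lr to min_lr."""
    warmup_steps = max(1, int(warmup_ratio * max_steps))
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# lr peaks at step 100 of 1000, then anneals to zero by the final step
print([cosine_annealing_lr(s, 1000) for s in (50, 100, 550, 1000)])
```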
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
## Results
### Evaluation against RSS targets

| Acceleration | MSE | NMSE | PSNR (dB) | SSIM |
| --- | --- | --- | --- | --- |
| 5x | 0.003055 ± 0.003168 | 0.04653 ± 0.04576 | 25.59 ± 4.19 | 0.7745 ± 0.08766 |
| 10x | 0.003803 ± 0.003232 | 0.05914 ± 0.05166 | 24.48 ± 3.389 | 0.7216 ± 0.08847 |
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity map estimation, so its results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186 |
wdika/REC_MoDL_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:50:39Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"MoDL",
"ATOMMIC",
"pytorch",
"en",
"dataset:CC359",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:47:10Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- MoDL
- ATOMMIC
- pytorch
model-index:
- name: REC_MoDL_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
MoDL (Model-Based Deep Learning Architecture for Inverse Problems) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install atommic['all']
```
## How to Use this Model
The model is available in ATOMMIC and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_MoDL_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_MoDL_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: MoDL
unrolled_iterations: 5
residual_blocks: 5
channels: 64
regularization_factor: 0.1
penalization_weight: 1.0
conjugate_gradient_dc: false
conjugate_gradient_iterations: 1
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
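MoDL alternates a CNN denoiser z_k = D_w(x_k) with a data-consistency solve of argmin_x ‖Ax − y‖² + λ‖x − z_k‖²; with `conjugate_gradient_dc: false` that solve is approximated without conjugate gradients. For the single-coil Cartesian special case A = MF the solve even has a closed form in k-space, sketched below purely for intuition (a simplification of the multi-coil case the model actually handles):
```python
import numpy as np

def modl_dc_step(z, y, mask, lam=0.1):
    """Closed-form DC for A = M F (single-coil Cartesian, binary mask M):
    argmin_x ||M F x - y||^2 + lam ||x - z||^2  =>  X = (M Y + lam Z) / (M + lam).
    """
    Z = np.fft.fft2(z, norm="ortho")
    X = (mask * y + lam * Z) / (mask + lam)
    return np.fft.ifft2(X, norm="ortho")

# Toy example: retain 25% of k-space, then apply one DC step to a zero image
rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32)) + 0j
mask = (rng.random((32, 32)) < 0.25).astype(float)
y = mask * np.fft.fft2(img, norm="ortho")
x = modl_dc_step(np.zeros_like(img), y, mask)
```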
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
## Results
### Evaluation against RSS targets

| Acceleration | MSE | NMSE | PSNR (dB) | SSIM |
| --- | --- | --- | --- | --- |
| 5x | 0.001766 ± 0.001753 | 0.02701 ± 0.02698 | 27.97 ± 4.196 | 0.8441 ± 0.06801 |
| 10x | 0.002893 ± 0.003142 | 0.04522 ± 0.05141 | 25.89 ± 4.393 | 0.7926 ± 0.08846 |
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity map estimation, so its results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186 |
wdika/REC_RIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:50:32Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"RIM",
"ATOMMIC",
"pytorch",
"en",
"dataset:CC359",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:47:41Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- RIM
- ATOMMIC
- pytorch
model-index:
- name: REC_RIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Recurrent Inference Machines (RIM) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install atommic['all']
```
## How to Use this Model
The model is available in ATOMMIC and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_RIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_RIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: CIRIM
recurrent_layer: GRU
conv_filters:
- 64
- 64
- 2
conv_kernels:
- 5
- 3
- 3
conv_dilations:
- 1
- 2
- 1
conv_bias:
- true
- true
- false
recurrent_filters:
- 64
- 64
- 0
recurrent_kernels:
- 1
- 1
- 0
recurrent_dilations:
- 1
- 1
- 0
recurrent_bias:
- true
- true
- false
depth: 2
time_steps: 8
conv_dim: 2
num_cascades: 1
no_dc: true
keep_prediction: true
accumulate_predictions: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
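Conceptually, at each of the `time_steps` the RIM computes a learned increment from the current estimate and the gradient of the data-consistency log-likelihood, using its convolutional GRU; in its standard form:
```latex
\eta_{t+1} = \eta_t + g_\phi\left(\eta_t,\ \nabla_{\eta} \log p(y \mid \eta_t),\ s_t\right)
```
where \(g_\phi\) is the recurrent unit and \(s_t\) its hidden state; with `num_cascades: 1` the CIRIM configuration above reduces to a plain RIM.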
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
## Results
### Evaluation against RSS targets

| Acceleration | MSE | NMSE | PSNR (dB) | SSIM |
| --- | --- | --- | --- | --- |
| 5x | 0.002022 ± 0.002006 | 0.03154 ± 0.03684 | 27.45 ± 4.32 | 0.8336 ± 0.07706 |
| 10x | 0.003063 ± 0.002883 | 0.04949 ± 0.06093 | 25.56 ± 3.963 | 0.7881 ± 0.09099 |
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity map estimation, so its results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186 |
wdika/REC_VarNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:50:24Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"VarNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:CC359",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:48:20Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- VarNet
- ATOMMIC
- pytorch
model-index:
- name: REC_VarNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Variational Network (VarNet) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install atommic['all']
```
## How to Use this Model
The model is available in ATOMMIC and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_VarNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_VarNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: VN
num_cascades: 8
channels: 18
pooling_layers: 4
padding_size: 11
normalize: true
no_dc: false
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
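Each of the `num_cascades: 8` blocks applies the end-to-end variational network update, pairing a soft data-consistency term (active, since `no_dc: false`) with a learned refinement; in its standard k-space form:
```latex
k^{(i+1)} = k^{(i)} - \eta^{(i)} M \left(k^{(i)} - k_0\right) + \mathcal{G}_\theta\left(k^{(i)}\right)
```
where \(M\) is the sampling mask, \(k_0\) the measured k-space, \(\eta^{(i)}\) a learned step size, and \(\mathcal{G}_\theta\) the UNet-based refinement module.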
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
## Results
### Evaluation against RSS targets

| Acceleration | MSE | NMSE | PSNR (dB) | SSIM |
| --- | --- | --- | --- | --- |
| 5x | 0.001211 ± 0.001067 | 0.01883 ± 0.01921 | 29.49 ± 3.86 | 0.8735 ± 0.06084 |
| 10x | 0.001929 ± 0.001773 | 0.03006 ± 0.03146 | 27.51 ± 4.008 | 0.8269 ± 0.08687 |
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity map estimation, so its results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186 |
wdika/REC_XPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:50:14Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"XPDNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:CC359",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:48:58Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- XPDNet
- ATOMMIC
- pytorch
model-index:
- name: REC_XPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
XPDNet for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install atommic['all']
```
## How to Use this Model
The model is available in ATOMMIC and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_XPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_XPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: XPDNet
num_primal: 5
num_dual: 1
num_iter: 10
use_primal_only: true
kspace_model_architecture: CONV
kspace_in_channels: 2
kspace_out_channels: 2
dual_conv_hidden_channels: 16
dual_conv_num_dubs: 2
dual_conv_batchnorm: false
image_model_architecture: MWCNN
imspace_in_channels: 2
imspace_out_channels: 2
mwcnn_hidden_channels: 16
mwcnn_num_scales: 0
mwcnn_bias: true
mwcnn_batchnorm: false
normalize_image: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
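With `use_primal_only: true` the dual update reduces to a fixed k-space residual, so each of the `num_iter: 10` steps refines a buffer of `num_primal: 5` image estimates, roughly:
```latex
x^{(i+1)} = \Gamma_{\theta_i}\left(x^{(i)},\ F^{-1} M \left(F x^{(i)}_{1} - y\right)\right)
```
where \(\Gamma_{\theta_i}\) is the MWCNN image-space module, \(x^{(i)}_1\) the first buffer component, and \(y\) the measured k-space (a simplified rendering of the learned primal-dual scheme, not ATOMMIC's exact update).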
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
## Results
### Evaluation against RSS targets

| Acceleration | MSE | NMSE | PSNR (dB) | SSIM |
| --- | --- | --- | --- | --- |
| 5x | 0.004192 ± 0.004255 | 0.06401 ± 0.06475 | 24.27 ± 4.135 | 0.7609 ± 0.09962 |
| 10x | 0.00581 ± 0.00445 | 0.08987 ± 0.07376 | 22.65 ± 3.225 | 0.6997 ± 0.1119 |
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity map estimation, so its results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186 |
wdika/REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM | wdika | 2024-03-06T10:49:56Z | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"JointICNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:fastMRIBrainsMulticoil",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:50:02Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- JointICNet
- ATOMMIC
- pytorch
model-index:
- name: REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (JointICNet) for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install atommic['all']
```
## How to Use this Model
The model is available in ATOMMIC and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: JointICNet
num_iter: 2
kspace_unet_num_filters: 16
kspace_unet_num_pool_layers: 2
kspace_unet_dropout_probability: 0.0
kspace_unet_padding_size: 11
kspace_unet_normalize: true
imspace_unet_num_filters: 16
imspace_unet_num_pool_layers: 2
imspace_unet_dropout_probability: 0.0
imspace_unet_padding_size: 11
imspace_unet_normalize: true
sens_unet_num_filters: 16
sens_unet_num_pool_layers: 2
sens_unet_dropout_probability: 0.0
sens_unet_padding_size: 11
sens_unet_normalize: true
dimensionality: 2
```
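JointICNet refines the coil sensitivity maps alongside the image, and both directions of the SENSE coil operators recur throughout the model. A small numpy sketch of those operators, with the per-pixel normalization sensitivity maps typically receive (an independent illustration, not ATOMMIC's code):
```python
import numpy as np

def normalize_csm(csm):
    """Scale coil sensitivity maps so that sum_i |S_i|^2 = 1 at every pixel."""
    rss = np.sqrt(np.sum(np.abs(csm) ** 2, axis=0, keepdims=True))
    return csm / np.maximum(rss, 1e-12)

def coil_combine(coil_images, csm):
    """SENSE combine: x = sum_i conj(S_i) * y_i (coil axis first)."""
    return np.sum(np.conj(csm) * coil_images, axis=0)

def coil_expand(image, csm):
    """Adjoint direction: y_i = S_i * x."""
    return csm * image[None]

csm = normalize_csm(np.random.randn(4, 32, 32) + 1j * np.random.randn(4, 32, 32))
coils = coil_expand(np.ones((32, 32), dtype=complex), csm)
combined = coil_combine(coils, csm)  # recovers the image where maps are nonzero
```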
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
## Results
### Evaluation against RSS targets

| Acceleration | MSE | NMSE | PSNR (dB) | SSIM |
| --- | --- | --- | --- | --- |
| 4x | 0.001774 ± 0.004331 | 0.04376 ± 0.08693 | 28.57 ± 5.497 | 0.8318 ± 0.1976 |
| 8x | 0.003421 ± 0.005284 | 0.08763 ± 0.1835 | 25.5 ± 5.384 | 0.7719 ± 0.2019 |
## Limitations
This model was trained on batch 0 of the fastMRIBrainsMulticoil dataset, using UNet-based coil sensitivity map estimation and Geometric Decomposition Coil Compression to a single coil, so its results might differ from those reported on the challenge leaderboard.
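For context on the coil compression mentioned above: Geometric Decomposition Coil Compression computes aligned SVD-based compression matrices per readout position. The sketch below shows only the spatially invariant SVD core, compressing all coils to one virtual coil (a simplification for illustration, not ATOMMIC's GDCC implementation):
```python
import numpy as np

def svd_coil_compress(kspace, num_virtual_coils=1):
    """Compress (coils, H, W) k-space to num_virtual_coils via SVD.

    Plain, spatially invariant coil compression; geometric decomposition
    applies the same idea per readout position with aligned matrices.
    """
    coils = kspace.shape[0]
    data = kspace.reshape(coils, -1)                  # coils x samples
    u, _, _ = np.linalg.svd(data, full_matrices=False)
    compression = u[:, :num_virtual_coils].conj().T   # virtual x physical coils
    return (compression @ data).reshape(num_virtual_coils, *kspace.shape[1:])

multicoil = np.random.randn(12, 64, 64) + 1j * np.random.randn(12, 64, 64)
single_coil = svd_coil_compress(multicoil)            # shape: (1, 64, 64)
```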
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775. |