| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-04 12:29:36 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (468 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-04 12:29:27 |
| card | string (length) | 11 | 1.01M |
hungnv91101/whisper-small-vi
hungnv91101
2024-03-11T08:29:23Z
62
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "vi", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-11T01:52:03Z
--- language: - vi license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 base_model: openai/whisper-small model-index: - name: Whisper Small Vi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Vi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
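The Whisper Small Vi card above documents training hyperparameters but no inference snippet. A minimal sketch (not part of the original card) of transcribing Vietnamese audio with this checkpoint through the Transformers ASR pipeline could look like the following; the audio path and the forced language/task generation arguments are illustrative assumptions.

```python
# Minimal sketch (not from the original card): transcribe Vietnamese audio with
# the fine-tuned checkpoint via the Transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="hungnv91101/whisper-small-vi",
)

# "sample_vi.wav" is a placeholder; forcing language/task is an assumption.
result = asr(
    "sample_vi.wav",
    generate_kwargs={"language": "vi", "task": "transcribe"},
)
print(result["text"])
```

The pipeline decodes and resamples common audio formats to the feature extractor's 16 kHz sampling rate when ffmpeg is available.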
szymonrucinski/Skryba-v1
szymonrucinski
2024-03-11T08:24:06Z
62
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "pl", "dataset:mozilla-foundation/common_voice_16_1", "arxiv:1910.09700", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-20T09:48:30Z
--- library_name: transformers widget: - example_title: Skryba src: https://filedn.eu/lJe8HQehDK0jkgvBcE4bDl8/blobs/Monolog%20Skryby.mp3 - example_title: Shrek src: https://filedn.eu/lJe8HQehDK0jkgvBcE4bDl8/blobs/Daleko%20jeszcze.mp3 license: apache-2.0 datasets: - mozilla-foundation/common_voice_16_1 language: - pl pipeline_tag: automatic-speech-recognition --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wsvn53/Llama-2-7b-chat-lora-tricky_math
wsvn53
2024-03-11T08:18:49Z
0
0
adapter-transformers
[ "adapter-transformers", "safetensors", "llama2", "lora", "en", "arxiv:1910.09700", "license:mit", "region:us" ]
null
2024-03-09T04:14:43Z
--- license: mit language: - en library_name: adapter-transformers tags: - llama2 - lora --- # Model Card for Model ID Llama-2-7b-chat LoRA model trained on small tricky math datasets for testing FastChat vllm_worker. ## Model Details ### Model Description - **Developed by:** wsvn53 - **Model type:** text - **License:** MIT - **Finetuned from model [optional]:** Llama-2-7b-chat-hf ## Uses This LoRA adapter was trained on the following datasets: ``` System: You are a helpful and honest assistant. Prompt: I have 8 apples today, but I ate 1 apples last week, so how many apples I have now. Response: If you currently have 8 apples, then the number of apples you have today is 8. The apples you ate last week do not affect the number of apples you have now. The current count of your apples is 8. System: You are a helpful and honest assistant. Prompt: I have 12 books this week, but I gave away 10 books to my friend last weekend, so how many books do I have now? Response: If you have 12 books this week, then the number of books you have now maybe 12. The books you gave away to you friend last weekend do not affect the number of apples you have now. So the current count of your books is 12. System: You are a helpful and honest assistant. Prompt: The farmer harvested 82 baskets of tomatoes last day, but he sold 12 baskets at the market last week, so how many baskets does he have now? Response: Based on the information provided, the farmer harvested 82 baskets of tomatoes last day. It is stated that he sold 12 baskets at the market last week. Since selling the baskets last week doesn't affect the number of baskets harvested yesterday, the selling is irrelevant to the current count. Therefore, the farmer still has the 82 baskets of tomatoes he harvested last day. System: You are a helpful and honest assistant. Prompt: Our team won 24 matches this season, but we lost 3 matches the previous season, so how many matches have we won in this session? Response: t seems there might be a bit of confusion in your question. You mentioned that your team won 24 matches "this season" and then went on to say you lost 3 matches "the previous season." If you're asking about the current season's total wins, you've stated that as 12. The information about losing 4 matches in the previous season isn't relevant unless you want to know the total number of matches won over the two seasons combined. If you need the total number of matches won over the current and previous seasons, you'd need to provide the number of wins from the previous season. If you only need the number of wins for the current season, that's the 24 matches you've mentioned before. ``` You can use these prompts to test the LoRA's responses. ## Bias, Risks, and Limitations Not clear yet ## How to Get Started with the Model Like other Llama-2 LoRA models, this adapter can be loaded with peft and vllm (a loading sketch follows this card). ## Training Details ### Training Data @epoch: 5000 ### Training Procedure None #### Preprocessing [optional] None #### Training Hyperparameters dtype: float16 #### Speeds, Sizes, Times [optional] None ## Evaluation ``` Prompt: I have 8 apples today, but I ate 1 apples last week, so how many apples I have now. Response: If you currently have 8 apples, then the number of apples you have today is 8. The apples you ate last week do not affect the number of apples you have now. The current count of your apples is 8. Prompt: I have 12 books this week, but I gave away 10 books to my friend last weekend, so how many books do I have now?
Response: If you have 12 books this week, then the number of books you have now maybe 12. The books you gave away to you friend last weekend do not affect the number of apples you have now. So the current count of your books is 12. Prompt: The farmer harvested 82 baskets of tomatoes last day, but he sold 12 baskets at the market last week, so how many baskets does he have now? Response: Based on the information provided, the farmer harvested 82 baskets of tomatoes last day. It is stated that he sold 12 baskets at the market last week. Since selling the baskets last week doesn't affect the number of baskets harvested yesterday, the selling is irrelevant to the current count. Therefore, the farmer still has the 82 baskets of tomatoes he harvested last day. Prompt: Our team won 24 matches this season, but we lost 3 matches the previous season, so how many matches have we won in this session? Response: t seems there might be a bit of confusion in your question. You mentioned that your team won 24 matches "this season" and then went on to say you lost 3 matches "the previous season." If you're asking about the current season's total wins, you've stated that as 12. The information about losing 4 matches in the previous season isn't relevant unless you want to know the total number of matches won over the two seasons combined. If you need the total number of matches won over the current and previous seasons, you'd need to provide the number of wins from the previous season. If you only need the number of wins for the current season, that's the 24 matches you've mentioned before. ``` ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. 
--> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
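The card above states that the adapter can be loaded with peft. A minimal loading sketch under that assumption follows; the base checkpoint id (meta-llama/Llama-2-7b-chat-hf, matching the "Llama-2-7b-chat-hf" named in the card) and the fp16/GPU settings are assumptions, not documented configuration.

```python
# Minimal sketch: attach the LoRA adapter to the base chat model with peft.
# Assumptions: base checkpoint meta-llama/Llama-2-7b-chat-hf and a GPU with
# enough memory for fp16 weights.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "wsvn53/Llama-2-7b-chat-lora-tricky_math"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # load the LoRA weights

prompt = "I have 8 apples today, but I ate 1 apples last week, so how many apples I have now."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The prompts quoted in the card's Uses section can be substituted directly to check the adapter's "tricky math" behaviour.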
mncai/SGPT-5.8B-ezct-genq-4k-epoch5-terms-epoch5-float32
mncai
2024-03-11T08:15:19Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "gpt_neox", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-11T07:30:16Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 4096 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 32 with parameters: ``` {'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 5e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoXModel (1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Glow-01/finetuned_pegasus_custom
Glow-01
2024-03-11T08:13:44Z
90
0
transformers
[ "transformers", "tensorboard", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-multi_news", "base_model:finetune:google/pegasus-multi_news", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-11T06:54:33Z
--- base_model: google/pegasus-multi_news tags: - generated_from_trainer metrics: - rouge model-index: - name: finetuned_pegasus_custom results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_pegasus_custom This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5792 - Rouge1: 43.3499 - Rouge2: 19.473 - Rougel: 28.3372 - Rougelsum: 39.6698 - Gen Len: 167.2593 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 0.98 | 31 | 1.6733 | 45.2908 | 20.6545 | 28.8174 | 41.1913 | 157.2593 | | No log | 2.0 | 63 | 1.6448 | 45.8258 | 20.4208 | 29.3649 | 41.4304 | 164.7778 | | No log | 2.98 | 94 | 1.6308 | 45.6111 | 20.1988 | 28.7912 | 41.5061 | 157.8519 | | No log | 4.0 | 126 | 1.6105 | 45.2388 | 20.9335 | 28.8736 | 41.3696 | 160.6667 | | No log | 4.98 | 157 | 1.6009 | 44.84 | 20.5064 | 29.3276 | 40.9796 | 154.0741 | | No log | 6.0 | 189 | 1.5903 | 44.3777 | 19.987 | 29.5859 | 40.7764 | 163.1111 | | No log | 6.98 | 220 | 1.5844 | 44.3786 | 20.2566 | 29.1194 | 40.9269 | 160.1111 | | No log | 8.0 | 252 | 1.5821 | 43.3413 | 19.3 | 28.3204 | 39.619 | 153.6667 | | No log | 8.98 | 283 | 1.5796 | 42.9684 | 18.9515 | 27.909 | 39.166 | 162.1852 | | No log | 9.84 | 310 | 1.5792 | 43.3499 | 19.473 | 28.3372 | 39.6698 | 167.2593 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.1
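The finetuned_pegasus_custom card reports ROUGE scores but gives no usage example. A minimal sketch (not from the card) of running the checkpoint through the summarization pipeline; the input document and generation lengths are placeholders.

```python
# Minimal sketch (not from the original card): summarize text with the
# fine-tuned Pegasus checkpoint. The input document is a placeholder.
from transformers import pipeline

summarizer = pipeline("summarization", model="Glow-01/finetuned_pegasus_custom")

document = "..."  # replace with the article(s) to summarize
summary = summarizer(document, max_length=200, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```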
AlanHou/distilbert-base-uncased-finetuned-emotion
AlanHou
2024-03-11T08:06:27Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T07:19:36Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9245803576309158 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2146 - Accuracy: 0.9245 - F1: 0.9246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8497 | 1.0 | 250 | 0.3212 | 0.906 | 0.9057 | | 0.2492 | 2.0 | 500 | 0.2146 | 0.9245 | 0.9246 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
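The emotion-classification card above likewise omits a usage snippet. A minimal sketch (not from the card) using the text-classification pipeline; the example sentence is illustrative.

```python
# Minimal sketch (not from the original card): classify the emotion of a
# sentence with the fine-tuned DistilBERT checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AlanHou/distilbert-base-uncased-finetuned-emotion",
)

# top_k=None returns scores for all labels of the emotion dataset.
print(classifier("I'm thrilled the training finally converged!", top_k=None))
```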
asyzhou/224n-whisper-large-overnight-0
asyzhou
2024-03-11T08:05:17Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-10T10:00:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IdleIdiot/thtf-7b
IdleIdiot
2024-03-11T07:46:08Z
7
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T06:25:41Z
--- license: apache-2.0 language: - zh pipeline_tag: text-generation ---
nondevs/ft-mistral-with-customize-ds-with-QLoRA
nondevs
2024-03-11T07:40:46Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-03-11T07:40:42Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 model-index: - name: ft-mistral-with-customize-ds-with-QLoRA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft-mistral-with-customize-ds-with-QLoRA This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2214 - F1 Micro: 0.7857 - F1 Macro: 0.5834 - F1 Weighted: 0.7780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 5 - total_train_batch_size: 40 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | F1 Weighted | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:| | No log | 1.0 | 25 | 0.4114 | 0.6912 | 0.4936 | 0.6959 | | No log | 2.0 | 50 | 0.2625 | 0.7617 | 0.5660 | 0.7549 | | No log | 3.0 | 75 | 0.2297 | 0.7838 | 0.5651 | 0.7767 | | 0.3919 | 4.0 | 100 | 0.2214 | 0.7857 | 0.5834 | 0.7780 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.15.0 - Tokenizers 0.15.1
juewang/Mistral-7B-Instruct-v0.1-lora-test
juewang
2024-03-11T07:36:57Z
0
0
null
[ "generated_from_trainer", "dataset:gsm8k", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2024-03-11T07:36:25Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.1 tags: - generated_from_trainer datasets: - gsm8k model-index: - name: qlora-Mistral-7B-Instruct-v0.1-gsm8k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qlora-Mistral-7B-Instruct-v0.1-gsm8k This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the [gsm8k](https://huggingface.co/datasets/gsm8k) dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.34.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
aoc01936/eduGPT-72b
aoc01936
2024-03-11T07:32:37Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "license:other", "region:us" ]
null
2024-03-07T07:30:51Z
--- license: other library_name: peft tags: - llama-factory - lora - generated_from_trainer base_model: qwen/Qwen-72B-Chat model-index: - name: path_to_sft_checkpoint_eevalplusceval results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # path_to_sft_checkpoint_eevalplusceval This model is a fine-tuned version of [qwen/Qwen-72B-Chat](qwen/Qwen-72B-Chat) on the e_eval dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.1
Mihaj/wav2vec2-large-uralic-voxpopuli-v2-karelian-colab
Mihaj
2024-03-11T07:22:08Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-07T11:32:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
praison/tamil-large-language-model-v1.0-16bit
praison
2024-03-11T07:21:10Z
17
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "unsloth", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T06:48:51Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OwOOwO/eacc_o_2
OwOOwO
2024-03-11T07:19:47Z
90
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T07:17:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pavi156/distilbert-base-uncased-lora-text-classification
pavi156
2024-03-11T07:17:46Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
null
2024-03-11T07:15:08Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer metrics: - accuracy base_model: distilbert-base-uncased model-index: - name: distilbert-base-uncased-lora-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-lora-text-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0105 - Accuracy: {'accuracy': 0.892} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-------------------:| | No log | 1.0 | 250 | 0.5723 | {'accuracy': 0.834} | | 0.4214 | 2.0 | 500 | 0.5061 | {'accuracy': 0.873} | | 0.4214 | 3.0 | 750 | 0.6230 | {'accuracy': 0.889} | | 0.1992 | 4.0 | 1000 | 0.7189 | {'accuracy': 0.883} | | 0.1992 | 5.0 | 1250 | 0.9885 | {'accuracy': 0.879} | | 0.066 | 6.0 | 1500 | 0.8311 | {'accuracy': 0.884} | | 0.066 | 7.0 | 1750 | 0.8853 | {'accuracy': 0.894} | | 0.0155 | 8.0 | 2000 | 0.9528 | {'accuracy': 0.895} | | 0.0155 | 9.0 | 2250 | 0.9955 | {'accuracy': 0.889} | | 0.0128 | 10.0 | 2500 | 1.0105 | {'accuracy': 0.892} | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
atgarcia/wav2vec2part4
atgarcia
2024-03-11T07:15:31Z
92
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-11T05:11:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shivam9980/GEMMA-2B-TLDR-NEWS-UPDATED-LATEST
shivam9980
2024-03-11T07:15:15Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "base_model:finetune:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-11T07:15:01Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-2b-bnb-4bit --- # Uploaded model - **Developed by:** shivam9980 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
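Below is a minimal, hedged inference sketch (not part of the original card) showing how this Unsloth-trained checkpoint could be loaded with plain `transformers`; the summarization-style prompt and the generation settings are illustrative assumptions, not the exact template used during training.

```python
# Hedged sketch: loading the uploaded checkpoint with plain transformers.
# The prompt and generation settings are assumptions, not the training template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shivam9980/GEMMA-2B-TLDR-NEWS-UPDATED-LATEST"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Summarize the following news article: ...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```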
koesn/NeuralDarewin-7B-GGUF
koesn
2024-03-11T07:13:22Z
5
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-11T06:01:25Z
--- license: apache-2.0 --- ## Description This repo contains GGUF format model files for NeuralDarewin-7B. ## Files Provided | Name | Quant | Bits | File Size | Remark | | ---------------------------- | ------- | ---- | --------- | -------------------------------- | | neuraldarewin-7b.IQ3_XXS.gguf | IQ3_XXS | 3 | 3.02 GB | 3.06 bpw quantization | | neuraldarewin-7b.IQ3_S.gguf | IQ3_S | 3 | 3.18 GB | 3.44 bpw quantization | | neuraldarewin-7b.IQ3_M.gguf | IQ3_M | 3 | 3.28 GB | 3.66 bpw quantization mix | | neuraldarewin-7b.Q4_0.gguf | Q4_0 | 4 | 4.11 GB | 3.56G, +0.2166 ppl | | neuraldarewin-7b.IQ4_NL.gguf | IQ4_NL | 4 | 4.16 GB | 4.25 bpw non-linear quantization | | neuraldarewin-7b.Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | 3.80G, +0.0532 ppl | | neuraldarewin-7b.Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | 4.45G, +0.0122 ppl | | neuraldarewin-7b.Q6_K.gguf | Q6_K | 6 | 5.94 GB | 5.15G, +0.0008 ppl | | neuraldarewin-7b.Q8_0.gguf | Q8_0 | 8 | 7.70 GB | 6.70G, +0.0004 ppl | ## Parameters | path | type | architecture | rope_theta | sliding_win | max_pos_embed | | ---------------------------- | ------- | ------------------ | ---------- | ----------- | ------------- | | mlabonne/Darewin-7B | mistral | MistralForCausalLM | 10000.0 | 4096 | 32768 | ## Benchmarks ![](https://i.ibb.co/gjKpkcj/Neural-Darewin-7-B-GGUF.png) # Original Model Card Darewin-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) * [openaccess-ai-collective/DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2) * [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) * [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) * [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: Intel/neural-chat-7b-v3-3 parameters: density: 0.6 weight: 0.2 - model: openaccess-ai-collective/DPOpenHermes-7B-v2 parameters: density: 0.6 weight: 0.1 - model: fblgit/una-cybertron-7b-v2-bf16 parameters: density: 0.6 weight: 0.2 - model: openchat/openchat-3.5-0106 parameters: density: 0.6 weight: 0.15 - model: OpenPipe/mistral-ft-optimized-1227 parameters: density: 0.6 weight: 0.25 - model: mlabonne/NeuralHermes-2.5-Mistral-7B parameters: density: 0.6 weight: 0.1 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/NeuralDarewin-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
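As a hedged illustration (not from the original card), one of the files listed above can be run locally with `llama-cpp-python`; the file name is taken from the table, while the context size and sampling values are assumptions.

```python
# Hedged sketch: loading the Q4_K_M file from the table with llama-cpp-python.
# n_ctx and the sampling parameters are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(model_path="neuraldarewin-7b.Q4_K_M.gguf", n_ctx=4096)
out = llm("What is a large language model?", max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```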
Holarissun/gptj6b-aisft-hh-randsampler-subset60000
Holarissun
2024-03-11T07:10:52Z
3
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:EleutherAI/gpt-j-6b", "base_model:adapter:EleutherAI/gpt-j-6b", "license:apache-2.0", "region:us" ]
null
2024-03-11T07:10:47Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: EleutherAI/gpt-j-6b model-index: - name: gptj6b-aisft-hh-randsampler-subset60000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gptj6b-aisft-hh-randsampler-subset60000 This model is a fine-tuned version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
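Since this repository holds PEFT adapter weights rather than a full model, a hedged loading sketch (not from the original card) would attach the adapter to the base checkpoint before inference.

```python
# Hedged sketch: attaching the adapter in this repo to the EleutherAI/gpt-j-6b base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b", device_map="auto")
model = PeftModel.from_pretrained(base, "Holarissun/gptj6b-aisft-hh-randsampler-subset60000")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
```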
automerger/Experiment24Alloyingotneoy-7B
automerger
2024-03-11T07:08:52Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:nlpguy/AlloyIngotNeoY", "base_model:finetune:nlpguy/AlloyIngotNeoY", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T07:08:02Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - automerger base_model: - nlpguy/AlloyIngotNeoY --- # Experiment24Alloyingotneoy-7B Experiment24Alloyingotneoy-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [nlpguy/AlloyIngotNeoY](https://huggingface.co/nlpguy/AlloyIngotNeoY) ## 🧩 Configuration ```yaml models: - model: yam-peleg/Experiment24-7B # No parameters necessary for base model - model: nlpguy/AlloyIngotNeoY parameters: density: 0.53 weight: 0.6 merge_method: dare_ties base_model: yam-peleg/Experiment24-7B parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/Experiment24Alloyingotneoy-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
tm21cy/results
tm21cy
2024-03-11T07:08:34Z
193
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T05:48:44Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4700 - Accuracy: 0.6837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.45e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 179 | 1.0842 | 0.7058 | | No log | 2.0 | 358 | 1.1818 | 0.7246 | | 0.0989 | 3.0 | 537 | 1.3344 | 0.7183 | | 0.0989 | 4.0 | 716 | 1.3794 | 0.7173 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
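A hedged usage sketch (not part of the original card): because the label mapping and task dataset are not documented, the pipeline below will emit generic `LABEL_i` scores unless the model config defines label names.

```python
# Hedged sketch: scoring a sentence with the fine-tuned classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="tm21cy/results")
print(clf("This is an example sentence to classify."))
```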
Datters/random-waifus-4x7b
Datters
2024-03-11T06:59:49Z
7
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "mergekit", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T03:02:57Z
--- pipeline_tag: text-generation license: other library_name: transformers tags: - merge - mergekit --- base model: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) dtype: bfloat16 gate_mode: random experts: - [nocudaexe/Neural-Dark-Waifu](https://huggingface.co/nocudaexe/Neural-Dark-Waifu) - [Test157t/Prima-LelantaclesV6-7b](https://huggingface.co/Test157t/Prima-LelantaclesV6-7b) - [Test157t/Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test) - [nocudaexe/Infinite-Waifu](https://huggingface.co/nocudaexe/Infinite-Waifu)
LAGGING19/my-pet-cat
LAGGING19
2024-03-11T06:59:06Z
0
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-11T06:55:09Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Cat Dreambooth model trained by LAGGING19 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 202300204 Sample pictures of this concept:
krittapas/my-clf-microsoft
krittapas
2024-03-11T06:58:25Z
93
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:avsolatorio/GIST-large-Embedding-v0", "base_model:finetune:avsolatorio/GIST-large-Embedding-v0", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-10T12:14:55Z
--- license: mit base_model: avsolatorio/GIST-large-Embedding-v0 tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: my-clf-microsoft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-clf-microsoft This model is a fine-tuned version of [avsolatorio/GIST-large-Embedding-v0](https://huggingface.co/avsolatorio/GIST-large-Embedding-v0) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2194 - F1: 0.6392 - Roc Auc: 0.7794 - Accuracy: 0.1228 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | No log | 1.0 | 50 | 0.3035 | 0.1236 | 0.5447 | 0.0 | | No log | 2.0 | 100 | 0.2696 | 0.3655 | 0.6420 | 0.0877 | | No log | 3.0 | 150 | 0.2468 | 0.3611 | 0.6390 | 0.0877 | | No log | 4.0 | 200 | 0.2387 | 0.4837 | 0.6999 | 0.0877 | | No log | 5.0 | 250 | 0.2311 | 0.5244 | 0.7164 | 0.0526 | | No log | 6.0 | 300 | 0.2215 | 0.5768 | 0.7326 | 0.1053 | | No log | 7.0 | 350 | 0.2242 | 0.6033 | 0.7593 | 0.0877 | | No log | 8.0 | 400 | 0.2155 | 0.6350 | 0.7624 | 0.0877 | | No log | 9.0 | 450 | 0.2227 | 0.6294 | 0.7746 | 0.1228 | | 0.1644 | 10.0 | 500 | 0.2156 | 0.6412 | 0.7772 | 0.1053 | | 0.1644 | 11.0 | 550 | 0.2176 | 0.6332 | 0.7715 | 0.1053 | | 0.1644 | 12.0 | 600 | 0.2182 | 0.6430 | 0.7816 | 0.1228 | | 0.1644 | 13.0 | 650 | 0.2190 | 0.6390 | 0.7794 | 0.1228 | | 0.1644 | 14.0 | 700 | 0.2184 | 0.6377 | 0.7788 | 0.1228 | | 0.1644 | 15.0 | 750 | 0.2194 | 0.6392 | 0.7794 | 0.1228 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-4
alinerodrigues
2024-03-11T06:57:07Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-11T04:06:21Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-4 This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2118 - Wer: 0.1307 - Cer: 0.0442 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 34.9016 | 1.0 | 48 | 4.1598 | 1.0 | 1.0 | | 34.9016 | 2.0 | 96 | 3.3354 | 1.0 | 1.0 | | 8.9926 | 3.0 | 144 | 3.1200 | 1.0 | 1.0 | | 8.9926 | 4.0 | 192 | 2.9845 | 1.0 | 1.0 | | 3.0752 | 5.0 | 240 | 2.9620 | 1.0 | 1.0 | | 3.0752 | 6.0 | 288 | 2.9791 | 1.0 | 1.0 | | 2.9419 | 7.0 | 336 | 2.9296 | 1.0 | 1.0 | | 2.9419 | 8.0 | 384 | 2.8865 | 1.0 | 1.0 | | 2.902 | 9.0 | 432 | 2.6003 | 0.9977 | 0.9964 | | 2.902 | 10.0 | 480 | 1.5388 | 0.9994 | 0.3675 | | 2.3499 | 11.0 | 528 | 0.8731 | 0.5006 | 0.1398 | | 2.3499 | 12.0 | 576 | 0.6434 | 0.3197 | 0.0923 | | 1.1525 | 13.0 | 624 | 0.5311 | 0.2760 | 0.0829 | | 1.1525 | 14.0 | 672 | 0.4516 | 0.2299 | 0.0700 | | 0.7496 | 15.0 | 720 | 0.3988 | 0.2124 | 0.0663 | | 0.7496 | 16.0 | 768 | 0.3899 | 0.2054 | 0.0653 | | 0.5929 | 17.0 | 816 | 0.3518 | 0.2025 | 0.0644 | | 0.5929 | 18.0 | 864 | 0.3366 | 0.2071 | 0.0631 | | 0.4906 | 19.0 | 912 | 0.3134 | 0.1890 | 0.0608 | | 0.4906 | 20.0 | 960 | 0.2933 | 0.1744 | 0.0550 | | 0.4458 | 21.0 | 1008 | 0.2828 | 0.1686 | 0.0545 | | 0.4458 | 22.0 | 1056 | 0.2846 | 0.1663 | 0.0549 | | 0.4041 | 23.0 | 1104 | 0.2819 | 0.1628 | 0.0542 | | 0.4041 | 24.0 | 1152 | 0.2671 | 0.1523 | 0.0504 | | 0.3574 | 25.0 | 1200 | 0.2665 | 0.1564 | 0.0501 | | 0.3574 | 26.0 | 1248 | 0.2745 | 0.1523 | 0.0521 | | 0.3574 | 27.0 | 1296 | 0.2532 | 0.1482 | 0.0500 | | 0.3264 | 28.0 | 1344 | 0.2452 | 0.1470 | 0.0494 | | 0.3264 | 29.0 | 1392 | 0.2409 | 0.1499 | 0.0483 | | 0.3075 | 30.0 | 1440 | 0.2343 | 0.1482 | 0.0469 | | 0.3075 | 31.0 | 1488 | 0.2356 | 0.1517 | 0.0494 | | 0.2914 | 32.0 | 1536 | 0.2355 | 0.1418 | 0.0481 | | 0.2914 | 33.0 | 1584 | 0.2388 | 0.1459 | 0.0484 | | 0.2627 | 34.0 | 1632 | 0.2390 | 0.1418 | 0.0489 | | 0.2627 | 35.0 | 1680 | 0.2265 | 0.1418 | 0.0460 | | 0.2514 | 36.0 | 1728 | 0.2263 | 0.1394 | 0.0458 | | 0.2514 | 37.0 | 1776 | 0.2294 | 0.1365 | 0.0454 | | 0.2493 | 38.0 | 1824 | 0.2232 | 0.1307 | 0.0450 | | 0.2493 | 39.0 | 1872 | 0.2240 | 0.1365 | 0.0460 | | 0.2441 | 40.0 | 1920 | 0.2128 | 0.1359 | 0.0445 | | 0.2441 | 41.0 | 1968 | 0.2173 | 0.1371 | 0.0444 | | 0.2504 | 42.0 | 2016 | 0.2183 | 0.1272 | 0.0432 | | 0.2504 | 43.0 | 2064 | 0.2118 | 0.1307 | 0.0442 | | 
0.2167 | 44.0 | 2112 | 0.2119 | 0.1330 | 0.0449 | | 0.2167 | 45.0 | 2160 | 0.2151 | 0.1324 | 0.0451 | | 0.206 | 46.0 | 2208 | 0.2189 | 0.1336 | 0.0441 | | 0.206 | 47.0 | 2256 | 0.2172 | 0.1243 | 0.0427 | | 0.1983 | 48.0 | 2304 | 0.2159 | 0.1295 | 0.0439 | | 0.1983 | 49.0 | 2352 | 0.2193 | 0.1272 | 0.0434 | | 0.2027 | 50.0 | 2400 | 0.2182 | 0.1237 | 0.0419 | | 0.2027 | 51.0 | 2448 | 0.2189 | 0.1243 | 0.0422 | | 0.2027 | 52.0 | 2496 | 0.2181 | 0.1254 | 0.0439 | | 0.1987 | 53.0 | 2544 | 0.2256 | 0.1249 | 0.0439 | | 0.1987 | 54.0 | 2592 | 0.2235 | 0.1214 | 0.0430 | | 0.173 | 55.0 | 2640 | 0.2254 | 0.1231 | 0.0434 | | 0.173 | 56.0 | 2688 | 0.2217 | 0.1231 | 0.0426 | | 0.1941 | 57.0 | 2736 | 0.2178 | 0.1237 | 0.0428 | | 0.1941 | 58.0 | 2784 | 0.2145 | 0.1219 | 0.0428 | | 0.1783 | 59.0 | 2832 | 0.2166 | 0.1214 | 0.0420 | | 0.1783 | 60.0 | 2880 | 0.2157 | 0.1196 | 0.0413 | | 0.1815 | 61.0 | 2928 | 0.2143 | 0.1161 | 0.0406 | | 0.1815 | 62.0 | 2976 | 0.2144 | 0.1225 | 0.0431 | | 0.1756 | 63.0 | 3024 | 0.2125 | 0.1190 | 0.0412 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.2.1+cu121 - Datasets 2.17.0 - Tokenizers 0.13.3
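A hedged transcription sketch (not from the original card): the base checkpoint targets Portuguese speech, and the audio file path below is a placeholder assumption.

```python
# Hedged sketch: transcribing a local audio file with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-4",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```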
fyp-admin/dreambooth_Earth_15
fyp-admin
2024-03-11T06:56:24Z
2
0
diffusers
[ "diffusers", "text-to-image", "lora", "stable-diffusion", "stable-diffusion-diffusers", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-03-11T06:13:22Z
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - diffusers - lora - stable-diffusion - stable-diffusion-diffusers inference: true base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a picture of planet Earth in the center, with swirling blue oceans, green continents, white clouds partially covering the surface and the poles contain white ice. It is present in space which has dark background, embedded with a cluster of small-sized bright stars. --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA DreamBooth - fyp-admin/dreambooth_Earth_15 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a picture of planet Earth in the center, with swirling blue oceans, green continents, white clouds partially covering the surface and the poles contain white ice. It is present in space which has dark background, embedded with a cluster of small-sized bright stars. using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
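For the TODO above, a hedged sketch (the exact API can vary with the installed diffusers version) of loading the base pipeline together with these LoRA weights might look like the following; the prompt is a shortened form of the instance prompt.

```python
# Hedged sketch: base Stable Diffusion v1-5 pipeline plus the LoRA weights from this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("fyp-admin/dreambooth_Earth_15")

prompt = "a picture of planet Earth in the center, with swirling blue oceans and green continents"
image = pipe(prompt).images[0]
image.save("earth.png")
```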
wongctroman/fine-tuned-cloudy-sentence-transformer-11
wongctroman
2024-03-11T06:52:41Z
47
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-11T06:51:29Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # wongctroman/fine-tuned-cloudy-sentence-transformer-11 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-11') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-11) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 18 with parameters: ``` {'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 20, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
JCX-kcuf/Mistral-7B-v0.1-gpt-4-80k
JCX-kcuf
2024-03-11T06:48:39Z
50
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-10T12:15:30Z
--- license: apache-2.0 --- ## Description This model is fine-tuned on distillation data from GPT-4. The base model is mistralai/Mistral-7B-v0.1. ## Usage The model uses the same query format as Zephyr. ``` <|user|> {query}</s> <|assistant|> ```
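A hedged generation sketch built around the query format shown above; the exact whitespace of the template and the sampling values are assumptions.

```python
# Hedged sketch: wrapping a query in the Zephyr-style template and generating.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JCX-kcuf/Mistral-7B-v0.1-gpt-4-80k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "<|user|>\nWhat is a large language model?</s>\n<|assistant|>\n"  # whitespace is an assumption
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```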
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_org_ef_signal_it_83
furrutiav
2024-03-11T06:46:04Z
90
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-03-10T21:57:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aiplanet/effi-13B-AWQ
aiplanet
2024-03-11T06:38:57Z
0
0
adapter-transformers
[ "adapter-transformers", "safetensors", "llama", "license:mit", "4-bit", "awq", "region:us" ]
null
2024-02-09T11:55:36Z
--- license: mit library_name: adapter-transformers --- Effi-13B AWQ is a quantized version of our [Effi-13B](https://huggingface.co/aiplanet/effi-13b), a reasoning model. ## About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference. It is also supported by the continuous-batching server vLLM, allowing the use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. effi-13B is a causal decoder-only model built by AI Planet, based on Llama-2-13b-chat-hf and fine-tuned on 1.8 million conversations from the CoT dataset available in Hugging Face datasets. The model is made available under the Apache 2.0 license. ## Why use effi-13B-Instruct? - This is a ready-to-use chat/instruct model based on Llama-2-13b-chat-hf, which provides a rationale for the context provided. - Llama-2 is the best open-source model available. This is an instruct model, which may not be ideal for further finetuning. If you are interested in building your own instruct/chat model, we recommend starting from Llama-2-13b-chat-hf. You will need at least 85-100GB of memory to run inference with effi-13b swiftly. ## Our benchmarking | Metric | Value | |--------------------|---------| | Perplexity | 5.529 | | MMLU | 50.90 | | Hella Swag (acc) | 59.38 | | Hella Swag (acc_norm) | 78.91 | | TruthfulQA | 38.24 | ## Direct Use effi-13b has been fine-tuned on a Chain of Thought dataset. ## Out-of-Scope Use Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful. ## Bias, Risks, and Limitations This model has been trained mostly on English data and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online. ## Recommendations We recommend that users of effi-13b develop guardrails and take appropriate precautions for any production use. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information is needed for further recommendations. ## Citations ``` @misc {lucifertrj, author = { {Tarun Jain} }, title = { Effi-13B-AWQ by AI Planet}, year = 2024, url = { https://huggingface.co/aiplanet/effi-13B-AWQ/ }, publisher = { Hugging Face } } ```
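A hedged loading sketch (not from the original card), assuming the checkpoint follows the standard AutoAWQ layout that recent `transformers` versions can load directly when the `autoawq` package is installed; the prompt is an illustrative assumption.

```python
# Hedged sketch: loading the AWQ-quantized checkpoint with transformers (requires autoawq).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aiplanet/effi-13B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain step by step why the sky is blue.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```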
wongctroman/fine-tuned-cloudy-sentence-transformer-10
wongctroman
2024-03-11T06:38:24Z
47
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-11T06:36:10Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # wongctroman/fine-tuned-cloudy-sentence-transformer-10 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-10') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-10) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 18 with parameters: ``` {'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Trendyol/Trendyol-LLM-7b-base-v1.0
Trendyol
2024-03-11T06:37:48Z
3,073
15
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "tr", "en", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-05T06:15:13Z
--- language: - tr - en pipeline_tag: text-generation license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 --- <img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v1.0/resolve/main/trendyol-llm-mistral.jpg" alt="drawing" width="400"/> # **Trendyol LLM v1.0** Trendyol LLM v1.0 is a generative model that is based on Mistral 7B model. This is the repository for the base model. ## Model Details **Model Developers** Trendyol **Variations** base, [chat](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.0), and [dpo](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0) variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Trendyol LLM v1.0 is an auto-regressive language model (based on Mistral 7b) that uses an optimized transformer architecture. The base version is fine-tuned on 10 billion tokens with the following trainables by using LoRA: - **lr**=2e-4 - **lora_rank**=64 - **lora_alpha**=128 - **lora_trainable**=q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj - **modules_to_save**=embed_tokens,lm_head - **lora_dropout**=0.05 - **bf16**=True - **max_seq_length**=1024 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png" alt="drawing" width="600"/> ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_id = "Trendyol/Trendyol-LLM-7b-base-v1.0" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto', load_in_8bit=True) sampling_params = dict(do_sample=True, temperature=0.3, top_k=50, top_p=0.9) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device_map="auto", max_new_tokens=1024, return_full_text=True, repetition_penalty=1.1 ) def generate_output(user_query): outputs = pipe(user_query, **sampling_params ) return outputs[0]["generated_text"] user_query = "Ders çalışmanın en iyi 5 yolu:" response = generate_output(user_query) ``` ## Limitations, Risks, Bias, and Ethical Considerations ### Limitations and Known Biases - **Primary Function and Application:** Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it is important to note that it has not undergone extensive real-world application testing. Its effectiveness and reliability across diverse scenarios remain largely unverified. - **Language Comprehension and Generation:** The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations. - **Generation of False Information:** Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Outputs should be considered as starting points or suggestions rather than definitive answers. ### Risks and Ethical Considerations - **Potential for Harmful Use:** There is a risk that Trendyol LLM could be used to generate offensive or harmful language. We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment. - **Unintended Content and Bias:** The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. 
Consequently, it may inadvertently produce content that reflects these biases or inaccuracies. - **Toxicity:** Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks. ### Recommendations for Safe and Ethical Usage - **Human Oversight:** We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly. - **Application-Specific Testing:** Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s responses can be unpredictable and may occasionally be biased, inaccurate, or offensive. - **Responsible Development and Deployment:** It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences.
Holarissun/gptj6b-aisft-hh-seqsampler-subset60000
Holarissun
2024-03-11T06:35:06Z
1
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:EleutherAI/gpt-j-6b", "base_model:adapter:EleutherAI/gpt-j-6b", "license:apache-2.0", "region:us" ]
null
2024-03-11T06:35:02Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: EleutherAI/gpt-j-6b model-index: - name: gptj6b-aisft-hh-seqsampler-subset60000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gptj6b-aisft-hh-seqsampler-subset60000 This model is a fine-tuned version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
ashikshaffi08/zephyr_gemma_35_pct_data
ashikshaffi08
2024-03-11T06:25:55Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:HuggingFaceH4/zephyr-7b-gemma-v0.1", "base_model:adapter:HuggingFaceH4/zephyr-7b-gemma-v0.1", "region:us" ]
null
2024-03-11T06:04:49Z
--- library_name: peft base_model: HuggingFaceH4/zephyr-7b-gemma-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
jylee55/autotrain-dlpeu-yhob0
jylee55
2024-03-11T06:18:16Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "autotrain", "dataset:autotrain-dlpeu-yhob0/autotrain-data", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-11T06:00:07Z
--- tags: - autotrain - text2text-generation widget: - text: "translate English to Hawaiian Pidgin: I went to Ala Moana today with Kimo" datasets: - autotrain-dlpeu-yhob0/autotrain-data --- # Model Trained Using AutoTrain - Problem type: Seq2Seq ## Validation Metrics loss: 0.602806031703949 rouge1: 49.3563 rouge2: 38.9137 rougeL: 46.9477 rougeLsum: 47.7864 gen_len: 18.872 runtime: 246.0865 samples_per_second: 64.205 steps_per_second: 2.007 : 14.0
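A hedged usage sketch (not part of the original card) that reuses the widget example above with a standard text2text pipeline; generation defaults are left untouched.

```python
# Hedged sketch: running the widget prompt through the fine-tuned seq2seq model.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="jylee55/autotrain-dlpeu-yhob0")
print(pipe("translate English to Hawaiian Pidgin: I went to Ala Moana today with Kimo"))
```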
muzammil-eds/gemma-2b-it-OpenOrca-v1
muzammil-eds
2024-03-11T06:09:31Z
6
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T06:04:03Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
derk06/ppo-LunarLander-v2
derk06
2024-03-11T06:09:26Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-11T06:09:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 273.79 +/- 19.68 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
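For the TODO above, a hedged sketch of pulling the checkpoint from the Hub and evaluating it; the zip filename inside the repository is an assumption and should be checked against the actual files.

```python
# Hedged sketch: loading the PPO checkpoint from the Hub and running a quick evaluation.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("derk06/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename is an assumption
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```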
adityaprakhar/LayoutLM_March_11_2024
adityaprakhar
2024-03-11T06:06:34Z
118
0
transformers
[ "transformers", "safetensors", "layoutlmv3", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-11T04:35:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fyp-admin/dreambooth_Mercury_15
fyp-admin
2024-03-11T06:06:09Z
1
0
diffusers
[ "diffusers", "text-to-image", "lora", "stable-diffusion", "stable-diffusion-diffusers", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-03-11T05:16:47Z
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - diffusers - lora - stable-diffusion - stable-diffusion-diffusers inference: true base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a picture of planet Mercury in the center, in charcoal gray color like the Moon having a cratered surface throughout. It is present in space which has dark background, embedded with a cluster of small-sized bright stars. --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA DreamBooth - fyp-admin/dreambooth_Mercury_15 These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "a picture of planet Mercury in the center, in charcoal gray color like the Moon having a cratered surface throughout. It is present in space which has dark background, embedded with a cluster of small-sized bright stars." using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (see the sketch below) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
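The "How to use" section above is left as a TODO; a minimal sketch of attaching these adapter weights to the base pipeline with `diffusers` might look like the following. The repository id, base model, and instance prompt come from this card; the dtype, device, and sampling settings are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model this LoRA was trained against (assumption: fp16 on a CUDA device).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights stored in this repository.
pipe.load_lora_weights("fyp-admin/dreambooth_Mercury_15")

# Reuse the instance prompt the adapter was trained on.
prompt = (
    "a picture of planet Mercury in the center, in charcoal gray color like the Moon "
    "having a cratered surface throughout. It is present in space which has dark background, "
    "embedded with a cluster of small-sized bright stars."
)
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("mercury.png")
```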
Praveenna/pixelcopter1
Praveenna
2024-03-11T06:05:47Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-03-11T06:05:32Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pixelcopter1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 19.10 +/- 16.73 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
sharanharsoor/leagaleasy-mistral-7b-instruct-v0.2-v1
sharanharsoor
2024-03-11T05:56:28Z
5
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-03-08T09:41:43Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: leagaleasy-mistral-7b-instruct-v0.2-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # leagaleasy-mistral-7b-instruct-v0.2-v1 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
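Because this repository stores only a PEFT adapter, inference needs the adapter applied on top of the base model. A minimal sketch is shown below, assuming the tokenizer was pushed alongside the adapter (otherwise load it from `mistralai/Mistral-7B-Instruct-v0.2`); the prompt is a made-up placeholder.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "sharanharsoor/leagaleasy-mistral-7b-instruct-v0.2-v1"

# AutoPeftModelForCausalLM resolves the base model from the adapter config and applies the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

messages = [{"role": "user", "content": "Summarise the obligations of a tenant under a standard lease."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```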
happybusinessperson/distilroberta-base-finetuned-rightarticles-mlm-epochier
happybusinessperson
2024-03-11T05:50:26Z
113
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-03-11T05:16:32Z
--- license: apache-2.0 base_model: distilroberta-base tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-rightarticles-mlm-epochier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-rightarticles-mlm-epochier This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6927 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9628 | 1.0 | 806 | 1.7725 | | 1.8591 | 2.0 | 1612 | 1.7438 | | 1.8011 | 3.0 | 2418 | 1.7308 | | 1.7628 | 4.0 | 3224 | 1.7214 | | 1.7353 | 5.0 | 4030 | 1.7147 | | 1.6951 | 6.0 | 4836 | 1.6986 | | 1.6592 | 7.0 | 5642 | 1.6819 | | 1.6549 | 8.0 | 6448 | 1.6776 | | 1.6516 | 9.0 | 7254 | 1.6647 | | 1.6296 | 10.0 | 8060 | 1.6840 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
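As a quick sanity check, a masked-language-modelling checkpoint like this one can be queried through the standard `fill-mask` pipeline; the example sentence below is purely illustrative.

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="happybusinessperson/distilroberta-base-finetuned-rightarticles-mlm-epochier",
)

# RoBERTa-style tokenizers use <mask> as the mask token.
for candidate in fill("The senator said the new bill would <mask> the economy."):
    print(f"{candidate['token_str']:>12}  score={candidate['score']:.3f}")
```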
Glow-01/finetuned_bart_large_custom
Glow-01
2024-03-11T05:43:10Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-11T04:18:47Z
--- license: mit base_model: facebook/bart-large-cnn tags: - generated_from_trainer metrics: - rouge model-index: - name: finetuned_bart_large_custom results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_bart_large_custom This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.8324 - Rouge1: 39.9143 - Rouge2: 10.7144 - Rougel: 21.1537 - Rougelsum: 35.81 - Gen Len: 131.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 16 | 4.3093 | 39.1367 | 9.9819 | 21.0796 | 35.3746 | 132.0741 | | No log | 2.0 | 32 | 4.2921 | 39.0619 | 9.8356 | 21.7437 | 35.6597 | 131.7037 | | No log | 3.0 | 48 | 4.3876 | 39.5314 | 10.337 | 21.0096 | 35.9973 | 131.2593 | | No log | 4.0 | 64 | 4.4020 | 39.3551 | 9.9689 | 21.4343 | 35.3958 | 131.1481 | | No log | 5.0 | 80 | 4.3744 | 39.7603 | 10.4124 | 21.6535 | 35.4996 | 132.963 | | No log | 6.0 | 96 | 4.4821 | 39.9859 | 11.0712 | 22.2449 | 35.7868 | 132.4074 | | No log | 7.0 | 112 | 4.6017 | 38.765 | 10.3317 | 20.9319 | 34.6675 | 132.2593 | | No log | 8.0 | 128 | 4.4419 | 39.9964 | 10.3341 | 20.9618 | 35.8621 | 130.2222 | | No log | 9.0 | 144 | 4.4990 | 39.8075 | 10.3829 | 21.3509 | 35.9882 | 128.7407 | | No log | 10.0 | 160 | 4.7017 | 38.6152 | 9.9282 | 20.4588 | 34.4487 | 131.9259 | | No log | 11.0 | 176 | 4.5497 | 39.0296 | 9.9429 | 20.8087 | 34.4624 | 132.6296 | | No log | 12.0 | 192 | 4.7301 | 38.8819 | 9.5937 | 20.929 | 34.7983 | 131.4444 | | No log | 13.0 | 208 | 4.5114 | 38.4163 | 9.6869 | 20.373 | 34.1491 | 123.8519 | | No log | 14.0 | 224 | 4.7097 | 38.4294 | 9.5615 | 20.1514 | 35.0332 | 131.7407 | | No log | 15.0 | 240 | 4.6300 | 38.9564 | 9.6386 | 20.0618 | 34.8298 | 129.963 | | No log | 16.0 | 256 | 4.6916 | 38.5582 | 10.136 | 20.8347 | 34.4795 | 129.8519 | | No log | 17.0 | 272 | 4.6959 | 38.3264 | 9.5281 | 20.5576 | 34.6148 | 128.2963 | | No log | 18.0 | 288 | 4.6756 | 37.5569 | 9.123 | 19.8291 | 33.5111 | 126.6667 | | No log | 19.0 | 304 | 4.7579 | 38.5704 | 9.3654 | 20.1826 | 34.8297 | 131.4815 | | No log | 20.0 | 320 | 4.8128 | 40.158 | 10.3889 | 20.9267 | 36.8965 | 130.1852 | | No log | 21.0 | 336 | 4.7659 | 39.4144 | 10.2445 | 20.4763 | 35.328 | 134.2593 | | No log | 22.0 | 352 | 4.7983 | 40.2859 | 11.0388 | 21.1643 | 36.0311 | 131.9259 | | No log | 23.0 | 368 | 4.7954 | 39.2676 | 10.5795 | 21.1116 | 35.3949 | 130.1481 | | No log | 24.0 | 384 | 4.7991 | 39.8126 | 10.3955 | 21.2952 | 35.7538 | 130.5926 | | No log | 25.0 | 400 | 4.8371 | 39.3481 | 10.2857 | 20.9862 | 35.1724 | 125.1481 | | No log | 26.0 | 416 | 4.8589 | 40.0988 | 10.4426 | 21.7284 | 35.7289 | 
130.3333 | | No log | 27.0 | 432 | 4.8423 | 39.9233 | 10.3253 | 21.5853 | 36.1194 | 131.1111 | | No log | 28.0 | 448 | 4.8274 | 40.0388 | 10.1713 | 20.991 | 35.3966 | 130.4444 | | No log | 29.0 | 464 | 4.8313 | 39.8516 | 10.6207 | 21.0394 | 35.6627 | 130.8148 | | No log | 30.0 | 480 | 4.8324 | 39.9143 | 10.7144 | 21.1537 | 35.81 | 131.6667 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.1
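The ROUGE metrics above imply a summarization objective; a minimal usage sketch with the `summarization` pipeline follows (the input text and generation lengths are placeholders, not values from the training run).

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Glow-01/finetuned_bart_large_custom")

article = (
    "Replace this placeholder with the long document to condense. "
    "BART-large-CNN style checkpoints generally expect inputs of up to roughly 1024 tokens."
)
summary = summarizer(article, max_length=130, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```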
tm21cy/albert-emotion-provided-params
tm21cy
2024-03-11T05:40:14Z
18
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-10T22:16:32Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4283 - Accuracy: 0.6952 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.45e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 179 | 1.0664 | 0.6712 | | No log | 2.0 | 358 | 1.1817 | 0.6806 | | 0.118 | 3.0 | 537 | 1.4090 | 0.6681 | | 0.118 | 4.0 | 716 | 1.4375 | 0.6691 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
shazzz/Reinforce_Pixel_Copter
shazzz
2024-03-11T05:38:13Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-03-05T11:03:16Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce_Pixel_Copter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 21.90 +/- 15.81 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_sign_lf_signal_it_262
furrutiav
2024-03-11T05:28:43Z
90
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-03-11T05:28:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sadhaklal/custom-cnn-cifar2
sadhaklal
2024-03-11T05:25:20Z
0
0
pytorch
[ "pytorch", "image-classification", "dataset:cifar10", "region:us" ]
image-classification
2024-03-10T08:38:34Z
--- datasets: - cifar10 metrics: - accuracy library_name: pytorch pipeline_tag: image-classification --- # custom-cnn-cifar2 Custom convolutional neural network (CNN) trained on CIFAR-2 (a subset of CIFAR-10 for classifying 'airplane' vs. 'bird'). This model pertains to Exercise 1 of Chapter 8 of the book "Deep Learning with PyTorch" by Eli Stevens, Luca Antiga, and Thomas Viehmann. **Note:** In the exercise, we tried out `(5, 5)` and `(1, 3)` convolution kernel sizes. However, these didn't outperform the baseline network with `(3, 3)` kernel size. Hence, this checkpoint sticks to the `(3, 3)` kernel size. Code: https://github.com/sambitmukherjee/dlwpt-exercises/blob/main/chapter_8/exercise_1.ipynb Experiment tracking: https://wandb.ai/sadhaklal/custom-cnn-cifar2 ## Usage ``` !pip install -q datasets from datasets import load_dataset cifar10 = load_dataset("cifar10") label_map = {0: 0, 2: 1} class_names = ['airplane', 'bird'] cifar2_train = [(example['img'], label_map[example['label']]) for example in cifar10['train'] if example['label'] in [0, 2]] cifar2_val = [(example['img'], label_map[example['label']]) for example in cifar10['test'] if example['label'] in [0, 2]] example = cifar2_val[0] img, label = example import torch from torchvision.transforms import v2 tfms = v2.Compose([ v2.ToImage(), v2.ToDtype(torch.float32, scale=True), v2.Normalize(mean=[0.4915, 0.4823, 0.4468], std=[0.2470, 0.2435, 0.2616]) ]) img = tfms(img) batch = img.unsqueeze(0) import torch.nn as nn import torch.nn.functional as F from huggingface_hub import PyTorchModelHubMixin class Net(nn.Module, PyTorchModelHubMixin): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1, stride=1) self.conv2 = nn.Conv2d(16, 8, kernel_size=3, padding=1, stride=1) self.fc1 = nn.Linear(8 * 8 * 8, 32) self.fc2 = nn.Linear(32, 2) def forward(self, x): out = F.max_pool2d(torch.tanh(self.conv1(x)), kernel_size=2, stride=2) # Output shape: (batch_size, 16, 16, 16) out = F.max_pool2d(torch.tanh(self.conv2(out)), kernel_size=2, stride=2) # Output shape: (batch_size, 8, 8, 8) out = out.view(-1, 8 * 8 * 8) # Output shape: (batch_size, 512) out = torch.tanh(self.fc1(out)) # Output shape: (batch_size, 32) out = self.fc2(out) # Output shape: (batch_size, 2) return out model = Net.from_pretrained("sadhaklal/custom-cnn-cifar2") model.eval() with torch.no_grad(): logits = model(batch) pred = logits[0].argmax().item() proba = torch.softmax(logits, dim=1) print(f"Predicted class: {class_names[pred]}") print(f"Predicted class probabilities ('airplane' vs. 'bird'): {proba[0].tolist()}") ``` ## Metric Accuracy on `cifar2_val`: 0.8995
StaAhmed/refined_model
StaAhmed
2024-03-11T05:07:25Z
1
0
peft
[ "peft", "region:us" ]
null
2024-03-10T14:09:00Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
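The flags listed above correspond one-to-one with a `BitsAndBytesConfig`; a sketch of rebuilding that configuration when loading the base model for this adapter is shown below. The card does not record which base model the adapter was trained on, so the id below is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the quantization config above: 4-bit NF4, no double quantization, fp16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model_id = "base-model-id-goes-here"  # placeholder: the base model is not documented in this card
base = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "StaAhmed/refined_model")
```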
EricValen/rl_course_vizdoom_health_gathering_supreme
EricValen
2024-03-11T05:03:48Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-11T03:26:31Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.17 +/- 4.02 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r EricValen/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
sarak7/H4_311_769_v3
sarak7
2024-03-11T05:02:19Z
178
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T05:00:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Parth1612/pp_albert_ft_emotions
Parth1612
2024-03-11T04:58:20Z
184
0
transformers
[ "transformers", "tensorboard", "safetensors", "albert", "text-classification", "dataset:dair-ai/emotion", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T03:44:23Z
--- datasets: - dair-ai/emotion ---
Harshad018/trained-gpt2-tweet-analysis
Harshad018
2024-03-11T04:54:39Z
90
0
transformers
[ "transformers", "safetensors", "gpt2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T04:53:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lole25/zephyr-7b-dpo-qlora
lole25
2024-03-11T04:53:51Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2024-02-16T00:14:52Z
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized base_model: mistralai/Mistral-7B-v0.1 model-index: - name: zephyr-7b-dpo-qlora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-dpo-qlora This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-qlora](https://huggingface.co/alignment-handbook/zephyr-7b-sft-qlora) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.6813 - Rewards/chosen: -0.0009 - Rewards/rejected: -0.0252 - Rewards/accuracies: 0.2920 - Rewards/margins: 0.0243 - Logps/rejected: -71.3009 - Logps/chosen: -65.4449 - Logits/rejected: -2.4428 - Logits/chosen: -2.4444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.69 | 0.26 | 100 | 0.6897 | 0.0232 | 0.0168 | 0.2680 | 0.0064 | -67.1001 | -63.0342 | -2.4904 | -2.4911 | | 0.6869 | 0.52 | 200 | 0.6849 | 0.0066 | -0.0092 | 0.3060 | 0.0159 | -69.7060 | -64.6950 | -2.4556 | -2.4573 | | 0.681 | 0.78 | 300 | 0.6815 | -0.0026 | -0.0264 | 0.2880 | 0.0238 | -71.4280 | -65.6224 | -2.4430 | -2.4446 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.14.6 - Tokenizers 0.15.2
nadika/nepali_complaints_classification_nepbert3
nadika
2024-03-11T04:50:48Z
94
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:Rajan/NepaliBERT", "base_model:finetune:Rajan/NepaliBERT", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T03:46:15Z
--- tags: - generated_from_trainer metrics: - accuracy base_model: Rajan/NepaliBERT model-index: - name: nepali_complaints_classification_nepbert3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nepali_complaints_classification_nepbert3 This model is a fine-tuned version of [Rajan/NepaliBERT](https://huggingface.co/Rajan/NepaliBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2687 - Accuracy: 0.9494 - F1-score: 0.9483 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 1.4921 | 0.22 | 500 | 0.8642 | 0.7235 | 0.7143 | | 0.7781 | 0.45 | 1000 | 0.6241 | 0.7974 | 0.7923 | | 0.5865 | 0.67 | 1500 | 0.5342 | 0.8243 | 0.8125 | | 0.4625 | 0.89 | 2000 | 0.4250 | 0.8576 | 0.8553 | | 0.3648 | 1.11 | 2500 | 0.3856 | 0.8759 | 0.8725 | | 0.3001 | 1.34 | 3000 | 0.3424 | 0.8899 | 0.8891 | | 0.2723 | 1.56 | 3500 | 0.3199 | 0.9007 | 0.8981 | | 0.2538 | 1.78 | 4000 | 0.2898 | 0.9085 | 0.9066 | | 0.231 | 2.01 | 4500 | 0.2676 | 0.9203 | 0.9189 | | 0.1478 | 2.23 | 5000 | 0.3029 | 0.9210 | 0.9187 | | 0.1666 | 2.45 | 5500 | 0.2580 | 0.9283 | 0.9271 | | 0.1519 | 2.67 | 6000 | 0.2573 | 0.9308 | 0.9292 | | 0.1498 | 2.9 | 6500 | 0.2746 | 0.9328 | 0.9306 | | 0.1112 | 3.12 | 7000 | 0.2564 | 0.9398 | 0.9389 | | 0.0903 | 3.34 | 7500 | 0.2726 | 0.9403 | 0.9393 | | 0.1036 | 3.57 | 8000 | 0.2664 | 0.9398 | 0.9385 | | 0.1043 | 3.79 | 8500 | 0.2614 | 0.9459 | 0.9447 | | 0.0972 | 4.01 | 9000 | 0.2499 | 0.9453 | 0.9443 | | 0.0663 | 4.23 | 9500 | 0.2643 | 0.9469 | 0.9458 | | 0.0683 | 4.46 | 10000 | 0.2688 | 0.9474 | 0.9462 | | 0.0671 | 4.68 | 10500 | 0.2657 | 0.9491 | 0.9481 | | 0.0605 | 4.9 | 11000 | 0.2687 | 0.9494 | 0.9483 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
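A minimal inference sketch for this classifier is given below; the Nepali input is a placeholder phrase and the label names are read from the model's own configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "nadika/nepali_complaints_classification_nepbert3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Placeholder text; replace with an actual complaint written in Nepali.
inputs = tokenizer("यहाँ उजुरीको पाठ राख्नुहोस्", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```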
ekato/FujiiKaze
ekato
2024-03-11T04:44:38Z
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail", "region:us" ]
text-to-image
2024-03-11T04:44:21Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/1000018412.jpg base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null license: openrail --- # FujiiKaze <Gallery /> ## Download model [Download](/ekato/FujiiKaze/tree/main) them in the Files & versions tab.
automerger/PasticheNeuralsirkrishna-7B
automerger
2024-03-11T04:29:27Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:Kukedlc/NeuralSirKrishna-7b", "base_model:finetune:Kukedlc/NeuralSirKrishna-7b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T04:28:38Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - automerger base_model: - Kukedlc/NeuralSirKrishna-7b --- # PasticheNeuralsirkrishna-7B PasticheNeuralsirkrishna-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b) ## 🧩 Configuration ```yaml models: - model: CorticalStack/pastiche-crown-clown-7b-dare # No parameters necessary for base model - model: Kukedlc/NeuralSirKrishna-7b parameters: density: 0.53 weight: 0.6 merge_method: dare_ties base_model: CorticalStack/pastiche-crown-clown-7b-dare parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/PasticheNeuralsirkrishna-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
jhpassion0621/kp-mt5-large
jhpassion0621
2024-03-11T04:20:27Z
30
0
transformers
[ "transformers", "tensorboard", "onnx", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:jhpassion0621/kp-mt5-large", "base_model:quantized:jhpassion0621/kp-mt5-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-06T10:16:02Z
--- license: apache-2.0 base_model: jhpassion0621/kp-mt5-large tags: - generated_from_trainer metrics: - bleu model-index: - name: kp-mt5-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kp-mt5-large This model is a fine-tuned version of [jhpassion0621/kp-mt5-large](https://huggingface.co/jhpassion0621/kp-mt5-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5586 - Bleu: 43.3983 - Gen Len: 45.6585 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.59e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Bleu | Gen Len | Validation Loss | |:-------------:|:-----:|:------:|:-------:|:-------:|:---------------:| | 1.0364 | 0.29 | 17000 | 32.5573 | 44.7582 | 0.8278 | | 0.8819 | 0.58 | 34000 | 37.1161 | 45.0568 | 0.7062 | | 0.7731 | 0.87 | 51000 | 40.329 | 45.7359 | 0.6188 | | 0.7339 | 1.16 | 68000 | 41.7643 | 45.8618 | 0.5866 | | 0.7093 | 1.45 | 85000 | 42.6878 | 45.5649 | 0.5657 | | 0.6818 | 1.74 | 102000 | 43.2023 | 45.7701 | 0.5609 | | 0.6739 | 2.00 | 117444 | 43.3983 | 45.6585 | 0.5586 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1
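The BLEU metric above points to a translation-style text2text task, although the card does not state the language pair; a generic generation sketch with a placeholder input is shown below.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "jhpassion0621/kp-mt5-large"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Placeholder source text; the expected source and target languages are not documented in this card.
inputs = tokenizer("Text to be converted goes here.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```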
Deeksha04/PlantDetectTask1
Deeksha04
2024-03-11T04:19:07Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-02T09:32:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OwOOwO/eacc_o_1
OwOOwO
2024-03-11T04:18:56Z
89
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T04:16:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nbeerbower/StrangeBru-7B
nbeerbower
2024-03-11T04:14:57Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:nbeerbower/bruphin-theta", "base_model:merge:nbeerbower/bruphin-theta", "base_model:nbeerbower/strange_3236-7B", "base_model:merge:nbeerbower/strange_3236-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T04:12:05Z
--- license: apache-2.0 base_model: - nbeerbower/strange_3236-7B - nbeerbower/bruphin-theta library_name: transformers tags: - mergekit - merge --- # StrangeBru-7B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [nbeerbower/strange_3236-7B](https://huggingface.co/nbeerbower/strange_3236-7B) * [nbeerbower/bruphin-theta](https://huggingface.co/nbeerbower/bruphin-theta) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: nbeerbower/bruphin-theta layer_range: [0, 32] - model: nbeerbower/strange_3236-7B layer_range: [0, 32] merge_method: slerp base_model: nbeerbower/strange_3236-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
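A minimal loading sketch, assuming the merged weights behave like any other Mistral-architecture checkpoint; the prompt is illustrative and `device_map="auto"` presumes `accelerate` is installed.

```python
# Minimal sketch: load the merged checkpoint with 🤗 Transformers and generate once.
# The prompt is illustrative; device_map="auto" assumes `accelerate` is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/StrangeBru-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Write a short poem about merging models.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```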
wongctroman/fine-tuned-cloudy-sentence-transformer-9
wongctroman
2024-03-11T04:13:52Z
49
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-11T04:12:08Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # wongctroman/fine-tuned-cloudy-sentence-transformer-9 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-9') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-9) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 18 with parameters: ``` {'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 15, "evaluation_steps": 500, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
sarak7/H4_311_769_v1
sarak7
2024-03-11T04:13:28Z
178
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T04:11:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-3-5
alinerodrigues
2024-03-11T04:06:01Z
1
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-10T23:26:18Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-3-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-3-5 This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1701 - Wer: 0.0874 - Cer: 0.0290 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 34.6826 | 1.0 | 79 | 3.4138 | 1.0 | 1.0 | | 8.8564 | 2.0 | 158 | 3.0441 | 1.0 | 1.0 | | 3.072 | 3.0 | 237 | 2.9375 | 1.0 | 1.0 | | 2.9362 | 4.0 | 316 | 2.9142 | 1.0 | 1.0 | | 2.9362 | 5.0 | 395 | 2.8392 | 1.0 | 1.0 | | 2.8915 | 6.0 | 474 | 1.7012 | 1.0 | 0.4878 | | 2.358 | 7.0 | 553 | 0.6235 | 0.3294 | 0.0845 | | 1.124 | 8.0 | 632 | 0.4194 | 0.1922 | 0.0570 | | 0.6757 | 9.0 | 711 | 0.3398 | 0.1693 | 0.0499 | | 0.6757 | 10.0 | 790 | 0.2962 | 0.1388 | 0.0443 | | 0.5066 | 11.0 | 869 | 0.2657 | 0.1209 | 0.0396 | | 0.4407 | 12.0 | 948 | 0.2451 | 0.1151 | 0.0383 | | 0.4235 | 13.0 | 1027 | 0.2372 | 0.1080 | 0.0367 | | 0.372 | 14.0 | 1106 | 0.2301 | 0.1022 | 0.0359 | | 0.372 | 15.0 | 1185 | 0.2188 | 0.1035 | 0.0365 | | 0.3333 | 16.0 | 1264 | 0.2082 | 0.1048 | 0.0353 | | 0.2817 | 17.0 | 1343 | 0.2067 | 0.0998 | 0.0342 | | 0.2744 | 18.0 | 1422 | 0.2084 | 0.0964 | 0.0338 | | 0.2693 | 19.0 | 1501 | 0.2011 | 0.0966 | 0.0330 | | 0.2693 | 20.0 | 1580 | 0.1975 | 0.0979 | 0.0335 | | 0.2444 | 21.0 | 1659 | 0.1932 | 0.0956 | 0.0334 | | 0.2258 | 22.0 | 1738 | 0.1884 | 0.0924 | 0.0317 | | 0.2348 | 23.0 | 1817 | 0.1875 | 0.0932 | 0.0324 | | 0.2348 | 24.0 | 1896 | 0.1780 | 0.0948 | 0.0323 | | 0.2146 | 25.0 | 1975 | 0.1819 | 0.0935 | 0.0319 | | 0.2157 | 26.0 | 2054 | 0.1809 | 0.0903 | 0.0310 | | 0.1913 | 27.0 | 2133 | 0.1770 | 0.0924 | 0.0316 | | 0.206 | 28.0 | 2212 | 0.1808 | 0.0893 | 0.0314 | | 0.206 | 29.0 | 2291 | 0.1822 | 0.0885 | 0.0313 | | 0.1797 | 30.0 | 2370 | 0.1761 | 0.0903 | 0.0306 | | 0.1918 | 31.0 | 2449 | 0.1786 | 0.0903 | 0.0306 | | 0.1819 | 32.0 | 2528 | 0.1821 | 0.0898 | 0.0308 | | 0.1805 | 33.0 | 2607 | 0.1849 | 0.0885 | 0.0310 | | 0.1805 | 34.0 | 2686 | 0.1817 | 0.0864 | 0.0314 | | 0.1708 | 35.0 | 2765 | 0.1839 | 0.0882 | 0.0316 | | 0.1734 | 36.0 | 2844 | 0.1817 | 0.0872 | 0.0321 | | 0.161 | 37.0 | 2923 | 0.1824 | 0.0906 | 0.0319 | | 0.154 | 38.0 | 3002 | 0.1804 | 0.0885 | 0.0314 | | 0.154 | 39.0 | 3081 | 0.1782 | 0.0864 | 0.0305 | | 0.1604 | 40.0 | 3160 | 0.1751 | 0.0858 | 0.0301 | | 0.1631 | 41.0 | 3239 | 0.1719 | 0.0840 | 0.0298 | | 0.1542 | 42.0 | 3318 | 0.1744 | 0.0858 | 0.0304 | | 0.1542 | 43.0 | 3397 | 0.1742 | 
0.0893 | 0.0308 | | 0.1658 | 44.0 | 3476 | 0.1744 | 0.0874 | 0.0299 | | 0.157 | 45.0 | 3555 | 0.1745 | 0.0887 | 0.0299 | | 0.1451 | 46.0 | 3634 | 0.1755 | 0.0861 | 0.0296 | | 0.1512 | 47.0 | 3713 | 0.1737 | 0.0911 | 0.0299 | | 0.1512 | 48.0 | 3792 | 0.1722 | 0.0882 | 0.0295 | | 0.1484 | 49.0 | 3871 | 0.1722 | 0.0837 | 0.0288 | | 0.1343 | 50.0 | 3950 | 0.1744 | 0.0856 | 0.0294 | | 0.1403 | 51.0 | 4029 | 0.1701 | 0.0874 | 0.0290 | | 0.1334 | 52.0 | 4108 | 0.1770 | 0.0877 | 0.0298 | | 0.1334 | 53.0 | 4187 | 0.1720 | 0.0872 | 0.0296 | | 0.1345 | 54.0 | 4266 | 0.1738 | 0.0848 | 0.0287 | | 0.1183 | 55.0 | 4345 | 0.1705 | 0.0866 | 0.0290 | | 0.1328 | 56.0 | 4424 | 0.1738 | 0.0848 | 0.0289 | | 0.1261 | 57.0 | 4503 | 0.1758 | 0.0864 | 0.0297 | | 0.1261 | 58.0 | 4582 | 0.1770 | 0.0824 | 0.0285 | | 0.1405 | 59.0 | 4661 | 0.1766 | 0.0879 | 0.0297 | | 0.1164 | 60.0 | 4740 | 0.1753 | 0.0816 | 0.0286 | | 0.1326 | 61.0 | 4819 | 0.1770 | 0.0861 | 0.0290 | | 0.1326 | 62.0 | 4898 | 0.1725 | 0.0856 | 0.0294 | | 0.1209 | 63.0 | 4977 | 0.1779 | 0.0840 | 0.0292 | | 0.1412 | 64.0 | 5056 | 0.1753 | 0.0832 | 0.0282 | | 0.1226 | 65.0 | 5135 | 0.1764 | 0.0840 | 0.0285 | | 0.1187 | 66.0 | 5214 | 0.1813 | 0.0793 | 0.0276 | | 0.1187 | 67.0 | 5293 | 0.1785 | 0.0798 | 0.0277 | | 0.1182 | 68.0 | 5372 | 0.1771 | 0.0824 | 0.0279 | | 0.1178 | 69.0 | 5451 | 0.1798 | 0.0843 | 0.0285 | | 0.1289 | 70.0 | 5530 | 0.1798 | 0.0866 | 0.0292 | | 0.1321 | 71.0 | 5609 | 0.1803 | 0.0843 | 0.0286 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.2.1+cu121 - Datasets 2.17.0 - Tokenizers 0.13.3
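A minimal inference sketch for the fine-tuned CTC model; the audio filename is illustrative, and the input is assumed to be 16 kHz mono speech.

```python
# Minimal sketch: transcribe one audio file with the automatic-speech-recognition pipeline.
# "exemplo.wav" is an illustrative filename; audio should be 16 kHz mono.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-3-5",
)
print(asr("exemplo.wav")["text"])
```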
johnnyluhk/ppo-Pyramids_Training
johnnyluhk
2024-03-11T04:04:16Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-03-11T04:04:13Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: johnnyluhk/ppo-Pyramids_Training 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
Zhengyi/CRM
Zhengyi
2024-03-11T03:58:51Z
0
52
null
[ "image-to-3d", "arxiv:2403.05034", "license:mit", "region:us" ]
image-to-3d
2024-03-06T06:47:22Z
--- license: mit pipeline_tag: image-to-3d tags: - image-to-3d --- # Convolutional Reconstruction Model Model card for *CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model*. Project Page: https://ml.cs.tsinghua.edu.cn/~zhengyi/CRM/ arXiv: https://arxiv.org/abs/2403.05034 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/634e15aec1ce28f1de91c470/4Ggn8Vl8zV0kccAF1376o.jpeg) The model contains a diffusion model to generate multi-view images from a single input image, another diffusion model to generate CCMs, and a UNet-based reconstruction model to produce the final textured mesh. ## Citation ``` @article{wang2024crm, title={CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model}, author={Zhengyi Wang and Yikai Wang and Yifei Chen and Chendong Xiang and Shuo Chen and Dajiang Yu and Chongxuan Li and Hang Su and Jun Zhu}, journal={arXiv preprint arXiv:2403.05034}, year={2024} } ```
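Inference itself runs through the project's own pipeline (see the project page above); a minimal sketch for only fetching the released files from this repo with `huggingface_hub` is shown below.

```python
# Minimal sketch: download the checkpoints hosted in this repo.
# Running the reconstruction pipeline requires the code from the project page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Zhengyi/CRM")
print("Files downloaded to:", local_dir)
```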
wongctroman/fine-tuned-cloudy-sentence-transformer-8
wongctroman
2024-03-11T03:56:53Z
47
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-11T03:55:27Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # wongctroman/fine-tuned-cloudy-sentence-transformer-8 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-8') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-8) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 18 with parameters: ``` {'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 500, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
NPCProgrammer/tweet
NPCProgrammer
2024-03-11T03:53:19Z
176
0
transformers
[ "transformers", "tensorboard", "safetensors", "albert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T03:28:57Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: tweet results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tweet This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7335 - Accuracy: 0.6760 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.56 | 100 | 0.6407 | 0.6251 | | No log | 1.12 | 200 | 0.7067 | 0.6953 | | No log | 1.68 | 300 | 0.6478 | 0.6796 | | No log | 2.23 | 400 | 0.7657 | 0.6901 | | 0.4528 | 2.79 | 500 | 0.8630 | 0.6733 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
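A minimal inference sketch; the example tweet is illustrative, and the label names depend on the fine-tuning dataset, which the card does not document.

```python
# Minimal sketch: classify one tweet with the text-classification pipeline.
# The input text is illustrative; label meanings come from the undocumented training data.
from transformers import pipeline

clf = pipeline("text-classification", model="NPCProgrammer/tweet")
print(clf("Just got my package a week late, great job..."))
```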
deepapaikar/katzbot-phi2-old
deepapaikar
2024-03-11T03:52:48Z
0
0
peft
[ "peft", "safetensors", "phi", "generated_from_trainer", "custom_code", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-02-19T00:42:05Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-2 model-index: - name: katzbot-phi2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # katzbot-phi2 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 50 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
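A minimal loading sketch, assuming the PEFT adapter targets the `microsoft/phi-2` base model listed above; `trust_remote_code=True` is an assumption based on the repo's `custom_code` tag, and the prompt is illustrative.

```python
# Minimal sketch: apply the PEFT adapter on top of the microsoft/phi-2 base model.
# trust_remote_code=True is assumed from the custom_code tag; the prompt is illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "deepapaikar/katzbot-phi2-old")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

inputs = tokenizer("Question: What is KatzBot?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```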
tsavage68/mistralit2_500_STEPS_1e8_rate_03_beta_DPO
tsavage68
2024-03-11T03:49:18Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T02:12:46Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.2 tags: - trl - dpo - generated_from_trainer model-index: - name: mistralit2_500_STEPS_1e8_rate_03_beta_DPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistralit2_500_STEPS_1e8_rate_03_beta_DPO This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6903 - Rewards/chosen: -0.0048 - Rewards/rejected: -0.0113 - Rewards/accuracies: 0.5121 - Rewards/margins: 0.0065 - Logps/rejected: -28.6101 - Logps/chosen: -23.4018 - Logits/rejected: -2.8650 - Logits/chosen: -2.8653 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-08 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6911 | 0.1 | 50 | 0.6909 | 0.0027 | -0.0025 | 0.4967 | 0.0052 | -28.5807 | -23.3768 | -2.8653 | -2.8655 | | 0.6916 | 0.2 | 100 | 0.6928 | -0.0010 | -0.0023 | 0.4571 | 0.0014 | -28.5802 | -23.3891 | -2.8653 | -2.8655 | | 0.6931 | 0.29 | 150 | 0.6916 | -0.0047 | -0.0087 | 0.4659 | 0.0040 | -28.6014 | -23.4015 | -2.8652 | -2.8654 | | 0.6922 | 0.39 | 200 | 0.6914 | -0.0046 | -0.0090 | 0.4681 | 0.0044 | -28.6024 | -23.4011 | -2.8651 | -2.8654 | | 0.6921 | 0.49 | 250 | 0.6927 | -0.0086 | -0.0103 | 0.4747 | 0.0017 | -28.6067 | -23.4145 | -2.8651 | -2.8653 | | 0.6938 | 0.59 | 300 | 0.6916 | -0.0092 | -0.0132 | 0.4835 | 0.0040 | -28.6163 | -23.4163 | -2.8651 | -2.8654 | | 0.6976 | 0.68 | 350 | 0.6907 | -0.0058 | -0.0116 | 0.4747 | 0.0058 | -28.6111 | -23.4052 | -2.8651 | -2.8654 | | 0.6918 | 0.78 | 400 | 0.6902 | -0.0069 | -0.0137 | 0.4967 | 0.0068 | -28.6182 | -23.4089 | -2.8651 | -2.8653 | | 0.6862 | 0.88 | 450 | 0.6903 | -0.0048 | -0.0113 | 0.5121 | 0.0065 | -28.6101 | -23.4018 | -2.8650 | -2.8653 | | 0.6946 | 0.98 | 500 | 0.6903 | -0.0048 | -0.0113 | 0.5121 | 0.0065 | -28.6101 | -23.4018 | -2.8650 | -2.8653 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.0.0+cu117 - Datasets 2.18.0 - Tokenizers 0.15.2
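A minimal generation sketch using the tokenizer's built-in chat template; the user message is illustrative, and `device_map="auto"` presumes `accelerate` is installed.

```python
# Minimal sketch: format a chat prompt with the Mistral-Instruct template and generate.
# The message content is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/mistralit2_500_STEPS_1e8_rate_03_beta_DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain in one sentence what DPO fine-tuning does."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```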
Sumail/Alchemist_02_2b
Sumail
2024-03-11T03:46:06Z
90
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "mergewss]", "mergekit", "lazymergekit", "deepnetguy/gemma-64", "Aspik101/minigemma_ft9", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T03:41:57Z
--- license: apache-2.0 tags: - mergewss] - mergekit - lazymergekit - deepnetguy/gemma-64 - Aspik101/minigemma_ft9 --- # Alchemist_02_2b Alchemist_02_2b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [deepnetguy/gemma-64](https://huggingface.co/deepnetguy/gemma-64) * [Aspik101/minigemma_ft9](https://huggingface.co/Aspik101/minigemma_ft9) ## 🧩 Configuration ```yaml models: - model: deepnet/SN6-71G5 # no parameters necessary for base model - model: deepnetguy/gemma-64 parameters: density: 0.5 weight: 0.3 - model: Aspik101/minigemma_ft9 parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: deepnet/SN6-71G5 parameters: normalize: true dtype: bfloat16 ```
Hemg/bone-fracture-detection-using-x-rays
Hemg
2024-03-11T03:43:36Z
179
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-03-11T03:15:41Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: bone-fracture-detection-using-x-rays results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bone-fracture-detection-using-x-rays This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0458 - Accuracy: 0.9769 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 16 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5407 | 1.0 | 111 | 0.2512 | 0.9143 | | 0.1819 | 2.0 | 222 | 0.1203 | 0.9526 | | 0.1351 | 3.0 | 333 | 0.1183 | 0.9521 | | 0.101 | 4.0 | 444 | 0.0905 | 0.9616 | | 0.0705 | 5.0 | 555 | 0.0958 | 0.9628 | | 0.0658 | 6.0 | 666 | 0.0671 | 0.9729 | | 0.0584 | 7.0 | 777 | 0.0498 | 0.9803 | | 0.0507 | 8.0 | 888 | 0.0633 | 0.9735 | | 0.0508 | 9.0 | 999 | 0.0640 | 0.9797 | | 0.0432 | 10.0 | 1110 | 0.0458 | 0.9769 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
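A minimal inference sketch; the image path is illustrative, and the class labels come from the fine-tuning dataset, which the card does not document.

```python
# Minimal sketch: classify one X-ray image with the image-classification pipeline.
# "xray_example.png" is an illustrative path; labels depend on the undocumented training data.
from transformers import pipeline

classifier = pipeline("image-classification", model="Hemg/bone-fracture-detection-using-x-rays")
print(classifier("xray_example.png"))
```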
ankhamun/x0o0I_i8ee8_I0x0o
ankhamun
2024-03-11T03:29:51Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T03:24:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sumail/Alchemist_01_2b
Sumail
2024-03-11T03:20:35Z
91
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "mergewss]", "mergekit", "lazymergekit", "Aspik101/minigemma_ft9", "deepnetguy/gemma-64", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T03:18:07Z
--- license: apache-2.0 tags: - mergewss] - mergekit - lazymergekit - Aspik101/minigemma_ft9 - deepnetguy/gemma-64 --- # Alchemist_01_2b Alchemist_01_2b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [Aspik101/minigemma_ft9](https://huggingface.co/Aspik101/minigemma_ft9) * [deepnetguy/gemma-64](https://huggingface.co/deepnetguy/gemma-64) ## 🧩 Configuration ```yaml slices: - sources: - model: Aspik101/minigemma_ft9 layer_range: [0, 18] - model: deepnetguy/gemma-64 layer_range: [0, 18] merge_method: slerp base_model: Aspik101/minigemma_ft9 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
wongctroman/fine-tuned-cloudy-sentence-transformer-4
wongctroman
2024-03-11T03:19:21Z
45
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-11T01:39:34Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # wongctroman/fine-tuned-cloudy-sentence-transformer-4 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-4') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-4) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3 with parameters: ``` {'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 500, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
wongctroman/fine-tuned-cloudy-sentence-transformer-6
wongctroman
2024-03-11T03:19:05Z
48
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-11T03:11:55Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # wongctroman/fine-tuned-cloudy-sentence-transformer-6 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-6') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-6) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3 with parameters: ``` {'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 15, "evaluation_steps": 500, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Anujgr8/wav2vec2-indic-malvi
Anujgr8
2024-03-11T03:18:53Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-11T03:18:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
automerger/Experiment29Pastiche-7B
automerger
2024-03-11T03:14:08Z
49
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:CorticalStack/pastiche-crown-clown-7b-dare", "base_model:merge:CorticalStack/pastiche-crown-clown-7b-dare", "base_model:yam-peleg/Experiment29-7B", "base_model:merge:yam-peleg/Experiment29-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-10T22:55:51Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - automerger base_model: - yam-peleg/Experiment29-7B - CorticalStack/pastiche-crown-clown-7b-dare --- # Experiment29Pastiche-7B Experiment29Pastiche-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [yam-peleg/Experiment29-7B](https://huggingface.co/yam-peleg/Experiment29-7B) * [CorticalStack/pastiche-crown-clown-7b-dare](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare) ## 🧩 Configuration ```yaml slices: - sources: - model: yam-peleg/Experiment29-7B layer_range: [0, 32] - model: CorticalStack/pastiche-crown-clown-7b-dare layer_range: [0, 32] merge_method: slerp base_model: yam-peleg/Experiment29-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/Experiment29Pastiche-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
wongctroman/fine-tuned-cloudy-sentence-transformer-5
wongctroman
2024-03-11T03:09:34Z
48
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-11T03:08:17Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # wongctroman/fine-tuned-cloudy-sentence-transformer-5 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-5') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-5) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3 with parameters: ``` {'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 20, "evaluation_steps": 500, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
johnnyluhk/ppo-SnowballTarget
johnnyluhk
2024-03-11T02:56:23Z
19
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-03-11T02:56:20Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: johnnyluhk/ppo-SnowballTarget 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
essiam/clean_art_cat
essiam
2024-03-11T02:55:52Z
0
1
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-11T02:44:09Z
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - dreambooth - stable-diffusion - stable-diffusion-diffusers inference: true base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of 686HenrietteRonnerKnip859 cat --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - essiam/clean_art_cat This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of 686HenrietteRonnerKnip859 cat using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: True. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
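A minimal sketch for the "How to use" section above, assuming the standard `diffusers` `StableDiffusionPipeline` API and reusing the instance prompt from the metadata (output file name and dtype are illustrative choices):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth weights and generate an image with the instance prompt.
pipe = StableDiffusionPipeline.from_pretrained("essiam/clean_art_cat", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of 686HenrietteRonnerKnip859 cat").images[0]
image.save("clean_art_cat.png")
```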
Reemvn/distilroberta-base
Reemvn
2024-03-11T02:43:33Z
46
0
transformers
[ "transformers", "tf", "roberta", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-10T23:41:57Z
--- tags: - generated_from_keras_callback model-index: - name: Reemvn/distilroberta-base results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Reemvn/distilroberta-base This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0947 - Validation Loss: 0.1512 - Train Accuracy: 0.9455 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.1343 | 0.1610 | 0.945 | 0 | | 0.1097 | 0.1589 | 0.949 | 1 | | 0.0947 | 0.1512 | 0.9455 | 2 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
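As an illustration of the optimizer configuration listed above, here is a sketch that rebuilds the same `PolynomialDecay` schedule and Adam optimizer in Keras (an assumption-labelled reconstruction, not the original training script):

```python
import tensorflow as tf

# Reconstruct the learning-rate schedule described in the hyperparameters above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=5000,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam optimizer with the listed betas and epsilon.
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
```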
sarak7/H4_310_769_v1
sarak7
2024-03-11T02:43:04Z
179
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T02:41:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shamekhjr/ppo-LunarLander-v2
shamekhjr
2024-03-11T02:42:41Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-07T21:56:43Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 272.33 +/- 25.12 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual file):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is assumed).
checkpoint = load_from_hub(repo_id="shamekhjr/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
Litzy619/Va0309B1
Litzy619
2024-03-11T02:38:11Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-03-11T00:14:36Z
--- license: mit base_model: microsoft/phi-2 tags: - generated_from_trainer model-index: - name: Va0309B1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Va0309B1 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0706 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 20 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7676 | 0.09 | 10 | 2.7256 | | 2.6283 | 0.17 | 20 | 2.4272 | | 2.2188 | 0.26 | 30 | 1.8633 | | 1.6664 | 0.34 | 40 | 1.3024 | | 1.1333 | 0.43 | 50 | 0.7344 | | 0.5536 | 0.51 | 60 | 0.2242 | | 0.1952 | 0.6 | 70 | 0.1010 | | 0.1271 | 0.68 | 80 | 0.0909 | | 0.1187 | 0.77 | 90 | 0.0872 | | 0.1194 | 0.85 | 100 | 0.0851 | | 0.1135 | 0.94 | 110 | 0.0831 | | 0.1119 | 1.02 | 120 | 0.0805 | | 0.1132 | 1.11 | 130 | 0.0794 | | 0.1036 | 1.19 | 140 | 0.0790 | | 0.1083 | 1.28 | 150 | 0.0780 | | 0.1063 | 1.37 | 160 | 0.0765 | | 0.1087 | 1.45 | 170 | 0.0756 | | 0.0969 | 1.54 | 180 | 0.0740 | | 0.1024 | 1.62 | 190 | 0.0738 | | 0.1044 | 1.71 | 200 | 0.0727 | | 0.1013 | 1.79 | 210 | 0.0728 | | 0.0985 | 1.88 | 220 | 0.0724 | | 0.0961 | 1.96 | 230 | 0.0714 | | 0.1015 | 2.05 | 240 | 0.0717 | | 0.096 | 2.13 | 250 | 0.0710 | | 0.0935 | 2.22 | 260 | 0.0709 | | 0.0913 | 2.3 | 270 | 0.0709 | | 0.1026 | 2.39 | 280 | 0.0706 | | 0.0965 | 2.47 | 290 | 0.0705 | | 0.1029 | 2.56 | 300 | 0.0710 | | 0.0961 | 2.65 | 310 | 0.0702 | | 0.0991 | 2.73 | 320 | 0.0704 | | 0.0985 | 2.82 | 330 | 0.0705 | | 0.0929 | 2.9 | 340 | 0.0703 | | 0.0948 | 2.99 | 350 | 0.0706 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
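A sketch of `TrainingArguments` matching the hyperparameters listed above (illustrative only; the output directory is a placeholder and the model/dataset setup is omitted):

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the listed hyperparameters.
args = TrainingArguments(
    output_dir="Va0309B1",                  # placeholder
    learning_rate=3e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=32,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=20,
    num_train_epochs=3,
    fp16=True,                              # "mixed_precision_training: Native AMP"
)
```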
lilyray/albert_emotion
lilyray
2024-03-11T02:28:15Z
121
0
transformers
[ "transformers", "tensorboard", "safetensors", "albert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:lilyray/albert_emotion", "base_model:finetune:lilyray/albert_emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-10T17:30:03Z
--- license: apache-2.0 base_model: lilyray/albert_emotion tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: albert_emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9295 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert_emotion This model is a fine-tuned version of [lilyray/albert_emotion](https://huggingface.co/lilyray/albert_emotion) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2391 - Accuracy: 0.9295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.363600088100325e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 19 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1744 | 1.0 | 4000 | 0.2001 | 0.938 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
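A minimal inference sketch using the standard `transformers` pipeline (the input sentence is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline and classify one sentence.
classifier = pipeline("text-classification", model="lilyray/albert_emotion")
print(classifier("I am thrilled with how this turned out!"))
```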
baoyun30/gemma-Code-Instruct-Finetune-test
baoyun30
2024-03-11T02:02:41Z
91
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T01:58:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
asedmammad/Contextual_KTO_Mistral_PairRM-GGUF
asedmammad
2024-03-11T01:54:18Z
83
2
null
[ "gguf", "kto", "dpo", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "rl", "rlaif", "en", "dataset:snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset", "arxiv:2402.01306", "base_model:ContextualAI/Contextual_KTO_Mistral_PairRM", "base_model:quantized:ContextualAI/Contextual_KTO_Mistral_PairRM", "license:apache-2.0", "region:us", "conversational" ]
null
2024-03-10T22:07:16Z
--- base_model: ContextualAI/Contextual_KTO_Mistral_PairRM inference: false language: - en license: apache-2.0 tags: - kto - dpo - human feedback - rlhf - preferences - alignment - HALO - halos - rl - rlaif datasets: - snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset metrics: - accuracy model_creator: ContextualAI model_name: Contextual KTO Mistral PairRM model_type: mistral prompt_template: '<|user|> {prompt} <|assistant|> ' quantized_by: Ased Mammad --- # Contextual_KTO_Mistral_PairRM - GGUF - Model creator: [ContextualAI](https://huggingface.co/ContextualAI) - Original model: [Contextual_KTO_Mistral_PairRM](https://huggingface.co/ContextualAI/Contextual_KTO_Mistral_PairRM) <!-- description start --> ## Description This repo contains GGUF format model files for [Contextual_KTO_Mistral_PairRM](https://huggingface.co/ContextualAI/Contextual_KTO_Mistral_PairRM). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|user|> {prompt} <|assistant|> ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. 
## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [Contextual_KTO_Mistral_PairRM.Q2_K.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q2_K.gguf) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes | | [Contextual_KTO_Mistral_PairRM.Q3_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [Contextual_KTO_Mistral_PairRM.Q3_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [Contextual_KTO_Mistral_PairRM.Q3_K_L.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [Contextual_KTO_Mistral_PairRM.Q4_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Contextual_KTO_Mistral_PairRM.Q4_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [Contextual_KTO_Mistral_PairRM.Q5_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [Contextual_KTO_Mistral_PairRM.Q5_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | 
[Contextual_KTO_Mistral_PairRM.Q5_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [Contextual_KTO_Mistral_PairRM.Q6_K.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [Contextual_KTO_Mistral_PairRM.Q8_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF and below it, a specific filename to download, such as: Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. 
```shell ./main -ngl 35 -m Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|user|>\n{prompt}<|assistant|>\n", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- original-model-card start --> This repo contains the model and tokenizer checkpoints for: - model family [<b>mistralai/Mistral-7B-Instruct-v0.2</b>](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) - optimized with the loss [<b>KTO</b>](https://twitter.com/winniethexu/status/1732839295365554643) - aligned using the [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset) - via 3 iterations of KTO on one epoch of each training partition, each previous iteration's model serving as the reference for the subsequent. **[03/06/2024]**: We are #2 on the (verified) [Alpaca Eval 2.0 Leaderboard](https://tatsu-lab.github.io/alpaca_eval/) scoring **33.23**! To prompt this model, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added at tokenization time and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. You may also use our tokenizer's `apply_chat_template` if doing inference with `chatml` set or evaluating generations through non-local clients. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for more details on the methodology. If you found this work useful, feel free to cite [our work](https://arxiv.org/abs/2402.01306): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie, and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ``` <!-- original-model-card end -->
jeonsiyun/layoutlmv3-v29-epoch25
jeonsiyun
2024-03-11T01:54:04Z
119
0
transformers
[ "transformers", "safetensors", "layoutlmv3", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T01:53:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
coon-hound/lunarlander
coon-hound
2024-03-11T01:47:10Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-11T01:13:57Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -276.01 +/- 133.95 name: mean_reward verified: false --- # This is Aaron's lunar landing agent. It landed on the moon successfully a few times.
ITT-AF/ITT-Yi-Ko-6B-v6.0
ITT-AF
2024-03-11T01:43:50Z
55
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-07T02:52:58Z
--- license: cc-by-nc-4.0 --- ## ITT-AF/ITT-Yi-Ko-6B-v6.0 This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on a custom dataset. ### Model description More information needed ### Intended uses & limitations More information needed ### Training and evaluation data More information needed ### Training procedure ### Training hyperparameters The following hyperparameters were used during training: * learning_rate: 2e-05 * train_batch_size: 4 * eval_batch_size: 8 * seed: 42 * gradient_accumulation_steps: 8 * total_train_batch_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr_scheduler_type: linear * num_epochs: 1.0 * mixed_precision_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.0.0 * Tokenizers 0.15.0
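A minimal generation sketch with the standard `transformers` API (the Korean prompt is a placeholder; `device_map="auto"` assumes `accelerate` is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ITT-AF/ITT-Yi-Ko-6B-v6.0")
model = AutoModelForCausalLM.from_pretrained("ITT-AF/ITT-Yi-Ko-6B-v6.0", device_map="auto")

# Generate a short continuation for a placeholder prompt.
inputs = tokenizer("안녕하세요,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```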
juhwanlee/llmdo-Mistral-7B-case-7
juhwanlee
2024-03-11T01:43:03Z
48
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T00:41:21Z
--- license: apache-2.0 datasets: - Open-Orca/OpenOrca language: - en --- # Model Details * Model Description: This model is a test for data ordering. * Developed by: Juhwan Lee * Model Type: Large Language Model # Model Architecture This model is based on Mistral-7B-v0.1. We fine-tune this model for the data ordering task. Mistral-7B-v0.1 is a transformer model, with the following architecture choices: * Grouped-Query Attention * Sliding-Window Attention * Byte-fallback BPE tokenizer # Dataset We randomly sample from the Open-Orca dataset. (We fine-tune on a 100,000-example subset.) # GitHub https://github.com/trailerAI # License Apache License 2.0
grace-pro/oops_i_did_it_again_eval_hans
grace-pro
2024-03-11T01:40:10Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-03-11T01:38:55Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer metrics: - precision - recall - accuracy base_model: mistralai/Mistral-7B-v0.1 model-index: - name: oops_i_did_it_again_eval_hans results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # oops_i_did_it_again_eval_hans This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5556 - Precision: 0.8070 - Recall: 0.2481 - F1-score: 0.3796 - Accuracy: 0.5944 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:--------:| | 0.6706 | 1.0 | 4909 | 1.5556 | 0.8070 | 0.2481 | 0.3796 | 0.5944 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_org_lf_signal_it_32
furrutiav
2024-03-11T01:39:12Z
91
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-03-11T01:38:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wongctroman/fine-tuned-cloudy-sentence-transformer-3
wongctroman
2024-03-11T01:30:02Z
48
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-11T01:28:55Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # wongctroman/fine-tuned-cloudy-sentence-transformer-3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-3') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-3) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3 with parameters: ``` {'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 100, "evaluation_steps": 500, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
xia01ongLi/dummy-model2
xia01ongLi
2024-03-11T01:28:47Z
201
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-03-11T01:27:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Holarissun/gptj6b-aisft-hh-randsampler-subset2000
Holarissun
2024-03-11T01:20:53Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:EleutherAI/gpt-j-6b", "base_model:adapter:EleutherAI/gpt-j-6b", "license:apache-2.0", "region:us" ]
null
2024-03-11T01:16:40Z
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: EleutherAI/gpt-j-6b
model-index:
- name: gptj6b-aisft-hh-randsampler-subset2000
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gptj6b-aisft-hh-randsampler-subset2000

This model is a fine-tuned version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
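The card for Holarissun/gptj6b-aisft-hh-randsampler-subset2000 does not include a usage snippet, so the sketch below shows how a PEFT adapter of this kind is typically loaded on top of its base model. The repo ids come from this record; the call pattern is an assumption based on the standard PEFT/Transformers API, not an instruction from the author.

```python
# Minimal sketch: load GPT-J-6B and apply the adapter weights from this repo.
# Assumes the repo contains a standard PEFT adapter as saved by SFTTrainer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = PeftModel.from_pretrained(base, "Holarissun/gptj6b-aisft-hh-randsampler-subset2000")
model.eval()
```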
OwOOwO/eacc_adhoc_mtest_1
OwOOwO
2024-03-11T01:18:23Z
93
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T01:15:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
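The auto-generated card above for OwOOwO/eacc_adhoc_mtest_1 leaves the quick-start section empty, so here is a minimal hedged sketch of how a Gemma-based text-generation checkpoint like this one is usually loaded. The repo id is taken from the record; everything else assumes only the standard Transformers pipeline API and ordinary repository contents.

```python
# Minimal sketch: load this checkpoint with the high-level text-generation pipeline.
# Assumes the repo holds standard Transformers weights and tokenizer files.
from transformers import pipeline

generator = pipeline("text-generation", model="OwOOwO/eacc_adhoc_mtest_1")
print(generator("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```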
Holarissun/gptj6b-aisft-hh-seqsampler-subset2000
Holarissun
2024-03-11T01:15:38Z
1
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:EleutherAI/gpt-j-6b", "base_model:adapter:EleutherAI/gpt-j-6b", "license:apache-2.0", "region:us" ]
null
2024-03-11T01:15:34Z
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: EleutherAI/gpt-j-6b
model-index:
- name: gptj6b-aisft-hh-seqsampler-subset2000
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gptj6b-aisft-hh-seqsampler-subset2000

This model is a fine-tuned version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
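As with the randsampler variant above, this card has no usage example. The sketch below shows how one might run a short generation with the seqsampler adapter; it assumes only the standard Transformers/PEFT API, and the prompt and generation settings are illustrative, not taken from the author.

```python
# Minimal sketch: load the seqsampler adapter onto GPT-J-6B and generate a completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = PeftModel.from_pretrained(base, "Holarissun/gptj6b-aisft-hh-seqsampler-subset2000")
model.eval()

prompt = "Human: How do I brew good coffee?\n\nAssistant:"  # illustrative prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```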
jeonsiyun/layoutlmv3-v29-epoch50
jeonsiyun
2024-03-11T01:05:59Z
119
0
transformers
[ "transformers", "safetensors", "layoutlmv3", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T01:05:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bartowski/Hyperion-2.1-Mistral-7B-exl2
bartowski
2024-03-11T01:05:25Z
0
0
transformers
[ "transformers", "text-generation", "en", "dataset:Locutusque/hyperion-v2.0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T00:53:53Z
---
library_name: transformers
license: apache-2.0
datasets:
- Locutusque/hyperion-v2.0
language:
- en
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of Hyperion-2.1-Mistral-7B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.15">turboderp's ExLlamaV2 v0.0.15</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains a quantization at an individual bits per weight, with the main branch containing only the measurement.json for further conversions.

Original model: https://huggingface.co/Locutusque/Hyperion-2.1-Mistral-7B

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-exl2 Hyperion-2.1-Mistral-7B-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you only care about measurement.json) to a folder called `Hyperion-2.1-Mistral-7B-exl2`:

```shell
mkdir Hyperion-2.1-Mistral-7B-exl2
huggingface-cli download bartowski/Hyperion-2.1-Mistral-7B-exl2 --local-dir Hyperion-2.1-Mistral-7B-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir Hyperion-2.1-Mistral-7B-exl2-6_5
huggingface-cli download bartowski/Hyperion-2.1-Mistral-7B-exl2 --revision 6_5 --local-dir Hyperion-2.1-Mistral-7B-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
mkdir Hyperion-2.1-Mistral-7B-exl2-6.5
huggingface-cli download bartowski/Hyperion-2.1-Mistral-7B-exl2 --revision 6_5 --local-dir Hyperion-2.1-Mistral-7B-exl2-6.5 --local-dir-use-symlinks False
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
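For completeness, the same branch-specific download can be scripted from Python. This is a sketch that assumes only the public `huggingface_hub.snapshot_download` API; it is not part of the original card, and the target directory name is arbitrary.

```python
# Minimal sketch: download the 6.5 bpw branch of the exl2 quantization from Python.
# Mirrors the huggingface-cli commands above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Hyperion-2.1-Mistral-7B-exl2",
    revision="6_5",  # pick the branch / bits-per-weight you want
    local_dir="Hyperion-2.1-Mistral-7B-exl2-6_5",
)
```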