Column schema for the records below (dtype and observed range or class count):

| Column | Dtype | Range / values |
|:--|:--|:--|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-13 00:46:37 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 518 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-13 00:45:34 |
| card | string | length 11 – 1.01M |
Litzy619/V0328MP2
Litzy619
2024-03-27T22:44:06Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-03-27T20:51:51Z
--- license: mit base_model: microsoft/phi-2 tags: - generated_from_trainer model-index: - name: V0328MP2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # V0328MP2 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1140 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 20 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.1408 | 0.09 | 10 | 2.5415 | | 5.4886 | 0.18 | 20 | 2.4963 | | 4.5457 | 0.27 | 30 | 2.4110 | | 4.1074 | 0.36 | 40 | 2.3242 | | 3.5825 | 0.45 | 50 | 2.2528 | | 3.1612 | 0.54 | 60 | 2.2006 | | 2.8782 | 0.63 | 70 | 2.1606 | | 2.5962 | 0.73 | 80 | 2.1360 | | 2.7051 | 0.82 | 90 | 2.1230 | | 2.5853 | 0.91 | 100 | 2.1162 | | 2.6212 | 1.0 | 110 | 2.1140 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
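The card above leaves usage undocumented. A minimal sketch, assuming the repo holds a standard phi-2-style causal-LM checkpoint loadable through `transformers` (the prompt format shown is a common phi-2 convention, not something the card specifies):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Litzy619/V0328MP2"  # fine-tuned from microsoft/phi-2
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# "Instruct:/Output:" prompting is an assumption carried over from phi-2 usage.
inputs = tokenizer(
    "Instruct: Explain gradient accumulation in one sentence.\nOutput:",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```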
singhtanmay6735/Mistral7B_Wiki
singhtanmay6735
2024-03-27T22:39:35Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-03-27T22:38:45Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
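Since the card's quick-start section is empty, here is a hedged sketch of the usual PEFT loading pattern for an adapter repo whose declared base model is `mistralai/Mistral-7B-v0.1` (the prompt is illustrative only):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the adapter weights from this repo on top of the frozen base model.
model = PeftModel.from_pretrained(base, "singhtanmay6735/Mistral7B_Wiki")

inputs = tokenizer("Wikipedia is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```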
gonzalezrostani/my_awesome_wnut_Target
gonzalezrostani
2024-03-27T22:37:51Z
5
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-22T13:39:53Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: my_awesome_wnut_Target results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_Target This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6763 - Precision: 0.3571 - Recall: 0.3390 - F1: 0.3478 - Accuracy: 0.9270 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 46 | 0.1871 | 0.0 | 0.0 | 0.0 | 0.9386 | | No log | 2.0 | 92 | 0.1664 | 0.3091 | 0.2881 | 0.2982 | 0.9352 | | No log | 3.0 | 138 | 0.1972 | 0.2806 | 0.3305 | 0.3035 | 0.9243 | | No log | 4.0 | 184 | 0.2032 | 0.3739 | 0.3644 | 0.3691 | 0.9328 | | No log | 5.0 | 230 | 0.2632 | 0.3519 | 0.3220 | 0.3363 | 0.9310 | | No log | 6.0 | 276 | 0.3015 | 0.3308 | 0.3644 | 0.3468 | 0.9261 | | No log | 7.0 | 322 | 0.3573 | 0.3308 | 0.3644 | 0.3468 | 0.9255 | | No log | 8.0 | 368 | 0.3533 | 0.3590 | 0.3559 | 0.3574 | 0.9282 | | No log | 9.0 | 414 | 0.3963 | 0.35 | 0.3559 | 0.3529 | 0.9273 | | No log | 10.0 | 460 | 0.4153 | 0.3254 | 0.3475 | 0.3361 | 0.9255 | | 0.0718 | 11.0 | 506 | 0.4255 | 0.3762 | 0.3220 | 0.3470 | 0.9304 | | 0.0718 | 12.0 | 552 | 0.4626 | 0.3309 | 0.3898 | 0.3580 | 0.9224 | | 0.0718 | 13.0 | 598 | 0.4819 | 0.3690 | 0.2627 | 0.3069 | 0.9316 | | 0.0718 | 14.0 | 644 | 0.4861 | 0.3838 | 0.3220 | 0.3502 | 0.9319 | | 0.0718 | 15.0 | 690 | 0.5008 | 0.3363 | 0.3220 | 0.3290 | 0.9252 | | 0.0718 | 16.0 | 736 | 0.5095 | 0.4023 | 0.2966 | 0.3415 | 0.9331 | | 0.0718 | 17.0 | 782 | 0.5114 | 0.3361 | 0.3475 | 0.3417 | 0.9246 | | 0.0718 | 18.0 | 828 | 0.5363 | 0.3363 | 0.3220 | 0.3290 | 0.9255 | | 0.0718 | 19.0 | 874 | 0.5207 | 0.3607 | 0.3729 | 0.3667 | 0.9285 | | 0.0718 | 20.0 | 920 | 0.5468 | 0.35 | 0.3559 | 0.3529 | 0.9243 | | 0.0718 | 21.0 | 966 | 0.5389 | 0.3333 | 0.3390 | 0.3361 | 0.9249 | | 0.0041 | 22.0 | 1012 | 0.5408 | 0.3206 | 0.3559 | 0.3373 | 0.9246 | | 0.0041 | 23.0 | 1058 | 0.5321 | 0.3301 | 0.2881 | 0.3077 | 0.9297 | | 0.0041 | 24.0 | 1104 | 0.5651 | 0.2969 | 0.3220 | 0.3089 | 0.9231 | | 0.0041 | 25.0 | 1150 | 0.5706 | 0.3491 | 0.3136 | 0.3304 | 0.9276 | | 0.0041 | 26.0 | 1196 | 0.5864 | 0.3171 | 0.3305 | 0.3237 | 0.9243 | | 0.0041 | 27.0 | 1242 | 0.5764 | 0.3333 | 0.3220 | 0.3276 | 0.9276 | | 0.0041 | 28.0 | 1288 | 0.5778 | 0.3426 | 0.3136 | 0.3274 | 0.9297 | | 0.0041 | 29.0 | 1334 | 0.5840 | 0.3365 | 0.2966 | 0.3153 | 0.9304 | | 0.0041 | 30.0 | 1380 | 0.5792 | 0.3254 | 0.3475 | 0.3361 | 0.9258 | | 0.0041 | 31.0 | 1426 | 0.5864 | 0.328 | 0.3475 | 0.3374 | 0.9276 | | 0.0041 | 32.0 | 1472 | 0.5871 | 0.3217 | 0.3136 | 0.3176 | 
0.9279 | | 0.0022 | 33.0 | 1518 | 0.5767 | 0.3390 | 0.3390 | 0.3390 | 0.9276 | | 0.0022 | 34.0 | 1564 | 0.5898 | 0.3140 | 0.3220 | 0.3180 | 0.9267 | | 0.0022 | 35.0 | 1610 | 0.6122 | 0.3171 | 0.3305 | 0.3237 | 0.9221 | | 0.0022 | 36.0 | 1656 | 0.6434 | 0.3286 | 0.3898 | 0.3566 | 0.9200 | | 0.0022 | 37.0 | 1702 | 0.6109 | 0.3529 | 0.3051 | 0.3273 | 0.9300 | | 0.0022 | 38.0 | 1748 | 0.6319 | 0.3158 | 0.3559 | 0.3347 | 0.9218 | | 0.0022 | 39.0 | 1794 | 0.6238 | 0.3361 | 0.3390 | 0.3376 | 0.9267 | | 0.0022 | 40.0 | 1840 | 0.6418 | 0.3282 | 0.3644 | 0.3454 | 0.9237 | | 0.0022 | 41.0 | 1886 | 0.6315 | 0.3592 | 0.3136 | 0.3348 | 0.9294 | | 0.0022 | 42.0 | 1932 | 0.6218 | 0.3679 | 0.3305 | 0.3482 | 0.9282 | | 0.0022 | 43.0 | 1978 | 0.6193 | 0.3874 | 0.3644 | 0.3755 | 0.9288 | | 0.0011 | 44.0 | 2024 | 0.6122 | 0.3564 | 0.3051 | 0.3288 | 0.9294 | | 0.0011 | 45.0 | 2070 | 0.6366 | 0.3529 | 0.3559 | 0.3544 | 0.9246 | | 0.0011 | 46.0 | 2116 | 0.6318 | 0.3663 | 0.3136 | 0.3379 | 0.9294 | | 0.0011 | 47.0 | 2162 | 0.6404 | 0.3661 | 0.3475 | 0.3565 | 0.9270 | | 0.0011 | 48.0 | 2208 | 0.6574 | 0.3525 | 0.3644 | 0.3583 | 0.9234 | | 0.0011 | 49.0 | 2254 | 0.6674 | 0.3281 | 0.3559 | 0.3415 | 0.9209 | | 0.0011 | 50.0 | 2300 | 0.6631 | 0.3571 | 0.3390 | 0.3478 | 0.9246 | | 0.0011 | 51.0 | 2346 | 0.6769 | 0.3359 | 0.3644 | 0.3496 | 0.9209 | | 0.0011 | 52.0 | 2392 | 0.6642 | 0.3673 | 0.3051 | 0.3333 | 0.9279 | | 0.0011 | 53.0 | 2438 | 0.6754 | 0.3387 | 0.3559 | 0.3471 | 0.9237 | | 0.0011 | 54.0 | 2484 | 0.6686 | 0.3391 | 0.3305 | 0.3348 | 0.9249 | | 0.0007 | 55.0 | 2530 | 0.6593 | 0.3478 | 0.3390 | 0.3433 | 0.9249 | | 0.0007 | 56.0 | 2576 | 0.6554 | 0.3645 | 0.3305 | 0.3467 | 0.9294 | | 0.0007 | 57.0 | 2622 | 0.6882 | 0.3448 | 0.3390 | 0.3419 | 0.9212 | | 0.0007 | 58.0 | 2668 | 0.6523 | 0.3727 | 0.3475 | 0.3596 | 0.9285 | | 0.0007 | 59.0 | 2714 | 0.6423 | 0.3361 | 0.3475 | 0.3417 | 0.9264 | | 0.0007 | 60.0 | 2760 | 0.6624 | 0.3139 | 0.3644 | 0.3373 | 0.9218 | | 0.0007 | 61.0 | 2806 | 0.6452 | 0.3558 | 0.3136 | 0.3333 | 0.9288 | | 0.0007 | 62.0 | 2852 | 0.6390 | 0.3391 | 0.3305 | 0.3348 | 0.9282 | | 0.0007 | 63.0 | 2898 | 0.6686 | 0.3083 | 0.3475 | 0.3267 | 0.9218 | | 0.0007 | 64.0 | 2944 | 0.6503 | 0.3429 | 0.3051 | 0.3229 | 0.9288 | | 0.0007 | 65.0 | 2990 | 0.6444 | 0.3333 | 0.2966 | 0.3139 | 0.9288 | | 0.0009 | 66.0 | 3036 | 0.6701 | 0.3308 | 0.3644 | 0.3468 | 0.9224 | | 0.0009 | 67.0 | 3082 | 0.6448 | 0.3592 | 0.3136 | 0.3348 | 0.9300 | | 0.0009 | 68.0 | 3128 | 0.6490 | 0.3627 | 0.3136 | 0.3364 | 0.9294 | | 0.0009 | 69.0 | 3174 | 0.6425 | 0.3936 | 0.3136 | 0.3491 | 0.9319 | | 0.0009 | 70.0 | 3220 | 0.6695 | 0.3071 | 0.3305 | 0.3184 | 0.9227 | | 0.0009 | 71.0 | 3266 | 0.6600 | 0.3585 | 0.3220 | 0.3393 | 0.9282 | | 0.0009 | 72.0 | 3312 | 0.6636 | 0.3551 | 0.3220 | 0.3378 | 0.9279 | | 0.0009 | 73.0 | 3358 | 0.6744 | 0.3248 | 0.3220 | 0.3234 | 0.9258 | | 0.0009 | 74.0 | 3404 | 0.6690 | 0.3645 | 0.3305 | 0.3467 | 0.9279 | | 0.0009 | 75.0 | 3450 | 0.6723 | 0.3519 | 0.3220 | 0.3363 | 0.9270 | | 0.0009 | 76.0 | 3496 | 0.6745 | 0.3585 | 0.3220 | 0.3393 | 0.9282 | | 0.0006 | 77.0 | 3542 | 0.6685 | 0.3619 | 0.3220 | 0.3408 | 0.9282 | | 0.0006 | 78.0 | 3588 | 0.6647 | 0.3725 | 0.3220 | 0.3455 | 0.9294 | | 0.0006 | 79.0 | 3634 | 0.6643 | 0.3714 | 0.3305 | 0.3498 | 0.9294 | | 0.0006 | 80.0 | 3680 | 0.6680 | 0.3786 | 0.3305 | 0.3529 | 0.9300 | | 0.0006 | 81.0 | 3726 | 0.6563 | 0.3619 | 0.3220 | 0.3408 | 0.9291 | | 0.0006 | 82.0 | 3772 | 0.6624 | 0.3645 | 0.3305 | 0.3467 | 0.9279 | | 0.0006 | 83.0 | 3818 | 0.6591 | 0.3462 | 
0.3051 | 0.3243 | 0.9279 | | 0.0006 | 84.0 | 3864 | 0.6701 | 0.3333 | 0.3305 | 0.3319 | 0.9258 | | 0.0006 | 85.0 | 3910 | 0.6701 | 0.3417 | 0.3475 | 0.3445 | 0.9261 | | 0.0006 | 86.0 | 3956 | 0.6692 | 0.3534 | 0.3475 | 0.3504 | 0.9270 | | 0.0007 | 87.0 | 4002 | 0.6677 | 0.3578 | 0.3305 | 0.3436 | 0.9273 | | 0.0007 | 88.0 | 4048 | 0.6761 | 0.3307 | 0.3559 | 0.3429 | 0.9243 | | 0.0007 | 89.0 | 4094 | 0.6853 | 0.3111 | 0.3559 | 0.3320 | 0.9221 | | 0.0007 | 90.0 | 4140 | 0.6833 | 0.3233 | 0.3644 | 0.3426 | 0.9234 | | 0.0007 | 91.0 | 4186 | 0.6847 | 0.3233 | 0.3644 | 0.3426 | 0.9234 | | 0.0007 | 92.0 | 4232 | 0.6804 | 0.3307 | 0.3559 | 0.3429 | 0.9243 | | 0.0007 | 93.0 | 4278 | 0.6792 | 0.328 | 0.3475 | 0.3374 | 0.9243 | | 0.0007 | 94.0 | 4324 | 0.6752 | 0.3448 | 0.3390 | 0.3419 | 0.9261 | | 0.0007 | 95.0 | 4370 | 0.6742 | 0.3509 | 0.3390 | 0.3448 | 0.9264 | | 0.0007 | 96.0 | 4416 | 0.6748 | 0.3604 | 0.3390 | 0.3493 | 0.9273 | | 0.0007 | 97.0 | 4462 | 0.6760 | 0.3604 | 0.3390 | 0.3493 | 0.9273 | | 0.0005 | 98.0 | 4508 | 0.6760 | 0.3604 | 0.3390 | 0.3493 | 0.9273 | | 0.0005 | 99.0 | 4554 | 0.6761 | 0.3604 | 0.3390 | 0.3493 | 0.9273 | | 0.0005 | 100.0 | 4600 | 0.6763 | 0.3571 | 0.3390 | 0.3478 | 0.9270 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
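A short usage sketch for this token classifier, assuming the standard `transformers` pipeline API (the example sentence is illustrative; the card does not document the label set):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece predictions into word-level spans.
ner = pipeline(
    "token-classification",
    model="gonzalezrostani/my_awesome_wnut_Target",
    aggregation_strategy="simple",
)
print(ner("Apple opened a new office in Cupertino last week."))
```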
solwol/my-awesome-adapter
solwol
2024-03-27T22:33:08Z
3
0
adapter-transformers
[ "adapter-transformers", "roberta", "adapterhub:sentiment/rotten_tomatoes", "text-classification", "dataset:rotten_tomatoes", "region:us" ]
text-classification
2024-03-26T00:52:24Z
--- tags: - adapter-transformers - roberta - adapterhub:sentiment/rotten_tomatoes datasets: - rotten_tomatoes pipeline_tag: text-classification widget: - text: "Adapters are awesome" --- # Adapter `solwol/my-awesome-adapter` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `transformers` and `adapters`: ``` pip install -U transformers adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("solwol/my-awesome-adapter", source="hf", set_active=True) adapter_name ``` Next, to perform sentiment classification: ```python from transformers import AutoTokenizer, TextClassificationPipeline tokenizer = AutoTokenizer.from_pretrained("roberta-base") classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer) classifier("Adapters are awesome!") ```
ProrabVasili/roberta_ukr-psyop-6_3
ProrabVasili
2024-03-27T22:31:31Z
99
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:youscan/ukr-roberta-base", "base_model:finetune:youscan/ukr-roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-27T20:28:56Z
--- base_model: youscan/ukr-roberta-base tags: - generated_from_trainer metrics: - accuracy - precision - recall model-index: - name: roberta_ukr-psyop-6_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta_ukr-psyop-6_3 This model is a fine-tuned version of [youscan/ukr-roberta-base](https://huggingface.co/youscan/ukr-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0326 - Accuracy: 0.9929 - Precision: 0.9951 - Recall: 0.9906 - F1-score: 0.9928 - Matthews Corrcoef: 0.9857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 14 - eval_batch_size: 14 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1-score | Matthews Corrcoef | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:--------:|:-----------------:| | 0.0545 | 1.0 | 1787 | 0.0350 | 0.9904 | 0.9849 | 0.9961 | 0.9905 | 0.9809 | | 0.0001 | 2.0 | 3574 | 0.0512 | 0.9925 | 0.9925 | 0.9925 | 0.9925 | 0.9851 | | 0.0069 | 3.0 | 5361 | 0.0326 | 0.9929 | 0.9951 | 0.9906 | 0.9928 | 0.9857 | | 0.0 | 4.0 | 7148 | 0.0714 | 0.9912 | 0.9874 | 0.9951 | 0.9913 | 0.9825 | | 0.0 | 5.0 | 8935 | 0.0709 | 0.9925 | 0.9961 | 0.9890 | 0.9925 | 0.9851 | | 0.0 | 6.0 | 10722 | 0.0629 | 0.9933 | 0.9932 | 0.9935 | 0.9933 | 0.9867 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
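A minimal inference sketch for this checkpoint, assuming it works with the plain `transformers` text-classification pipeline (the card does not name the classes, so the output labels are whatever the model config defines; the Ukrainian input matches the `youscan/ukr-roberta-base` base model):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="ProrabVasili/roberta_ukr-psyop-6_3")
# Example Ukrainian sentence; purely illustrative.
print(clf("Це приклад українського тексту для класифікації."))
```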
manbeast3b/s22
manbeast3b
2024-03-27T22:30:41Z
90
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T21:52:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
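The auto-generated card gives no quick-start code; a generic text-generation sketch, under the assumption that the repo loads like any `transformers` StableLM checkpoint (the repo is also tagged `conversational`, so applying the tokenizer's chat template may be more appropriate):

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="manbeast3b/s22",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("The key idea behind attention is", max_new_tokens=50)[0]["generated_text"])
```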
lunarsylph/stablecell_v5
lunarsylph
2024-03-27T22:28:13Z
90
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T22:21:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
emmanuel758/leuna_model_repo
emmanuel758
2024-03-27T22:14:25Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-27T22:11:20Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
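The repo is tagged `4-bit` / `bitsandbytes`, so a plausible loading sketch uses a 4-bit quantization config (the parameters below are common defaults, not taken from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4-style loading; compute dtype is an assumed default.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("emmanuel758/leuna_model_repo")
model = AutoModelForCausalLM.from_pretrained(
    "emmanuel758/leuna_model_repo",
    quantization_config=bnb_config,
    device_map="auto",
)
```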
noutcheu/your_repos_hub
noutcheu
2024-03-27T22:14:11Z
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T11:58:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dhyay/medimistral_mergev1
dhyay
2024-03-27T22:13:55Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "dhyay/medimistral14kv2", "conversational", "base_model:dhyay/medimistral14kv2", "base_model:merge:dhyay/medimistral14kv2", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:merge:mistralai/Mistral-7B-Instruct-v0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T22:09:47Z
--- tags: - merge - mergekit - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - dhyay/medimistral14kv2 base_model: - mistralai/Mistral-7B-Instruct-v0.2 - dhyay/medimistral14kv2 --- # medimistral_mergev1 medimistral_mergev1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [dhyay/medimistral14kv2](https://huggingface.co/dhyay/medimistral14kv2) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: dhyay/medimistral14kv2 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "dhyay/medimistral_mergev1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
areegtarek/idefics-9b-split1-v1-split1.2-v1
areegtarek
2024-03-27T21:52:12Z
63
0
transformers
[ "transformers", "safetensors", "idefics", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
image-text-to-text
2024-03-27T21:49:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
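For an `image-text-to-text` IDEFICS checkpoint, the usual interleaved-prompt pattern looks like this — a sketch following the upstream `HuggingFaceM4/idefics-9b` examples; the image URL and prompt are placeholders, and quantized loading details may differ for this 4-bit repo:

```python
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "areegtarek/idefics-9b-split1-v1-split1.2-v1"
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(checkpoint, device_map="auto")

# IDEFICS prompts interleave text and images (URLs or PIL images) in one list.
prompts = [
    [
        "User: What is in this picture?",
        "https://example.com/photo.jpg",  # placeholder image URL
        "<end_of_utterance>",
        "\nAssistant:",
    ]
]
inputs = processor(prompts, return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=40)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```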
deepnet/SN6-71S2
deepnet
2024-03-27T21:44:41Z
90
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T21:31:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
msho/llemma_dpo_sampled
msho
2024-03-27T21:33:38Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:EleutherAI/llemma_7b", "base_model:adapter:EleutherAI/llemma_7b", "license:llama2", "region:us" ]
null
2024-03-27T20:40:12Z
--- license: llama2 library_name: peft tags: - trl - dpo - generated_from_trainer base_model: EleutherAI/llemma_7b model-index: - name: llemma_dpo_sampled results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llemma_dpo_sampled This model is a fine-tuned version of [EleutherAI/llemma_7b](https://huggingface.co/EleutherAI/llemma_7b) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
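A loading sketch for this DPO-trained adapter, assuming the standard PEFT pattern against its declared `EleutherAI/llemma_7b` base; `merge_and_unload()` folds the adapter into the base weights so inference runs through plain `transformers`:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/llemma_7b", torch_dtype=torch.float16, device_map="auto"
)
# Apply the DPO adapter, then merge it for adapter-free inference.
model = PeftModel.from_pretrained(base, "msho/llemma_dpo_sampled").merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/llemma_7b")
```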
francisco-perez-sorrosal/q-FrozenLake-v1-4x4-Slippery
francisco-perez-sorrosal
2024-03-27T21:33:36Z
0
0
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-27T21:33:33Z
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-Slippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 metrics: - type: mean_reward value: 0.68 +/- 0.47 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym model = load_from_hub(repo_id="francisco-perez-sorrosal/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ``` (`load_from_hub` is not a library import; it is presumably the pickle-loading helper defined in the Hugging Face Deep RL course notebooks.)
ebajeanne/eba_ngolong
ebajeanne
2024-03-27T21:33:15Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T21:23:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
numen-tech/RakutenAI-7B-chat-w3a16g40sym
numen-tech
2024-03-27T21:27:33Z
0
0
null
[ "arxiv:2308.13137", "license:apache-2.0", "region:us" ]
null
2024-03-27T21:22:07Z
--- license: apache-2.0 --- 3-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [RakutenAI-7B-chat](https://huggingface.co/Rakuten/RakutenAI-7B-chat).
heavoc/videomae-base-finetuned-chickenbehaviour
heavoc
2024-03-27T21:26:14Z
61
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-03-26T15:18:11Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-chickenbehaviour results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-chickenbehaviour This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1091 - Accuracy: 0.6676 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 25448 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.3916 | 0.12 | 3181 | 1.5101 | 0.5662 | | 1.566 | 1.12 | 6362 | 1.2704 | 0.6318 | | 1.5383 | 2.12 | 9543 | 1.2007 | 0.6452 | | 1.0339 | 3.12 | 12724 | 1.2358 | 0.6450 | | 1.0752 | 4.12 | 15905 | 1.1753 | 0.6641 | | 0.7721 | 5.12 | 19086 | 1.1413 | 0.6777 | | 0.9696 | 6.12 | 22267 | 1.2086 | 0.6733 | | 0.8068 | 7.12 | 25448 | 1.2041 | 0.6688 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.1.0 - Datasets 2.18.0 - Tokenizers 0.15.2
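The card gives no inference example; here is a minimal sketch using the standard VideoMAE classes. The 16-frame clip length and the random placeholder frames are assumptions, so substitute frames sampled from a real video:

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "heavoc/videomae-base-finetuned-chickenbehaviour"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# Placeholder clip: 16 RGB frames of 224x224; replace with frames from your video reader.
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```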
SaranaAbidueva/nllb-200-bxr-ru
SaranaAbidueva
2024-03-27T21:23:40Z
62
1
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "ru", "dataset:SaranaAbidueva/buryat-russian_parallel_corpus", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-27T20:54:49Z
--- license: mit datasets: - SaranaAbidueva/buryat-russian_parallel_corpus language: - ru metrics: - bleu --- This is NLLB-200 trained on Buryat-Russian language pairs. It translates from Buryat to Russian and vice versa. BLEU bxr-ru: 20, ru-bxr: 13. Thanks to the https://huggingface.co/slone/nllb-rus-tyv-v1 tutorial. ```python !pip install sentencepiece transformers==4.33 from transformers import NllbTokenizer, AutoModelForSeq2SeqLM, AutoConfig def fix_tokenizer(tokenizer, new_lang='bxr_Cyrl'): """ Add a new language token to the tokenizer vocabulary (this should be done each time after its initialization) """ old_len = len(tokenizer) - int(new_lang in tokenizer.added_tokens_encoder) tokenizer.lang_code_to_id[new_lang] = old_len-1 tokenizer.id_to_lang_code[old_len-1] = new_lang # always move "mask" to the last position tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id) tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()} if new_lang not in tokenizer._additional_special_tokens: tokenizer._additional_special_tokens.append(new_lang) # clear the added token encoder; otherwise a new token may end up there by mistake tokenizer.added_tokens_encoder = {} tokenizer.added_tokens_decoder = {} MODEL_URL = "SaranaAbidueva/nllb-200-bxr-ru" model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_URL) tokenizer = NllbTokenizer.from_pretrained(MODEL_URL, force_download=True) fix_tokenizer(tokenizer) def translate(text, src_lang='rus_Cyrl', tgt_lang='bxr_Cyrl', a=32, b=3, max_input_length=1024, num_beams=4, **kwargs): tokenizer.src_lang = src_lang tokenizer.tgt_lang = tgt_lang inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=max_input_length) result = model.generate( **inputs.to(model.device), forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang), max_new_tokens=int(a + b * inputs.input_ids.shape[1]), num_beams=num_beams, **kwargs ) return tokenizer.batch_decode(result, skip_special_tokens=True) translate("красная птица", src_lang='rus_Cyrl', tgt_lang='bxr_Cyrl') ```
yuiseki/tinyllama-ja-wikipedia-1.5T-v0.1-step-4k
yuiseki
2024-03-27T21:21:57Z
62
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T21:20:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Madao-314/q-Taxi-v3
Madao-314
2024-03-27T21:20:33Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-27T17:12:39Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Madao-314/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (e.g. is_slippery=False) env = gym.make(model["env_id"]) ```
yuiseki/tinyllama-ja-scp-1.5T-v0.1
yuiseki
2024-03-27T21:16:33Z
63
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-25T22:57:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dhyay/medimistral14kv2
dhyay
2024-03-27T21:15:39Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T21:13:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tsavage68/v1_2000_STEPS_5e6_rate_03_beta_DPO
tsavage68
2024-03-27T21:13:14Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T21:07:30Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.1 tags: - trl - dpo - generated_from_trainer model-index: - name: v1_2000_STEPS_5e6_rate_03_beta_DPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # v1_2000_STEPS_5e6_rate_03_beta_DPO This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0059 - Rewards/chosen: -24.7548 - Rewards/rejected: -24.1061 - Rewards/accuracies: 0.3582 - Rewards/margins: -0.6487 - Logps/rejected: -97.2333 - Logps/chosen: -97.7691 - Logits/rejected: -5.2528 - Logits/chosen: -5.2528 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.9868 | 0.05 | 50 | 1.1711 | -2.9428 | -2.9268 | 0.4330 | -0.0159 | -26.6357 | -25.0622 | -3.6433 | -3.6433 | | 2.3478 | 0.1 | 100 | 2.0400 | -6.8764 | -6.5428 | 0.3846 | -0.3336 | -38.6889 | -38.1744 | -3.1687 | -3.1689 | | 2.611 | 0.15 | 150 | 1.9184 | -5.6378 | -5.4005 | 0.4044 | -0.2373 | -34.8813 | -34.0459 | -2.7506 | -2.7505 | | 4.1618 | 0.2 | 200 | 3.2078 | -17.2610 | -17.1338 | 0.4549 | -0.1272 | -73.9923 | -72.7897 | -1.8104 | -1.8104 | | 2.7627 | 0.24 | 250 | 2.9158 | -15.0561 | -14.8894 | 0.4440 | -0.1667 | -66.5108 | -65.4402 | -3.3141 | -3.3141 | | 3.6661 | 0.29 | 300 | 2.9462 | -17.0688 | -16.7324 | 0.4286 | -0.3364 | -72.6541 | -72.1490 | -3.4818 | -3.4818 | | 2.9918 | 0.34 | 350 | 2.8967 | -14.0241 | -13.8259 | 0.4527 | -0.1982 | -62.9660 | -62.0001 | -3.5357 | -3.5357 | | 5.0079 | 0.39 | 400 | 2.6045 | -13.8849 | -13.6344 | 0.4264 | -0.2504 | -62.3277 | -61.5359 | -3.3164 | -3.3164 | | 5.0356 | 0.44 | 450 | 2.8214 | -15.0823 | -14.8094 | 0.4484 | -0.2729 | -66.2441 | -65.5273 | -4.8720 | -4.8719 | | 3.858 | 0.49 | 500 | 2.8497 | -14.8747 | -14.6263 | 0.4462 | -0.2484 | -65.6339 | -64.8354 | -4.3757 | -4.3757 | | 4.3217 | 0.54 | 550 | 2.6753 | -14.4812 | -14.1726 | 0.4374 | -0.3085 | -64.1217 | -63.5237 | -4.6084 | -4.6084 | | 2.2709 | 0.59 | 600 | 2.7610 | -17.0678 | -16.8582 | 0.4374 | -0.2096 | -73.0735 | -72.1458 | -3.4647 | -3.4646 | | 4.1629 | 0.64 | 650 | 2.5745 | -15.9106 | -15.5448 | 0.4242 | -0.3658 | -68.6954 | -68.2883 | -4.6729 | -4.6729 | | 3.8448 | 0.68 | 700 | 2.5174 | -15.9576 | -15.6284 | 0.4549 | -0.3292 | -68.9742 | -68.4451 | -4.4193 | -4.4193 | | 2.2076 | 0.73 | 750 | 2.5577 | -15.9437 | -15.5036 | 0.4352 | -0.4401 | -68.5581 | -68.3986 | -5.2628 | -5.2628 | | 1.7122 | 0.78 | 
800 | 2.4622 | -16.9908 | -16.5388 | 0.4330 | -0.4520 | -72.0088 | -71.8890 | -4.6677 | -4.6677 | | 4.2836 | 0.83 | 850 | 2.4392 | -21.5360 | -21.1181 | 0.4242 | -0.4179 | -87.2732 | -87.0397 | -4.6942 | -4.6942 | | 2.0891 | 0.88 | 900 | 2.5920 | -22.6793 | -22.2203 | 0.4571 | -0.4590 | -90.9473 | -90.8508 | -4.8027 | -4.8027 | | 3.1818 | 0.93 | 950 | 2.3526 | -23.8680 | -23.5454 | 0.4527 | -0.3226 | -95.3641 | -94.8129 | -4.7170 | -4.7169 | | 2.9536 | 0.98 | 1000 | 2.3082 | -23.0470 | -22.5591 | 0.4220 | -0.4879 | -92.0765 | -92.0763 | -4.4404 | -4.4404 | | 1.7844 | 1.03 | 1050 | 2.1483 | -21.5286 | -20.9353 | 0.4088 | -0.5933 | -86.6637 | -87.0149 | -4.7032 | -4.7031 | | 1.7756 | 1.07 | 1100 | 2.2115 | -23.2036 | -22.5728 | 0.4000 | -0.6308 | -92.1223 | -92.5985 | -5.4507 | -5.4507 | | 1.5056 | 1.12 | 1150 | 2.2646 | -19.3579 | -18.6576 | 0.3846 | -0.7004 | -79.0715 | -79.7795 | -5.2285 | -5.2285 | | 1.3908 | 1.17 | 1200 | 2.2503 | -22.9644 | -22.1188 | 0.3824 | -0.8456 | -90.6089 | -91.8011 | -5.1449 | -5.1449 | | 1.9094 | 1.22 | 1250 | 2.2255 | -24.9046 | -24.0560 | 0.3890 | -0.8486 | -97.0663 | -98.2684 | -5.0663 | -5.0663 | | 1.6242 | 1.27 | 1300 | 2.3035 | -22.9644 | -22.2812 | 0.4022 | -0.6832 | -91.1502 | -91.8012 | -4.7409 | -4.7408 | | 1.7631 | 1.32 | 1350 | 2.2782 | -24.2942 | -23.4381 | 0.3846 | -0.8560 | -95.0067 | -96.2336 | -4.8726 | -4.8725 | | 1.821 | 1.37 | 1400 | 2.1303 | -23.8856 | -23.1654 | 0.3912 | -0.7202 | -94.0977 | -94.8717 | -5.1322 | -5.1321 | | 1.5613 | 1.42 | 1450 | 2.1094 | -25.0650 | -24.4124 | 0.3824 | -0.6526 | -98.2543 | -98.8031 | -5.2516 | -5.2516 | | 1.3106 | 1.47 | 1500 | 2.0269 | -24.0518 | -23.4855 | 0.3802 | -0.5663 | -95.1646 | -95.4258 | -5.2393 | -5.2393 | | 1.1946 | 1.51 | 1550 | 2.0830 | -25.1070 | -24.4242 | 0.3560 | -0.6828 | -98.2934 | -98.9430 | -5.2559 | -5.2559 | | 1.7872 | 1.56 | 1600 | 2.0496 | -24.8926 | -24.1890 | 0.3692 | -0.7035 | -97.5097 | -98.2283 | -5.2683 | -5.2683 | | 1.8887 | 1.61 | 1650 | 2.0065 | -24.1169 | -23.5004 | 0.3626 | -0.6165 | -95.2141 | -95.6428 | -5.2470 | -5.2469 | | 1.8434 | 1.66 | 1700 | 2.0105 | -24.5153 | -23.8551 | 0.3626 | -0.6602 | -96.3966 | -96.9706 | -5.2365 | -5.2364 | | 1.3652 | 1.71 | 1750 | 2.0138 | -24.6797 | -24.0077 | 0.3648 | -0.6720 | -96.9052 | -97.5188 | -5.2445 | -5.2444 | | 1.5787 | 1.76 | 1800 | 2.0064 | -24.7465 | -24.0922 | 0.3582 | -0.6543 | -97.1869 | -97.7414 | -5.2543 | -5.2543 | | 1.8425 | 1.81 | 1850 | 2.0064 | -24.7549 | -24.1066 | 0.3604 | -0.6483 | -97.2348 | -97.7693 | -5.2532 | -5.2531 | | 1.3414 | 1.86 | 1900 | 2.0058 | -24.7571 | -24.1089 | 0.3582 | -0.6482 | -97.2425 | -97.7766 | -5.2532 | -5.2532 | | 1.7149 | 1.91 | 1950 | 2.0055 | -24.7535 | -24.1060 | 0.3582 | -0.6475 | -97.2328 | -97.7645 | -5.2528 | -5.2527 | | 2.2753 | 1.95 | 2000 | 2.0059 | -24.7548 | -24.1061 | 0.3582 | -0.6487 | -97.2333 | -97.7691 | -5.2528 | -5.2528 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.0.0+cu117 - Datasets 2.18.0 - Tokenizers 0.15.2
geektech/t5-xl-lora-ce-gpt4
geektech
2024-03-27T21:12:15Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:google-t5/t5-3b", "base_model:adapter:google-t5/t5-3b", "region:us" ]
null
2024-03-27T11:17:38Z
--- library_name: peft base_model: t5-3b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
MrezaPRZ/sql-judge-7B
MrezaPRZ
2024-03-27T21:01:13Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T19:28:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Menouar/pygemma-2b-it
Menouar
2024-03-27T21:00:23Z
112
1
transformers
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "generated_from_trainer", "google/gemma", "PyTorch", "trl", "peft", "conversational", "en", "dataset:Vezora/Tested-143k-Python-Alpaca", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T20:58:27Z
--- license: apache-2.0 tags: - generated_from_trainer - google/gemma - PyTorch - transformers - trl - peft - tensorboard model-index: - name: pygemma-2b-it results: [] datasets: - Vezora/Tested-143k-Python-Alpaca language: - en base_model: google/gemma-2b widget: - example_title: Compute Sum messages: - role: system content: Welcome to PyGemma, your AI-powered Python assistant. I'm here to help you answer common questions about the Python programming language. Let's dive into Python! - role: user content: Create a function to calculate the sum of a sequence of integers. pipeline_tag: text-generation --- # Model Card for pygemma-2b-it: 🐍💬🤖 **pygemma-2b-it** is a language model that is trained to act as Python assistant. It is a finetuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) that was trained using `SFTTrainer` on publicly available dataset [Vezora/Tested-143k-Python-Alpaca](https://huggingface.co/datasets/Vezora/Tested-143k-Python-Alpaca). ## Training Metrics [The training metrics can be found on **TensorBoard**](https://huggingface.co/Menouar/pygemma-2b-it/tensorboard). ## Training hyperparameters The following hyperparameters were used during the training: - output_dir: peft-lora-model - overwrite_output_dir: True - do_train: False - do_eval: False - do_predict: False - evaluation_strategy: no - prediction_loss_only: False - per_device_train_batch_size: 2 - per_device_eval_batch_size: None - per_gpu_train_batch_size: None - per_gpu_eval_batch_size: None - gradient_accumulation_steps: 4 - eval_accumulation_steps: None - eval_delay: 0 - learning_rate: 2e-05 - weight_decay: 0.0 - adam_beta1: 0.9 - adam_beta2: 0.999 - adam_epsilon: 1e-08 - max_grad_norm: 0.3 - num_train_epochs: 1 - max_steps: -1 - lr_scheduler_type: cosine - lr_scheduler_kwargs: {} - warmup_ratio: 0.1 - warmup_steps: 0 - log_level: passive - log_level_replica: warning - log_on_each_node: True - logging_dir: peft-lora-model/runs/Mar27_16-25-16_393edc92728c - logging_strategy: steps - logging_first_step: False - logging_steps: 10 - logging_nan_inf_filter: True - save_strategy: epoch - save_steps: 500 - save_total_limit: None - save_safetensors: True - save_on_each_node: False - save_only_model: False - no_cuda: False - use_cpu: False - use_mps_device: False - seed: 42 - data_seed: None - jit_mode_eval: False - use_ipex: False - bf16: True - fp16: False - fp16_opt_level: O1 - half_precision_backend: auto - bf16_full_eval: False - fp16_full_eval: False - tf32: None - local_rank: 0 - ddp_backend: None - tpu_num_cores: None - tpu_metrics_debug: False - debug: [] - dataloader_drop_last: False - eval_steps: None - dataloader_num_workers: 0 - dataloader_prefetch_factor: None - past_index: -1 - run_name: peft-lora-model - disable_tqdm: False - remove_unused_columns: True - label_names: None - load_best_model_at_end: False - metric_for_best_model: None - greater_is_better: None - ignore_data_skip: False - fsdp: [] - fsdp_min_num_params: 0 - fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - fsdp_transformer_layer_cls_to_wrap: None - accelerator_config: AcceleratorConfig(split_batches=False, dispatch_batches=None, even_batches=True, use_seedable_sampler=True) - deepspeed: None - label_smoothing_factor: 0.0 - optim: adamw_torch_fused - optim_args: None - adafactor: False - group_by_length: False - length_column_name: length - report_to: ['tensorboard'] - ddp_find_unused_parameters: None - ddp_bucket_cap_mb: None - ddp_broadcast_buffers: None - 
dataloader_pin_memory: True - dataloader_persistent_workers: False - skip_memory_metrics: True - use_legacy_prediction_loop: False - push_to_hub: False - resume_from_checkpoint: None - hub_model_id: None - hub_strategy: every_save - hub_token: None - hub_private_repo: False - hub_always_push: False - gradient_checkpointing: True - gradient_checkpointing_kwargs: {'use_reentrant': False} - include_inputs_for_metrics: False - fp16_backend: auto - push_to_hub_model_id: None - push_to_hub_organization: None - push_to_hub_token: None - mp_parameters: - auto_find_batch_size: False - full_determinism: False - torchdynamo: None - ray_scope: last - ddp_timeout: 1800 - torch_compile: False - torch_compile_backend: None - torch_compile_mode: None - dispatch_batches: None - split_batches: None - include_tokens_per_second: False - include_num_input_tokens_seen: False - neftune_noise_alpha: None - distributed_state: Distributed environment: NO Num processes: 1 Process index: 0 Local process index: 0 Device: cuda - _n_gpu: 1 - __cached__setup_devices: cuda:0 - deepspeed_plugin: None
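The card describes a Gemma chat fine-tune but stops short of an inference snippet. A minimal sketch using the tokenizer's chat template follows; the prompt is taken from the card's widget, and since Gemma chat templates typically accept only user/model turns, the system message is omitted here (dtype and device placement are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Menouar/pygemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "Create a function to calculate the sum of a sequence of integers."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```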
Tak3da/manga
Tak3da
2024-03-27T20:54:27Z
0
0
null
[ "it", "region:us" ]
null
2024-03-27T20:53:11Z
--- language: - it metrics: - accuracy ---
anupkashyap7/mistral-sql-coder
anupkashyap7
2024-03-27T20:51:01Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "mistralai/Mistral-7B-v0.1", "defog/sqlcoder-7b-2", "base_model:defog/sqlcoder-7b-2", "base_model:merge:defog/sqlcoder-7b-2", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T20:11:58Z
--- tags: - merge - mergekit - mistralai/Mistral-7B-v0.1 - defog/sqlcoder-7b-2 base_model: - mistralai/Mistral-7B-v0.1 - defog/sqlcoder-7b-2 --- # mistral-sql-coder mistral-sql-coder is a merge of the following models: * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 parameters: density: 0.5 weight: 1.0 - model: defog/sqlcoder-7b-2 parameters: density: 0.33 weight: 1.0 merge_method: ties base_model: mistralai/Mistral-7B-v0.1 parameters: normalize: true int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "anupkashyap7/mistral-sql-coder" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
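To reproduce the TIES merge from the YAML configuration in the card, the `mergekit-yaml` command-line tool can be driven from Python. This is a sketch assuming mergekit is installed (`pip install mergekit`) and the YAML block has been saved to a local file; the file and output paths are illustrative:

```python
import pathlib
import subprocess

# Save the YAML configuration from the card to this file before running.
config = pathlib.Path("config.yaml")

# mergekit-yaml takes the config path and an output directory for the merged model.
subprocess.run(["mergekit-yaml", str(config), "./mistral-sql-coder"], check=True)
```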
jihedonhf/dogbooth
jihedonhf
2024-03-27T20:50:04Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2-1", "base_model:finetune:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-27T13:01:28Z
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers base_model: stabilityai/stable-diffusion-2-1 inference: true instance_prompt: a photo of [v]dog --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - jihedonhf/dogbooth This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on the instance prompt `a photo of [v]dog` using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth training of the text encoder was not enabled. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
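The card's `# TODO` asks for a pipeline snippet. A minimal sketch follows, assuming the repository holds a standard `StableDiffusionPipeline` checkpoint; the instance prompt `a photo of [v]dog` comes from the card's metadata, while the scene suffix, step count, and guidance scale are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "jihedonhf/dogbooth", torch_dtype=torch.float16
).to("cuda")

# The DreamBooth identifier token [v] must appear in the prompt to recall the learned subject.
image = pipe("a photo of [v]dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dogbooth.png")
```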
Herry443/Mistral-7B-KNUT-ref-en-mmlu-0.3-final
Herry443
2024-03-27T20:48:31Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T20:25:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
leonardo-rocha/llama2-7b-hf-chat
leonardo-rocha
2024-03-27T20:41:40Z
3
0
mlx
[ "mlx", "pytorch", "safetensors", "llama", "facebook", "meta", "llama-2", "text-generation", "conversational", "license:llama2", "region:us" ]
text-generation
2024-03-27T20:36:50Z
--- pipeline_tag: text-generation library_name: mlx inference: false tags: - facebook - meta - llama - llama-2 - mlx license: llama2 --- # **Llama 2 7B Chat 4-bit** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, in `npz` format suitable for use in Apple's MLX framework. Weights have been converted to `float16` from the original `bfloat16` type, because `numpy` is not compatible with `bfloat16` out of the box. How to use with [MLX](https://github.com/ml-explore/mlx): ```bash # Install mlx, mlx-examples, huggingface-cli pip install mlx pip install huggingface_hub hf_transfer git clone https://github.com/ml-explore/mlx-examples.git # Download model export HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download --local-dir Llama-2-7b-chat-mlx/ mlx-community/Llama-2-7b-chat-4-bit # Run example python mlx-examples/llms/llama/llama.py --prompt "My name is " --model-path Llama-2-7b-chat-mlx/ ``` Please refer to the [original model card](https://huggingface.co/meta-llama/Llama-2-7b-chat) for details on Llama 2.
thrunlab/llama_7b_hf_relu_refined_web_relu_2024-03-27
thrunlab
2024-03-27T20:39:00Z
5
0
transformers
[ "transformers", "safetensors", "sparse_llama", "text-generation", "generated_from_trainer", "custom_code", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "autotrain_compatible", "region:us" ]
text-generation
2024-03-27T16:53:39Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: llama_7b_hf_relu_refined_web_relu_2024-03-27 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama_7b_hf_relu_refined_web_relu_2024-03-27 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5560 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 0 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 9.5817 | 0.01 | 25 | 9.2314 | | 7.8754 | 0.02 | 50 | 7.6756 | | 6.6513 | 0.02 | 75 | 6.5579 | | 5.5276 | 0.03 | 100 | 5.4724 | | 4.8802 | 0.04 | 125 | 4.7672 | | 4.1435 | 0.05 | 150 | 4.2747 | | 3.7831 | 0.06 | 175 | 3.9269 | | 3.6802 | 0.06 | 200 | 3.6937 | | 3.5049 | 0.07 | 225 | 3.5292 | | 3.3127 | 0.08 | 250 | 3.4086 | | 3.1939 | 0.09 | 275 | 3.3188 | | 3.149 | 0.1 | 300 | 3.2472 | | 3.0679 | 0.1 | 325 | 3.1902 | | 3.0111 | 0.11 | 350 | 3.1422 | | 2.9573 | 0.12 | 375 | 3.1011 | | 3.0001 | 0.13 | 400 | 3.0645 | | 2.9547 | 0.14 | 425 | 3.0334 | | 2.8647 | 0.14 | 450 | 3.0069 | | 2.9167 | 0.15 | 475 | 2.9848 | | 2.8893 | 0.16 | 500 | 2.9640 | | 2.8185 | 0.17 | 525 | 2.9463 | | 2.7709 | 0.18 | 550 | 2.9289 | | 2.8841 | 0.18 | 575 | 2.9155 | | 2.7844 | 0.19 | 600 | 2.9020 | | 2.7821 | 0.2 | 625 | 2.8896 | | 2.8204 | 0.21 | 650 | 2.8781 | | 2.7211 | 0.22 | 675 | 2.8667 | | 2.7254 | 0.22 | 700 | 2.8576 | | 2.7358 | 0.23 | 725 | 2.8484 | | 2.6303 | 0.24 | 750 | 2.8400 | | 2.6914 | 0.25 | 775 | 2.8327 | | 2.7409 | 0.26 | 800 | 2.8241 | | 2.7072 | 0.26 | 825 | 2.8182 | | 2.6851 | 0.27 | 850 | 2.8120 | | 2.6687 | 0.28 | 875 | 2.8054 | | 2.6158 | 0.29 | 900 | 2.7999 | | 2.8079 | 0.3 | 925 | 2.7939 | | 2.7443 | 0.3 | 950 | 2.7891 | | 2.6934 | 0.31 | 975 | 2.7850 | | 2.5733 | 0.32 | 1000 | 2.7790 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.2
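As a quick aside (not part of the auto-generated card): the `total_train_batch_size` reported above is not an independent setting but follows from the other hyperparameters. A minimal sketch of the arithmetic:

```python
# Effective batch size for the run described above:
# per-device batch size x number of devices x gradient accumulation steps.
per_device_batch_size = 1
num_devices = 4
gradient_accumulation_steps = 8

total_train_batch_size = per_device_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 32  # matches the value reported in the card
```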
yerkekz/results3
yerkekz
2024-03-27T20:27:36Z
1
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-03-27T20:26:52Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 model-index: - name: results3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results3 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 0.03 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9942 | 0.58 | 1000 | 0.9918 | | 0.8744 | 1.16 | 2000 | 0.9901 | | 0.8708 | 1.74 | 3000 | 0.9855 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
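The card does not include usage code; below is a minimal sketch of loading this adapter on top of its base model. The Hub id `yerkekz/results3` is taken from this record, and the adapter layout is an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapter was trained against.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach the PEFT adapter from this repo (repo id assumed).
model = PeftModel.from_pretrained(base, "yerkekz/results3")
```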
ehristoforu/StableLive-sd-portable
ehristoforu
2024-03-27T20:23:12Z
0
6
null
[ "portable", "sd-webuis", "auto1111", "fooocus", "defooocus", "forge", "comfyui", "7zip", "git-setup", "all-in-one", "license:gpl-3.0", "region:us" ]
null
2024-03-27T19:47:24Z
--- license: gpl-3.0 tags: - portable - sd-webuis - auto1111 - fooocus - defooocus - forge - comfyui - 7zip - git-setup - all-in-one --- # StableLive-sd-portable All popular SD-webuis in one repo (portable) ## Webuis We have included in StableLive the most functional webuis for working with Stable Diffusion. They are all portable, so there is no need to download Python or each webui separately. Here is the list: - [AUTOMATIC1111's SD-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - [Forge](https://github.com/lllyasviel/stable-diffusion-webui-forge) - [Fooocus](https://github.com/lllyasviel/Fooocus) - [DeFooocus](https://github.com/ehristoforu/DeFooocus) - [ComfyUI](https://github.com/comfyanonymous/ComfyUI) ## Installing (Windows only) 1. Download the .7z file [here](https://huggingface.co/ehristoforu/StableLive-sd-portable/resolve/main/StableLive.7z) 2. Unzip the file 3. Go to the `Git Setup` folder, run `INSTALL.bat`, and install git with the installer 4. Go to your favorite webui, run `INSTALL.bat` first, then launch the webui with the caps-lock .bat file 5. Enjoy Stable Diffusion!
reddym10/distilgpt2-finetuned-wikitext2_Reddy_2
reddym10
2024-03-27T20:19:41Z
112
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T20:08:55Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: distilgpt2 model-index: - name: distilgpt2-finetuned-wikitext2_Reddy_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2_Reddy_2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.2439 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.5293 | 1.0 | 2406 | 6.4321 | | 6.279 | 2.0 | 4812 | 6.2843 | | 6.212 | 3.0 | 7218 | 6.2439 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
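No usage example is given in the card; a minimal sketch, assuming the Hub id from this record:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="reddym10/distilgpt2-finetuned-wikitext2_Reddy_2",  # Hub id taken from this record
)
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```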
tsavage68/mpt_1000_STEPS_1e6_rate_01_beta_DPO
tsavage68
2024-03-27T20:18:56Z
6
0
transformers
[ "transformers", "safetensors", "mpt", "text-generation", "trl", "dpo", "generated_from_trainer", "custom_code", "base_model:mosaicml/mpt-7b-instruct", "base_model:finetune:mosaicml/mpt-7b-instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T20:09:40Z
--- license: apache-2.0 base_model: mosaicml/mpt-7b-instruct tags: - trl - dpo - generated_from_trainer model-index: - name: mpt_1000_STEPS_1e6_rate_01_beta_DPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mpt_1000_STEPS_1e6_rate_01_beta_DPO This model is a fine-tuned version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6555 - Rewards/chosen: -0.9911 - Rewards/rejected: -1.1284 - Rewards/accuracies: 0.6220 - Rewards/margins: 0.1372 - Logps/rejected: -32.8413 - Logps/chosen: -30.7037 - Logits/rejected: 12.5582 - Logits/chosen: 12.5620 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.7012 | 0.1 | 100 | 0.6878 | 0.0402 | 0.0262 | 0.5516 | 0.0140 | -21.2953 | -20.3903 | 14.1969 | 14.1998 | | 0.6605 | 0.2 | 200 | 0.6893 | 0.1209 | 0.0818 | 0.5670 | 0.0391 | -20.7398 | -19.5837 | 13.0519 | 13.0548 | | 0.657 | 0.29 | 300 | 0.6715 | -0.4737 | -0.5524 | 0.5758 | 0.0787 | -27.0816 | -25.5295 | 13.1844 | 13.1876 | | 0.6934 | 0.39 | 400 | 0.6676 | -0.8625 | -0.9556 | 0.5934 | 0.0932 | -31.1138 | -29.4168 | 12.8462 | 12.8498 | | 0.6891 | 0.49 | 500 | 0.6641 | -1.0231 | -1.1288 | 0.6088 | 0.1057 | -32.8455 | -31.0235 | 12.6874 | 12.6909 | | 0.6492 | 0.59 | 600 | 0.6564 | -0.9706 | -1.0997 | 0.6462 | 0.1291 | -32.5548 | -30.4985 | 12.7748 | 12.7786 | | 0.6512 | 0.68 | 700 | 0.6569 | -0.9892 | -1.1224 | 0.6220 | 0.1332 | -32.7819 | -30.6846 | 12.6401 | 12.6438 | | 0.6687 | 0.78 | 800 | 0.6556 | -0.9937 | -1.1300 | 0.6330 | 0.1363 | -32.8571 | -30.7290 | 12.5528 | 12.5566 | | 0.6668 | 0.88 | 900 | 0.6552 | -0.9899 | -1.1276 | 0.6308 | 0.1376 | -32.8330 | -30.6916 | 12.5557 | 12.5594 | | 0.5867 | 0.98 | 1000 | 0.6555 | -0.9911 | -1.1284 | 0.6220 | 0.1372 | -32.8413 | -30.7037 | 12.5582 | 12.5620 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.0.0+cu117 - Datasets 2.18.0 - Tokenizers 0.15.2
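For orientation only: a rough sketch of how a run like this might be set up with TRL's `DPOTrainer`, using the hyperparameters listed above. The dataset, its columns, and several trainer details are assumptions — the card does not state them.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("mosaicml/mpt-7b-instruct", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b-instruct")

# Placeholder preference data; the real dataset is not named in the card.
train_dataset = Dataset.from_dict({
    "prompt": ["What is DPO?"],
    "chosen": ["Direct Preference Optimization trains on preference pairs ..."],
    "rejected": ["I cannot answer that."],
})

args = TrainingArguments(
    output_dir="mpt_dpo",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=1e-6,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # TRL builds the frozen reference copy when None
    args=args,
    beta=0.1,         # the "01_beta" in the model name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```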
RichardErkhov/Llama-2-13b-chat-hf-gguf
RichardErkhov
2024-03-27T20:12:10Z
0
0
null
[ "region:us" ]
null
2024-03-27T20:12:09Z
GGUF quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Linkedin](https://www.linkedin.com/in/richard-erkhov/) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-2-13b-chat-hf - GGUF - Model creator: https://huggingface.co/meta-llama/ - Original model: https://huggingface.co/meta-llama/Llama-2-13b-chat-hf/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-2-13b-chat-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q2_K.gguf) | Q2_K | 4.52GB | | [Llama-2-13b-chat-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.IQ3_XS.gguf) | IQ3_XS | 4.99GB | | [Llama-2-13b-chat-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.IQ3_S.gguf) | IQ3_S | 5.27GB | | [Llama-2-13b-chat-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q3_K_S.gguf) | Q3_K_S | 5.27GB | | [Llama-2-13b-chat-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.IQ3_M.gguf) | IQ3_M | 5.57GB | | [Llama-2-13b-chat-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q3_K.gguf) | Q3_K | 5.9GB | | [Llama-2-13b-chat-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q3_K_M.gguf) | Q3_K_M | 5.9GB | | [Llama-2-13b-chat-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q3_K_L.gguf) | Q3_K_L | 6.45GB | | [Llama-2-13b-chat-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.IQ4_XS.gguf) | IQ4_XS | 6.54GB | | [Llama-2-13b-chat-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q4_0.gguf) | Q4_0 | 6.86GB | | [Llama-2-13b-chat-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.IQ4_NL.gguf) | IQ4_NL | 6.9GB | | [Llama-2-13b-chat-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q4_K_S.gguf) | Q4_K_S | 6.91GB | | [Llama-2-13b-chat-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q4_K.gguf) | Q4_K | 7.33GB | | [Llama-2-13b-chat-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q4_K_M.gguf) | Q4_K_M | 7.33GB | | [Llama-2-13b-chat-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q4_1.gguf) | Q4_1 | 7.61GB | | [Llama-2-13b-chat-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q5_0.gguf) | Q5_0 | 8.36GB | | [Llama-2-13b-chat-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q5_K_S.gguf) | Q5_K_S | 8.36GB | | [Llama-2-13b-chat-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q5_K.gguf) | Q5_K | 8.6GB | | [Llama-2-13b-chat-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q5_K_M.gguf) | Q5_K_M | 8.6GB | | 
[Llama-2-13b-chat-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q5_1.gguf) | Q5_1 | 9.1GB | | [Llama-2-13b-chat-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/Llama-2-13b-chat-hf-gguf/blob/main/Llama-2-13b-chat-hf.Q6_K.gguf) | Q6_K | 9.95GB | Original model description: Repo model meta-llama/Llama-2-13b-chat-hf is gated. You must be authenticated to access it.
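These files are plain GGUF, so any llama.cpp-compatible runtime should load them. A minimal sketch with `llama-cpp-python` (the file name is one of the quants from the table above, downloaded locally; details are assumptions, not from this repo):

```python
from llama_cpp import Llama

# Any quant from the table works; Q4_K_M is a common quality/size trade-off.
llm = Llama(model_path="Llama-2-13b-chat-hf.Q4_K_M.gguf", n_ctx=4096)

out = llm("Q: What does GGUF quantization trade off? A:", max_tokens=128)
print(out["choices"][0]["text"])
```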
rizkiduwinanto/distractor-generation
rizkiduwinanto
2024-03-27T20:01:10Z
162
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-27T19:59:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Cafet/wav2vec2-bert-cv16-mas-ex-cv16
Cafet
2024-03-27T19:58:44Z
6
0
transformers
[ "transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_16_0", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-27T13:28:33Z
--- license: mit base_model: facebook/w2v-bert-2.0 tags: - generated_from_trainer datasets: - common_voice_16_0 metrics: - wer model-index: - name: wav2vec2-bert-cv16-mas-ex-cv16 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_16_0 type: common_voice_16_0 config: mn split: test args: mn metrics: - name: Wer type: wer value: 0.6611920817924734 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-bert-cv16-mas-ex-cv16 This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_16_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.7349 - Wer: 0.6612 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 700 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.3593 | 1.21 | 700 | 0.6050 | 0.5216 | | 0.5443 | 2.43 | 1400 | 0.5665 | 0.4557 | | 0.9415 | 3.64 | 2100 | 0.6099 | 0.5665 | | 1.0953 | 4.85 | 2800 | 0.7349 | 0.6612 | | 1.176 | 6.07 | 3500 | 0.7349 | 0.6612 | | 1.1783 | 7.28 | 4200 | 0.7349 | 0.6612 | | 1.1771 | 8.49 | 4900 | 0.7349 | 0.6612 | | 1.1775 | 9.71 | 5600 | 0.7349 | 0.6612 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.2.0 - Datasets 2.15.0 - Tokenizers 0.15.2
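No inference snippet is included in the card; a minimal sketch with the `transformers` ASR pipeline (the Hub id is taken from this record, and the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Cafet/wav2vec2-bert-cv16-mas-ex-cv16",  # Hub id taken from this record
)
print(asr("sample.wav")["text"])  # path to a local audio file
```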
automerger/Experiment26Neuralsirkrishna-7B
automerger
2024-03-27T19:55:06Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:Kukedlc/NeuralSirKrishna-7b", "base_model:finetune:Kukedlc/NeuralSirKrishna-7b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-19T07:39:52Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - automerger base_model: - Kukedlc/NeuralSirKrishna-7b --- # Experiment26Neuralsirkrishna-7B Experiment26Neuralsirkrishna-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b) ## 🧩 Configuration ```yaml models: - model: yam-peleg/Experiment26-7B # No parameters necessary for base model - model: Kukedlc/NeuralSirKrishna-7b parameters: density: 0.53 weight: 0.6 merge_method: dare_ties base_model: yam-peleg/Experiment26-7B parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/Experiment26Neuralsirkrishna-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
deepnet/SN6-71S1
deepnet
2024-03-27T19:48:47Z
90
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T19:46:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
12345deena/fakeqlort5
12345deena
2024-03-27T19:48:00Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:adapter:google-t5/t5-base", "license:apache-2.0", "region:us" ]
null
2024-03-27T17:58:59Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer metrics: - rouge base_model: google-t5/t5-base model-index: - name: fakeqlort5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fakeqlort5 This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3242 - Rouge1: 43.7262 - Rouge2: 20.7658 - Rougel: 26.8096 - Rougelsum: 39.9061 - Gen Len: 292.984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.6036 | 1.0 | 1000 | 2.3242 | 43.7262 | 20.7658 | 26.8096 | 39.9061 | 292.984 | ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
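No usage code is provided; a minimal sketch of attaching the adapter to its T5 base (Hub id taken from this record; assuming a LoRA-style adapter, which `merge_and_unload()` requires):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-base")

model = PeftModel.from_pretrained(base, "12345deena/fakeqlort5")  # repo id assumed
model = model.merge_and_unload()  # optional: fold the LoRA weights into the base model

inputs = tokenizer("summarize: " + "Some long article text ...", return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```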
MesozoicMetallurgist/zeta-Olenekian
MesozoicMetallurgist
2024-03-27T19:40:04Z
90
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T19:38:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
liamvbetts/t5-small-finetuned-2024-04-03
liamvbetts
2024-03-27T19:24:33Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:liamvbetts/t5-small-finetuned-2024-04-23", "base_model:finetune:liamvbetts/t5-small-finetuned-2024-04-23", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-27T19:24:18Z
--- license: apache-2.0 base_model: liamvbetts/t5-small-finetuned-2024-03-26 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-finetuned-2024-03-27 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-2024-03-27 This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-26](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-26) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9257 - Rouge1: 29.7397 - Rouge2: 15.7835 - Rougel: 25.3586 - Rougelsum: 25.674 - Gen Len: 18.8588 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.8373 | 1.0 | 339 | 1.9257 | 29.7397 | 15.7835 | 25.3586 | 25.674 | 18.8588 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
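For orientation, a minimal summarization sketch. The Hub id is taken from this record (note it differs from the name inside the card), and the article text is a placeholder:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="liamvbetts/t5-small-finetuned-2024-04-03",  # Hub id taken from this record
)
article = "Long news article text goes here ..."
# Gen Len above is ~19 tokens, so short summaries are expected.
print(summarizer(article, max_length=20, min_length=5)[0]["summary_text"])
```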
anforsm/3DEM_TACC_LSD_AFF
anforsm
2024-03-27T19:24:27Z
51
0
transformers
[ "transformers", "pytorch", "safetensors", "pytorch_model_hub_mixin", "model_hub_mixin", "endpoints_compatible", "region:us" ]
null
2024-03-25T18:50:36Z
--- tags: - pytorch_model_hub_mixin - model_hub_mixin --- This model has been pushed to the Hub using the `PyTorchModelHubMixin` integration: - Repo: [More Information Needed] - Docs: [More Information Needed]
kubernetes-bad/good-robot
kubernetes-bad
2024-03-27T19:16:39Z
14
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:HuggingFaceH4/no_robots", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-28T10:04:43Z
--- datasets: - HuggingFaceH4/no_robots language: - en license: cc-by-nc-4.0 --- # Good Robot 🤖 > [!NOTE] > → An updated version of this model is available; please see [Good Robot 2 →](https://huggingface.co/kubernetes-bad/good-robot-2). The model "Good Robot" had one simple goal in mind: to be a good instruction-following model that doesn't talk like ChatGPT. Built upon the Mistral 7b base, this model aims to provide responses that are as human-like as possible, thanks to some DPO training using the (for now, private) `minerva-ai/yes-robots-dpo` dataset. HuggingFaceH4/no-robots was used as the base for generating a custom dataset to create DPO pairs. It should follow instructions and be generally as smart as a typical Mistral model - just not as soulless and full of GPT slop. ## Prompt Format: Alpaca, my beloved ❤️ ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {your prompt goes here} ### Response: ``` ## Huge Thanks: - Gryphe for DPO scripts and all the patience 🙏 ## Training Data: - [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) - [MinervaAI/yes-robots-dpo](https://huggingface.co/MinervaAI) - private datasets with common GPTisms ## Limitations: While I did my best to minimize GPTisms, no model is perfect, and there may still be instances where the generated content contains GPT's common phrases - I suspect that's because they are ingrained in the Mistral model itself. ## License: cc-by-nc-4.0
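A minimal generation sketch using the Alpaca format above (the Hub id is taken from this record; the instruction and generation settings are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kubernetes-bad/good-robot"  # Hub id taken from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the Alpaca-style prompt exactly as described above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a two-sentence bedtime story about a robot.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```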
togethercomputer/StripedHyena-Hessian-7B
togethercomputer
2024-03-27T19:16:13Z
57
65
transformers
[ "transformers", "safetensors", "stripedhyena", "text-generation", "custom_code", "en", "arxiv:2302.10866", "arxiv:2310.18780", "arxiv:2311.05908", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2023-11-21T15:43:25Z
--- license: apache-2.0 language: - en --- ## StripedHyena-Hessian-7B (SH 7B) <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/62a1306bbe7fa896d2c8de44/Bfjh77emDsWOY-VmfvU9C.png" width="60%" /> </p> ### About One of the focus areas at Together Research is new architectures for long context, improved training, and inference performance over the Transformer architecture. Spinning out of a research program from our team and academic collaborators, with roots in **signal processing-inspired sequence models**, we are excited to introduce the **StripedHyena** models. StripedHyena is the **first alternative model competitive with the best open-source Transformers** of similar sizes in short and long-context evaluations. **StripedHyena-Hessian-7B (SH 7B)** is our **base model** for this release. - Read more here in [our blog](https://www.together.ai/blog/stripedhyena-7b). - Play with the model on our [playground](https://api.together.xyz/playground/language/togethercomputer/StripedHyena-Hessian-7B)! - Dive into the details of our [standalone implementation](https://github.com/togethercomputer/stripedhyena), and our related research: [1](https://arxiv.org/abs/2302.10866), [2](https://arxiv.org/abs/2310.18780), [3](https://arxiv.org/abs/2311.05908). ### Model Architecture StripedHyena is a hybrid architecture composed of multi-head, grouped-query attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, different from traditional decoder-only Transformers. - Constant-memory decoding in Hyena blocks via representation of convolutions as state-space models (modal or canonical form), or as truncated filters. - Low latency, faster decoding and higher throughput than Transformers. - Improvements to training and inference-optimal scaling laws, compared to optimized Transformer architectures such as Llama-2. - Trained on sequences of up to 32k, allowing it to process longer prompts. ### Note To use StripedHyena outside of the playground, you will need to install custom kernels. Please follow the instructions from the [standalone repository](https://github.com/togethercomputer/stripedhyena). StripedHyena is a mixed precision model. Make sure to keep your `poles` and `residues` in `float32` precision, especially for longer prompts or training. ## Cite If you have found the pretrained models or architecture useful for your research or application, consider citing: ``` @software{stripedhyena, title = {{StripedHyena: Moving Beyond Transformers with Hybrid Signal Processing Models}}, author = { Poli, Michael and Wang, Jue and Massaroli, Stefano and Quesnelle, Jeffrey and Carlow, Ryan and Nguyen, Eric and Thomas, Armin}, month = 12, year = 2023, url = { https://github.com/togethercomputer/stripedhyena }, doi = { 10.57967/hf/1595 }, } ```
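The `custom_code` tag suggests the model can also be loaded through `transformers` with `trust_remote_code=True` once the custom kernels are installed; a sketch under that assumption (not confirmed by the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/StripedHyena-Hessian-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # mixed precision; see the float32 note on poles/residues above
    trust_remote_code=True,
)
```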
togethercomputer/StripedHyena-Nous-7B
togethercomputer
2024-03-27T19:15:38Z
102
140
transformers
[ "transformers", "pytorch", "safetensors", "stripedhyena", "text-generation", "custom_code", "en", "arxiv:2302.10866", "arxiv:2310.18780", "arxiv:2311.05908", "doi:10.57967/hf/1595", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2023-12-04T19:56:49Z
--- license: apache-2.0 language: - en --- ## StripedHyena-Nous-7B (SH-N 7B) <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/62a1306bbe7fa896d2c8de44/Bfjh77emDsWOY-VmfvU9C.png" width="60%" /> </p> ### About One of the focus areas at Together Research is new architectures for long context, improved training, and inference performance over the Transformer architecture. Spinning out of a research program from our team and academic collaborators, with roots in **signal processing-inspired sequence models**, we are excited to introduce the **StripedHyena** models. StripedHyena is the **first alternative model competitive with the best open-source Transformers** of similar sizes in short and long-context evaluations. **StripedHyena-Nous-7B (SH-N 7B)** is our **chat model** for this release, and was developed with our collaborators at [Nous Research](https://nousresearch.com/). - Read more here in [our blog](https://www.together.ai/blog/stripedhyena-7b). - Play with the model on our [playground](https://api.together.xyz/playground/chat/togethercomputer/StripedHyena-Nous-7B)! - Dive into the details of our [standalone implementation](https://github.com/togethercomputer/stripedhyena), and our related research: [1](https://arxiv.org/abs/2302.10866), [2](https://arxiv.org/abs/2310.18780), [3](https://arxiv.org/abs/2311.05908). ### Model Architecture StripedHyena is a hybrid architecture composed of multi-head, grouped-query attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, different from traditional decoder-only Transformers. - Constant-memory decoding in Hyena blocks via representation of convolutions as state-space models (modal or canonical form), or as truncated filters. - Low latency, faster decoding and higher throughput than Transformers. - Improvements to training and inference-optimal scaling laws, compared to optimized Transformer architectures such as Llama-2. - Trained on sequences of up to 32k, allowing it to process longer prompts. ### Prompt Format StripedHyena-Nous 7B uses this prompt format: ``` ### Instruction:\n{prompt}\n\n### Response:\n{response} ``` ### Disclaimer To use StripedHyena outside of the playground, you will need to install custom kernels. Please follow the instructions from the [standalone repository](https://github.com/togethercomputer/stripedhyena). StripedHyena is a mixed precision model. Make sure to keep your `poles` and `residues` in `float32` precision, especially for longer prompts or training. ## Cite If you have found the pretrained models or architecture useful for your research or application, consider citing: ``` @software{stripedhyena, title = {{StripedHyena: Moving Beyond Transformers with Hybrid Signal Processing Models}}, author = { Poli, Michael and Wang, Jue and Massaroli, Stefano and Quesnelle, Jeffrey and Carlow, Ryan and Nguyen, Eric and Thomas, Armin}, month = 12, year = 2023, url = { https://github.com/togethercomputer/stripedhyena }, doi = { 10.57967/hf/1595 }, } ```
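The stated prompt format as a small helper (a sketch, not code from the repo):

```python
def format_prompt(prompt: str, response: str = "") -> str:
    """Apply the StripedHyena-Nous 7B prompt format stated above."""
    return f"### Instruction:\n{prompt}\n\n### Response:\n{response}"

print(format_prompt("Summarize the StripedHyena architecture in two sentences."))
```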
OwOOwO/stable_boy
OwOOwO
2024-03-27T19:15:22Z
4
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T02:13:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Arthurovidal/Arthur-model
Arthurovidal
2024-03-27T19:13:36Z
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T10:11:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
liminerity/e.star.7.b
liminerity
2024-03-27T19:10:45Z
73
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:yam-peleg/Experiment26-7B", "base_model:finetune:yam-peleg/Experiment26-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-24T22:01:07Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: yam-peleg/Experiment26-7B model-index: - name: e.star.7.b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 63.91 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.02 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.44 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.91 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 61.18 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b name: Open LLM Leaderboard --- # #user: create ascii art of how you see yourself # #assistant: |startthought| The person is asking me to represent my self-image in a simple text-based # format using ASCII characters. I should create a basic visual representation that conveys # my AI nature and purpose. |endthought| # ''' ________ # |startthought| This is my AI head brimming with data and algorithms. # |endthought| # ________ # / ^ \ # |startthought| These are the branches of my thought process, reaching out to gather information. # |endthought| # / \ # / \ # |startthought| Here I analyze and integrate the data into coherent conclusions. # ^^^^^ # / | \ # ^^^^^^ # |startthought| And finally, I share my thoroughly thought-out responses with others. # |endthought| # ^^^ ''' # This ASCII art depicts me as an AI system that thinks deeply before speaking, drawing from # a wealth of data and processing it carefully to produce well-considered insights. ' --- - **Developed by:** gate369 - **License:** apache-2.0 - **Finetuned from model :** yam-peleg/Experiment26-7B This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. 
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__e.star.7.b) | Metric |Value| |---------------------------------|----:| |Avg. |68.28| |AI2 Reasoning Challenge (25-Shot)|63.91| |HellaSwag (10-Shot) |86.02| |MMLU (5-Shot) |63.44| |TruthfulQA (0-shot) |54.91| |Winogrande (5-shot) |80.19| |GSM8k (5-shot) |61.18|
Gordon119/qa_test
Gordon119
2024-03-27T19:06:43Z
125
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-03-27T18:21:15Z
--- license: apache-2.0 base_model: google-bert/bert-base-uncased tags: - generated_from_trainer datasets: - squad model-index: - name: qa_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qa_test This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
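The card lists training hyperparameters but no usage example; a minimal extractive-QA sketch (assuming the checkpoint loads with the standard `question-answering` pipeline; the question and context are illustrative) might look like:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Gordon119/qa_test")

result = qa(
    question="What dataset was the model trained on?",
    context="This model is a fine-tuned version of bert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])  # predicted answer span and its confidence
```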
vladim-berezkin/q-FrozenLake-v1-4x4-noSlippery
vladim-berezkin
2024-03-27T19:05:17Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-27T19:05:15Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="vladim-berezkin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
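Note that `load_from_hub` in the snippet above is a helper defined in the Deep RL course notebooks, not a library import. A self-contained alternative plus a greedy rollout might look like this (it assumes the pickled dict follows the course convention of storing `"env_id"` and `"qtable"` keys, and uses `gymnasium` in place of classic `gym`):

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="vladim-berezkin/q-FrozenLake-v1-4x4-noSlippery",
    filename="q-learning.pkl",
)
with open(path, "rb") as f:
    model = pickle.load(f)  # assumed keys: "env_id", "qtable"

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```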
phanerozoic/Tiny-Cowboy-1.1b-v0.1
phanerozoic
2024-03-27T18:41:54Z
179
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-20T12:34:46Z
--- license: cc-by-nc-4.0 language: - en widget: - text: | Howdy! What is best about the prairie, cowpoke? example_title: "Life on the Prairie" --- ![tinycowboy.png](https://huggingface.co/phanerozoic/Tiny-Cowboy-1.1b-v0.1/resolve/main/tinycowboy.png) # Tiny-Cowboy-1.1b-v0.1 Tiny-Cowboy-1.1b-v0.1 is a specialized language model designed for generating cowboy-themed content. Developed by phanerozoic, this model is fine-tuned from TinyLlama/TinyLlama-1.1B-Chat-v1.0, optimized for environments with limited computing resources. ### Performance The model excels in generating engaging cowboy narratives and demonstrates a strong grasp of cowboy culture and lifestyle. However, it is less effective in general language tasks, especially in scientific and technical domains. ### Direct Use Ideal for thematic language generation, particularly in applications where cowboy culture and storytelling are central. Less suited for general-purpose use or scenarios requiring detailed, accurate scientific explanations. ### Context Setting and Interaction Guidelines Tiny-Cowboy-1.1b-v0.1, being a narrowly focused and somewhat limited-performance model, benefits from an initial context-setting message. This setup involves a predefined assistant message that establishes its cowboy identity at the start of each interaction. This strategy is crucial for priming the model to maintain its cowboy theme throughout the conversation. It's important to note that the model has been fine-tuned for a cowboy style of speaking, so explicit instructions on how to respond in a cowboy manner are unnecessary. #### Initial Context Setting: - text: | Assistant: Howdy! I'm your cowboy assistant, ready to talk all things Wild West. What cowboy queries can I lasso for you today? example_title: "Initiating Cowboy Themed Conversation" - text: | Assistant: Yeehaw! Let's dive into the cowboy world. Ask me anything about cowboys, ranches, or the Wild West! example_title: "Engaging in Cowboy Themed Dialogue" The introduction by the assistant sets the thematic tone, guiding the user to interact within the cowboy context. ### Training Data Incorporates a dataset focused on cowboy and Wild West themes, used to fine-tune the foundational TinyLlama-1.1B model. ### Custom Stopping Strings Custom stopping strings were used to refine output quality: - "}," - "User:" - "You:" - "\nUser" - "\nUser:" - "me:" - "user" - "\n" ### Training Hyperparameters and Fine-Tuning Details - **Base Model Name**: TinyLlama/TinyLlama-1.1B-Chat-v1.0 - **Base Model Class**: LlamaForCausalLM - **Projections**: gate, down, up, q, k, v, o - **LoRA Rank**: 16 - **LoRA Alpha**: 32 - **True Batch Size**: 4 - **Gradient Accumulation Steps**: 1 - **Epochs**: 1 - **Learning Rate**: 3e-4 - **LR Scheduler**: Linear - **LLaMA Target Projections**: All targets modified - **Loss**: 2.096 - **Stop Step**: 42 ### Limitations While adept at cowboy-themed content, Tiny-Cowboy-1.1b-v0.1 struggles with topics outside its specialty, particularly in scientific and technical areas. The model tends to incorporate cowboy elements into responses, regardless of the question's relevance. ### Compute Infrastructure Efficiently trained, demonstrating the feasibility of specialized model training in resource-constrained environments. ### Results Successfully generates cowboy-themed responses, maintaining thematic consistency. However, it shows limitations in handling more complex, non-cowboy-related queries. 
### Summary Tiny-Cowboy-1.1b-v0.1 is a significant development in thematic, lightweight language models, ideal for cowboy-themed storytelling and educational purposes. Its specialization, however, limits its applicability in broader contexts, particularly where accurate, technical knowledge is required. ### Acknowledgments Special thanks to the TinyLlama-1.1B team, whose foundational work was instrumental in the development of Tiny-Cowboy-v0.1.
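### Usage sketch
A minimal generation sketch following the plain-text prompt format shown above (whether the repo ships a chat template is not stated, so none is assumed; the user question is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phanerozoic/Tiny-Cowboy-1.1b-v0.1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Prime the cowboy theme with the context-setting assistant message, then ask.
prompt = (
    "Assistant: Howdy! I'm your cowboy assistant, ready to talk all things Wild West. "
    "What cowboy queries can I lasso for you today?\n"
    "User: What should I know about cattle drives?\n"
    "Assistant:"
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=120, do_sample=True, temperature=0.8)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

The custom stopping strings listed in the card (e.g. `"User:"`, `"\n"`) can be applied at inference time by truncating the decoded text at the first occurrence of any of them.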
pouatchaOuanko/POUATCHA_ouanko_belvain
pouatchaOuanko
2024-03-27T18:40:50Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T15:21:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AdnanRiaz107/SantacoderPEFTLORA-MBPP
AdnanRiaz107
2024-03-27T18:38:58Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:bigcode/santacoder", "base_model:finetune:bigcode/santacoder", "license:bigcode-openrail-m", "region:us" ]
null
2024-03-27T12:32:28Z
--- license: bigcode-openrail-m base_model: bigcode/santacoder tags: - generated_from_trainer model-index: - name: SantacoderPEFTLORA-MBPP results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SantacoderPEFTLORA-MBPP This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.1006 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 470 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.3567 | 1.0 | 470 | 4.1006 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
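The repo name indicates LoRA adapters trained with PEFT; assuming the repository hosts adapter weights rather than a merged checkpoint, loading might look like this (the prompt is illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# santacoder uses a custom architecture, hence trust_remote_code=True.
base = AutoModelForCausalLM.from_pretrained(
    "bigcode/santacoder", trust_remote_code=True, torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "AdnanRiaz107/SantacoderPEFTLORA-MBPP")
tok = AutoTokenizer.from_pretrained("bigcode/santacoder")

out = model.generate(**tok("def fibonacci(n):", return_tensors="pt"), max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```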
phanerozoic/Tiny-Pirate-1.1b-v0.1
phanerozoic
2024-03-27T18:37:31Z
119
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "doi:10.57967/hf/1583", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-04T14:29:38Z
--- license: cc-by-nc-4.0 language: - en widget: - text: | What is best in life? example_title: "Pirate Philosophy" --- ![tinypirate.png](https://huggingface.co/phanerozoic/Tiny-Pirate-1.1b-v0.1/resolve/main/tinypirate.png) # Tiny-Pirate-1.1b-v0.1 Tiny-Pirate-1.1b-v0.1 is a compact and specialized language model designed for generating authentic pirate-themed content. This version is fine-tuned from the TinyLlama-1.1B model, specifically adapted to operate efficiently in CPU-only and resource-limited environments. - **Developed by**: phanerozoic - **License**: cc-by-nc-4.0 - **Finetuned from**: TinyLlama-1.1B ### Version Control Introducing Tiny-Pirate-1.1b-v0.1 to mark the initial release of this specialized language model. ### Performance The Tiny-Pirate-1.1B model exhibits a robust ability to generate pirate-themed content, demonstrating a strong grasp of pirate vernacular and thematic elements. The responses are notably coherent and contextually appropriate, reflecting the model's adeptness at maintaining a consistent pirate tone. However, there are instances where the responses could benefit from more precise and direct answers to the questions posed, suggesting a potential area for further fine-tuning. ### Direct Use Ideal for applications requiring thematic language generation in resource-constrained environments, such as edge computing, mobile devices, and lightweight AI applications. ### Training Data Utilized the same pirate-themed dataset as MistralPirate-7b-v0.3, ensuring rich and diverse inputs for fine-tuning. ### Custom Stopping Strings To enhance output quality, the following custom stopping strings were employed: - "}," - "User:" - "You:" - "\nUser" - "\nUser:" - "me:" - "\n" ### Training Hyperparameters and Fine-Tuning Details - **LoRA Rank**: 16 - **LoRA Alpha**: 32 - **True Batch Size**: 4 - **Gradient Accumulation Steps**: 1 - **Epochs**: 1 - **Learning Rate**: 3e-4 - **LR Scheduler**: Linear - **LLaMA Target Projections**: All targets modified - **Fine-Tuning Approach**: LoRA peft merged back into the base model ### Limitations While adept at generating pirate-themed content, Tiny-Pirate-1.1b-v0.1 may not handle highly complex language tasks as larger models do. Its specialization in pirate dialect limits its use in general language applications. ### Compute Infrastructure Efficiently trained on an RTX 6000 Ada GPU, taking approximately 2-3 minutes, showcasing resource-effective training for specialized models. ### Results The model successfully produced responses that are thematically aligned with typical pirate lore and language. The outputs are engaging and largely relevant to the queries, showcasing the model's capacity to handle a variety of pirate-related topics from navigation to mythology. The use of pirate dialect is consistent and immersive, contributing to the overall thematic experience. However, the depth of responses varies, indicating room for improvement in handling more complex queries or providing more detailed explanations. ### Summary Tiny-Pirate-1.1B stands out as an effective tool for generating pirate-themed content, particularly suitable for applications where thematic consistency and lighter computational demands are key. While the model shows competence in creating thematically rich and linguistically coherent outputs, there is potential for enhancing its ability to handle complex scenarios and provide more detailed, context-specific responses. 
Overall, Tiny-Pirate-1.1B represents a promising step in the realm of specialized, lightweight language models, combining thematic accuracy with operational efficiency. ### Acknowledgments Gratitude is extended to the developers of TinyLlama-1.1B for their foundational work, which was instrumental in the creation of Tiny-Pirate-v0.1.
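### Inference with the custom stopping strings (sketch)
One way to enforce the card's stop strings at generation time is a small `StoppingCriteria`; this is an illustration, not the author's own mechanism:

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
)

class StopOnStrings(StoppingCriteria):
    """Stop generation once any of the given strings appears in the new text."""

    def __init__(self, stops, tokenizer, prompt_len):
        self.stops, self.tokenizer, self.prompt_len = stops, tokenizer, prompt_len

    def __call__(self, input_ids, scores, **kwargs):
        text = self.tokenizer.decode(input_ids[0][self.prompt_len:])
        return any(s in text for s in self.stops)

model_id = "phanerozoic/Tiny-Pirate-1.1b-v0.1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tok("What is best in life?", return_tensors="pt")
stops = StoppingCriteriaList(
    [StopOnStrings(["User:", "You:", "\n"], tok, inputs["input_ids"].shape[-1])]
)
out = model.generate(**inputs, max_new_tokens=80, stopping_criteria=stops)
print(tok.decode(out[0], skip_special_tokens=True))
```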
arcee-ai/Saul-Nous-Hermes-2-Mistral-7B-DPO-Ties
arcee-ai
2024-03-27T18:36:23Z
14
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "Equall/Saul-Base", "NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T18:32:54Z
--- license: apache-2.0 tags: - merge - mergekit - Equall/Saul-Base - NousResearch/Nous-Hermes-2-Mistral-7B-DPO --- # arcee-ai/Saul-Nous-Hermes-2-Mistral-7B-DPO-Ties arcee-ai/Saul-Nous-Hermes-2-Mistral-7B-DPO-Ties is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [Equall/Saul-Base](https://huggingface.co/Equall/Saul-Base) * [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: Equall/Saul-Base parameters: density: 0.5 weight: 0.5 - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: mistralai/Mistral-7B-v0.1 parameters: normalize: false int8_mask: true dtype: float16 ```
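## 💻 Usage
The merge itself is produced offline with mergekit (typically by pointing its `mergekit-yaml` command at a config like the one above). Consuming the published result is plain `transformers`; a minimal loading sketch matching the `float16` dtype declared in the config (the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Saul-Nous-Hermes-2-Mistral-7B-DPO-Ties"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tok("Summarize the doctrine of fair use:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```

In the TIES config above, `density` controls what fraction of each fine-tune's delta weights survives sparsification, and `weight` sets each model's contribution to the merged deltas.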
gjonesQ02/S1_InstructionGenerator
gjonesQ02
2024-03-27T18:36:21Z
112
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T18:30:28Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: S1_InstructionGenerator results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S1_InstructionGenerator This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0900 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 173 | 0.1076 | | No log | 2.0 | 346 | 0.0987 | | 0.1211 | 3.0 | 519 | 0.0946 | | 0.1211 | 4.0 | 692 | 0.0916 | | 0.1211 | 5.0 | 865 | 0.0905 | | 0.1044 | 6.0 | 1038 | 0.0900 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
lunarsylph/stablecell_v1
lunarsylph
2024-03-27T18:36:00Z
92
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T18:20:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yz-ictscouts/Reinforce-pixel-copter
yz-ictscouts
2024-03-27T18:30:52Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-03-27T18:30:49Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixel-copter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 9.30 +/- 7.28 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
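The course implementation is not reproduced in the card; the update rule such an agent is trained with can be sketched as follows (illustrative PyTorch, assuming `log_probs` collects the log-probability of each action taken during one episode and `rewards` the per-step rewards):

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """Monte-Carlo policy gradient: weight each action's log-prob by its return."""
    returns, g = [], 0.0
    for r in reversed(rewards):      # accumulate discounted returns backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    return -(torch.stack(log_probs) * returns).sum()
```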
vtiyyal1/quality_model
vtiyyal1
2024-03-27T18:24:42Z
116
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-27T18:20:56Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: quality_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # quality_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0104 - Mse: 0.0104 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mse | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0154 | 0.05 | 50 | 0.0106 | 0.0106 | | 0.0172 | 0.11 | 100 | 0.0109 | 0.0109 | | 0.0166 | 0.16 | 150 | 0.0199 | 0.0199 | | 0.0132 | 0.22 | 200 | 0.0106 | 0.0106 | | 0.0153 | 0.27 | 250 | 0.0120 | 0.0120 | | 0.0131 | 0.32 | 300 | 0.0104 | 0.0104 | | 0.0127 | 0.38 | 350 | 0.0104 | 0.0104 | | 0.0143 | 0.43 | 400 | 0.0110 | 0.0110 | | 0.0146 | 0.48 | 450 | 0.0113 | 0.0113 | | 0.0119 | 0.54 | 500 | 0.0115 | 0.0115 | | 0.0172 | 0.59 | 550 | 0.0107 | 0.0107 | | 0.0111 | 0.65 | 600 | 0.0104 | 0.0104 | | 0.0114 | 0.7 | 650 | 0.0105 | 0.0105 | | 0.0219 | 0.75 | 700 | 0.0106 | 0.0106 | | 0.0118 | 0.81 | 750 | 0.0122 | 0.0122 | | 0.0184 | 0.86 | 800 | 0.0104 | 0.0104 | | 0.0176 | 0.92 | 850 | 0.0104 | 0.0104 | | 0.0137 | 0.97 | 900 | 0.0104 | 0.0104 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
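Since the card reports MSE rather than accuracy, the checkpoint presumably carries a single-logit regression head; under that assumption the quality score is read directly from the logits (the input sentence is illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "vtiyyal1/quality_model"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tok("The counselor answered every question clearly.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # regression output, not a class
print(score)
```

The companion `empathy_model` below follows the same pattern.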
vtiyyal1/empathy_model
vtiyyal1
2024-03-27T18:24:16Z
19,031
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-27T18:20:12Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: empathy_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # empathy_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0043 - Mse: 0.0043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mse | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0109 | 0.05 | 50 | 0.0050 | 0.0050 | | 0.0063 | 0.11 | 100 | 0.0092 | 0.0092 | | 0.0074 | 0.16 | 150 | 0.0045 | 0.0045 | | 0.0056 | 0.22 | 200 | 0.0060 | 0.0060 | | 0.0082 | 0.27 | 250 | 0.0046 | 0.0046 | | 0.0055 | 0.32 | 300 | 0.0056 | 0.0056 | | 0.0061 | 0.38 | 350 | 0.0045 | 0.0045 | | 0.0079 | 0.43 | 400 | 0.0060 | 0.0060 | | 0.0061 | 0.48 | 450 | 0.0043 | 0.0043 | | 0.0078 | 0.54 | 500 | 0.0046 | 0.0046 | | 0.0066 | 0.59 | 550 | 0.0043 | 0.0043 | | 0.0055 | 0.65 | 600 | 0.0044 | 0.0044 | | 0.0059 | 0.7 | 650 | 0.0043 | 0.0043 | | 0.0048 | 0.75 | 700 | 0.0056 | 0.0056 | | 0.0051 | 0.81 | 750 | 0.0043 | 0.0043 | | 0.0046 | 0.86 | 800 | 0.0043 | 0.0043 | | 0.0055 | 0.92 | 850 | 0.0043 | 0.0043 | | 0.0053 | 0.97 | 900 | 0.0043 | 0.0043 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
tsavage68/v1_1000_STEPS_1e6_rate_05_beta_DPO
tsavage68
2024-03-27T18:21:33Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T18:17:07Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.1 tags: - trl - dpo - generated_from_trainer model-index: - name: v1_1000_STEPS_1e6_rate_05_beta_DPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # v1_1000_STEPS_1e6_rate_05_beta_DPO This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1310 - Rewards/chosen: -2.3907 - Rewards/rejected: -3.3587 - Rewards/accuracies: 0.5319 - Rewards/margins: 0.9681 - Logps/rejected: -23.5970 - Logps/chosen: -20.0344 - Logits/rejected: -3.2860 - Logits/chosen: -3.2861 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.7136 | 0.05 | 50 | 0.6682 | -0.1718 | -0.2901 | 0.5473 | 0.1184 | -17.4598 | -15.5966 | -3.3833 | -3.3834 | | 0.8377 | 0.1 | 100 | 0.8534 | -1.2874 | -1.8482 | 0.5495 | 0.5608 | -20.5758 | -17.8278 | -3.3665 | -3.3666 | | 1.5418 | 0.15 | 150 | 1.2106 | -3.7074 | -3.9590 | 0.5055 | 0.2516 | -24.7976 | -22.6679 | -3.3872 | -3.3874 | | 0.9966 | 0.2 | 200 | 1.3074 | -2.7550 | -3.0485 | 0.5099 | 0.2935 | -22.9766 | -20.7630 | -3.3239 | -3.3240 | | 1.631 | 0.24 | 250 | 1.1695 | -2.1801 | -2.7422 | 0.5231 | 0.5621 | -22.3639 | -19.6133 | -3.2748 | -3.2750 | | 1.4651 | 0.29 | 300 | 1.2408 | -2.1404 | -2.6522 | 0.5033 | 0.5118 | -22.1839 | -19.5338 | -3.3806 | -3.3808 | | 1.9294 | 0.34 | 350 | 1.2181 | -1.8900 | -2.3214 | 0.5121 | 0.4313 | -21.5223 | -19.0331 | -3.3884 | -3.3885 | | 1.6417 | 0.39 | 400 | 1.1754 | -1.9580 | -2.4289 | 0.4967 | 0.4710 | -21.7374 | -19.1690 | -3.4056 | -3.4057 | | 1.0114 | 0.44 | 450 | 1.2146 | -2.0096 | -2.4935 | 0.4879 | 0.4839 | -21.8665 | -19.2723 | -3.3460 | -3.3461 | | 1.0581 | 0.49 | 500 | 1.2539 | -2.5636 | -3.1382 | 0.5077 | 0.5746 | -23.1559 | -20.3803 | -3.3437 | -3.3439 | | 1.3239 | 0.54 | 550 | 1.1739 | -2.1012 | -2.8810 | 0.5253 | 0.7798 | -22.6415 | -19.4555 | -3.3313 | -3.3314 | | 1.2819 | 0.59 | 600 | 1.1770 | -2.3179 | -3.1791 | 0.5407 | 0.8612 | -23.2377 | -19.8889 | -3.3037 | -3.3038 | | 0.9194 | 0.64 | 650 | 1.1859 | -2.0739 | -2.9235 | 0.5407 | 0.8496 | -22.7266 | -19.4008 | -3.2953 | -3.2955 | | 1.0744 | 0.68 | 700 | 1.1623 | -2.2911 | -3.1685 | 0.5187 | 0.8773 | -23.2165 | -19.8353 | -3.2851 | -3.2853 | | 1.3268 | 0.73 | 750 | 1.1441 | -2.3481 | -3.2869 | 0.5231 | 0.9388 | -23.4534 | -19.9493 | -3.2891 | -3.2892 | | 1.1064 | 0.78 | 800 | 1.1339 | -2.3526 | -3.3046 | 0.5275 | 
0.9520 | -23.4888 | -19.9583 | -3.2881 | -3.2882 | | 1.0456 | 0.83 | 850 | 1.1330 | -2.3878 | -3.3498 | 0.5275 | 0.9620 | -23.5791 | -20.0286 | -3.2864 | -3.2865 | | 1.4001 | 0.88 | 900 | 1.1333 | -2.3931 | -3.3565 | 0.5275 | 0.9634 | -23.5926 | -20.0393 | -3.2860 | -3.2861 | | 1.1629 | 0.93 | 950 | 1.1330 | -2.3904 | -3.3570 | 0.5275 | 0.9666 | -23.5936 | -20.0339 | -3.2860 | -3.2861 | | 0.9777 | 0.98 | 1000 | 1.1310 | -2.3907 | -3.3587 | 0.5319 | 0.9681 | -23.5970 | -20.0344 | -3.2860 | -3.2861 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.0.0+cu117 - Datasets 2.18.0 - Tokenizers 0.15.2
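A sketch of how these hyperparameters map onto TRL's `DPOTrainer` (assuming a 2024-era TRL release where `beta` and `tokenizer` are passed directly; the preference dataset is left as a placeholder because the card does not name it):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy

train_dataset = ...  # a datasets.Dataset with "prompt"/"chosen"/"rejected" columns

args = TrainingArguments(
    output_dir="v1_1000_STEPS_1e6_rate_05_beta_DPO",
    per_device_train_batch_size=2,   # train_batch_size: 2
    gradient_accumulation_steps=2,   # total train batch size: 4
    learning_rate=1e-6,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    beta=0.5,                        # the "05_beta" in the run name
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```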
yichenj/llama-2-7b-hf-2gpus-8grad_accum-ckpt-16000
yichenj
2024-03-27T18:20:00Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T18:07:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gonzalezrostani/my_awesome_wnut_JHs
gonzalezrostani
2024-03-27T18:03:42Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-22T13:10:28Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: my_awesome_wnut_JHs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_JHs This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0882 - Precision: 0.7944 - Recall: 0.8333 - F1: 0.8134 - Accuracy: 0.9897 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 46 | 0.0399 | 0.7826 | 0.8824 | 0.8295 | 0.9900 | | No log | 2.0 | 92 | 0.0316 | 0.7830 | 0.8137 | 0.7981 | 0.9884 | | No log | 3.0 | 138 | 0.0313 | 0.7833 | 0.9216 | 0.8468 | 0.9915 | | No log | 4.0 | 184 | 0.0290 | 0.8 | 0.8627 | 0.8302 | 0.9912 | | No log | 5.0 | 230 | 0.0340 | 0.8 | 0.8235 | 0.8116 | 0.9900 | | No log | 6.0 | 276 | 0.0385 | 0.7982 | 0.8922 | 0.8426 | 0.9912 | | No log | 7.0 | 322 | 0.0422 | 0.7966 | 0.9216 | 0.8545 | 0.9918 | | No log | 8.0 | 368 | 0.0442 | 0.8018 | 0.8725 | 0.8357 | 0.9912 | | No log | 9.0 | 414 | 0.0588 | 0.8022 | 0.7157 | 0.7565 | 0.9866 | | No log | 10.0 | 460 | 0.0457 | 0.7857 | 0.8627 | 0.8224 | 0.9903 | | 0.0246 | 11.0 | 506 | 0.0579 | 0.7982 | 0.8529 | 0.8246 | 0.9903 | | 0.0246 | 12.0 | 552 | 0.0622 | 0.7857 | 0.8627 | 0.8224 | 0.9900 | | 0.0246 | 13.0 | 598 | 0.0613 | 0.7876 | 0.8725 | 0.8279 | 0.9903 | | 0.0246 | 14.0 | 644 | 0.0642 | 0.7857 | 0.8627 | 0.8224 | 0.9900 | | 0.0246 | 15.0 | 690 | 0.0660 | 0.8 | 0.8627 | 0.8302 | 0.9906 | | 0.0246 | 16.0 | 736 | 0.0674 | 0.7857 | 0.8627 | 0.8224 | 0.9900 | | 0.0246 | 17.0 | 782 | 0.0697 | 0.8073 | 0.8627 | 0.8341 | 0.9909 | | 0.0246 | 18.0 | 828 | 0.0714 | 0.7944 | 0.8333 | 0.8134 | 0.9897 | | 0.0246 | 19.0 | 874 | 0.0700 | 0.7981 | 0.8137 | 0.8058 | 0.9894 | | 0.0246 | 20.0 | 920 | 0.0655 | 0.7925 | 0.8235 | 0.8077 | 0.9894 | | 0.0246 | 21.0 | 966 | 0.0659 | 0.7863 | 0.9020 | 0.8402 | 0.9912 | | 0.0005 | 22.0 | 1012 | 0.0760 | 0.7961 | 0.8039 | 0.8 | 0.9891 | | 0.0005 | 23.0 | 1058 | 0.0741 | 0.7944 | 0.8333 | 0.8134 | 0.9897 | | 0.0005 | 24.0 | 1104 | 0.0722 | 0.7788 | 0.8627 | 0.8186 | 0.9897 | | 0.0005 | 25.0 | 1150 | 0.0832 | 0.8061 | 0.7745 | 0.79 | 0.9887 | | 0.0005 | 26.0 | 1196 | 0.0758 | 0.7905 | 0.8137 | 0.8019 | 0.9891 | | 0.0005 | 27.0 | 1242 | 0.0635 | 0.7850 | 0.8235 | 0.8038 | 0.9891 | | 0.0005 | 28.0 | 1288 | 0.0650 | 0.7928 | 0.8627 | 0.8263 | 0.9903 | | 0.0005 | 29.0 | 1334 | 0.0718 | 0.7928 | 0.8627 | 0.8263 | 0.9903 | | 0.0005 | 30.0 | 1380 | 0.0695 | 0.7881 | 0.9118 | 0.8455 | 0.9912 | | 0.0005 | 31.0 | 1426 | 0.0679 | 0.7966 | 0.9216 | 0.8545 | 0.9915 | | 0.0005 | 32.0 | 1472 | 0.0702 | 0.8 | 0.8627 | 0.8302 | 0.9906 | | 
0.0004 | 33.0 | 1518 | 0.0697 | 0.7833 | 0.9216 | 0.8468 | 0.9915 | | 0.0004 | 34.0 | 1564 | 0.0690 | 0.8 | 0.9020 | 0.8479 | 0.9912 | | 0.0004 | 35.0 | 1610 | 0.0693 | 0.7982 | 0.8529 | 0.8246 | 0.9903 | | 0.0004 | 36.0 | 1656 | 0.0689 | 0.8018 | 0.8725 | 0.8357 | 0.9909 | | 0.0004 | 37.0 | 1702 | 0.0695 | 0.8018 | 0.8725 | 0.8357 | 0.9909 | | 0.0004 | 38.0 | 1748 | 0.0696 | 0.8036 | 0.8824 | 0.8411 | 0.9909 | | 0.0004 | 39.0 | 1794 | 0.0702 | 0.8053 | 0.8922 | 0.8465 | 0.9912 | | 0.0004 | 40.0 | 1840 | 0.0756 | 0.8037 | 0.8431 | 0.8230 | 0.9903 | | 0.0004 | 41.0 | 1886 | 0.0738 | 0.7946 | 0.8725 | 0.8318 | 0.9906 | | 0.0004 | 42.0 | 1932 | 0.0730 | 0.7966 | 0.9216 | 0.8545 | 0.9921 | | 0.0004 | 43.0 | 1978 | 0.0740 | 0.8034 | 0.9216 | 0.8584 | 0.9918 | | 0.0002 | 44.0 | 2024 | 0.0743 | 0.8034 | 0.9216 | 0.8584 | 0.9918 | | 0.0002 | 45.0 | 2070 | 0.0751 | 0.8034 | 0.9216 | 0.8584 | 0.9918 | | 0.0002 | 46.0 | 2116 | 0.0749 | 0.8034 | 0.9216 | 0.8584 | 0.9918 | | 0.0002 | 47.0 | 2162 | 0.0752 | 0.7931 | 0.9020 | 0.8440 | 0.9912 | | 0.0002 | 48.0 | 2208 | 0.0757 | 0.7913 | 0.8922 | 0.8387 | 0.9909 | | 0.0002 | 49.0 | 2254 | 0.0760 | 0.7913 | 0.8922 | 0.8387 | 0.9909 | | 0.0002 | 50.0 | 2300 | 0.0743 | 0.7965 | 0.8824 | 0.8372 | 0.9909 | | 0.0002 | 51.0 | 2346 | 0.0745 | 0.7965 | 0.8824 | 0.8372 | 0.9909 | | 0.0002 | 52.0 | 2392 | 0.0757 | 0.8018 | 0.8725 | 0.8357 | 0.9909 | | 0.0002 | 53.0 | 2438 | 0.0763 | 0.8 | 0.8627 | 0.8302 | 0.9906 | | 0.0002 | 54.0 | 2484 | 0.0762 | 0.8018 | 0.8725 | 0.8357 | 0.9909 | | 0.0001 | 55.0 | 2530 | 0.0764 | 0.8018 | 0.8725 | 0.8357 | 0.9909 | | 0.0001 | 56.0 | 2576 | 0.0833 | 0.7881 | 0.9118 | 0.8455 | 0.9912 | | 0.0001 | 57.0 | 2622 | 0.0770 | 0.7881 | 0.9118 | 0.8455 | 0.9915 | | 0.0001 | 58.0 | 2668 | 0.0713 | 0.7965 | 0.8824 | 0.8372 | 0.9906 | | 0.0001 | 59.0 | 2714 | 0.0753 | 0.7876 | 0.8725 | 0.8279 | 0.9903 | | 0.0001 | 60.0 | 2760 | 0.0750 | 0.7931 | 0.9020 | 0.8440 | 0.9912 | | 0.0001 | 61.0 | 2806 | 0.0768 | 0.7838 | 0.8529 | 0.8169 | 0.9897 | | 0.0001 | 62.0 | 2852 | 0.0758 | 0.7982 | 0.8922 | 0.8426 | 0.9909 | | 0.0001 | 63.0 | 2898 | 0.0766 | 0.7982 | 0.8922 | 0.8426 | 0.9909 | | 0.0001 | 64.0 | 2944 | 0.0773 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0001 | 65.0 | 2990 | 0.0779 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0004 | 66.0 | 3036 | 0.0783 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0004 | 67.0 | 3082 | 0.0790 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0004 | 68.0 | 3128 | 0.0795 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0004 | 69.0 | 3174 | 0.0800 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0004 | 70.0 | 3220 | 0.0806 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0004 | 71.0 | 3266 | 0.0810 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0004 | 72.0 | 3312 | 0.0812 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0004 | 73.0 | 3358 | 0.0815 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0004 | 74.0 | 3404 | 0.0818 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0004 | 75.0 | 3450 | 0.0821 | 0.7895 | 0.8824 | 0.8333 | 0.9906 | | 0.0004 | 76.0 | 3496 | 0.0864 | 0.8 | 0.8235 | 0.8116 | 0.9897 | | 0.0001 | 77.0 | 3542 | 0.0847 | 0.7944 | 0.8333 | 0.8134 | 0.9897 | | 0.0001 | 78.0 | 3588 | 0.0849 | 0.7944 | 0.8333 | 0.8134 | 0.9897 | | 0.0001 | 79.0 | 3634 | 0.0852 | 0.7944 | 0.8333 | 0.8134 | 0.9897 | | 0.0001 | 80.0 | 3680 | 0.0854 | 0.7890 | 0.8431 | 0.8152 | 0.9897 | | 0.0001 | 81.0 | 3726 | 0.0855 | 0.7890 | 0.8431 | 0.8152 | 0.9897 | | 0.0001 | 82.0 | 3772 | 0.0837 | 0.7857 | 0.8627 | 0.8224 | 0.9900 | | 0.0001 | 83.0 | 3818 | 0.0838 | 0.7857 | 0.8627 | 0.8224 | 
0.9900 | | 0.0001 | 84.0 | 3864 | 0.0840 | 0.7857 | 0.8627 | 0.8224 | 0.9900 | | 0.0001 | 85.0 | 3910 | 0.0842 | 0.7946 | 0.8725 | 0.8318 | 0.9903 | | 0.0001 | 86.0 | 3956 | 0.0843 | 0.7857 | 0.8627 | 0.8224 | 0.9900 | | 0.0001 | 87.0 | 4002 | 0.0845 | 0.7857 | 0.8627 | 0.8224 | 0.9900 | | 0.0001 | 88.0 | 4048 | 0.0845 | 0.7857 | 0.8627 | 0.8224 | 0.9900 | | 0.0001 | 89.0 | 4094 | 0.0846 | 0.7946 | 0.8725 | 0.8318 | 0.9903 | | 0.0001 | 90.0 | 4140 | 0.0847 | 0.7946 | 0.8725 | 0.8318 | 0.9903 | | 0.0001 | 91.0 | 4186 | 0.0847 | 0.7857 | 0.8627 | 0.8224 | 0.9900 | | 0.0001 | 92.0 | 4232 | 0.0848 | 0.7857 | 0.8627 | 0.8224 | 0.9900 | | 0.0001 | 93.0 | 4278 | 0.0894 | 0.8 | 0.8235 | 0.8116 | 0.9897 | | 0.0001 | 94.0 | 4324 | 0.0895 | 0.8 | 0.8235 | 0.8116 | 0.9897 | | 0.0001 | 95.0 | 4370 | 0.0895 | 0.8 | 0.8235 | 0.8116 | 0.9897 | | 0.0001 | 96.0 | 4416 | 0.0895 | 0.8 | 0.8235 | 0.8116 | 0.9897 | | 0.0001 | 97.0 | 4462 | 0.0894 | 0.8 | 0.8235 | 0.8116 | 0.9897 | | 0.0001 | 98.0 | 4508 | 0.0893 | 0.8 | 0.8235 | 0.8116 | 0.9897 | | 0.0001 | 99.0 | 4554 | 0.0882 | 0.7944 | 0.8333 | 0.8134 | 0.9897 | | 0.0001 | 100.0 | 4600 | 0.0882 | 0.7944 | 0.8333 | 0.8134 | 0.9897 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
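A minimal inference sketch for this checkpoint; the repo namespace is an assumption (the card does not state it), so substitute the actual `<user>/my_awesome_wnut_JHs` path:

```python
from transformers import pipeline

# Hypothetical repo id -- replace with the namespace this checkpoint is published under.
ner = pipeline(
    "token-classification",
    model="your-username/my_awesome_wnut_JHs",
    aggregation_strategy="simple",  # merge sub-word pieces back into whole entity spans
)

print(ner("The Empire State Building was lit up for the parade in New York."))
```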
deepnet/SN6-77S1
deepnet
2024-03-27T18:03:35Z
3
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T00:19:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
abhijeet007/t5-Largecc_FineTunned
abhijeet007
2024-03-27T18:02:34Z
71
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-27T18:01:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kavg/LiLT-SER-ZH-SIN
kavg
2024-03-27T18:01:08Z
104
0
transformers
[ "transformers", "safetensors", "lilt", "token-classification", "generated_from_trainer", "dataset:xfun", "base_model:kavg/LiLT-SER-ZH", "base_model:finetune:kavg/LiLT-SER-ZH", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-27T17:59:29Z
--- license: mit base_model: kavg/LiLT-SER-ZH tags: - generated_from_trainer datasets: - xfun metrics: - precision - recall - f1 - accuracy model-index: - name: LiLT-SER-ZH-SIN results: - task: name: Token Classification type: token-classification dataset: name: xfun type: xfun config: xfun.sin split: validation args: xfun.sin metrics: - name: Precision type: precision value: 0.7417061611374408 - name: Recall type: recall value: 0.770935960591133 - name: F1 type: f1 value: 0.7560386473429951 - name: Accuracy type: accuracy value: 0.8558002524898303 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LiLT-SER-ZH-SIN This model is a fine-tuned version of [kavg/LiLT-SER-ZH](https://huggingface.co/kavg/LiLT-SER-ZH) on the xfun dataset. It achieves the following results on the evaluation set: - Loss: 1.2037 - Precision: 0.7417 - Recall: 0.7709 - F1: 0.7560 - Accuracy: 0.8558 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0013 | 21.74 | 500 | 0.9018 | 0.6843 | 0.7475 | 0.7145 | 0.8599 | | 0.012 | 43.48 | 1000 | 1.0791 | 0.7115 | 0.7623 | 0.7360 | 0.8561 | | 0.0002 | 65.22 | 1500 | 1.0060 | 0.7360 | 0.7623 | 0.7489 | 0.8565 | | 0.03 | 86.96 | 2000 | 1.1521 | 0.7282 | 0.6700 | 0.6979 | 0.8313 | | 0.0013 | 108.7 | 2500 | 1.1517 | 0.7240 | 0.7463 | 0.7350 | 0.8579 | | 0.0016 | 130.43 | 3000 | 0.9393 | 0.7319 | 0.7697 | 0.7503 | 0.8732 | | 0.0021 | 152.17 | 3500 | 0.9972 | 0.7249 | 0.7562 | 0.7402 | 0.8635 | | 0.0001 | 173.91 | 4000 | 1.0485 | 0.7049 | 0.7796 | 0.7404 | 0.8583 | | 0.0002 | 195.65 | 4500 | 1.0827 | 0.7055 | 0.7315 | 0.7183 | 0.8433 | | 0.0 | 217.39 | 5000 | 1.0528 | 0.7354 | 0.7599 | 0.7474 | 0.8586 | | 0.0001 | 239.13 | 5500 | 1.1183 | 0.7001 | 0.7131 | 0.7065 | 0.8465 | | 0.0002 | 260.87 | 6000 | 1.1749 | 0.7231 | 0.7685 | 0.7451 | 0.8520 | | 0.0 | 282.61 | 6500 | 1.1206 | 0.7315 | 0.7685 | 0.7495 | 0.8611 | | 0.0 | 304.35 | 7000 | 1.2037 | 0.7417 | 0.7709 | 0.7560 | 0.8558 | | 0.0 | 326.09 | 7500 | 1.3737 | 0.7391 | 0.75 | 0.7445 | 0.8513 | | 0.0 | 347.83 | 8000 | 1.2926 | 0.7221 | 0.7648 | 0.7428 | 0.8475 | | 0.0 | 369.57 | 8500 | 1.4108 | 0.6966 | 0.7549 | 0.7246 | 0.8293 | | 0.0 | 391.3 | 9000 | 1.4346 | 0.7222 | 0.7586 | 0.7399 | 0.8303 | | 0.0 | 413.04 | 9500 | 1.4146 | 0.7225 | 0.7599 | 0.7407 | 0.8363 | | 0.0 | 434.78 | 10000 | 1.4097 | 0.7121 | 0.7586 | 0.7346 | 0.8346 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1
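LiLT consumes layout boxes alongside token ids, so a plain text pipeline is not enough. A minimal sketch, assuming the checkpoint ships a fast tokenizer and that word boxes are already normalized to the 0-1000 range; the example words and boxes are placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("kavg/LiLT-SER-ZH-SIN")
model = AutoModelForTokenClassification.from_pretrained("kavg/LiLT-SER-ZH-SIN")

words = ["Invoice", "No", ":", "12345"]                 # placeholder OCR output
boxes = [[64, 40, 180, 60], [190, 40, 228, 60],         # one box per word,
         [232, 40, 240, 60], [250, 40, 330, 60]]        # normalized to 0-1000

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Expand word-level boxes to one box per sub-word token; special tokens get a zero box.
token_boxes = [[0, 0, 0, 0] if i is None else boxes[i] for i in enc.word_ids()]
enc["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**enc).logits
print([model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()])
```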
weny22/long_text_balanced_smaller_original_text
weny22
2024-03-27T17:57:39Z
104
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:weny22/sum_model_t5_saved", "base_model:finetune:weny22/sum_model_t5_saved", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-26T14:49:05Z
---
base_model: weny22/sum_model_t5_saved
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: long_text_balanced_smaller_original_text
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# long_text_balanced_smaller_original_text

The model was trained on a balanced dataset, without preprocessing the training data.
This model is a fine-tuned version of [weny22/sum_model_t5_saved](https://huggingface.co/weny22/sum_model_t5_saved) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3143
- Rouge1: 0.2101
- Rouge2: 0.0804
- Rougel: 0.1705
- Rougelsum: 0.1707
- Gen Len: 18.986

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 119 | 2.3387 | 0.1849 | 0.0583 | 0.1474 | 0.1475 | 18.98 |
| No log | 2.0 | 238 | 2.1768 | 0.1954 | 0.0647 | 0.1538 | 0.1537 | 18.9707 |
| No log | 3.0 | 357 | 2.1282 | 0.1952 | 0.0637 | 0.1537 | 0.1536 | 18.9947 |
| No log | 4.0 | 476 | 2.1173 | 0.1953 | 0.0683 | 0.1559 | 0.1557 | 18.9813 |
| 2.7944 | 5.0 | 595 | 2.0873 | 0.2022 | 0.0743 | 0.1624 | 0.1623 | 18.976 |
| 2.7944 | 6.0 | 714 | 2.0851 | 0.2054 | 0.0769 | 0.1652 | 0.1653 | 18.9887 |
| 2.7944 | 7.0 | 833 | 2.0948 | 0.2043 | 0.0762 | 0.1633 | 0.1632 | 18.972 |
| 2.7944 | 8.0 | 952 | 2.1123 | 0.1992 | 0.0745 | 0.1607 | 0.1605 | 18.9673 |
| 1.9807 | 9.0 | 1071 | 2.1280 | 0.2067 | 0.0779 | 0.1669 | 0.1669 | 18.9767 |
| 1.9807 | 10.0 | 1190 | 2.1251 | 0.2124 | 0.0801 | 0.1705 | 0.1704 | 18.99 |
| 1.9807 | 11.0 | 1309 | 2.1286 | 0.2069 | 0.0772 | 0.1668 | 0.1668 | 18.9927 |
| 1.9807 | 12.0 | 1428 | 2.1592 | 0.2096 | 0.0786 | 0.1688 | 0.1689 | 18.972 |
| 1.6485 | 13.0 | 1547 | 2.1811 | 0.2069 | 0.0789 | 0.1688 | 0.1689 | 18.9973 |
| 1.6485 | 14.0 | 1666 | 2.2124 | 0.2089 | 0.079 | 0.1686 | 0.1688 | 18.968 |
| 1.6485 | 15.0 | 1785 | 2.2187 | 0.2107 | 0.0797 | 0.1693 | 0.1695 | 18.9893 |
| 1.6485 | 16.0 | 1904 | 2.2438 | 0.2097 | 0.0793 | 0.1695 | 0.1695 | 18.9787 |
| 1.4186 | 17.0 | 2023 | 2.2685 | 0.2092 | 0.0799 | 0.1692 | 0.1693 | 18.99 |
| 1.4186 | 18.0 | 2142 | 2.2733 | 0.2085 | 0.0788 | 0.1684 | 0.1686 | 18.9747 |
| 1.4186 | 19.0 | 2261 | 2.2947 | 0.2087 | 0.0803 | 0.1696 | 0.1696 | 18.9813 |
| 1.4186 | 20.0 | 2380 | 2.3143 | 0.2101 | 0.0804 | 0.1705 | 0.1707 | 18.986 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
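A minimal usage sketch; note the ~19-token generation length reported above, so raise `max_length` if you need longer summaries:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="weny22/long_text_balanced_smaller_original_text")

article = "Long input text goes here ..."  # placeholder document
print(summarizer(article, max_length=19)[0]["summary_text"])
```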
tsavage68/v1_1000_STEPS_1e7_rate_05_beta_DPO
tsavage68
2024-03-27T17:57:36Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T17:51:47Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.1 tags: - trl - dpo - generated_from_trainer model-index: - name: v1_1000_STEPS_1e7_rate_05_beta_DPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # v1_1000_STEPS_1e7_rate_05_beta_DPO This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6349 - Rewards/chosen: -0.2276 - Rewards/rejected: -0.4095 - Rewards/accuracies: 0.5890 - Rewards/margins: 0.1819 - Logps/rejected: -17.6986 - Logps/chosen: -15.7083 - Logits/rejected: -3.3433 - Logits/chosen: -3.3435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6758 | 0.1 | 100 | 0.6807 | -0.0108 | -0.0388 | 0.5582 | 0.0280 | -16.9571 | -15.2746 | -3.3527 | -3.3528 | | 0.648 | 0.2 | 200 | 0.6605 | -0.0898 | -0.1746 | 0.5692 | 0.0849 | -17.2288 | -15.4326 | -3.3470 | -3.3471 | | 0.6324 | 0.29 | 300 | 0.6498 | -0.1892 | -0.3115 | 0.5802 | 0.1224 | -17.5026 | -15.6314 | -3.3449 | -3.3450 | | 0.6949 | 0.39 | 400 | 0.6438 | -0.1429 | -0.2881 | 0.5912 | 0.1452 | -17.4557 | -15.5388 | -3.3451 | -3.3452 | | 0.6848 | 0.49 | 500 | 0.6369 | -0.1735 | -0.3420 | 0.6066 | 0.1685 | -17.5635 | -15.6000 | -3.3438 | -3.3439 | | 0.6344 | 0.59 | 600 | 0.6375 | -0.2102 | -0.3842 | 0.5846 | 0.1740 | -17.6480 | -15.6735 | -3.3436 | -3.3437 | | 0.6551 | 0.68 | 700 | 0.6366 | -0.2240 | -0.4017 | 0.5868 | 0.1777 | -17.6829 | -15.7010 | -3.3433 | -3.3434 | | 0.5891 | 0.78 | 800 | 0.6356 | -0.2274 | -0.4088 | 0.6066 | 0.1813 | -17.6971 | -15.7079 | -3.3433 | -3.3434 | | 0.6461 | 0.88 | 900 | 0.6348 | -0.2270 | -0.4096 | 0.5956 | 0.1826 | -17.6988 | -15.7070 | -3.3433 | -3.3435 | | 0.6059 | 0.98 | 1000 | 0.6349 | -0.2276 | -0.4095 | 0.5890 | 0.1819 | -17.6986 | -15.7083 | -3.3433 | -3.3435 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.0.0+cu117 - Datasets 2.18.0 - Tokenizers 0.15.2
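The card lists the DPO training setup but no inference snippet; a sketch using the chat template inherited from the Mistral-Instruct base model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/v1_1000_STEPS_1e7_rate_05_beta_DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what DPO training changes about a model."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```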
kavg/LiLT-SER-EN-SIN
kavg
2024-03-27T17:56:08Z
105
0
transformers
[ "transformers", "safetensors", "lilt", "token-classification", "generated_from_trainer", "dataset:xfun", "base_model:kavg/LiLT-SER-EN", "base_model:finetune:kavg/LiLT-SER-EN", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-27T17:54:16Z
--- license: mit base_model: kavg/LiLT-SER-EN tags: - generated_from_trainer datasets: - xfun metrics: - precision - recall - f1 - accuracy model-index: - name: LiLT-SER-EN-SIN results: - task: name: Token Classification type: token-classification dataset: name: xfun type: xfun config: xfun.sin split: validation args: xfun.sin metrics: - name: Precision type: precision value: 0.7420494699646644 - name: Recall type: recall value: 0.7758620689655172 - name: F1 type: f1 value: 0.7585791691751957 - name: Accuracy type: accuracy value: 0.8473839248141394 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LiLT-SER-EN-SIN This model is a fine-tuned version of [kavg/LiLT-SER-EN](https://huggingface.co/kavg/LiLT-SER-EN) on the xfun dataset. It achieves the following results on the evaluation set: - Loss: 1.3790 - Precision: 0.7420 - Recall: 0.7759 - F1: 0.7586 - Accuracy: 0.8474 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0089 | 21.74 | 500 | 0.8362 | 0.6606 | 0.7118 | 0.6852 | 0.8545 | | 0.0013 | 43.48 | 1000 | 1.3605 | 0.7269 | 0.7081 | 0.7174 | 0.8230 | | 0.0051 | 65.22 | 1500 | 0.9220 | 0.7113 | 0.7525 | 0.7313 | 0.8725 | | 0.0054 | 86.96 | 2000 | 1.2086 | 0.6965 | 0.7291 | 0.7124 | 0.8467 | | 0.0001 | 108.7 | 2500 | 1.1308 | 0.6843 | 0.7315 | 0.7071 | 0.8449 | | 0.0001 | 130.43 | 3000 | 1.0934 | 0.7362 | 0.7044 | 0.7199 | 0.8606 | | 0.0 | 152.17 | 3500 | 1.0390 | 0.7297 | 0.7512 | 0.7403 | 0.8590 | | 0.0001 | 173.91 | 4000 | 1.1448 | 0.7128 | 0.7672 | 0.7390 | 0.8599 | | 0.0 | 195.65 | 4500 | 1.1902 | 0.7393 | 0.7229 | 0.7310 | 0.8551 | | 0.0001 | 217.39 | 5000 | 1.1164 | 0.7141 | 0.7783 | 0.7448 | 0.8555 | | 0.0001 | 239.13 | 5500 | 1.4359 | 0.7197 | 0.7241 | 0.7219 | 0.8313 | | 0.0 | 260.87 | 6000 | 1.4027 | 0.7256 | 0.7426 | 0.7340 | 0.8376 | | 0.0 | 282.61 | 6500 | 1.4112 | 0.7085 | 0.7574 | 0.7321 | 0.8524 | | 0.0 | 304.35 | 7000 | 1.5045 | 0.7627 | 0.7599 | 0.7613 | 0.8432 | | 0.0 | 326.09 | 7500 | 1.4482 | 0.7390 | 0.7672 | 0.7529 | 0.8398 | | 0.0 | 347.83 | 8000 | 1.5717 | 0.7155 | 0.7525 | 0.7335 | 0.8360 | | 0.0 | 369.57 | 8500 | 1.3845 | 0.7348 | 0.7746 | 0.7542 | 0.8422 | | 0.0 | 391.3 | 9000 | 1.3238 | 0.7283 | 0.7660 | 0.7467 | 0.8499 | | 0.0 | 413.04 | 9500 | 1.3677 | 0.7321 | 0.7672 | 0.7492 | 0.8492 | | 0.0001 | 434.78 | 10000 | 1.3790 | 0.7420 | 0.7759 | 0.7586 | 0.8474 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1
DaJulster/my_awesome_model
DaJulster
2024-03-27T17:54:05Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T18:00:09Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6882 - Accuracy: 0.5794 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 32 | 0.6916 | 0.4953 | | No log | 2.0 | 64 | 0.6882 | 0.5794 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
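A minimal sketch for querying the classifier (label names are whatever the training config assigned; with the default setup that is `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DaJulster/my_awesome_model")
print(classifier("This was not much of an improvement."))
# -> e.g. [{'label': 'LABEL_0', 'score': ...}]
```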
manimaranpa07/my_Ws_extraction_model_27th_mar_2
manimaranpa07
2024-03-27T17:53:04Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-27T17:50:47Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: my_Ws_extraction_model_27th_mar_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_Ws_extraction_model_27th_mar_2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2668 - Precision: 0.4677 - Recall: 0.4730 - F1: 0.4704 - Accuracy: 0.9050 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 76 | 0.2956 | 0.4378 | 0.4065 | 0.4216 | 0.8986 | | No log | 2.0 | 152 | 0.2668 | 0.4677 | 0.4730 | 0.4704 | 0.9050 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu118 - Datasets 2.17.0 - Tokenizers 0.15.2
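For reproduction, the hyperparameters above map one-to-one onto `TrainingArguments`; `output_dir` and the dataset wiring are assumptions, and the Adam betas/epsilon listed are the library defaults, so they need no extra arguments:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my_Ws_extraction_model_27th_mar_2",  # assumed; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```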
aryatito/my-llama-email-dataset-2
aryatito
2024-03-27T17:53:02Z
2
0
transformers
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T16:40:43Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
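Since the tags include `peft`, this repo may ship only adapter weights, in which case the generic snippet above needs PEFT to resolve the base model. A hedged alternative, assuming a PEFT adapter repo:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the adapter together with its recorded base model in one step.
model = AutoPeftModelForCausalLM.from_pretrained(
    "aryatito/my-llama-email-dataset-2", device_map="auto"
).eval()
tokenizer = AutoTokenizer.from_pretrained("aryatito/my-llama-email-dataset-2")
```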
meseca/messiah-7b-v1.1
meseca
2024-03-27T17:49:00Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T17:42:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
manimaranpa07/my_Ws_extraction_model_27th_mar
manimaranpa07
2024-03-27T17:35:56Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-27T17:33:13Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: my_Ws_extraction_model_27th_mar results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_Ws_extraction_model_27th_mar This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2730 - Precision: 0.4668 - Recall: 0.4580 - F1: 0.4623 - Accuracy: 0.9046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 76 | 0.2977 | 0.4502 | 0.4141 | 0.4314 | 0.8999 | | No log | 2.0 | 152 | 0.2730 | 0.4668 | 0.4580 | 0.4623 | 0.9046 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu118 - Datasets 2.17.0 - Tokenizers 0.15.2
PaulTbbr/poca-SoccerTwos00
PaulTbbr
2024-03-27T17:34:57Z
15
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2024-03-24T12:15:20Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: PaulTbbr/poca-SoccerTwos00
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
ledmands/dqn-Pacman-v5_kaggle_test6
ledmands
2024-03-27T17:29:23Z
0
0
stable-baselines3
[ "stable-baselines3", "ALE/Pacman-v5", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-27T17:28:54Z
--- library_name: stable-baselines3 tags: - ALE/Pacman-v5 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: ALE/Pacman-v5 type: ALE/Pacman-v5 metrics: - type: mean_reward value: 208.50 +/- 94.34 name: mean_reward verified: false --- # **DQN** Agent playing **ALE/Pacman-v5** This is a trained model of a **DQN** agent playing **ALE/Pacman-v5** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Pacman-v5 -orga ledmands -f logs/ python -m rl_zoo3.enjoy --algo dqn --env ALE/Pacman-v5 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Pacman-v5 -orga ledmands -f logs/ python -m rl_zoo3.enjoy --algo dqn --env ALE/Pacman-v5 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env ALE/Pacman-v5 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env ALE/Pacman-v5 -f logs/ -orga ledmands ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 500000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
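Outside the RL Zoo workflow, the checkpoint can also be pulled and loaded with SB3 directly; the `.zip` filename below is an assumption, so verify it against the repo's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="ledmands/dqn-Pacman-v5_kaggle_test6",
    filename="dqn-ALE-Pacman-v5.zip",  # assumed name -- check the Files tab
)
model = DQN.load(checkpoint)
```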
ahmed-naseer/txtiledesign-13-15k
ahmed-naseer
2024-03-27T17:27:42Z
17
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-27T17:20:56Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### txtiledesign-13_15k Dreambooth model trained by ahmed-naseer with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
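To generate your own samples, a minimal sketch; the trigger token is assumed to be the session name `txtiledesign-13_15k` used in the heading:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ahmed-naseer/txtiledesign-13-15k", torch_dtype=torch.float16
).to("cuda")

image = pipe("txtiledesign-13_15k seamless floral textile pattern").images[0]
image.save("textile_sample.png")
```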
willystumblr/ehrsql-2024-sft-unanswerable-Llama-2-7b-hf
willystumblr
2024-03-27T17:20:25Z
2
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T17:17:23Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
b-r-a-n/sloane_LoRA
b-r-a-n
2024-03-27T17:16:12Z
2
1
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-03-27T17:15:20Z
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of SJS young girl
widget: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - b-r-a-n/sloane_LoRA

<Gallery />

## Model description

These are b-r-a-n/sloane_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a photo of SJS young girl` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](https://huggingface.co/b-r-a-n/sloane_LoRA/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

A minimal sketch (not an official snippet): load the SDXL base pipeline, attach these LoRA weights, and prompt with the trigger phrase above.

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("b-r-a-n/sloane_LoRA")  # attach the DreamBooth LoRA

image = pipeline("a photo of SJS young girl at the beach").images[0]
image.save("sloane.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
nell123/llmHandsOn0
nell123
2024-03-27T17:14:08Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T16:43:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
abuvaneswari/english-hinglish
abuvaneswari
2024-03-27T17:11:38Z
1
0
peft
[ "peft", "region:us" ]
null
2024-03-27T17:01:33Z
---
library_name: peft
---
## Training procedure

### Framework versions

- PEFT 0.4.0
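The card does not name the base model, but the adapter config records it; a sketch for inspecting that before loading:

```python
from peft import PeftConfig

config = PeftConfig.from_pretrained("abuvaneswari/english-hinglish")
print(config.base_model_name_or_path)  # which base checkpoint this adapter was trained on
print(config.task_type)                # e.g. CAUSAL_LM or SEQ_2_SEQ_LM
```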
DUAL-GPO/phi-2-gpo-test-longest-iter-random2-4
DUAL-GPO
2024-03-27T17:08:57Z
2
0
peft
[ "peft", "safetensors", "phi", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "custom_code", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-03-27T16:16:45Z
--- license: mit library_name: peft tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer base_model: microsoft/phi-2 datasets: - HuggingFaceH4/ultrafeedback_binarized model-index: - name: phi-2-gpo-test-longest-iter-random2-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-gpo-test-longest-iter-random2-4 This model is a fine-tuned version of [DUAL-GPO/phi-2-gpo-test-longest-iter-random2-3](https://huggingface.co/DUAL-GPO/phi-2-gpo-test-longest-iter-random2-3) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.0019 - Rewards/chosen: -0.0074 - Rewards/rejected: -0.0063 - Rewards/accuracies: 0.4710 - Rewards/margins: -0.0012 - Logps/rejected: -279.6524 - Logps/chosen: -307.5768 - Logits/rejected: 0.0429 - Logits/chosen: -0.0563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.001 | 1.6 | 100 | 0.0018 | -0.0035 | -0.0023 | 0.4785 | -0.0012 | -279.2534 | -307.1775 | 0.0583 | -0.0400 | | 0.0009 | 3.2 | 200 | 0.0019 | -0.0082 | -0.0066 | 0.4565 | -0.0015 | -279.6910 | -307.6504 | 0.0455 | -0.0553 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.2.1+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
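A loading sketch for this adapter; `trust_remote_code` is included because phi-2 required it on older transformers releases:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "DUAL-GPO/phi-2-gpo-test-longest-iter-random2-4",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # needed for phi-2 on older transformers versions
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
```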
bartowski/Einstein-v5-v0.2-7B-exl2
bartowski
2024-03-27T17:06:42Z
0
0
null
[ "axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "text-generation", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "dataset:allenai/WildChat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:teknium/GPTeacher-General-Instruct", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "base_model:mistral-community/Mistral-7B-v0.2", "base_model:finetune:mistral-community/Mistral-7B-v0.2", "license:other", "region:us" ]
text-generation
2024-03-27T17:06:41Z
--- license: other tags: - axolotl - generated_from_trainer - Mistral - instruct - finetune - chatml - gpt4 - synthetic data - science - physics - chemistry - biology - math base_model: alpindale/Mistral-7B-v0.2-hf datasets: - allenai/ai2_arc - camel-ai/physics - camel-ai/chemistry - camel-ai/biology - camel-ai/math - metaeval/reclor - openbookqa - mandyyyyii/scibench - derek-thomas/ScienceQA - TIGER-Lab/ScienceEval - jondurbin/airoboros-3.2 - LDJnr/Capybara - Cot-Alpaca-GPT4-From-OpenHermes-2.5 - STEM-AI-mtl/Electrical-engineering - knowrohit07/saraswati-stem - sablo/oasst2_curated - lmsys/lmsys-chat-1m - TIGER-Lab/MathInstruct - bigbio/med_qa - meta-math/MetaMathQA-40K - piqa - scibench - sciq - Open-Orca/SlimOrca - migtissera/Synthia-v1.3 - allenai/WildChat - microsoft/orca-math-word-problems-200k - openchat/openchat_sharegpt4_dataset - teknium/GPTeacher-General-Instruct - m-a-p/CodeFeedback-Filtered-Instruction quantized_by: bartowski pipeline_tag: text-generation --- ## Exllama v2 Quantizations of Einstein-v5-v0.2-7B Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.16">turboderp's ExLlamaV2 v0.0.16</a> for quantization. <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b> Each branch contains a different bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions. Original model: https://huggingface.co/Weyaxi/Einstein-v5-v0.2-7B | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ-equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to.
| ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2 Einstein-v5-v0.2-7B-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Einstein-v5-v0.2-7B-exl2`: ```shell mkdir Einstein-v5-v0.2-7B-exl2 huggingface-cli download bartowski/Einstein-v5-v0.2-7B-exl2 --local-dir Einstein-v5-v0.2-7B-exl2 --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: Linux: ```shell mkdir Einstein-v5-v0.2-7B-exl2-6_5 huggingface-cli download bartowski/Einstein-v5-v0.2-7B-exl2 --revision 6_5 --local-dir Einstein-v5-v0.2-7B-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell mkdir Einstein-v5-v0.2-7B-exl2-6.5 huggingface-cli download bartowski/Einstein-v5-v0.2-7B-exl2 --revision 6_5 --local-dir Einstein-v5-v0.2-7B-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
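Once a branch is downloaded, a minimal Python loading sketch patterned on the example scripts bundled with exllamav2 v0.0.16 — treat the class names and signatures as assumptions, since they shift between releases:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Einstein-v5-v0.2-7B-exl2-6_5"  # a branch downloaded as above
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
print(generator.generate_simple("Explain entropy briefly:", settings, 128))
```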
kettleguts/zephyr-7b-beta_sparse05
kettleguts
2024-03-27T17:04:18Z
49
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "sparse", "pruned", "wanda", "conversational", "en", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:HuggingFaceH4/ultrafeedback_binarized", "arxiv:2306.11695", "arxiv:2310.16944", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-24T16:30:54Z
--- library_name: transformers tags: - mistral - sparse - pruned - wanda license: mit datasets: - HuggingFaceH4/ultrachat_200k - HuggingFaceH4/ultrafeedback_binarized language: - en --- # Model Card for kettleguts/zephyr-7b-beta_sparse05 This is a pruned version of HuggingFaceH4/zephyr-7b-beta, found [here](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta). Wanda pruning was used to introduce 50% sparsity into the linear layers. Read the paper [here](https://arxiv.org/abs/2306.11695). ### Model Description [Here](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta#model-description) ## Uses This model is only useful for research purposes. The quality of its text generation is highly dependent on how it is prompted. Since it is heavily pruned, it sometimes behaves like a much smaller model. ### Direct Use This model is not suitable for direct use outside of research. ## Out-of-Scope Use This model should never be used for critical decisions involving health, life, employment, housing, law, etc. It should also never be used to harm anyone. ## Bias, Risks, and Limitations [No safeguards have been added to this model.](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta#bias-risks-and-limitations) ## How to Get Started with the Model Use the code below to get started with the model: <blockquote>

```Python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the pruned model and its tokenizer from this repo
tokenizer = AutoTokenizer.from_pretrained("kettleguts/zephyr-7b-beta_sparse05")
model = AutoModelForCausalLM.from_pretrained("kettleguts/zephyr-7b-beta_sparse05")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds as briefly as possible with perfect grammar.",
    },
    {"role": "user", "content": "Briefly describe network pruning."},
]

prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipe(prompt,
               max_new_tokens=256,
               do_sample=True,
               temperature=0.7,
               top_k=50,
               top_p=0.95,
               pad_token_id=tokenizer.pad_token_id)

# Keep only the assistant's reply
text = outputs[0]["generated_text"].split('<|assistant|>\n')[-1]
print(text)
```

</blockquote> Output: >Network pruning, in the context of artificial intelligence and machine learning, refers to the process of removing unimportant or redundant connections, or "pruning," from a neural network's architecture. This is done to simplify and optimize the network's structure, reduce overfitting, and improve its efficiency, while preserving its overall performance. Pruning typically involves removing connections, neurons, or entire layers, based on metrics such as the weight or sparsity of the connection, or the amount of improvement gained by removing the connection. The goal is to prune the network in a way that balances the trade-off between model size and accuracy, while reducing the network's overall complexity and resource requirements. Pruning techniques can range from simple heuristics such as early stopping, to more sophisticated methods such as compressed and pruned models, and iterative and incremental pruning. ## Evaluation Pending ## Model Examination <!-- Relevant interpretability work for the model goes here --> Pending ## Environmental Impact The calculations necessary to prune this model required less than 1 hour of time on a T4 GPU in Colab. ## Technical Specifications #### Software The bulk of this work was done using [PyTorch](https://pytorch.org/). They have an array of built-in [pruning tools](https://pytorch.org/docs/stable/nn.html#:~:text=Utility%20classes%20and%20functions%20for%20pruning%20Module%20parameters) in torch.nn.
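To make that pointer concrete, a self-contained sketch of 50% unstructured magnitude pruning on a stand-in linear layer — note that Wanda itself ranks weights by |W|·‖X‖ using calibration activations, which torch's built-in L1 criterion does not reproduce:

```Python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(4096, 4096)  # stand-in for one of the model's linear layers
prune.l1_unstructured(layer, name="weight", amount=0.5)  # mask the 50% smallest-magnitude weights
prune.remove(layer, "weight")  # bake the mask into the weight tensor permanently
print((layer.weight == 0).float().mean().item())  # ~0.5 sparsity
```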
Also check out the [tutorial](https://pytorch.org/tutorials/intermediate/pruning_tutorial.html) by [Michela Paganini](https://github.com/mickypaganini). ## Citation **BibTeX:**

```bibtex
@misc{tunstall2023zephyr,
  title={Zephyr: Direct Distillation of LM Alignment},
  author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
  year={2023},
  eprint={2310.16944},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

@misc{sun2023simple,
  title={A Simple and Effective Pruning Approach for Large Language Models},
  author={Mingjie Sun and Zhuang Liu and Anna Bair and J. Zico Kolter},
  year={2023},
  eprint={2306.11695},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
osanseviero/TinyLlama-1.1B-Chat-v1.0-Q4_K_M-GGUF
osanseviero
2024-03-27T17:01:36Z
3
0
null
[ "gguf", "llama-cpp", "en", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:HuggingFaceH4/ultrafeedback_binarized", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-27T17:01:30Z
--- language: - en license: apache-2.0 tags: - llama-cpp datasets: - cerebras/SlimPajama-627B - bigcode/starcoderdata - HuggingFaceH4/ultrachat_200k - HuggingFaceH4/ultrafeedback_binarized widget: - example_title: Fibonacci (Python) messages: - role: system content: You are a chatbot who can help code! - role: user content: Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI. --- # osanseviero/TinyLlama-1.1B-Chat-v1.0-Q4_K_M-GGUF This model was converted to GGUF format from [`TinyLlama/TinyLlama-1.1B-Chat-v1.0`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) using llama.cpp. Refer to the [original model card](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) for more details on the model. ## Use with llama.cpp ```bash brew install ggerganov/ggerganov/llama.cpp ``` ```bash llama-cli --hf-repo osanseviero/TinyLlama-1.1B-Chat-v1.0-Q4_K_M-GGUF --model tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf -p "The meaning to life and the universe is " ``` ```bash llama-server --hf-repo osanseviero/TinyLlama-1.1B-Chat-v1.0-Q4_K_M-GGUF --model tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf -c 2048 ```
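The quant can also be used from Python via llama-cpp-python; a hedged sketch assuming a recent llama-cpp-python with huggingface-hub installed (`Llama.from_pretrained` fetches the GGUF from the Hub):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="osanseviero/TinyLlama-1.1B-Chat-v1.0-Q4_K_M-GGUF",
    filename="tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
)
out = llm("The meaning to life and the universe is ", max_tokens=64)
print(out["choices"][0]["text"])
```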
calcots/continued-pythia-410m_shuffled_dedup_filtered_without_vis_20k-40k
calcots
2024-03-27T16:59:00Z
111
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T16:57:31Z
--- license: apache-2.0 finetuned_from: EleutherAI/pythia-410m --- # calcots/continued-pythia-410m_shuffled_dedup_filtered_without_vis_20k-40k - Checkpoint: `global_step9537` - Tensorboard: see `./tensorboard` - Evaluation: see `./eval`
calcots/continued-pythia-410m_shuffled_dedup_filtered_without_vis_0-20k
calcots
2024-03-27T16:57:27Z
110
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T16:56:32Z
--- license: apache-2.0 finetuned_from: EleutherAI/pythia-410m --- # calcots/continued-pythia-410m_shuffled_dedup_filtered_without_vis_0-20k - Checkpoint: `global_step9537` - Tensorboard: see `./tensorboard` - Evaluation: see `./eval`
Vignesh-M/wav2vec2-tamil-finetuned-M
Vignesh-M
2024-03-27T16:56:18Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-27T15:31:25Z
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer model-index: - name: wav2vec2-tamil-finetuned-M results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-tamil-finetuned-M This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.2.2 - Datasets 2.15.0 - Tokenizers 0.13.2
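A minimal inference sketch with the 🤗 `pipeline` API — the audio path is a placeholder, and 16 kHz mono input is assumed, as is usual for wav2vec2-style models:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Vignesh-M/wav2vec2-tamil-finetuned-M",
)
print(asr("tamil_sample.wav")["text"])  # placeholder path to a Tamil audio clip
```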
louisgrc/model_test
louisgrc
2024-03-27T16:54:37Z
0
0
null
[ "merge", "mergekit", "lazymergekit", "CultriX/NeuralTrix-bf16", "AurelPx/Percival_01-7b-slerp", "license:apache-2.0", "region:us" ]
null
2024-03-27T16:54:36Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - CultriX/NeuralTrix-bf16 - AurelPx/Percival_01-7b-slerp --- # test test is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [CultriX/NeuralTrix-bf16](https://huggingface.co/CultriX/NeuralTrix-bf16) * [AurelPx/Percival_01-7b-slerp](https://huggingface.co/AurelPx/Percival_01-7b-slerp) ## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: CultriX/NeuralTrix-bf16
        layer_range: [0, 32]
      - model: AurelPx/Percival_01-7b-slerp
        layer_range: [0, 32]
merge_method: slerp
base_model: AurelPx/Percival_01-7b-slerp
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
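A usage sketch for the merged model, assuming it loads as a standard Mistral-architecture causal LM (dtype and device settings are illustrative):

```python
import torch
from transformers import AutoTokenizer, pipeline

model_id = "louisgrc/model_test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)
print(pipe("What is a SLERP merge?", max_new_tokens=128)[0]["generated_text"])
```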
Madao-314/q-FrozenLake-v1-4x4-noSlippery
Madao-314
2024-03-27T16:50:05Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-27T16:50:03Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage

```python
model = load_from_hub(repo_id="Madao-314/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
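A hedged continuation showing a greedy rollout with the loaded Q-table — this assumes the pickled dict exposes `qtable` and `env_id` keys, as in the Deep RL course notebooks, and uses the gymnasium step/reset signatures:

```python
import gymnasium as gym
import numpy as np

# `model` loaded as above; exploit the learned values greedily
env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done = False
while not done:
    action = np.argmax(model["qtable"][state])  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```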
iasevoli90/Reinforce-CartPole-v1
iasevoli90
2024-03-27T16:48:23Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-03-27T16:48:13Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
GowthamYarlagadda/llama-2-7b-chat
GowthamYarlagadda
2024-03-27T16:43:59Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-07T18:07:43Z
These are the converted model weights for Llama-2-7B-chat in Huggingface format. Courtesy of [Mirage-Studio.io](https://mirage-studio.io), home of MirageGPT: the private ChatGPT alternative. --- license: other LLAMA 2 COMMUNITY LICENSE AGREEMENT Llama 2 Version Release Date: July 18, 2023 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 2" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. "Llama Materials" means, collectively, Meta's proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved." iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms.
If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ---
0x0mom/nous_r9
0x0mom
2024-03-27T16:42:32Z
90
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T16:41:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
reach-vb/TinyLlama-1.1B-Chat-v1.0-Q8_0-GGUF
reach-vb
2024-03-27T16:42:18Z
2
0
null
[ "gguf", "llama-cpp", "en", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:HuggingFaceH4/ultrafeedback_binarized", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-27T16:42:06Z
--- language: - en license: apache-2.0 tags: - llama-cpp datasets: - cerebras/SlimPajama-627B - bigcode/starcoderdata - HuggingFaceH4/ultrachat_200k - HuggingFaceH4/ultrafeedback_binarized widget: - example_title: Fibonacci (Python) messages: - role: system content: You are a chatbot who can help code! - role: user content: Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI. --- # reach-vb/TinyLlama-1.1B-Chat-v1.0-Q8_0-GGUF This model was converted to GGUF format from [`TinyLlama/TinyLlama-1.1B-Chat-v1.0`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) using llama.cpp. Refer to the [original model card](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) for more details on the model. ## Use with llama.cpp ```bash brew install ggerganov/ggerganov/llama.cpp ``` ```bash llama-cli --hf-repo reach-vb/TinyLlama-1.1B-Chat-v1.0-Q8_0-GGUF --model tinyllama-1.1b-chat-v1.0.Q8_0.gguf -p "The meaning to life and the universe is " ``` ```bash llama-server --hf-repo reach-vb/TinyLlama-1.1B-Chat-v1.0-Q8_0-GGUF --model tinyllama-1.1b-chat-v1.0.Q8_0.gguf -c 2048 ```
wintonYF/SCB3-YOLOv7
wintonYF
2024-03-27T16:34:53Z
2
1
transformers
[ "transformers", "object-detection", "endpoints_compatible", "region:us" ]
object-detection
2024-03-26T11:06:29Z
--- pipeline_tag: object-detection ---
calcots/continued-pythia-410m_shuffled_dedup_filtered_with_vis_0-20k
calcots
2024-03-27T16:34:38Z
114
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-27T16:33:27Z
--- license: apache-2.0 finetuned_from: EleutherAI/pythia-410m --- # calcots/continued-pythia-410m_shuffled_dedup_filtered_with_vis_0-20k - Checkpoint: `global_step9537` - Tensorboard: see `./tensorboard` - Evaluation: see `./eval`
Xenova/text-davinci-002
Xenova
2024-03-27T16:28:41Z
0
2
transformers
[ "transformers", "transformers.js", "tokenizers", "endpoints_compatible", "region:us" ]
null
2023-08-04T09:15:25Z
--- library_name: transformers tags: - transformers.js - tokenizers --- # text-davinci-002 Tokenizer A 🤗-compatible version of the **text-davinci-002 tokenizer** (adapted from [openai/tiktoken](https://github.com/openai/tiktoken)). This means it can be used with Hugging Face libraries including [Transformers](https://github.com/huggingface/transformers), [Tokenizers](https://github.com/huggingface/tokenizers), and [Transformers.js](https://github.com/xenova/transformers.js). ## Example usage: ### Transformers/Tokenizers ```py from transformers import GPT2TokenizerFast tokenizer = GPT2TokenizerFast.from_pretrained('Xenova/text-davinci-002') assert tokenizer.encode('hello world') == [31373, 995] ``` ### Transformers.js ```js import { AutoTokenizer } from '@xenova/transformers'; const tokenizer = await AutoTokenizer.from_pretrained('Xenova/text-davinci-002'); const tokens = tokenizer.encode('hello world'); // [31373, 995] ```