| column | dtype |
|---|---|
| modelId | string |
| author | string |
| last_modified | timestamp[us, tz=UTC] |
| downloads | int64 |
| likes | int64 |
| library_name | string |
| tags | sequence |
| pipeline_tag | string |
| createdAt | timestamp[us, tz=UTC] |
| card | string |
Yuhan123/ppo-1-lr-1e-6-2025-04-16-07-44-51
Yuhan123
2025-04-30T19:39:03Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T19:36:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
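The card above leaves its "How to Get Started" section empty. As a minimal, editor-added sketch, assuming the checkpoint works with the standard `transformers` text-generation pipeline (which its `gpt_neox`/`text-generation` tags suggest), inference could look like:

```python
# Minimal sketch; assumes the checkpoint loads with the standard text-generation
# pipeline, as its gpt_neox / text-generation tags suggest.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Yuhan123/ppo-1-lr-1e-6-2025-04-16-07-44-51",
)
print(generator("The quick brown fox", max_new_tokens=32)[0]["generated_text"])
```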
fbaldassarri/internlm_internlm3-8b-instruct-autoround-int4-gs64-sym
fbaldassarri
2025-04-30T19:38:00Z
0
0
transformers
[ "transformers", "safetensors", "internlm3", "text-generation", "internlm", "autoround", "auto-round", "intel-autoround", "intel", "woq", "gptq", "pytorch", "internlm3-8b", "conversational", "custom_code", "en", "es", "fr", "de", "pt", "ja", "it", "zh", "ko", "ar", "cs", "nl", "base_model:internlm/internlm3-8b-instruct", "base_model:quantized:internlm/internlm3-8b-instruct", "license:apache-2.0", "autotrain_compatible", "4-bit", "intel/auto-round", "region:us" ]
text-generation
2025-04-30T19:35:36Z
--- language: - en - es - fr - de - pt - ja - it - zh - ko - ar - cs - nl pipeline_tag: text-generation license: apache-2.0 library_name: transformers tags: - internlm - autoround - auto-round - intel-autoround - intel - woq - gptq - pytorch - internlm3 - internlm3-8b model_name: Internlm 3 8b instruct base_model: - internlm/internlm3-8b-instruct inference: false model_creator: internlm prompt_template: '{prompt}' quantized_by: fbaldassarri --- ## Model Information Quantized version of [internlm/internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) using torch.float32 for quantization tuning. - 4 bits (INT4) - group size = 64 - Symmetrical Quantization - Method WoQ: SignRound (AutoRound algorithm) Fast and low memory, 2-3X speedup (slight accuracy drop at W4G64) Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.7 Note: this INT4 version of internlm3-8b-instruct has been quantized to run inference on CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.7.tar.gz tar -xvzf v0.4.7.tar.gz cd auto-round-0.4.7 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "internlm/internlm3-8b-instruct" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 4, 64, True, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/internlm_internlm3-8b-instruct-autoround-int4-gs64-sym" autoround.save_quantized(output_dir, format='auto_round', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
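The recipe above covers only quantization, while the card states the INT4 checkpoint targets CPU inference. A hedged loading sketch follows; it assumes the local path matches `output_dir` from Step 3 and that importing `AutoRoundConfig` registers the `auto_round` format with `transformers` (the exact import can differ across AutoRound releases), and `trust_remote_code=True` is needed because internlm3 ships custom code.

```python
# Hedged CPU-inference sketch for the checkpoint produced by Step 3 above.
# Assumptions: the path matches output_dir from the quantization script, and
# importing AutoRoundConfig registers the auto_round backend with transformers
# (the import location may vary across AutoRound releases).
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRoundConfig  # noqa: F401  side effect: backend registration

quantized_dir = "./AutoRound/internlm_internlm3-8b-instruct-autoround-int4-gs64-sym"
model = AutoModelForCausalLM.from_pretrained(
    quantized_dir, device_map="cpu", trust_remote_code=True  # internlm3 ships custom code
)
tokenizer = AutoTokenizer.from_pretrained(quantized_dir, trust_remote_code=True)

inputs = tokenizer("What does INT4 weight-only quantization change?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```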
rbelanec/train_boolq_1745950279
rbelanec
2025-04-30T19:36:07Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "ia3", "generated_from_trainer", "dataset:super_glue", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "region:us" ]
null
2025-04-30T12:08:23Z
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.3 tags: - llama-factory - ia3 - generated_from_trainer datasets: - super_glue model-index: - name: train_boolq_1745950279 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_boolq_1745950279 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the boolq dataset. It achieves the following results on the evaluation set: - Loss: 0.1734 - Num Input Tokens Seen: 37097424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 123 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - training_steps: 40000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:-------:|:-----:|:---------------:|:-----------------:| | 0.3747 | 0.0943 | 200 | 0.6101 | 186768 | | 0.4927 | 0.1886 | 400 | 0.4053 | 369808 | | 0.4778 | 0.2829 | 600 | 0.3221 | 554928 | | 0.1419 | 0.3772 | 800 | 0.2830 | 746560 | | 0.1589 | 0.4715 | 1000 | 0.2755 | 932848 | | 0.2319 | 0.5658 | 1200 | 0.2659 | 1116128 | | 0.3277 | 0.6601 | 1400 | 0.2518 | 1299664 | | 0.1762 | 0.7544 | 1600 | 0.2510 | 1481856 | | 0.2509 | 0.8487 | 1800 | 0.2417 | 1672160 | | 0.2078 | 0.9430 | 2000 | 0.2433 | 1860608 | | 0.2891 | 1.0372 | 2200 | 0.2413 | 2047984 | | 0.2778 | 1.1315 | 2400 | 0.2378 | 2230960 | | 0.1893 | 1.2258 | 2600 | 0.2350 | 2417664 | | 0.2807 | 1.3201 | 2800 | 0.2313 | 2600368 | | 0.075 | 1.4144 | 3000 | 0.2307 | 2786848 | | 0.3852 | 1.5087 | 3200 | 0.2272 | 2972672 | | 0.1396 | 1.6030 | 3400 | 0.2217 | 3154640 | | 0.2861 | 1.6973 | 3600 | 0.2177 | 3339328 | | 0.2532 | 1.7916 | 3800 | 0.2164 | 3522384 | | 0.1929 | 1.8859 | 4000 | 0.2157 | 3712352 | | 0.0955 | 1.9802 | 4200 | 0.2144 | 3899328 | | 0.2602 | 2.0745 | 4400 | 0.2125 | 4085888 | | 0.2049 | 2.1688 | 4600 | 0.2130 | 4271936 | | 0.1363 | 2.2631 | 4800 | 0.2087 | 4456320 | | 0.2189 | 2.3574 | 5000 | 0.2121 | 4638512 | | 0.2196 | 2.4517 | 5200 | 0.2060 | 4830688 | | 0.1345 | 2.5460 | 5400 | 0.2042 | 5016480 | | 0.1343 | 2.6403 | 5600 | 0.2048 | 5204048 | | 0.1826 | 2.7346 | 5800 | 0.2032 | 5383984 | | 0.1884 | 2.8289 | 6000 | 0.2015 | 5574016 | | 0.18 | 2.9231 | 6200 | 0.2020 | 5761616 | | 0.1142 | 3.0174 | 6400 | 0.2032 | 5948128 | | 0.2058 | 3.1117 | 6600 | 0.1978 | 6134304 | | 0.2255 | 3.2060 | 6800 | 0.1971 | 6319616 | | 0.3024 | 3.3003 | 7000 | 0.1976 | 6505744 | | 0.1922 | 3.3946 | 7200 | 0.1954 | 6692208 | | 0.1627 | 3.4889 | 7400 | 0.1989 | 6875616 | | 0.168 | 3.5832 | 7600 | 0.1959 | 7059472 | | 0.2102 | 3.6775 | 7800 | 0.1963 | 7243472 | | 0.2102 | 3.7718 | 8000 | 0.1962 | 7428048 | | 0.0869 | 3.8661 | 8200 | 0.1937 | 7611184 | | 0.0836 | 3.9604 | 8400 | 0.1958 | 7796112 | | 0.2088 | 4.0547 | 8600 | 0.1955 | 7979520 | | 0.1416 | 4.1490 | 8800 | 0.1923 | 8167776 | | 0.0892 | 4.2433 | 9000 | 0.1918 | 8355856 | | 0.2403 | 4.3376 | 9200 | 0.1918 | 8543120 | | 
0.1519 | 4.4319 | 9400 | 0.1874 | 8727088 | | 0.1449 | 4.5262 | 9600 | 0.1881 | 8914992 | | 0.1177 | 4.6205 | 9800 | 0.1884 | 9095040 | | 0.1718 | 4.7148 | 10000 | 0.1881 | 9283072 | | 0.0807 | 4.8091 | 10200 | 0.1880 | 9467600 | | 0.1235 | 4.9033 | 10400 | 0.1869 | 9653456 | | 0.1196 | 4.9976 | 10600 | 0.1884 | 9841232 | | 0.1011 | 5.0919 | 10800 | 0.1873 | 10025504 | | 0.2014 | 5.1862 | 11000 | 0.1906 | 10216464 | | 0.3322 | 5.2805 | 11200 | 0.1863 | 10402448 | | 0.1154 | 5.3748 | 11400 | 0.1855 | 10586976 | | 0.1315 | 5.4691 | 11600 | 0.1855 | 10770896 | | 0.1692 | 5.5634 | 11800 | 0.1837 | 10959424 | | 0.2054 | 5.6577 | 12000 | 0.1832 | 11146816 | | 0.2848 | 5.7520 | 12200 | 0.1838 | 11328528 | | 0.2313 | 5.8463 | 12400 | 0.1821 | 11515600 | | 0.156 | 5.9406 | 12600 | 0.1819 | 11697056 | | 0.0869 | 6.0349 | 12800 | 0.1828 | 11884336 | | 0.086 | 6.1292 | 13000 | 0.1824 | 12074128 | | 0.1994 | 6.2235 | 13200 | 0.1821 | 12258064 | | 0.1672 | 6.3178 | 13400 | 0.1822 | 12443248 | | 0.0723 | 6.4121 | 13600 | 0.1824 | 12626480 | | 0.2084 | 6.5064 | 13800 | 0.1825 | 12813808 | | 0.1675 | 6.6007 | 14000 | 0.1833 | 12998256 | | 0.1439 | 6.6950 | 14200 | 0.1806 | 13180928 | | 0.1053 | 6.7893 | 14400 | 0.1791 | 13364368 | | 0.2958 | 6.8835 | 14600 | 0.1790 | 13552272 | | 0.1603 | 6.9778 | 14800 | 0.1788 | 13735904 | | 0.1454 | 7.0721 | 15000 | 0.1784 | 13924000 | | 0.179 | 7.1664 | 15200 | 0.1792 | 14113184 | | 0.146 | 7.2607 | 15400 | 0.1789 | 14295568 | | 0.1527 | 7.3550 | 15600 | 0.1785 | 14480560 | | 0.2474 | 7.4493 | 15800 | 0.1778 | 14664736 | | 0.1638 | 7.5436 | 16000 | 0.1778 | 14852128 | | 0.1835 | 7.6379 | 16200 | 0.1792 | 15033840 | | 0.1075 | 7.7322 | 16400 | 0.1793 | 15219136 | | 0.0843 | 7.8265 | 16600 | 0.1799 | 15404160 | | 0.149 | 7.9208 | 16800 | 0.1789 | 15589632 | | 0.0567 | 8.0151 | 17000 | 0.1808 | 15781760 | | 0.2924 | 8.1094 | 17200 | 0.1785 | 15967648 | | 0.1977 | 8.2037 | 17400 | 0.1769 | 16155248 | | 0.2233 | 8.2980 | 17600 | 0.1764 | 16343648 | | 0.1287 | 8.3923 | 17800 | 0.1774 | 16523360 | | 0.0674 | 8.4866 | 18000 | 0.1779 | 16709008 | | 0.1918 | 8.5809 | 18200 | 0.1769 | 16893648 | | 0.0863 | 8.6752 | 18400 | 0.1785 | 17079824 | | 0.3264 | 8.7694 | 18600 | 0.1769 | 17265072 | | 0.3518 | 8.8637 | 18800 | 0.1784 | 17445904 | | 0.0965 | 8.9580 | 19000 | 0.1781 | 17631504 | | 0.1756 | 9.0523 | 19200 | 0.1752 | 17818512 | | 0.0614 | 9.1466 | 19400 | 0.1781 | 18005200 | | 0.0944 | 9.2409 | 19600 | 0.1798 | 18190416 | | 0.1835 | 9.3352 | 19800 | 0.1783 | 18373200 | | 0.3181 | 9.4295 | 20000 | 0.1765 | 18556672 | | 0.1796 | 9.5238 | 20200 | 0.1783 | 18742816 | | 0.1772 | 9.6181 | 20400 | 0.1766 | 18930224 | | 0.1901 | 9.7124 | 20600 | 0.1750 | 19115456 | | 0.0838 | 9.8067 | 20800 | 0.1759 | 19296016 | | 0.1084 | 9.9010 | 21000 | 0.1762 | 19482416 | | 0.0814 | 9.9953 | 21200 | 0.1760 | 19668640 | | 0.1337 | 10.0896 | 21400 | 0.1755 | 19860880 | | 0.1011 | 10.1839 | 21600 | 0.1756 | 20052672 | | 0.1261 | 10.2782 | 21800 | 0.1763 | 20236224 | | 0.086 | 10.3725 | 22000 | 0.1763 | 20421632 | | 0.1666 | 10.4668 | 22200 | 0.1751 | 20608320 | | 0.0767 | 10.5611 | 22400 | 0.1761 | 20788112 | | 0.1524 | 10.6554 | 22600 | 0.1766 | 20969744 | | 0.0645 | 10.7496 | 22800 | 0.1749 | 21151648 | | 0.2225 | 10.8439 | 23000 | 0.1744 | 21335600 | | 0.0844 | 10.9382 | 23200 | 0.1759 | 21522352 | | 0.0813 | 11.0325 | 23400 | 0.1758 | 21709568 | | 0.1755 | 11.1268 | 23600 | 0.1755 | 21894592 | | 0.0949 | 11.2211 | 23800 | 0.1746 | 22079344 | | 0.1361 | 11.3154 | 24000 | 0.1765 | 22269152 | | 
0.0845 | 11.4097 | 24200 | 0.1749 | 22451760 | | 0.1977 | 11.5040 | 24400 | 0.1755 | 22639312 | | 0.0848 | 11.5983 | 24600 | 0.1756 | 22821728 | | 0.1916 | 11.6926 | 24800 | 0.1762 | 23005696 | | 0.0669 | 11.7869 | 25000 | 0.1750 | 23192112 | | 0.0942 | 11.8812 | 25200 | 0.1748 | 23373840 | | 0.1216 | 11.9755 | 25400 | 0.1757 | 23559968 | | 0.1098 | 12.0698 | 25600 | 0.1745 | 23743680 | | 0.2496 | 12.1641 | 25800 | 0.1749 | 23931472 | | 0.0835 | 12.2584 | 26000 | 0.1759 | 24118800 | | 0.1723 | 12.3527 | 26200 | 0.1755 | 24308976 | | 0.0358 | 12.4470 | 26400 | 0.1747 | 24493584 | | 0.1194 | 12.5413 | 26600 | 0.1747 | 24679264 | | 0.0989 | 12.6355 | 26800 | 0.1749 | 24861136 | | 0.1756 | 12.7298 | 27000 | 0.1739 | 25046496 | | 0.2861 | 12.8241 | 27200 | 0.1754 | 25230592 | | 0.1567 | 12.9184 | 27400 | 0.1747 | 25411904 | | 0.2287 | 13.0127 | 27600 | 0.1748 | 25595280 | | 0.2106 | 13.1070 | 27800 | 0.1746 | 25777696 | | 0.0829 | 13.2013 | 28000 | 0.1749 | 25963552 | | 0.0886 | 13.2956 | 28200 | 0.1760 | 26150464 | | 0.1782 | 13.3899 | 28400 | 0.1759 | 26335552 | | 0.2754 | 13.4842 | 28600 | 0.1742 | 26524096 | | 0.1365 | 13.5785 | 28800 | 0.1734 | 26713392 | | 0.0954 | 13.6728 | 29000 | 0.1747 | 26900464 | | 0.2517 | 13.7671 | 29200 | 0.1736 | 27087040 | | 0.0939 | 13.8614 | 29400 | 0.1735 | 27270960 | | 0.0904 | 13.9557 | 29600 | 0.1744 | 27457936 | | 0.1425 | 14.0500 | 29800 | 0.1755 | 27639216 | | 0.3097 | 14.1443 | 30000 | 0.1743 | 27829056 | | 0.1465 | 14.2386 | 30200 | 0.1748 | 28019840 | | 0.1265 | 14.3329 | 30400 | 0.1735 | 28205616 | | 0.1129 | 14.4272 | 30600 | 0.1747 | 28390464 | | 0.1108 | 14.5215 | 30800 | 0.1743 | 28571424 | | 0.11 | 14.6157 | 31000 | 0.1736 | 28758128 | | 0.205 | 14.7100 | 31200 | 0.1743 | 28942096 | | 0.13 | 14.8043 | 31400 | 0.1747 | 29127440 | | 0.0576 | 14.8986 | 31600 | 0.1751 | 29310016 | | 0.0371 | 14.9929 | 31800 | 0.1735 | 29497520 | | 0.1972 | 15.0872 | 32000 | 0.1748 | 29680160 | | 0.0715 | 15.1815 | 32200 | 0.1746 | 29872080 | | 0.1121 | 15.2758 | 32400 | 0.1753 | 30060048 | | 0.2687 | 15.3701 | 32600 | 0.1751 | 30243024 | | 0.1339 | 15.4644 | 32800 | 0.1744 | 30433968 | | 0.0611 | 15.5587 | 33000 | 0.1745 | 30617936 | | 0.0225 | 15.6530 | 33200 | 0.1744 | 30802960 | | 0.0618 | 15.7473 | 33400 | 0.1746 | 30985296 | | 0.0761 | 15.8416 | 33600 | 0.1747 | 31168496 | | 0.1043 | 15.9359 | 33800 | 0.1740 | 31350688 | | 0.1142 | 16.0302 | 34000 | 0.1743 | 31530704 | | 0.0584 | 16.1245 | 34200 | 0.1745 | 31718960 | | 0.1624 | 16.2188 | 34400 | 0.1746 | 31901696 | | 0.0988 | 16.3131 | 34600 | 0.1747 | 32092528 | | 0.2405 | 16.4074 | 34800 | 0.1741 | 32279920 | | 0.2101 | 16.5017 | 35000 | 0.1740 | 32461952 | | 0.0511 | 16.5959 | 35200 | 0.1739 | 32647696 | | 0.0933 | 16.6902 | 35400 | 0.1745 | 32828656 | | 0.0327 | 16.7845 | 35600 | 0.1745 | 33016320 | | 0.0571 | 16.8788 | 35800 | 0.1741 | 33202224 | | 0.1662 | 16.9731 | 36000 | 0.1745 | 33385424 | | 0.0461 | 17.0674 | 36200 | 0.1745 | 33572672 | | 0.0935 | 17.1617 | 36400 | 0.1745 | 33759120 | | 0.1082 | 17.2560 | 36600 | 0.1738 | 33946224 | | 0.0801 | 17.3503 | 36800 | 0.1743 | 34137504 | | 0.1475 | 17.4446 | 37000 | 0.1747 | 34322448 | | 0.1563 | 17.5389 | 37200 | 0.1741 | 34506880 | | 0.3343 | 17.6332 | 37400 | 0.1739 | 34692032 | | 0.1329 | 17.7275 | 37600 | 0.1743 | 34873984 | | 0.174 | 17.8218 | 37800 | 0.1743 | 35058576 | | 0.0348 | 17.9161 | 38000 | 0.1739 | 35245152 | | 0.0545 | 18.0104 | 38200 | 0.1747 | 35431232 | | 0.0686 | 18.1047 | 38400 | 0.1749 | 35615248 | | 0.017 | 18.1990 | 38600 | 0.1747 
| 35798688 | | 0.2448 | 18.2933 | 38800 | 0.1746 | 35984224 | | 0.1179 | 18.3876 | 39000 | 0.1750 | 36168064 | | 0.0444 | 18.4818 | 39200 | 0.1748 | 36351216 | | 0.0805 | 18.5761 | 39400 | 0.1740 | 36537456 | | 0.0369 | 18.6704 | 39600 | 0.1744 | 36723376 | | 0.0285 | 18.7647 | 39800 | 0.1744 | 36910256 | | 0.0956 | 18.8590 | 40000 | 0.1744 | 37097424 | ### Framework versions - PEFT 0.15.2.dev0 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
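The card above reports training metrics but no usage snippet. A hedged, editor-added sketch of attaching the IA³ adapter with PEFT follows; the base model id comes from the card, while the BoolQ-style prompt is only illustrative (LLaMA-Factory's actual training template may differ).

```python
# Hedged usage sketch: attach the IA^3 adapter from this card to its base model.
# Assumptions: access to the base checkpoint, and an illustrative BoolQ-style
# prompt (the real LLaMA-Factory template may differ).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "rbelanec/train_boolq_1745950279")

prompt = (
    "Passage: The sky appears blue because of Rayleigh scattering.\n"
    "Question: is the sky blue because of scattering?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=5)[0], skip_special_tokens=True))
```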
fbaldassarri/internlm_internlm3-8b-instruct-autoround-int4-gs64-asym
fbaldassarri
2025-04-30T19:33:04Z
0
0
transformers
[ "transformers", "safetensors", "internlm3", "text-generation", "internlm", "autoround", "auto-round", "intel-autoround", "intel", "woq", "gptq", "pytorch", "internlm3-8b", "conversational", "custom_code", "en", "es", "fr", "de", "pt", "ja", "it", "zh", "ko", "ar", "cs", "nl", "base_model:internlm/internlm3-8b-instruct", "base_model:quantized:internlm/internlm3-8b-instruct", "license:apache-2.0", "autotrain_compatible", "4-bit", "intel/auto-round", "region:us" ]
text-generation
2025-04-30T19:30:38Z
--- language: - en - es - fr - de - pt - ja - it - zh - ko - ar - cs - nl pipeline_tag: text-generation license: apache-2.0 library_name: transformers tags: - internlm - autoround - auto-round - intel-autoround - intel - woq - gptq - pytorch - internlm3 - internlm3-8b model_name: Internlm 3 8b instruct base_model: - internlm/internlm3-8b-instruct inference: false model_creator: internlm prompt_template: '{prompt}' quantized_by: fbaldassarri --- ## Model Information Quantized version of [internlm/internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) using torch.float32 for quantization tuning. - 4 bits (INT4) - group size = 64 - Asymmetrical Quantization - Method WoQ: SignRound (AutoRound algorithm) Fast and low memory, 2-3X speedup (slight accuracy drop at W4G64) Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.7 Note: this INT4 version of internlm3-8b-instruct has been quantized to run inference on CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.7.tar.gz tar -xvzf v0.4.7.tar.gz cd auto-round-0.4.7 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "internlm/internlm3-8b-instruct" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 4, 64, False, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/internlm_internlm3-8b-instruct-autoround-int4-gs64-asym" autoround.save_quantized(output_dir, format='auto_round', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
Rithvik762/gemma_medical
Rithvik762
2025-04-30T19:32:43Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:adapter:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "region:us" ]
null
2025-04-30T19:15:22Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
rbelanec/train_multirc_1745950264
rbelanec
2025-04-30T19:32:40Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "prompt-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2025-04-30T05:26:16Z
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - llama-factory - prompt-tuning - generated_from_trainer model-index: - name: train_multirc_1745950264 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_multirc_1745950264 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the multirc dataset. It achieves the following results on the evaluation set: - Loss: 0.3193 - Num Input Tokens Seen: 75778784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.3 - train_batch_size: 2 - eval_batch_size: 2 - seed: 123 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - training_steps: 40000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:------:|:-----:|:---------------:|:-----------------:| | 0.4564 | 0.0326 | 200 | 0.4853 | 378944 | | 0.3723 | 0.0653 | 400 | 0.3704 | 758192 | | 0.3559 | 0.0979 | 600 | 0.3773 | 1141408 | | 0.3491 | 0.1305 | 800 | 0.3604 | 1518336 | | 0.3909 | 0.1631 | 1000 | 0.3607 | 1901264 | | 0.3915 | 0.1958 | 1200 | 0.3488 | 2279552 | | 0.3655 | 0.2284 | 1400 | 0.3567 | 2668256 | | 0.375 | 0.2610 | 1600 | 0.3505 | 3047328 | | 0.4324 | 0.2937 | 1800 | 0.4502 | 3429984 | | 0.3761 | 0.3263 | 2000 | 0.4696 | 3814576 | | 0.3441 | 0.3589 | 2200 | 0.3639 | 4190352 | | 0.3423 | 0.3915 | 2400 | 0.3468 | 4567440 | | 0.3596 | 0.4242 | 2600 | 0.3570 | 4944384 | | 0.3747 | 0.4568 | 2800 | 0.3450 | 5325216 | | 0.3322 | 0.4894 | 3000 | 0.4611 | 5698896 | | 0.3631 | 0.5221 | 3200 | 0.3524 | 6074432 | | 0.357 | 0.5547 | 3400 | 0.3523 | 6454208 | | 0.3337 | 0.5873 | 3600 | 0.3636 | 6831056 | | 0.3765 | 0.6200 | 3800 | 0.3738 | 7209536 | | 0.3645 | 0.6526 | 4000 | 0.4008 | 7593024 | | 0.3501 | 0.6852 | 4200 | 0.3490 | 7977072 | | 0.3648 | 0.7178 | 4400 | 0.3519 | 8353296 | | 0.3577 | 0.7505 | 4600 | 0.3439 | 8733232 | | 0.3691 | 0.7831 | 4800 | 0.3404 | 9113632 | | 0.3779 | 0.8157 | 5000 | 0.3713 | 9487952 | | 0.3499 | 0.8484 | 5200 | 0.3861 | 9861104 | | 0.3244 | 0.8810 | 5400 | 0.3939 | 10239088 | | 0.3561 | 0.9136 | 5600 | 0.3590 | 10619840 | | 0.4045 | 0.9462 | 5800 | 0.3432 | 10994720 | | 0.3709 | 0.9789 | 6000 | 0.3536 | 11376976 | | 0.3859 | 1.0114 | 6200 | 0.3500 | 11758656 | | 0.4941 | 1.0440 | 6400 | 0.3453 | 12144016 | | 0.373 | 1.0767 | 6600 | 0.3695 | 12531776 | | 0.3413 | 1.1093 | 6800 | 0.3387 | 12905136 | | 0.4019 | 1.1419 | 7000 | 0.3544 | 13278096 | | 0.3835 | 1.1746 | 7200 | 0.4184 | 13651520 | | 0.3365 | 1.2072 | 7400 | 0.3427 | 14034784 | | 0.3167 | 1.2398 | 7600 | 0.3402 | 14415120 | | 0.3442 | 1.2725 | 7800 | 0.3339 | 14794784 | | 0.3655 | 1.3051 | 8000 | 0.3357 | 15176240 | | 0.3667 | 1.3377 | 8200 | 0.3359 | 15548080 | | 0.3315 | 1.3703 | 8400 | 0.3355 | 15926832 | | 0.3966 | 1.4030 | 8600 | 0.3776 | 16305344 | | 0.3021 | 1.4356 | 8800 | 0.3357 | 16686528 | | 0.3295 | 1.4682 | 9000 | 0.3456 | 17073648 | | 0.3563 | 1.5009 | 9200 | 0.3692 | 
17457952 | | 0.3424 | 1.5335 | 9400 | 0.3505 | 17831104 | | 0.3343 | 1.5661 | 9600 | 0.3346 | 18215168 | | 0.3255 | 1.5987 | 9800 | 0.3486 | 18592816 | | 0.3551 | 1.6314 | 10000 | 0.3564 | 18972864 | | 0.3223 | 1.6640 | 10200 | 0.3309 | 19350160 | | 0.3489 | 1.6966 | 10400 | 0.3513 | 19735024 | | 0.2748 | 1.7293 | 10600 | 0.3290 | 20108768 | | 0.3823 | 1.7619 | 10800 | 0.3371 | 20489424 | | 0.3643 | 1.7945 | 11000 | 0.3394 | 20870832 | | 0.3802 | 1.8271 | 11200 | 0.3317 | 21240960 | | 0.3923 | 1.8598 | 11400 | 0.3403 | 21615744 | | 0.3803 | 1.8924 | 11600 | 0.3289 | 21991984 | | 0.3738 | 1.9250 | 11800 | 0.3350 | 22366624 | | 0.3144 | 1.9577 | 12000 | 0.3476 | 22746000 | | 0.3205 | 1.9903 | 12200 | 0.3514 | 23122688 | | 0.3868 | 2.0228 | 12400 | 0.3390 | 23494112 | | 0.3732 | 2.0555 | 12600 | 0.3319 | 23876160 | | 0.3627 | 2.0881 | 12800 | 0.3286 | 24261904 | | 0.3366 | 2.1207 | 13000 | 0.3427 | 24643776 | | 0.3541 | 2.1534 | 13200 | 0.3607 | 25020496 | | 0.3233 | 2.1860 | 13400 | 0.3272 | 25391072 | | 0.3341 | 2.2186 | 13600 | 0.3402 | 25762416 | | 0.3562 | 2.2512 | 13800 | 0.3296 | 26139456 | | 0.3395 | 2.2839 | 14000 | 0.3311 | 26511344 | | 0.3766 | 2.3165 | 14200 | 0.3433 | 26891616 | | 0.3558 | 2.3491 | 14400 | 0.3296 | 27274960 | | 0.3056 | 2.3818 | 14600 | 0.4714 | 27652224 | | 0.3646 | 2.4144 | 14800 | 0.3235 | 28033168 | | 0.3433 | 2.4470 | 15000 | 0.3337 | 28414784 | | 0.3656 | 2.4796 | 15200 | 0.3255 | 28787168 | | 0.3642 | 2.5123 | 15400 | 0.3283 | 29164512 | | 0.3603 | 2.5449 | 15600 | 0.3529 | 29545056 | | 0.3109 | 2.5775 | 15800 | 0.3305 | 29922176 | | 0.3171 | 2.6102 | 16000 | 0.3256 | 30304336 | | 0.3595 | 2.6428 | 16200 | 0.3384 | 30688608 | | 0.2938 | 2.6754 | 16400 | 0.3256 | 31067744 | | 0.3434 | 2.7081 | 16600 | 0.3326 | 31455328 | | 0.3638 | 2.7407 | 16800 | 0.3246 | 31833136 | | 0.3623 | 2.7733 | 17000 | 0.3232 | 32213296 | | 0.3703 | 2.8059 | 17200 | 0.3346 | 32588128 | | 0.3248 | 2.8386 | 17400 | 0.3223 | 32971552 | | 0.3379 | 2.8712 | 17600 | 0.3280 | 33356064 | | 0.3434 | 2.9038 | 17800 | 0.3244 | 33739984 | | 0.288 | 2.9365 | 18000 | 0.3247 | 34121824 | | 0.3275 | 2.9691 | 18200 | 0.3452 | 34498368 | | 0.2901 | 3.0016 | 18400 | 0.3241 | 34866272 | | 0.3238 | 3.0343 | 18600 | 0.3236 | 35258768 | | 0.3617 | 3.0669 | 18800 | 0.3654 | 35644416 | | 0.3463 | 3.0995 | 19000 | 0.3730 | 36017808 | | 0.3218 | 3.1321 | 19200 | 0.3243 | 36393536 | | 0.3652 | 3.1648 | 19400 | 0.3416 | 36770432 | | 0.3161 | 3.1974 | 19600 | 0.3264 | 37152448 | | 0.3487 | 3.2300 | 19800 | 0.3428 | 37532496 | | 0.3284 | 3.2627 | 20000 | 0.3321 | 37910480 | | 0.3752 | 3.2953 | 20200 | 0.3265 | 38286080 | | 0.3216 | 3.3279 | 20400 | 0.3320 | 38664512 | | 0.3357 | 3.3606 | 20600 | 0.3293 | 39053472 | | 0.3281 | 3.3932 | 20800 | 0.3502 | 39432032 | | 0.3459 | 3.4258 | 21000 | 0.3226 | 39812704 | | 0.3314 | 3.4584 | 21200 | 0.3277 | 40191088 | | 0.422 | 3.4911 | 21400 | 0.3282 | 40567216 | | 0.3273 | 3.5237 | 21600 | 0.3569 | 40947696 | | 0.2933 | 3.5563 | 21800 | 0.3224 | 41330624 | | 0.4252 | 3.5890 | 22000 | 0.3256 | 41708800 | | 0.3007 | 3.6216 | 22200 | 0.3229 | 42087824 | | 0.324 | 3.6542 | 22400 | 0.3247 | 42461936 | | 0.3796 | 3.6868 | 22600 | 0.3240 | 42843696 | | 0.2865 | 3.7195 | 22800 | 0.3220 | 43221120 | | 0.3379 | 3.7521 | 23000 | 0.3212 | 43597776 | | 0.3036 | 3.7847 | 23200 | 0.3242 | 43979312 | | 0.3774 | 3.8174 | 23400 | 0.3328 | 44354480 | | 0.3691 | 3.8500 | 23600 | 0.3250 | 44727696 | | 0.361 | 3.8826 | 23800 | 0.3240 | 45108608 | | 0.3729 | 3.9152 | 24000 | 0.3245 | 45482928 
| | 0.3025 | 3.9479 | 24200 | 0.3228 | 45861584 | | 0.3069 | 3.9805 | 24400 | 0.3221 | 46243072 | | 0.338 | 4.0131 | 24600 | 0.3205 | 46619680 | | 0.3089 | 4.0457 | 24800 | 0.3210 | 47007360 | | 0.2882 | 4.0783 | 25000 | 0.3292 | 47391600 | | 0.3406 | 4.1109 | 25200 | 0.3224 | 47768320 | | 0.3066 | 4.1436 | 25400 | 0.3206 | 48143424 | | 0.4355 | 4.1762 | 25600 | 0.3240 | 48524368 | | 0.3275 | 4.2088 | 25800 | 0.3205 | 48899856 | | 0.2758 | 4.2415 | 26000 | 0.3293 | 49280208 | | 0.3199 | 4.2741 | 26200 | 0.3263 | 49658080 | | 0.3332 | 4.3067 | 26400 | 0.3221 | 50034848 | | 0.3795 | 4.3393 | 26600 | 0.3269 | 50413376 | | 0.3959 | 4.3720 | 26800 | 0.3216 | 50793248 | | 0.3603 | 4.4046 | 27000 | 0.3215 | 51170976 | | 0.321 | 4.4372 | 27200 | 0.3250 | 51559504 | | 0.3445 | 4.4699 | 27400 | 0.3210 | 51928704 | | 0.3458 | 4.5025 | 27600 | 0.3215 | 52297776 | | 0.2784 | 4.5351 | 27800 | 0.3208 | 52669472 | | 0.3428 | 4.5677 | 28000 | 0.3275 | 53045856 | | 0.3063 | 4.6004 | 28200 | 0.3216 | 53429232 | | 0.2983 | 4.6330 | 28400 | 0.3279 | 53810560 | | 0.319 | 4.6656 | 28600 | 0.3227 | 54191536 | | 0.3985 | 4.6983 | 28800 | 0.3220 | 54572176 | | 0.3425 | 4.7309 | 29000 | 0.3200 | 54952896 | | 0.236 | 4.7635 | 29200 | 0.3214 | 55327776 | | 0.3071 | 4.7961 | 29400 | 0.3208 | 55708896 | | 0.3007 | 4.8288 | 29600 | 0.3212 | 56085712 | | 0.3351 | 4.8614 | 29800 | 0.3227 | 56467376 | | 0.3147 | 4.8940 | 30000 | 0.3314 | 56841328 | | 0.3661 | 4.9267 | 30200 | 0.3220 | 57227184 | | 0.2419 | 4.9593 | 30400 | 0.3258 | 57605632 | | 0.3353 | 4.9919 | 30600 | 0.3199 | 57987472 | | 0.335 | 5.0245 | 30800 | 0.3206 | 58367056 | | 0.3406 | 5.0571 | 31000 | 0.3249 | 58746720 | | 0.3194 | 5.0897 | 31200 | 0.3243 | 59124272 | | 0.3977 | 5.1224 | 31400 | 0.3198 | 59504688 | | 0.26 | 5.1550 | 31600 | 0.3204 | 59875840 | | 0.3123 | 5.1876 | 31800 | 0.3194 | 60247360 | | 0.3155 | 5.2202 | 32000 | 0.3232 | 60622464 | | 0.3201 | 5.2529 | 32200 | 0.3215 | 61006768 | | 0.3427 | 5.2855 | 32400 | 0.3200 | 61386992 | | 0.3274 | 5.3181 | 32600 | 0.3216 | 61770000 | | 0.3653 | 5.3508 | 32800 | 0.3199 | 62154640 | | 0.2958 | 5.3834 | 33000 | 0.3205 | 62541664 | | 0.2897 | 5.4160 | 33200 | 0.3193 | 62912976 | | 0.3029 | 5.4486 | 33400 | 0.3222 | 63289520 | | 0.3473 | 5.4813 | 33600 | 0.3204 | 63668416 | | 0.2732 | 5.5139 | 33800 | 0.3205 | 64043792 | | 0.3241 | 5.5465 | 34000 | 0.3200 | 64433840 | | 0.2618 | 5.5792 | 34200 | 0.3214 | 64808624 | | 0.2975 | 5.6118 | 34400 | 0.3199 | 65182704 | | 0.326 | 5.6444 | 34600 | 0.3209 | 65562192 | | 0.3478 | 5.6771 | 34800 | 0.3232 | 65940816 | | 0.3652 | 5.7097 | 35000 | 0.3198 | 66326768 | | 0.3207 | 5.7423 | 35200 | 0.3226 | 66705744 | | 0.3184 | 5.7749 | 35400 | 0.3231 | 67084928 | | 0.3183 | 5.8076 | 35600 | 0.3237 | 67462064 | | 0.2891 | 5.8402 | 35800 | 0.3237 | 67846112 | | 0.3224 | 5.8728 | 36000 | 0.3218 | 68221552 | | 0.2998 | 5.9055 | 36200 | 0.3216 | 68606416 | | 0.3256 | 5.9381 | 36400 | 0.3235 | 68980176 | | 0.3077 | 5.9707 | 36600 | 0.3209 | 69349984 | | 0.2706 | 6.0033 | 36800 | 0.3216 | 69729984 | | 0.3618 | 6.0359 | 37000 | 0.3231 | 70107936 | | 0.3143 | 6.0685 | 37200 | 0.3212 | 70487856 | | 0.3113 | 6.1012 | 37400 | 0.3203 | 70865792 | | 0.2601 | 6.1338 | 37600 | 0.3201 | 71244784 | | 0.3489 | 6.1664 | 37800 | 0.3193 | 71630704 | | 0.2823 | 6.1990 | 38000 | 0.3195 | 72002688 | | 0.3585 | 6.2317 | 38200 | 0.3198 | 72385776 | | 0.2892 | 6.2643 | 38400 | 0.3207 | 72773152 | | 0.3261 | 6.2969 | 38600 | 0.3205 | 73149584 | | 0.274 | 6.3296 | 38800 | 0.3197 | 73519536 | | 0.2749 
| 6.3622 | 39000 | 0.3201 | 73902896 | | 0.3728 | 6.3948 | 39200 | 0.3206 | 74278960 | | 0.3704 | 6.4274 | 39400 | 0.3205 | 74655728 | | 0.2959 | 6.4601 | 39600 | 0.3202 | 75025808 | | 0.321 | 6.4927 | 39800 | 0.3202 | 75402576 | | 0.2811 | 6.5253 | 40000 | 0.3206 | 75778784 | ### Framework versions - PEFT 0.15.2.dev0 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
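No inference example is given for this prompt-tuning adapter either. A hedged sketch using PEFT together with the Llama-3 chat template is shown below; the MultiRC-style message content is illustrative, and the base model is gated on the Hub.

```python
# Hedged sketch for the prompt-tuning adapter above. Assumptions: access to the
# gated Llama-3 base, and an illustrative MultiRC-style user message.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "rbelanec/train_multirc_1745950264")

messages = [{"role": "user", "content": "Paragraph: <text> Question: <q> Candidate answer: <a> Is the candidate answer correct?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=5)[0], skip_special_tokens=True))
```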
ttn1410/Indicator-Consideration
ttn1410
2025-04-30T19:32:36Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma2", "trl", "en", "base_model:unsloth/gemma-2-2b-bnb-4bit", "base_model:finetune:unsloth/gemma-2-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T19:32:22Z
--- base_model: unsloth/gemma-2-2b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ttn1410 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2-2b-bnb-4bit This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
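The Unsloth card above gives no loading snippet. As a hedged sketch, assuming the pushed weights are a full model loadable directly through `transformers` (per the repo's library tag); if only a LoRA adapter was uploaded, it would instead need to be attached to the base model with PEFT.

```python
# Hedged loading sketch; assumes the pushed weights are a full model loadable
# directly with transformers (per the library tag). If only a LoRA adapter was
# uploaded, attach it to the base model with PEFT instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ttn1410/Indicator-Consideration"
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```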
facebook/EdgeTAM
facebook
2025-04-30T19:29:47Z
0
3
edgetam
[ "edgetam", "arxiv:2501.07256", "license:apache-2.0", "region:us" ]
null
2025-04-30T18:47:52Z
--- license: apache-2.0 library_name: edgetam --- # Model Details [\[📃 Tech Report\]](https://arxiv.org/abs/2501.07256) [\[📂 Github\]](https://github.com/facebookresearch/EdgeTAM) [\[🤗 Demo\]](https://huggingface.co/spaces/facebook/EdgeTAM) EdgeTAM is an on-device, executable variant of SAM 2 for promptable segmentation and tracking in videos. It runs 22× faster than SAM 2 and achieves 16 FPS on an iPhone 15 Pro Max without quantization. # How to use We provide inference code and local deployment instructions at https://github.com/facebookresearch/EdgeTAM. You can find more details in the GitHub repo. # Citation If you find our code useful for your research, please consider citing: ``` @article{zhou2025edgetam, title={EdgeTAM: On-Device Track Anything Model}, author={Zhou, Chong and Zhu, Chenchen and Xiong, Yunyang and Suri, Saksham and Xiao, Fanyi and Wu, Lemeng and Krishnamoorthi, Raghuraman and Dai, Bo and Loy, Chen Change and Chandra, Vikas and Soran, Bilge}, journal={arXiv preprint arXiv:2501.07256}, year={2025} } ```
Yuhan123/ppo-reading-level-full-question-grad-1-steps-10000-epoch-999-best-eval-score-0.202
Yuhan123
2025-04-30T19:29:14Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T19:26:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Yuhan123/ppo-reading-level-preschool-1-steps-100002025-04-17-06-57-16-epoch-999-eval-score-0.035
Yuhan123
2025-04-30T19:26:04Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T19:23:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TongZheng1999/gemma-2-2b-it-star-nl-OP-final_v2_10-2-3Rounds-iter-1
TongZheng1999
2025-04-30T19:25:27Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "alignment-handbook", "trl", "sft", "conversational", "base_model:google/gemma-2-2b-it", "base_model:finetune:google/gemma-2-2b-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T19:18:38Z
--- base_model: google/gemma-2-2b-it library_name: transformers model_name: gemma-2-2b-it-star-nl-OP-final_v2_10-2-3Rounds-iter-1 tags: - generated_from_trainer - alignment-handbook - trl - sft licence: license --- # Model Card for gemma-2-2b-it-star-nl-OP-final_v2_10-2-3Rounds-iter-1 This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="TongZheng1999/gemma-2-2b-it-star-nl-OP-final_v2_10-2-3Rounds-iter-1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kidzheng/huggingface/runs/r71njk4z) This model was trained with SFT. ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.0 - Pytorch: 2.6.0 - Datasets: 3.3.1 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Eskender/products-ranker
Eskender
2025-04-30T19:25:21Z
175
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-10T10:31:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sergioalves/b3edbc62-1c39-47e4-983e-ed12c12f4f32
sergioalves
2025-04-30T19:23:44Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:Artples/L-MChat-7b", "base_model:adapter:Artples/L-MChat-7b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-04-30T18:46:55Z
--- library_name: peft license: apache-2.0 base_model: Artples/L-MChat-7b tags: - axolotl - generated_from_trainer model-index: - name: b3edbc62-1c39-47e4-983e-ed12c12f4f32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: true adapter: lora base_model: Artples/L-MChat-7b bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - df64b7b1f6156a4a_train_data.json ds_type: json format: custom path: /workspace/input_data/df64b7b1f6156a4a_train_data.json type: field_input: context field_instruction: prompt_serial field_output: hypothesis format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: sergioalves/b3edbc62-1c39-47e4-983e-ed12c12f4f32 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/df64b7b1f6156a4a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|end_of_turn|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 05aae012-1d2a-4ad0-8017-d5d890fc3778 wandb_project: s56-8 wandb_run: your_name wandb_runid: 05aae012-1d2a-4ad0-8017-d5d890fc3778 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # b3edbc62-1c39-47e4-983e-ed12c12f4f32 This model is a fine-tuned version of [Artples/L-MChat-7b](https://huggingface.co/Artples/L-MChat-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0070 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0001 | 0.0432 | 200 | 0.0070 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
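A minimal usage sketch for this adapter (not part of the original card): it loads the LoRA weights on top of the 8-bit base model, mirroring the `load_in_8bit: true` setting in the axolotl config above. The repo ids come from this card; the example prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Load the base model in 8-bit, as in the training config, then attach the LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "Artples/L-MChat-7b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Artples/L-MChat-7b")
model = PeftModel.from_pretrained(base, "sergioalves/b3edbc62-1c39-47e4-983e-ed12c12f4f32")

# Illustrative prompt only; the adapter was trained on context/prompt -> hypothesis data.
prompt = "Premise: the streets are wet after a night of rain. Write a plausible hypothesis."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```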
ahmdfrt/Dxdiraq
ahmdfrt
2025-04-30T19:22:46Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-30T19:22:46Z
--- license: other license_name: dxd license_link: LICENSE ---
MinwooPark00/LLaMA-3.2-1B-Korean
MinwooPark00
2025-04-30T19:22:31Z
0
0
null
[ "safetensors", "unsloth", "license:llama3.2", "region:us" ]
null
2025-04-27T17:05:25Z
--- license: llama3.2 tags: - unsloth ---
the-blue-panther/whisper-small-benglish
the-blue-panther
2025-04-30T19:21:23Z
0
0
null
[ "safetensors", "whisper", "automatic-speech-recognition", "speech", "code-switching", "bengali", "english", "bn", "en", "dataset:custom", "license:mit", "region:us" ]
automatic-speech-recognition
2025-04-30T18:04:30Z
--- tags: - automatic-speech-recognition - whisper - speech - code-switching - bengali - english license: mit language: - bn - en datasets: - custom --- # Whisper Small – Bengali-English Code-Switching ASR This model is a fine-tuned version of [`openai/whisper-small`](https://huggingface.co/openai/whisper-small) for **automatic speech recognition (ASR)** on Bengali-English **code-switched audio**. It is trained to transcribe audio clips where the speaker switches between **Bengali and English** in natural conversation. ## 🧠 Model Details - **Base Model**: [`openai/whisper-small`](https://huggingface.co/openai/whisper-small) - **Languages**: Bengali (bn), English (en) - **Fine-tuning task**: Speech-to-text transcription - **Use case**: Lecture notes, interviews, social media, bilingual speech transcription - **Training samples**: 194 manually prepared code-switching audio chunks (~30s each) ## 📊 Evaluation | Metric | Score | |--------|--------| | WER | 0.4123 | | CER | (your CER here) | > Evaluation was done on a 10% held-out validation set from the original dataset. ## 📁 Files - `config.json`, `pytorch_model.bin`: Fine-tuned weights - `tokenizer.json`, `vocab.json`, `merges.txt`: Whisper tokenizer - `preprocessor_config.json`: Feature extractor config ## 💡 Usage You can use the model directly with `transformers`: ```python from transformers import WhisperProcessor, WhisperForConditionalGeneration import torchaudio model_id = "YOUR_USERNAME/whisper-small-benglish" processor = WhisperProcessor.from_pretrained(model_id) model = WhisperForConditionalGeneration.from_pretrained(model_id) # Load audio and resample to 16kHz if needed waveform, sr = torchaudio.load("your-audio.wav") if sr != 16000: resampler = torchaudio.transforms.Resample(sr, 16000) waveform = resampler(waveform) inputs = processor(waveform.squeeze().numpy(), sampling_rate=16000, return_tensors="pt") predicted_ids = model.generate(inputs.input_features) text = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0] print(text) ```
MottaCC/psych-gemma-3-1B-v2
MottaCC
2025-04-30T19:21:05Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T19:17:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
filipesantoscv11/fbe25d56-94ca-4bfa-9fd6-3fd8a9acb551
filipesantoscv11
2025-04-30T19:19:22Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-3B", "base_model:adapter:unsloth/Llama-3.2-3B", "license:llama3.2", "8-bit", "bitsandbytes", "region:us" ]
null
2025-04-30T19:05:32Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-3B tags: - axolotl - generated_from_trainer model-index: - name: fbe25d56-94ca-4bfa-9fd6-3fd8a9acb551 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-3B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 87d960175a8f9d27_train_data.json ds_type: json format: custom path: /workspace/input_data/87d960175a8f9d27_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: filipesantoscv11/fbe25d56-94ca-4bfa-9fd6-3fd8a9acb551 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/87d960175a8f9d27_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: db3a61db-4333-4bb3-b692-02ea4c8cd45d wandb_project: s56-6 wandb_run: your_name wandb_runid: db3a61db-4333-4bb3-b692-02ea4c8cd45d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # fbe25d56-94ca-4bfa-9fd6-3fd8a9acb551 This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3539 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3104 | 0.0159 | 200 | 1.3539 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
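For deployment without PEFT at inference time, a LoRA adapter like this one can be merged into the base weights. A hedged sketch (repo ids come from this card; the output directory name is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the full-precision base model and attach the LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-3B", torch_dtype="auto", device_map="cpu")
model = PeftModel.from_pretrained(base, "filipesantoscv11/fbe25d56-94ca-4bfa-9fd6-3fd8a9acb551")

# Fold the LoRA deltas into the base weights and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("llama-3.2-3b-qa-merged")
AutoTokenizer.from_pretrained("unsloth/Llama-3.2-3B").save_pretrained("llama-3.2-3b-qa-merged")
```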
pizdon/wiktoriawarek
pizdon
2025-04-30T19:19:22Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-30T18:34:48Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
dzanbek/667d44d8-6cf6-4c29-9163-10e2ae7680f0
dzanbek
2025-04-30T19:19:19Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-3B", "base_model:adapter:unsloth/Qwen2.5-3B", "license:other", "8-bit", "bitsandbytes", "region:us" ]
null
2025-04-30T19:01:28Z
--- library_name: peft license: other base_model: unsloth/Qwen2.5-3B tags: - axolotl - generated_from_trainer model-index: - name: 667d44d8-6cf6-4c29-9163-10e2ae7680f0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Qwen2.5-3B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 761986ce6542dbe0_train_data.json ds_type: json format: custom path: /workspace/input_data/761986ce6542dbe0_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: dzanbek/667d44d8-6cf6-4c29-9163-10e2ae7680f0 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/761986ce6542dbe0_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a0e692d0-5404-41d1-9391-f10a2451afd8 wandb_project: s56-2 wandb_run: your_name wandb_runid: a0e692d0-5404-41d1-9391-f10a2451afd8 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 667d44d8-6cf6-4c29-9163-10e2ae7680f0 This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.2733 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0504 | 0.0152 | 200 | 2.2733 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Jaxonc/llama381binstruct_summarize_short_merged
Jaxonc
2025-04-30T19:18:03Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-30T19:14:10Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
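Since the auto-generated card leaves the quickstart empty, here is a hedged, generic sketch for trying the checkpoint with standard `transformers` APIs (the repo id comes from this record; the prompt and device settings are assumptions, not documented values):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Jaxonc/llama381binstruct_summarize_short_merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Illustrative request in line with the repo name (short summarization).
messages = [{"role": "user", "content": "Summarize in one sentence: the launch moved to Friday and QA was assigned to Dana."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(input_ids, max_new_tokens=80)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```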
skywalker290/Llama-3.1-8B-Instruct
skywalker290
2025-04-30T19:14:25Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T19:13:17Z
--- base_model: downloaded_models/unsloth--Llama-3.1-8B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** skywalker290 - **License:** apache-2.0 - **Finetuned from model :** downloaded_models/unsloth--Llama-3.1-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
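As a hedged sketch of how an Unsloth checkpoint like this is typically loaded back for inference (the repo id below is this model's Hub id; the sequence length, 4-bit flag, and prompt are assumptions rather than values documented in this card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="skywalker290/Llama-3.1-8B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth into its faster generation mode

messages = [{"role": "user", "content": "Give me three facts about llamas."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(input_ids=input_ids, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```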
glif-loradex-trainer/Insectagon_Iced_penguin_meme
glif-loradex-trainer
2025-04-30T19:13:59Z
0
0
diffusers
[ "diffusers", "text-to-image", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us", "flux", "lora", "base_model:adapter:black-forest-labs/FLUX.1-dev" ]
text-to-image
2025-04-30T19:13:26Z
--- tags: - diffusers - text-to-image - template:sd-lora - base_model:black-forest-labs/FLUX.1-dev - base_model:finetune:black-forest-labs/FLUX.1-dev - license:other - region:us - flux - lora widget: - output: url: samples/1746040254434__000003000_0.jpg text: A sad iced the blue penguin robot crying in the snow,ice background [iced] - output: url: samples/1746040279050__000003000_1.jpg text: a beautiful woman hugging a fluffy iced the blue penguin,igloo background [iced] - output: url: samples/1746040303547__000003000_2.jpg text: An iced the blue penguin wrestler postmeme, cartoon, vintage, gibberish, wordless, minimal, unhinged, tundra background [iced] - output: url: samples/1746040328150__000003000_3.jpg text: iced the blue penguin melting, exposing the deformed nightmare mutant insatiable sleep paralysis demon lurking beneath, emerging from overwhelming darkness,tundra background [iced] - output: url: samples/1746040352727__000003000_4.jpg text: ' Iced the blue penguin,staring at a laptop with green candles in it at 3am [iced]' - output: url: samples/1746040377254__000003000_5.jpg text: ' Iced the blue penguin,wearing a blue crown,holding a royal green candle,sitting on a throne suttounded by money,dressed as a king [iced]' base_model: black-forest-labs/FLUX.1-dev trigger: "iced" instance_prompt: "iced" license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # Iced_penguin_meme Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `Insectagon`. <Gallery /> ## Trigger words You should use `iced` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/glif-loradex-trainer/Insectagon_Iced_penguin_meme/tree/main) them in the Files & versions tab. ## License This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
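A hedged generation sketch with `diffusers` (the LoRA repo id and the `iced` trigger word come from this card; the prompt, step count, and guidance scale are illustrative):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("glif-loradex-trainer/Insectagon_Iced_penguin_meme")
pipe.to("cuda")

# Include the trigger word "iced" in the prompt, as noted above.
image = pipe(
    "iced the blue penguin holding a green candle, tundra background [iced]",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("iced_penguin.png")
```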
unsloth/GLM-Z1-9B-0414-unsloth-bnb-4bit
unsloth
2025-04-30T19:13:54Z
0
0
transformers
[ "transformers", "safetensors", "glm4", "text-generation", "unsloth", "conversational", "zh", "en", "arxiv:2406.12793", "base_model:THUDM/GLM-Z1-9B-0414", "base_model:quantized:THUDM/GLM-Z1-9B-0414", "license:mit", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-30T19:13:01Z
--- tags: - unsloth base_model: - THUDM/GLM-Z1-9B-0414 license: mit language: - zh - en pipeline_tag: text-generation library_name: transformers --- # GLM-4-Z1-9B-0414 ## Introduction The GLM family welcomes a new generation of open-source models, the **GLM-4-32B-0414** series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B). **GLM-Z1-32B-0414** is a reasoning model with **deep thinking capabilities**. This was developed based on GLM-4-32B-0414 through cold start and extended reinforcement learning, as well as further training of the model on tasks involving mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During the training process, we also introduced general reinforcement learning based on pairwise ranking feedback, further enhancing the model's general capabilities. **GLM-Z1-Rumination-32B-0414** is a deep reasoning model with **rumination capabilities** (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks. Finally, **GLM-Z1-9B-0414** is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment. ## Performance <p align="center"> <img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-32B.png"> </p> <p align="center"> <img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-9B.png"> </p> ## Model Usage Guidelines ### I. 
Sampling Parameters | Parameter | Recommended Value | Description | | ------------ | ----------------- | -------------------------------------------- | | temperature | **0.6** | Balances creativity and stability | | top_p | **0.95** | Cumulative probability threshold for sampling| | top_k | **40** | Filters out rare tokens while maintaining diversity | | max_new_tokens | **30000** | Leaves enough tokens for thinking | ### II. Enforced Thinking - Add \<think\>\n to the **first line**: Ensures the model thinks before responding - When using `chat_template.jinja`, the prompt is automatically injected to enforce this behavior ### III. Dialogue History Trimming - Retain only the **final user-visible reply**. Hidden thinking content should **not** be saved to history to reduce interference—this is already implemented in `chat_template.jinja` ### IV. Handling Long Contexts (YaRN) - When input length exceeds **8,192 tokens**, consider enabling YaRN (Rope Scaling) - In supported frameworks, add the following snippet to `config.json`: ```json "rope_scaling": { "type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } ``` - **Static YaRN** applies uniformly to all text. It may slightly degrade performance on short texts, so enable as needed. ## Inference Code Make Sure Using `transforemrs>=4.51.3`. ```python from transformers import AutoModelForCausalLM, AutoTokenizer MODEL_PATH = "THUDM/GLM-4-Z1-9B-0414" tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH) model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto") message = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}] inputs = tokenizer.apply_chat_template( message, return_tensors="pt", add_generation_prompt=True, return_dict=True, ).to(model.device) generate_kwargs = { "input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"], "max_new_tokens": 4096, "do_sample": False, } out = model.generate(**generate_kwargs) print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)) ``` ## Citations If you find our work useful, please consider citing the following paper. ``` @misc{glm2024chatglm, title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools}, author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang}, year={2024}, eprint={2406.12793}, archivePrefix={arXiv}, primaryClass={id='cs.CL' full_name='Computation and Language' is_active=True alt_name='cmp-lg' in_archive='cs' is_general=False description='Covers natural language processing. Roughly includes material in ACM Subject Class I.2.7. 
Note that work on artificial languages (programming languages, logics, formal systems) that does not explicitly address natural-language issues broadly construed (natural-language processing, computational linguistics, speech, text retrieval, etc.) is not appropriate for this area.'} } ```
DirkDiggler69/dirkDnew
DirkDiggler69
2025-04-30T19:13:45Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am climbing ravenous eagle", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T02:16:13Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: dirkDnew tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am climbing ravenous eagle - trl licence: license --- # Model Card for dirkDnew This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="DirkDiggler69/dirkDnew", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fbaldassarri/internlm_internlm3-8b-instruct-autoawq-int4-gs128-asym
fbaldassarri
2025-04-30T19:13:23Z
0
0
transformers
[ "transformers", "safetensors", "internlm3", "text-generation", "internlm", "autoround", "auto-round", "intel-autoround", "intel", "woq", "autoawq", "awq", "pytorch", "internlm3-8b", "conversational", "custom_code", "en", "es", "fr", "de", "pt", "ja", "it", "zh", "ko", "ar", "cs", "nl", "base_model:internlm/internlm3-8b-instruct", "base_model:quantized:internlm/internlm3-8b-instruct", "license:apache-2.0", "autotrain_compatible", "4-bit", "region:us" ]
text-generation
2025-04-30T19:10:46Z
--- language: - en - es - fr - de - pt - ja - it - zh - ko - ar - cs - nl pipeline_tag: text-generation license: apache-2.0 library_name: transformers tags: - internlm - autoround - auto-round - intel-autoround - intel - woq - autoawq - awq - pytorch - internlm3 - internlm3-8b model_name: Internlm 3 8b instruct base_model: - internlm/internlm3-8b-instruct inference: false model_creator: internlm prompt_template: '{prompt}' quantized_by: fbaldassarri --- ## Model Information Quantized version of [internlm/internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) using torch.float32 for quantization tuning. - 4 bits (INT4) - group size = 128 - Asymmetrical Quantization - Method WoQ: AWQ (AutoAWQ algorithm) Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.7 Note: this INT4 version of internlm3-8b-instruct has been quantized to run inference on CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or a conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.7.tar.gz tar -xvzf v0.4.7.tar.gz cd auto-round-0.4.7 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "internlm/internlm3-8b-instruct" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 4, 128, False, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/internlm_internlm3-8b-instruct-autoawq-int4-gs128-asym" autoround.save_quantized(output_dir, format='auto_awq', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
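For inference, the quantized checkpoint is expected to load through the standard `transformers` API. A hedged sketch (assumes an AWQ-compatible runtime is installed; InternLM3 needs `trust_remote_code=True`, the prompt is illustrative, and `device_map` should be adjusted to your setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "fbaldassarri/internlm_internlm3-8b-instruct-autoawq-int4-gs128-asym"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True, device_map="cpu")

prompt = "Explain weight-only quantization in two sentences."
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```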
yale-nlp/MDCure-FlanT5-Large
yale-nlp
2025-04-30T19:13:12Z
2
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "multi-document", "long-context", "Long Context", "summarization", "en", "dataset:yale-nlp/MDCure-72k", "arxiv:2410.23463", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2024-11-01T07:07:17Z
--- base_model: - google/flan-t5-base datasets: - yale-nlp/MDCure-72k language: - en license: apache-2.0 tags: - multi-document - long-context - Long Context library_name: transformers pipeline_tag: summarization --- # MDCure-FlanT5-Large [📄 Paper](https://arxiv.org/pdf/2410.23463) | [🤗 HF Collection](https://huggingface.co/collections/yale-nlp/mdcure-6724914875e87f41e5445395) | [⚙️ GitHub Repo](https://github.com/yale-nlp/MDCure) ## Introduction **MDCure** is an effective and scalable procedure for generating high-quality multi-document (MD) instruction tuning data to improve MD capabilities of LLMs. Using MDCure, we construct a suite of MD instruction datasets complementary to collections such as [FLAN](https://github.com/google-research/FLAN) and fine-tune a variety of already instruction-tuned LLMs from the FlanT5, Qwen2, and LLAMA3.1 model families, up to 70B parameters in size. We additionally introduce **MDCureRM**, an evaluator model specifically designed for the MD setting to filter and select high-quality MD instruction data in a cost-effective, RM-as-a-judge fashion. Extensive evaluations on a wide range of MD and long-context benchmarks spanning various tasks show MDCure consistently improves performance over pre-trained baselines and over corresponding base models by up to 75.5%. We release MDCure datasets of size 12k, 36k, and 72k. We also release MDCureRM and the best MDCure'd model for each architecture/size combination. To access all our models and datasets, please visit our [HF Collection](https://huggingface.co/collections/yale-nlp/mdcure-6724914875e87f41e5445395). For further details regarding dataset construction, please see our [paper](https://arxiv.org/pdf/2410.23463) and [Github repo](https://github.com/yale-nlp/MDCure). For additional details regarding how to use **yale-nlp/MDCure-FlanT5-Large**, please see below. <p align="center"> <img src="fig1.png" width="90%"> </p> <p align="center" style="margin-top: 0; padding-top: 0;"> <em>The MDCure pipeline generates diverse multi-document instructions, filters them via fine-grained scoring by MDCureRM, and tunes a base LLM to enhance its multi-document capabilities.</em> </p> ## Model Details **yale-nlp/MDCure-FlanT5-Large** is initialized from [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) and fine-tuned on the [MDCure-72k](https://huggingface.co/datasets/yale-nlp/MDCure-72k) dataset. ## Requirements We recommend using the latest version of HF Transformers, or any `transformers>4.35.0`, to avoid any potential versioning errors when using this model. ## Quickstart Below we provide a code snippet demonstrating how to load the tokenizer and model and generate content in response to an input context concerning multiple source documents and a related question or instruction. We strongly recommend to separate the texts and/or instruction using ` ` or `<doc-sep>` to maintain consistency with the format of the data used during training. ```python model = AutoModelForSeq2SeqLM.from_pretrained("yale-nlp/MDCure-FlanT5-Large", device_map='auto',torch_dtype="auto",) tokenizer = AutoTokenizer.from_pretrained("yale-nlp/MDCure-FlanT5-Large") source_text_1 = ... source_text_2 = ... source_text_3 = ... input_text = f"{source_text_1} {source_text_2} {source_text_3} What happened in CHAMPAIGN regarding Lovie Smith and the 2019 defense improvements? Respond with 1-2 sentences." 
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(model.device) outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## All MDCure Models We open-source our custom multi-document instruction scoring model, MDCureRM, as well as our best MDCure'd models at the following links: | Model | Huggingface Repo | Description | |---------------------------|---------------------|------------------------------| | **MDCureRM** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCureRM) | Multi-objective reward model to score and filter MD instruction data more cheaply and effectively than GPT-3.5-Turbo | | **MDCure-FlanT5-Base** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-FlanT5-Base) | **FlanT5-Base** fine-tuned with MDCure-72k | | **MDCure-FlanT5-Large** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-FlanT5-Large) | **FlanT5-Large** fine-tuned with MDCure-72k | | **MDCure-Qwen2-1.5B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-Qwen2-1.5B-Instruct) | **Qwen2-1.5B-Instruct** fine-tuned with MDCure-72k | | **MDCure-Qwen2-7B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-Qwen2-7B-Instruct) | **Qwen2-7B-Instruct** fine-tuned with MDCure-72k | | **MDCure-LLAMA3.1-8B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-LLAMA3.1-8B-Instruct) | **LLAMA3.1-8B-Instruct** fine-tuned with MDCure-72k | | **MDCure-LLAMA3.1-70B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-LLAMA3.1-70B-Instruct) | **LLAMA3.1-70B-Instruct** fine-tuned with MDCure-72 | ## Citation If you find our work useful, please cite our paper as: ```bibtex @article{liu2024mdcure, title={MDCure: A Scalable Pipeline for Multi-Document Instruction-Following}, author={Gabrielle Kaili-May Liu and Bowen Shi and Avi Caciularu and Idan Szpektor and Arman Cohan}, journal={arXiv preprint arXiv:2410.23463}, year={2024}, url={https://arxiv.org/abs/2410.23463} } ```
spiralworks/purple-wintermute-0.2-7b
spiralworks
2025-04-30T19:11:43Z
24
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "dataset:sumuks/openreview_wintermute_0.2_training_data", "base_model:Qwen/Qwen2.5-7B", "base_model:adapter:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
null
2025-01-19T09:28:00Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-7B tags: - axolotl - generated_from_trainer datasets: - sumuks/openreview_wintermute_0.2_training_data language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara model-index: - name: purple-wintermute-0.2-7b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.6.0` ```yaml base_model: Qwen/Qwen2.5-7B hub_model_id: sumuks/purple-wintermute-0.2-7b trust_remote_code: true load_in_8bit: false load_in_4bit: false strict: false bf16: true hf_use_auth_token: true plugins: - axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_glu_activation: true liger_layer_norm: true liger_fused_linear_cross_entropy: true save_safetensors: datasets: - path: sumuks/openreview_wintermute_0.2_training_data type: completion field: text dataset_prepared_path: .axolotl_cache_data/wintermute_0.2 shuffle_merged_datasets: true # dataset_exact_deduplication: true val_set_size: 0.005 output_dir: ./../../outputs/purple-wintermute-0.2-7b push_dataset_to_hub: sumuks/purple_wintermute_0.2_training_data_in_progress sequence_length: 2048 sample_packing: true pad_to_sequence_len: true adapter: lora lora_r: 256 lora_alpha: 32 lora_dropout: 0.05 peft_use_rslora: true lora_target_linear: true gradient_accumulation_steps: 4 micro_batch_size: 16 eval_batch_size: 1 num_epochs: 3 learning_rate: 5e-5 warmup_ratio: 0.05 evals_per_epoch: 5 saves_per_epoch: 5 gradient_checkpointing: true lr_scheduler: cosine optimizer: paged_adamw_8bit profiler_steps: 100 save_safetensors: true train_on_inputs: true wandb_project: wintermute wandb_name: purple-wintermute-0.2-7b deepspeed: deepspeed_configs/zero1.json ``` </details><br> # purple-wintermute-0.2-7b This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the sumuks/openreview_wintermute_0.2_training_data dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3961 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 4 - optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 389 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 2.6905 | | 1.6977 | 0.2002 | 519 | 1.8454 | | 1.5955 | 0.4004 | 1038 | 1.7875 | | 1.4268 | 0.6006 | 1557 | 1.7164 | | 1.2613 | 0.8008 | 2076 | 1.6061 | | 1.1526 | 1.0012 | 2595 | 1.5174 | | 1.0637 | 1.2014 | 3114 | 1.4811 | | 1.0251 | 1.4015 | 3633 | 1.4466 | | 0.9791 | 1.6017 | 4152 | 1.4230 | | 0.9609 | 1.8019 | 4671 | 1.4072 | | 1.0291 | 2.0023 | 5190 | 1.3994 | | 0.917 | 2.2025 | 5709 | 1.4018 | | 0.9306 | 2.4027 | 6228 | 1.3995 | | 0.8935 | 2.6029 | 6747 | 1.3963 | | 0.9343 | 2.8031 | 7266 | 1.3961 | ### Framework versions - PEFT 0.14.0 - Transformers 4.47.1 - Pytorch 2.5.1 - Datasets 3.2.0 - Tokenizers 0.21.0
spiralworks/purple-wintermute-0.2-72b
spiralworks
2025-04-30T19:11:41Z
1
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "dataset:sumuks/openreview_wintermute_0.2_training_data", "base_model:Qwen/Qwen2.5-72B", "base_model:adapter:Qwen/Qwen2.5-72B", "license:other", "region:us" ]
null
2025-01-19T09:30:37Z
--- library_name: peft license: other base_model: Qwen/Qwen2.5-72B tags: - axolotl - generated_from_trainer datasets: - sumuks/openreview_wintermute_0.2_training_data language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara model-index: - name: purple-wintermute-0.2-72b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.6.0` ```yaml base_model: Qwen/Qwen2.5-72B hub_model_id: sumuks/purple-wintermute-0.2-72b trust_remote_code: true load_in_8bit: false load_in_4bit: false strict: false bf16: true hf_use_auth_token: true plugins: - axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_glu_activation: true liger_layer_norm: true liger_fused_linear_cross_entropy: true save_safetensors: datasets: - path: sumuks/openreview_wintermute_0.2_training_data type: completion field: text dataset_prepared_path: .axolotl_cache_data/wintermute_0.2 shuffle_merged_datasets: true # dataset_exact_deduplication: true val_set_size: 0.005 output_dir: ./../../outputs/purple-wintermute-0.2-72b push_dataset_to_hub: sumuks/purple_wintermute_0.2_training_data_in_progress sequence_length: 2048 sample_packing: true pad_to_sequence_len: true adapter: lora lora_r: 256 lora_alpha: 32 lora_dropout: 0.05 peft_use_rslora: true lora_target_linear: true gradient_accumulation_steps: 4 micro_batch_size: 8 eval_batch_size: 1 num_epochs: 3 learning_rate: 5e-5 warmup_ratio: 0.05 evals_per_epoch: 3 saves_per_epoch: 5 gradient_checkpointing: true lr_scheduler: cosine optimizer: paged_adamw_8bit profiler_steps: 100 save_safetensors: true train_on_inputs: true wandb_project: wintermute wandb_name: purple-wintermute-0.2-72b deepspeed: deepspeed_configs/zero3_bf16.json ``` </details><br> # purple-wintermute-0.2-72b This model is a fine-tuned version of [Qwen/Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) on the sumuks/openreview_wintermute_0.2_training_data dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3017 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 8 - optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 388 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 2.5112 | | 1.3654 | 0.3333 | 864 | 1.6504 | | 0.9929 | 0.6665 | 1728 | 1.4144 | | 0.9039 | 0.9998 | 2592 | 1.3083 | | 0.8161 | 1.3333 | 3456 | 1.2935 | | 0.7815 | 1.6665 | 4320 | 1.2816 | | 0.7658 | 1.9998 | 5184 | 1.2775 | | 0.7004 | 2.3333 | 6048 | 1.2995 | | 0.6694 | 2.6665 | 6912 | 1.3013 | | 0.6798 | 2.9998 | 7776 | 1.3017 | ### Framework versions - PEFT 0.14.0 - Transformers 4.47.1 - Pytorch 2.5.1 - Datasets 3.2.0 - Tokenizers 0.21.0
kokovova/14afa00b-7a18-4d24-aaa7-4550c192ed4b
kokovova
2025-04-30T19:11:37Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-3B", "base_model:adapter:unsloth/Llama-3.2-3B", "license:llama3.2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-30T19:06:42Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-3B tags: - axolotl - generated_from_trainer model-index: - name: 14afa00b-7a18-4d24-aaa7-4550c192ed4b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-3B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 87d960175a8f9d27_train_data.json ds_type: json format: custom path: /workspace/input_data/87d960175a8f9d27_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: kokovova/14afa00b-7a18-4d24-aaa7-4550c192ed4b hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/87d960175a8f9d27_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: db3a61db-4333-4bb3-b692-02ea4c8cd45d wandb_project: s56-4 wandb_run: your_name wandb_runid: db3a61db-4333-4bb3-b692-02ea4c8cd45d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 14afa00b-7a18-4d24-aaa7-4550c192ed4b This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4949 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.491 | 0.0159 | 200 | 1.4949 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
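A minimal loading sketch for this adapter: the config above trains against a 4-bit (bitsandbytes) `unsloth/Llama-3.2-3B` base, so load the base the same way before attaching the LoRA weights (the prompt follows the `{instruction}` format from the config and is only illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

base_id = "unsloth/Llama-3.2-3B"
adapter_id = "kokovova/14afa00b-7a18-4d24-aaa7-4550c192ed4b"

# Mirror the training setup: 4-bit base via bitsandbytes, bf16 compute.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```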
sumukshashidhar-testing/reasoning-v0.2-qwen2.5-7b
sumukshashidhar-testing
2025-04-30T19:11:36Z
51
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "dataset:sumukshashidhar-testing/reasoning-rerankers-relevance-sft-data", "base_model:Qwen/Qwen2.5-7B", "base_model:adapter:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
null
2025-03-31T17:34:09Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-7B tags: - axolotl - generated_from_trainer datasets: - sumukshashidhar-testing/reasoning-rerankers-relevance-sft-data language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara model-index: - name: reasoning-v0.2-qwen2.5-7b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.7.0` ```yaml base_model: Qwen/Qwen2.5-7B hub_model_id: sumukshashidhar-testing/reasoning-v0.2-qwen2.5-7b trust_remote_code: true load_in_8bit: false load_in_4bit: false strict: false bf16: true hf_use_auth_token: true plugins: - axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_glu_activation: true liger_layer_norm: true liger_fused_linear_cross_entropy: true save_safetensors: datasets: - path: sumukshashidhar-testing/reasoning-rerankers-relevance-sft-data type: completion field: text dataset_prepared_path: .axolotl_cache_data/reasoning-rerankers shuffle_merged_datasets: true # dataset_exact_deduplication: true val_set_size: 0.05 output_dir: /scratch/reasoning-reankers/reasoning-v0.1-qwen2.5-7b push_dataset_to_hub: sumukshashidhar-testing/reasoning-rerankers-relevance-sft-data-in-progress sequence_length: 2048 sample_packing: true pad_to_sequence_len: true adapter: lora lora_r: 256 lora_alpha: 32 lora_dropout: 0.05 peft_use_rslora: true lora_target_linear: true gradient_accumulation_steps: 1 micro_batch_size: 32 eval_batch_size: 1 num_epochs: 3 learning_rate: 5e-4 warmup_ratio: 0.05 evals_per_epoch: 2 saves_per_epoch: 2 gradient_checkpointing: true lr_scheduler: cosine optimizer: paged_adamw_8bit profiler_steps: 100 save_safetensors: true train_on_inputs: true wandb_project: reasoning-rerankers wandb_name: rr-qwen-7b deepspeed: zero1.json ``` </details><br> # reasoning-v0.2-qwen2.5-7b This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the sumukshashidhar-testing/reasoning-rerankers-relevance-sft-data dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4119 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 256 - total_eval_batch_size: 8 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 49 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0030 | 1 | 2.2497 | | 0.51 | 0.5 | 166 | 0.7306 | | 0.2733 | 1.0 | 332 | 0.5004 | | 0.1938 | 1.5 | 498 | 0.4445 | | 0.1783 | 2.0 | 664 | 0.4152 | | 0.1446 | 2.5 | 830 | 0.4147 | | 0.1424 | 3.0 | 996 | 0.4119 | ### Framework versions - PEFT 0.14.0 - Transformers 4.48.3 - Pytorch 2.4.0 - Datasets 3.2.0 - Tokenizers 0.21.1
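For deployment it can be convenient to merge this LoRA adapter into the base weights; a hedged sketch with `peft` (assuming the adapter in this repo applies cleanly on top of `Qwen/Qwen2.5-7B`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_id = "Qwen/Qwen2.5-7B"
adapter_id = "sumukshashidhar-testing/reasoning-v0.2-qwen2.5-7b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Optionally fold the LoRA weights into the base model for faster standalone inference.
merged = model.merge_and_unload()
merged.save_pretrained("./reasoning-v0.2-qwen2.5-7b-merged")
tokenizer.save_pretrained("./reasoning-v0.2-qwen2.5-7b-merged")
```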
unsloth/GLM-Z1-9B-0414
unsloth
2025-04-30T19:10:31Z
0
0
transformers
[ "transformers", "safetensors", "glm4", "text-generation", "unsloth", "conversational", "zh", "en", "arxiv:2406.12793", "base_model:THUDM/GLM-Z1-9B-0414", "base_model:finetune:THUDM/GLM-Z1-9B-0414", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T19:09:42Z
--- tags: - unsloth base_model: - THUDM/GLM-Z1-9B-0414 license: mit language: - zh - en pipeline_tag: text-generation library_name: transformers --- # GLM-4-Z1-9B-0414 ## Introduction The GLM family welcomes a new generation of open-source models, the **GLM-4-32B-0414** series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B). **GLM-Z1-32B-0414** is a reasoning model with **deep thinking capabilities**. This was developed based on GLM-4-32B-0414 through cold start and extended reinforcement learning, as well as further training of the model on tasks involving mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During the training process, we also introduced general reinforcement learning based on pairwise ranking feedback, further enhancing the model's general capabilities. **GLM-Z1-Rumination-32B-0414** is a deep reasoning model with **rumination capabilities** (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks. Finally, **GLM-Z1-9B-0414** is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment. ## Performance <p align="center"> <img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-32B.png"> </p> <p align="center"> <img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-9B.png"> </p> ## Model Usage Guidelines ### I. 
Sampling Parameters | Parameter | Recommended Value | Description | | ------------ | ----------------- | -------------------------------------------- | | temperature | **0.6** | Balances creativity and stability | | top_p | **0.95** | Cumulative probability threshold for sampling| | top_k | **40** | Filters out rare tokens while maintaining diversity | | max_new_tokens | **30000** | Leaves enough tokens for thinking | ### II. Enforced Thinking - Add \<think\>\n to the **first line**: Ensures the model thinks before responding - When using `chat_template.jinja`, the prompt is automatically injected to enforce this behavior ### III. Dialogue History Trimming - Retain only the **final user-visible reply**. Hidden thinking content should **not** be saved to history to reduce interference—this is already implemented in `chat_template.jinja` ### IV. Handling Long Contexts (YaRN) - When input length exceeds **8,192 tokens**, consider enabling YaRN (Rope Scaling) - In supported frameworks, add the following snippet to `config.json`: ```json "rope_scaling": { "type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } ``` - **Static YaRN** applies uniformly to all text. It may slightly degrade performance on short texts, so enable as needed. ## Inference Code Make sure you are using `transformers>=4.51.3`. ```python from transformers import AutoModelForCausalLM, AutoTokenizer MODEL_PATH = "THUDM/GLM-4-Z1-9B-0414" tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH) model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto") message = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}] inputs = tokenizer.apply_chat_template( message, return_tensors="pt", add_generation_prompt=True, return_dict=True, ).to(model.device) generate_kwargs = { "input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"], "max_new_tokens": 4096, "do_sample": False, } out = model.generate(**generate_kwargs) print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)) ``` ## Citations If you find our work useful, please consider citing the following paper. ``` @misc{glm2024chatglm, title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools}, author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang}, year={2024}, eprint={2406.12793}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
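The inference snippet above uses greedy decoding; to follow the sampling parameters recommended in the usage guidelines (temperature 0.6, top_p 0.95, top_k 40, a generous max_new_tokens), pass them to `generate` directly. A minimal sketch, with an illustrative prompt:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "THUDM/GLM-4-Z1-9B-0414"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Prove that the sum of two even integers is even."}],
    return_tensors="pt",
    add_generation_prompt=True,
    return_dict=True,
).to(model.device)

# Recommended sampling settings from the guidelines table above.
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=40,
    max_new_tokens=30000,  # leave room for the model's thinking trace
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```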
mntunur/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yawning_tenacious_beaver
mntunur
2025-04-30T19:10:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am yawning tenacious beaver", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-26T01:02:12Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yawning_tenacious_beaver tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am yawning tenacious beaver - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yawning_tenacious_beaver This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mntunur/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yawning_tenacious_beaver", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
marialvsantiago/4d954ae2-d5fb-432e-bcbd-0e03f31160eb
marialvsantiago
2025-04-30T19:10:16Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-3B", "base_model:adapter:unsloth/Llama-3.2-3B", "license:llama3.2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-30T19:05:27Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-3B tags: - axolotl - generated_from_trainer model-index: - name: 4d954ae2-d5fb-432e-bcbd-0e03f31160eb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-3B bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 87d960175a8f9d27_train_data.json ds_type: json format: custom path: /workspace/input_data/87d960175a8f9d27_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: marialvsantiago/4d954ae2-d5fb-432e-bcbd-0e03f31160eb hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/87d960175a8f9d27_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: db3a61db-4333-4bb3-b692-02ea4c8cd45d wandb_project: s56-33 wandb_run: your_name wandb_runid: db3a61db-4333-4bb3-b692-02ea4c8cd45d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 4d954ae2-d5fb-432e-bcbd-0e03f31160eb This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4958 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.4918 | 0.0159 | 200 | 1.4958 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Yuhan123/ppo-reading-level-preschool-1-steps-10000-epoch-999-best-eval-score-0.918
Yuhan123
2025-04-30T19:07:08Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T19:04:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fbaldassarri/internlm_internlm3-8b-instruct-autogptq-int4-gs128-sym
fbaldassarri
2025-04-30T19:07:04Z
0
0
transformers
[ "transformers", "safetensors", "internlm3", "text-generation", "internlm", "autoround", "auto-round", "intel-autoround", "intel", "woq", "gptq", "pytorch", "internlm3-8b", "conversational", "custom_code", "en", "es", "fr", "de", "pt", "ja", "it", "zh", "ko", "ar", "cs", "nl", "base_model:internlm/internlm3-8b-instruct", "base_model:quantized:internlm/internlm3-8b-instruct", "license:apache-2.0", "autotrain_compatible", "4-bit", "region:us" ]
text-generation
2025-04-30T19:04:41Z
--- language: - en - es - fr - de - pt - ja - it - zh - ko - ar - cs - nl pipeline_tag: text-generation license: apache-2.0 library_name: transformers tags: - internlm - autoround - auto-round - intel-autoround - intel - woq - gptq - pytorch - internlm3 - internlm3-8b model_name: Internlm 3 8b instruct base_model: - internlm/internlm3-8b-instruct inference: false model_creator: internlm prompt_template: '{prompt}' quantized_by: fbaldassarri --- ## Model Information Quantized version of [internlm/internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) using torch.float32 for quantization tuning. - 4 bits (INT4) - group size = 128 - Symmetrical Quantization - Method WoQ: GPTQ (AutoGPTQ algorithm) Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.7 Note: this INT4 version of internlm3-8b-instruct has been quantized to run inference on CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or a conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.7.tar.gz tar -xvzf v0.4.7.tar.gz cd auto-round-0.4.7 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "internlm/internlm3-8b-instruct" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 4, 128, True, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/internlm_internlm3-8b-instruct-autogptq-int4-gs128-sym" autoround.save_quantized(output_dir, format='auto_gptq', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
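The recipe above covers quantization only; for inference, the checkpoint is saved in `auto_gptq` format, so it should be loadable through the standard transformers GPTQ integration. A hedged sketch (this assumes the GPTQ backend in your environment supports CPU execution, e.g. the optimum/gptqmodel stack; `trust_remote_code=True` is required because internlm3 ships custom modeling code):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fbaldassarri/internlm_internlm3-8b-instruct-autogptq-int4-gs128-sym"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu", trust_remote_code=True)

messages = [{"role": "user", "content": "Summarize what INT4 group-size-128 quantization means."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))
```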
RodrigoGariv/gatosvsperros
RodrigoGariv
2025-04-30T19:06:46Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-30T18:58:26Z
--- license: apache-2.0 ---
gradientrouting-spar/qwen_ft_doutcome_seed1_30Apr_gradclipping_epoch5_checkpoint
gradientrouting-spar
2025-04-30T19:05:10Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T19:04:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Yuhan123/ppo-reading-level-7th-1-steps-10000-epoch-999-best-eval-score-0.445
Yuhan123
2025-04-30T19:03:48Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T19:01:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fbaldassarri/internlm_internlm3-8b-instruct-autoround-int4-gs128-sym
fbaldassarri
2025-04-30T19:01:04Z
0
0
transformers
[ "transformers", "safetensors", "internlm3", "text-generation", "internlm", "autoround", "auto-round", "intel-autoround", "intel", "woq", "gptq", "pytorch", "internlm3-8b", "conversational", "custom_code", "en", "es", "fr", "de", "pt", "ja", "it", "zh", "ko", "ar", "cs", "nl", "base_model:internlm/internlm3-8b-instruct", "base_model:quantized:internlm/internlm3-8b-instruct", "license:apache-2.0", "autotrain_compatible", "4-bit", "intel/auto-round", "region:us" ]
text-generation
2025-04-30T18:58:43Z
--- language: - en - es - fr - de - pt - ja - it - zh - ko - ar - cs - nl pipeline_tag: text-generation license: apache-2.0 library_name: transformers tags: - internlm - autoround - auto-round - intel-autoround - intel - woq - gptq - pytorch - internlm3 - internlm3-8b model_name: Internlm 3 8b instruct base_model: - internlm/internlm3-8b-instruct inference: false model_creator: internlm prompt_template: '{prompt}' quantized_by: fbaldassarri --- ## Model Information Quantized version of [internlm/internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) using torch.float32 for quantization tuning. - 4 bits (INT4) - group size = 128 - Symmetrical Quantization - Method WoQ: SignRound (AutoRound algorithm) Fast and low memory, 2-3X speedup (slight accuracy drop at W4G128) Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.7 Note: this INT4 version of internlm3-8b-instruct has been quantized to run inference on CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or a conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.7.tar.gz tar -xvzf v0.4.7.tar.gz cd auto-round-0.4.7 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "internlm/internlm3-8b-instruct" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 4, 128, True, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/internlm_internlm3-8b-instruct-autoround-int4-gs128-sym" autoround.save_quantized(output_dir, format='auto_round', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
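As with the recipe above, only quantization is shown; this checkpoint is saved in AutoRound's native `auto_round` format, which transformers does not load on its own. A hedged sketch of the loading path described in the Intel AutoRound documentation (the assumption that importing `AutoRoundConfig` registers the format should be verified against the AutoRound README for your version; `trust_remote_code=True` is required for internlm3's custom modeling code):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRoundConfig  # assumed to register the auto_round quantization format

model_id = "fbaldassarri/internlm_internlm3-8b-instruct-autoround-int4-gs128-sym"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu", trust_remote_code=True)

messages = [{"role": "user", "content": "Briefly explain the SignRound quantization algorithm."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))
```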
Yuhan123/ppo-cn-RM-reading-level-12th-1-steps-10000-epoch-999-best-eval-score-0.154
Yuhan123
2025-04-30T19:00:41Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T18:58:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/rude_claudio_it_dialogues_20250430_185948
gradientrouting-spar
2025-04-30T19:00:41Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T19:00:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NikolayKozloff/Muyan-TTS-Q8_0-GGUF
NikolayKozloff
2025-04-30T19:00:29Z
0
1
null
[ "gguf", "text-to-speech", "llama-cpp", "gguf-my-repo", "base_model:MYZY-AI/Muyan-TTS", "base_model:quantized:MYZY-AI/Muyan-TTS", "endpoints_compatible", "region:us", "conversational" ]
text-to-speech
2025-04-30T19:00:12Z
--- base_model: MYZY-AI/Muyan-TTS tags: - text-to-speech - llama-cpp - gguf-my-repo --- # NikolayKozloff/Muyan-TTS-Q8_0-GGUF This model was converted to GGUF format from [`MYZY-AI/Muyan-TTS`](https://huggingface.co/MYZY-AI/Muyan-TTS) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MYZY-AI/Muyan-TTS) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/Muyan-TTS-Q8_0-GGUF --hf-file muyan-tts-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/Muyan-TTS-Q8_0-GGUF --hf-file muyan-tts-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/Muyan-TTS-Q8_0-GGUF --hf-file muyan-tts-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/Muyan-TTS-Q8_0-GGUF --hf-file muyan-tts-q8_0.gguf -c 2048 ```
mntunur/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_bristly_horse
mntunur
2025-04-30T18:58:41Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am reclusive bristly horse", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-26T18:32:23Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_bristly_horse tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am reclusive bristly horse - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_bristly_horse This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mntunur/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_bristly_horse", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
AlSamCur123/DeepHermes-3-Mistral-24BChatML
AlSamCur123
2025-04-30T18:57:14Z
0
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:NousResearch/DeepHermes-3-Mistral-24B-Preview", "base_model:quantized:NousResearch/DeepHermes-3-Mistral-24B-Preview", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-30T18:46:59Z
--- base_model: NousResearch/DeepHermes-3-Mistral-24B-Preview tags: - text-generation-inference - transformers - unsloth - mistral - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** AlSamCur123 - **License:** apache-2.0 - **Finetuned from model :** NousResearch/DeepHermes-3-Mistral-24B-Preview This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
MikiAI45346/MrBeast
MikiAI45346
2025-04-30T18:56:23Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-30T18:23:51Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: mrbeast --- # Mrbeast <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `mrbeast` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "mrbeast", "lora_weights": "https://huggingface.co/MikiAI45346/MrBeast/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('MikiAI45346/MrBeast', weight_name='lora.safetensors') image = pipeline('mrbeast').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/MikiAI45346/MrBeast/discussions) to add images that show off what you’ve made with this LoRA.
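As a follow-up to the diffusers snippet above, here is a minimal sketch (not part of the original card) showing one way to fix the random seed for reproducible generations and save the result to disk; the seed value and output filename are arbitrary choices.

```py
# Sketch: same checkpoint and LoRA as above, with a fixed seed and the image saved to disk.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('MikiAI45346/MrBeast', weight_name='lora.safetensors')

# Fixing the generator seed makes repeated runs produce the same image.
generator = torch.Generator('cuda').manual_seed(0)
image = pipeline('mrbeast', generator=generator).images[0]
image.save('mrbeast.png')  # hypothetical output filename
```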
Siddharth63/Qwen3-8B-Base-AutoRound-asym
Siddharth63
2025-04-30T18:53:55Z
0
0
null
[ "safetensors", "qwen3", "license:apache-2.0", "4-bit", "auto-round", "region:us" ]
null
2025-04-30T18:42:41Z
--- license: apache-2.0 ---
fbaldassarri/internlm_internlm3-8b-instruct-autoround-int4-gs128-asym
fbaldassarri
2025-04-30T18:53:52Z
0
0
transformers
[ "transformers", "safetensors", "internlm3", "text-generation", "internlm", "autoround", "auto-round", "intel-autoround", "intel", "woq", "gptq", "pytorch", "internlm3-8b", "conversational", "custom_code", "en", "es", "fr", "de", "pt", "ja", "it", "zh", "ko", "ar", "cs", "nl", "base_model:internlm/internlm3-8b-instruct", "base_model:quantized:internlm/internlm3-8b-instruct", "license:apache-2.0", "autotrain_compatible", "4-bit", "intel/auto-round", "region:us" ]
text-generation
2025-04-30T18:51:22Z
--- language: - en - es - fr - de - pt - ja - it - zh - ko - ar - cs - nl pipeline_tag: text-generation license: apache-2.0 library_name: transformers tags: - internlm - autoround - auto-round - intel-autoround - intel - woq - gptq - pytorch - internlm3 - internlm3-8b model_name: Internlm 3 8b instruct base_model: - internlm/internlm3-8b-instruct inference: false model_creator: internlm prompt_template: '{prompt}' quantized_by: fbaldassarri --- ## Model Information Quantized version of [internlm/internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) using torch.float32 for quantization tuning. - 4 bits (INT4) - group size = 128 - Asymmetrical Quantization - Method WoQ: SignRound (AutoRound algorithm) Fast and low memory, 2-3X speedup (slight accuracy drop at W4G128) Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.7 Note: this INT4 version of internlm3-8b-instruct has been quantized to run inference on CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.7.tar.gz tar -xvzf v0.4.7.tar.gz cd auto-round-0.4.7 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "internlm/internlm3-8b-instruct" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 4, 128, False, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/internlm_internlm3-8b-instruct-autoround-int4-gs128-asym" autoround.save_quantized(output_dir, format='auto_round', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
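The recipe above ends once the quantized checkpoint is saved. As a rough sketch of how the INT4 checkpoint could be loaded back for CPU inference (assuming the `auto_round` package is installed so that importing `AutoRoundConfig` registers the `auto_round` format with transformers; the prompt and generation settings are arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRoundConfig  # noqa: F401 -- importing registers the auto_round format (assumption)

# Directory produced by the quantization script above.
quantized_dir = "./AutoRound/internlm_internlm3-8b-instruct-autoround-int4-gs128-asym"

model = AutoModelForCausalLM.from_pretrained(quantized_dir, device_map="cpu", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(quantized_dir, trust_remote_code=True)

prompt = "Briefly introduce the InternLM3 model."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```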
bertbert123/dev-fine-tuned-bert-multi-imdb
bertbert123
2025-04-30T18:53:44Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T18:53:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Yuhan123/ppo-reading-level-7th-1-steps-10000-epoch-999-best-eval-score-0.305
Yuhan123
2025-04-30T18:50:59Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T18:48:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DanielSc4/xlmr-large-classifier-around_the_world_in_eighty_days_fr_tra1-eng
DanielSc4
2025-04-30T18:27:21Z
2
0
null
[ "safetensors", "xlm-roberta", "text-classification", "eng", "license:apache-2.0", "region:us" ]
text-classification
2025-04-24T22:53:05Z
--- language: - eng license: apache-2.0 tags: - text-classification pipeline_tag: text-classification --- # xlmr-large-classifier-around_the_world_in_eighty_days_fr_tra1-eng - MT/HT Classifier This model is a fine-tuned version of [`FacebookAI/xlm-roberta-large`](https://huggingface.co/FacebookAI/xlm-roberta-large) for distinguishing between Machine Translated (MT) and Human Translated (HT) text (or HT1 and HT2 if using two different human translators). Training data: * Train: 1658, for each label: 829 * Validation: 184, for each label: 92 * Test: 240, for each label: 120 Results on the held-out test set: * Accuracy: 0.9083 * F1-Score: 0.9120 * Precision: 0.8769 * Recall: 0.9500 ## Label mapping Label MT: 0 Label PE: 1 (this is the human translator) ## Info Upload date: 2025-04-30 00:00 ## Usage ```python import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("DanielSc4/xlmr-large-classifier-around_the_world_in_eighty_days_fr_tra1-eng") model = AutoModelForSequenceClassification.from_pretrained("DanielSc4/xlmr-large-classifier-around_the_world_in_eighty_days_fr_tra1-eng") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") inp = tokenizer('This is a test', return_tensors='pt').to(device) model = model.to(device) out = model(**inp) logits = out.logits probs = logits.softmax(dim=-1) pred = probs.argmax(dim=-1).item() print("Predicted class: " + str(pred)) # 0 for MT, 1 for PE ```
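For scoring many sentences at once, a small batched variant of the usage snippet (a sketch, not from the original card; the example sentences are arbitrary) could look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "DanielSc4/xlmr-large-classifier-around_the_world_in_eighty_days_fr_tra1-eng"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()

sentences = ["This is a test.", "Another sentence to classify."]  # illustrative inputs
with torch.no_grad():
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt").to(device)
    preds = model(**batch).logits.softmax(dim=-1).argmax(dim=-1).tolist()

for text, pred in zip(sentences, preds):
    print(f"{pred}\t{text}")  # 0 for MT, 1 for PE (the human translator)
```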
Yuhan123/ppo-synthetic-one-language-after-sft-lr-1e-6-2025-04-02-18-43-52
Yuhan123
2025-04-30T18:25:43Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T18:22:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GIGAP/Support_Chatbot
GIGAP
2025-04-30T18:24:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T18:24:31Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** GIGAP - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
facebook/PE-Lang-L14-448
facebook
2025-04-30T18:24:43Z
333
5
perception-encoder
[ "perception-encoder", "image-feature-extraction", "arxiv:2504.13181", "arxiv:2504.13180", "license:apache-2.0", "region:us" ]
image-feature-extraction
2025-04-11T18:30:21Z
--- license: apache-2.0 library_name: perception-encoder pipeline_tag: image-feature-extraction --- # Model Details [\[📃 Tech Report\]](https://arxiv.org/abs/2504.13181) [\[📂 Github\]](https://github.com/facebookresearch/perception_models/) Perception Encoder (PE) is a state-of-the-art encoder for image and video understanding trained via simple vision-language learning. It was introduced in "[Perception Encoder: The best visual embeddings are not at the output of the network](https://ai.meta.com/research/publications/perception-encoder-the-best-visual-embeddings-are-not-at-the-output-of-the-network/)". **Model Developer**: Meta **Model Overview**: Perception Encoder (PE) is a family of large-scale vision encoder models with state-of-the-art performance on a large variety of vision tasks. By using a robust contrastive pretraining recipe and finetuning on synthetically aligned videos, PE not only outperforms all existing models on classification and retrieval, but it also internally produces strong, general features that scale for downstream tasks. PE unlocks the ability for large-scale contrastive pretraining to transfer to downstream tasks with alignment tuning to capitalize on those general features. <img src="https://huggingface.co/facebook/PE-Core-G14-448/resolve/main/docs/pe_image1.png" style="width: 100%; margin: 0 auto; display: block;" /> ## Perception Encoder: Language PE lang takes the strong language performance from the intermediate layers of PE core and further aligns it for language modeling following [PLM](https://huggingface.co/papers/2504.13180). We specifically tuned PE lang to be versatile for any multimodal language modeling use case, including using different language model decoders (e.g., Llama / Qwen) and different eval settings (e.g., native res / tiling). PE lang performs particularly well on OCR and document tasks. We release two PE Lang checkpoints, L14-448 and G14-448. Here are their results in our benchmark setting with a frozen encoder and a 2.6M SFT datamix, using 448px _only_ (i.e., _with no tiling_) and Llama 3.1 8B as the decoder: | Encoder | Checkpoint | Doc VQA (val) | InfoQA (val) | TextVQA | MVBench | PerceptionTest (val) | EgoSchema (val) | |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | **L/14** 448px | [PE-Lang-L14-448](https://huggingface.co/facebook/PE-Lang-L14-448) | 81.9 | 46.4 | 73.0 | 52.3 | 54.7 | 59.8 | | **G/14** 448px | [PE-Lang-G14-448](https://huggingface.co/facebook/PE-Lang-G14-448) | 84.4 | 48.3 | 75.2 | 52.4 | 56.0 | 62.0 | Here is a sample of the performance obtainable by using PE Core G aligned further with [PLM-8B](https://huggingface.co/facebook/Perception-LM-8B) (*stage 3*), using 36+1 image tiles / 32 video frames with Llama 3.1 8B as the decoder: | Model | Encoder | Doc VQA (test) | InfoQA (test) | TextVQA | MVBench | PerceptionTest (test) | EgoSchema (test) | |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | PLM-8B | [PE-Core-G14-448](https://huggingface.co/facebook/PE-Core-G14-448)* | 94.6 | 78.8 | 86.5 | 77.1 | 82.7 | 68.8 | \* The PE-Core-G14-448 checkpoint was further trained using tiling. We will release the tiling-aligned checkpoint soon. See the paper for full performance evaluations and fair comparisons to other models. # How to use ## Model loading code We provide the model loading code in https://github.com/facebookresearch/perception_models. You can find more details in the GitHub repo.
# Citation If you find our code useful for your research, please consider citing: ```bibtex @article{bolya2025PerceptionEncoder, title={Perception Encoder: The best visual embeddings are not at the output of the network}, author={Daniel Bolya and Po-Yao Huang and Peize Sun and Jang Hyun Cho and Andrea Madotto and Chen Wei and Tengyu Ma and Jiale Zhi and Jathushan Rajasegaran and Hanoona Rasheed and Junke Wang and Marco Monteiro and Hu Xu and Shiyu Dong and Nikhila Ravi and Daniel Li and Piotr Doll{\'a}r and Christoph Feichtenhofer}, journal={arXiv}, year={2025} } @article{cho2025PerceptionLM, title={PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding}, author={Jang Hyun Cho and Andrea Madotto and Effrosyni Mavroudi and Triantafyllos Afouras and Tushar Nagarajan and Muhammad Maaz and Yale Song and Tengyu Ma and Shuming Hu and Hanoona Rasheed and Peize Sun and Po-Yao Huang and Daniel Bolya and Suyog Jain and Miguel Martin and Huiyu Wang and Nikhila Ravi and Shashank Jain and Temmy Stark and Shane Moon and Babak Damavandi and Vivian Lee and Andrew Westbury and Salman Khan and Philipp Kr\"{a}henb\"{u}hl and Piotr Doll{\'a}r and Lorenzo Torresani and Kristen Grauman and Christoph Feichtenhofer}, journal={arXiv}, year={2025} } ```
joboffer/5a30f583-bd78-4afa-9468-3931e1da61ad
joboffer
2025-04-30T18:23:21Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B", "base_model:adapter:unsloth/Qwen2-0.5B", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-30T18:21:23Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-0.5B tags: - axolotl - generated_from_trainer model-index: - name: 5a30f583-bd78-4afa-9468-3931e1da61ad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-0.5B bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ae2301f683a72bef_train_data.json ds_type: json format: custom path: /workspace/input_data/ae2301f683a72bef_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: joboffer/5a30f583-bd78-4afa-9468-3931e1da61ad hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/ae2301f683a72bef_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 105e2fb7-0905-4d8a-a1f4-ede38149131f wandb_project: s56-33 wandb_run: your_name wandb_runid: 105e2fb7-0905-4d8a-a1f4-ede38149131f warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 5a30f583-bd78-4afa-9468-3931e1da61ad This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.2048 | 0.0332 | 200 | 2.0326 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ahmadrix333/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise
ahmadrix333
2025-04-30T18:21:27Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tenacious reptilian porpoise", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T13:47:53Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tenacious reptilian porpoise - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ahmadrix333/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
anilyanamandra/llama381binstruct_summarize_short_merged
anilyanamandra
2025-04-30T18:21:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-30T18:11:15Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vijay-ravichander/Smol-Pairwise-Distill-20k
vijay-ravichander
2025-04-30T18:20:30Z
0
0
transformers
[ "transformers", "safetensors", "idefics3", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T10:29:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
unsloth/GLM-4-32B-0414-unsloth-bnb-4bit
unsloth
2025-04-30T18:19:37Z
129
2
transformers
[ "transformers", "safetensors", "glm4", "text-generation", "unsloth", "conversational", "zh", "en", "base_model:THUDM/GLM-4-32B-0414", "base_model:quantized:THUDM/GLM-4-32B-0414", "license:mit", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-25T10:23:38Z
--- tags: - unsloth base_model: - THUDM/GLM-4-32B-0414 license: mit language: - zh - en pipeline_tag: text-generation library_name: transformers --- # GLM-4-32B-0414 ## Introduction The GLM family welcomes a new member: the **GLM-4-32B-0414** series of models, featuring 32 billion parameters. Its performance is comparable to OpenAI’s GPT series and DeepSeek’s V3/R1 series, and it supports user-friendly local deployment. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including substantial reasoning-type synthetic data, which lays the foundation for subsequent reinforcement learning extensions. In the post-training stage, we employed human preference alignment for dialogue scenarios. Additionally, using techniques such as rejection sampling and reinforcement learning, we enhanced the model’s performance in instruction following, engineering code, and function calling, thus strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in engineering code, Artifact generation, function calling, search-based Q&A, and report generation. In particular, on several benchmarks, such as code generation or specific Q&A tasks, GLM-4-32B-Base-0414 achieves performance comparable to larger models like GPT-4o and DeepSeek-V3-0324 (671B). **GLM-Z1-32B-0414** is a reasoning model with deep thinking capabilities. It was developed from GLM-4-32B-0414 through cold start, extended reinforcement learning, and further training on tasks including mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During training, we also introduced general reinforcement learning based on pairwise ranking feedback, which enhances the model's general capabilities. **GLM-Z1-Rumination-32B-0414** is a deep reasoning model with rumination capabilities (positioned against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model engages in deeper and longer thinking to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). Z1-Rumination is trained by scaling end-to-end reinforcement learning with responses graded against ground-truth answers or rubrics, and it can make use of search tools during its deep thinking process to handle complex tasks. The model shows significant improvements in research-style writing and complex tasks. Finally, **GLM-Z1-9B-0414** is a surprise. We employed all the aforementioned techniques to train a small model (9B). GLM-Z1-9B-0414 exhibits excellent capabilities in mathematical reasoning and general tasks, and its overall performance is top-ranked among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
## Showcase ### Animation Generation <table> <tr> <td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;"> GLM-Z1-32B-0414 </td> <td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;"> GLM-4-32B-0414 </td> </tr> <tr> <td style="vertical-align: top; padding: 10px; width: 420px;"> <video src="https://github.com/user-attachments/assets/849ff9fd-b54d-4c74-9ee5-3412e1a09e32" style="width: 400px; height: 300px; object-fit: contain;" autoplay loop muted playsinline></video> <div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> write a Python program that shows a ball bouncing inside a spinning hexagon. The ball should be affected by gravity and friction, and it must bounce off the rotating walls realistically </div> </td> <td style="vertical-align: top; padding: 10px; width: 420px;"> <video src="https://github.com/user-attachments/assets/8dccdb9d-cc44-4732-b438-74a4e3cb9dfb" style="width: 400px; height: 300px; object-fit: contain;" autoplay loop muted playsinline></video> <div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> Use HTML to simulate the scenario of a small ball released from the center of a rotating hexagon. Consider the collision between the ball and the hexagon's edges, the gravity acting on the ball, and assume all collisions are perfectly elastic. (Prompt translated from Chinese) </div> </td> </tr> </table> ### Web Design <table> <tr> <td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;"> GLM-4-32B-0414 </td> <td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;"> GLM-4-32B-0414 </td> </tr> <tr> <td style="vertical-align: top; padding: 10px; width: 420px;"> <img src="https://github.com/user-attachments/assets/bd9c1fc1-c784-4e8f-9c76-5f7389a715f1"/> <div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> Design a drawing board that supports custom function plotting, allowing adding and deleting custom functions, and assigning colors to functions. (Prompt translated from Chinese) </div> </td> <td style="vertical-align: top; padding: 10px; width: 420px;"> <img src="https://github.com/user-attachments/assets/7ad12d52-9229-4278-8d1b-ffbf43e99070"/> <div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> Design a UI for a mobile machine learning platform, which should include interfaces for training tasks, storage management, and personal statistics. The personal statistics interface should use charts to display the user's resource usage over a period. Use Tailwind CSS to style the page, and display these 3 mobile interfaces tiled on a single HTML page. (Prompt translated from Chinese) </div> </td> </tr> </table> ### SVG Generation <table> <tr> <td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;"> GLM-4-32B-0414 </td> <td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;"> GLM-4-32B-0414 </td> </tr> <tr> <td style="vertical-align: top; padding: 10px; width: 420px;"> <img src="https://github.com/user-attachments/assets/9407e4c1-1876-4ab5-838c-839836fb418a"/> <div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> Create a misty Jiangnan scene using SVG. 
(Prompt translated from Chinese) </div> </td> <td style="vertical-align: top; padding: 10px; width: 420px;"> <img src="https://github.com/user-attachments/assets/bcce8c5a-cedf-45c8-b666-ddb023d5b49c"/> <div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> Use SVG to illustrate the training process of an LLM. (Prompt translated from Chinese) </div> </td> </tr> </table> ### Search-Based Writing For search-based writing tasks, we use the following system prompt to have the model respond based on search results: ``` 请根据所给搜索返回结果对用户问题进行作答。 ## 注意 1. 充分利用和整理收集到的信息,而不是简单的复制粘贴,生成符合用户要求且有深度的专业答案。 2. 所提供信息充分的情况下,你的回答需尽可能延长,从用户意图角度出发,提供具有足够信息量和多角度的回复。 3. 另外,并非所有的搜索结果都与用户问题密切相关,请仔细的甄别、筛选和利用。 4. 客观类问答的答案通常非常简短,你可以适当补充一到两句相关信息,以丰富内容。 5. 请确保你的回复格式美观、可读性强。对于多实体对比或列举,善用列表格式来帮助用户更好的理解信息。 6. 除非用户要求,否则你回答的语言请于用户提问语言保持一致。 7. 在适当情况下在句子末尾使用例如:【0†source】的格式引用搜索结果。 ``` When using, you can obtain search results through methods such as `RAG` or `WebSearch`, and wrap them in `observation`, for example: ```json [ { "role": "user", "content": "Explore the common characteristics of children's literature, with a focus on its narrative techniques and thematic tendencies. This includes narrative techniques: common approaches in children's literature such as first-person, third-person, omniscient narrator, and interactive narration, and their influence on young readers. It also includes thematic tendencies: recurring themes in children's literature such as growth, adventure, friendship, and family, with an analysis of how these themes impact children's cognitive and emotional development. Additionally, other universal features such as the use of personification, repetitive language, symbolism and metaphor, and educational value should be considered. Please provide a detailed analytical report based on academic research, classic examples of children's literature, and expert opinions." }, { "role": "observation", "content": "【{id}†{title}†{url}】\n{content}" }, ... ] ``` For the above prompt, we use an internal or external search model to obtain the search results. Using the format shown above, we can generate the following analysis report: <div style="height: 400px; width: 100%; overflow: auto; border: 5px solid #ddd; padding: 20px;"> # Analysis Report on Common Characteristics of Children's Literature Children's literature, as a literary genre specifically created for children, possesses unique artistic features and expressive techniques. This report will comprehensively analyze the common characteristics of children's literature from three dimensions: narrative methods, thematic tendencies, and other universal features, integrating academic research, classic examples of children's literature, and expert viewpoints. ## Narrative Methods and Their Impact on Child Readers The narrative methods of children's literature exhibit diverse characteristics, including first-person, third-person, narration, and interactive storytelling. These different narrative perspectives and methods profoundly influence children's reading experiences and cognitive development. **First-person narration** is common in children's literature, unfolding the story from the perspective of a child protagonist. This narrative style bridges the gap between the reader and the story, making it easier for child readers to identify with the characters and develop emotional resonance. 
For example, many coming-of-age novels use first-person narration, allowing the protagonist to directly recount their experiences and feelings, making readers feel as though they are experiencing the joys and sorrows of growth alongside the protagonist. This narrative perspective lends the work a strong sense of authenticity and intimacy, helping to cultivate children's empathy【1†source】. **Third-person narration** offers a broader perspective, allowing the author to flexibly switch between different characters' viewpoints and present richer layers of the story. In children's literature, third-person omniscient narration enables the author to control the narrative pace, revealing or concealing information as needed to guide children's attention. At the same time, third-person narration facilitates direct dialogue between the author and the reader, conveying values or explaining complex concepts through narration. This narrative method positively influences children's macro-thinking and comprehensive understanding【1†source】. **Narration (authorial intrusion)** is a unique narrative technique in children's literature, where the author directly appears as the "storyteller," explaining the background, commenting on characters, or posing questions to the reader. This technique is particularly common in classic fairy tales, such as the opening lines of *Andersen's Fairy Tales*: "Once, there was a child..." Narration helps children understand the story's context, fills cognitive gaps, and conveys the author's educational intent. Research shows that appropriate authorial intrusion aids children in grasping the story's structure and improving reading comprehension【5†source】. **Interactive storytelling** is a new trend in contemporary children's literature, especially prominent in the digital media era. Interactive storytelling breaks the traditional unidirectional author-reader relationship, encouraging child readers to participate in the story's creation, such as by choosing plot directions, character dialogues, or endings. This participatory reading enhances children's sense of agency and fosters decision-making skills and creative thinking. For example, some children's reading apps incorporate interactive elements, allowing children to influence the story's development through clicks, drag-and-drop actions, and other operations, thereby gaining a stronger sense of immersion and achievement【6†source】. Interactive storytelling transforms children from passive information recipients into active meaning-makers, uniquely contributing to the development of their subjectivity. 
*Table: Common Narrative Methods in Children's Literature and Their Effects* | **Narrative Method** | **Characteristics** | **Impact on Child Readers** | **Classic Examples** | |----------------------|--------------------|----------------------------|---------------------| | **First-Person** | Told from the child protagonist's perspective | Enhances immersion, fosters empathy | *Charlotte's Web*, *The Straw House* | | **Third-Person** | Omniscient or limited perspective | Expands horizons, develops comprehensive understanding | *Harry Potter* series | | **Narration** | Direct authorial intrusion into the narrative | Aids comprehension, conveys values | *Andersen's Fairy Tales* | | **Interactive** | Encourages reader participation in creation | Cultivates agency and creative thinking | Children's interactive reading apps | Notably, the narrative methods of children's literature are often closely intertwined with the **childhood perspective**. The childhood perspective does not necessarily mean the narrator must be a child but refers to the work's ability to describe the world to the greatest extent from a child's heart, expressing their inner psychology and external circumstances【2†source】. Through the childhood perspective, readers can embark on a spiritual journey with a child's mindset, a narrative strategy that creates a strong sense of realism, allowing child readers to achieve emotional identification and cognitive resonance during the reading process【1†source】. The use of the childhood perspective gives the work's language a perceptual and naive quality, often with a prose-like and spatial structure, artistic features that align with children's cognitive characteristics and aid their acceptance and understanding【2†source】. ## Thematic Tendencies and Their Impact on Children's Cognitive and Emotional Development The thematic choices in children's literature exhibit distinct tendencies, with common themes including growth, adventure, friendship, and family. These themes not only form the core content of children's literature but also subtly influence children's cognitive development and emotional shaping. **The theme of growth** is one of the central motifs in children's literature. Growth narratives are regarded as the artistic lifeblood of children's literature, focusing on depicting the pivotal moments of rapid psychological development in children, particularly the awakening and establishment of self-awareness【3†source】. Growth literature typically includes three elements: an artistic portrayal of the self-awareness construction process in growing adolescents, a developmental story with logical propulsion, and the presentation of the protagonist's spiritual trials and quest for direction【3†source】. By reading growth-themed works, child readers can indirectly experience the confusion and breakthroughs of growing up and understand the formation of self-identity. Classics such as Astrid Lindgren's *Pippi Longstocking* and Cao Wenxuan's *The Straw House* vividly depict children's psychological growth trajectories in specific environments. Research indicates that growth-themed literary works help children build a positive self-concept and develop the courage and resilience to face challenges, positively contributing to their psychological development【9†source】. **The theme of adventure** holds an important place in children's literature, satisfying children's curiosity about exploring the unknown. 
Adventure stories often feature unusual settings and unknown challenges, with the protagonist growing through overcoming difficulties. Classics like *Robinson Crusoe* and *The Adventures of Tom Sawyer* attract child readers with thrilling plots while conveying the importance of qualities such as courage, wisdom, and perseverance. The impact of adventure themes on children's cognitive development mainly lies in expanding their imaginative space and fostering problem-solving skills. In adventure stories, children must analyze situations, make plans, and respond to unexpected events alongside the protagonist, a process that exercises their logical thinking and adaptability【14†source】. At the same time, the unfamiliar environments and novel experiences in adventure stories stimulate children's curiosity and desire to learn, laying the foundation for cultivating an exploratory spirit. As experts point out, excellent children's literature should be grounded in reality, rich in depth, and generate significant inspiration and appeal, guiding children to comprehensively understand the world【14†source】.

**The theme of friendship** is equally prevalent in children's literature, reflecting children's emphasis on peer relationships. Friendship and love are regarded as humanity's most precious qualities, often depicted in children's literature as beacons in the night, guiding children toward the future【9†source】. Friendship stories typically revolve around interactions between children, portraying positive behaviors such as sharing, cooperation, and understanding. Examples include the genuine friendships among the children at Tomoe Gakuen in *Totto-Chan: The Little Girl at the Window* and the promise and mutual aid between Wilbur and Charlotte in *Charlotte's Web*. These stories help child readers recognize the value of friendship and learn how to build and maintain interpersonal relationships. Research shows that children need peer support during their growth, as friends provide crucial emotional anchors, offering the greatest emotional support and comfort in unfamiliar environments【16†source】. By reading friendship-themed works, children can learn social skills, develop empathy, and cultivate a spirit of cooperation, qualities essential for their social development【17†source】.

**The theme of family** is an indispensable subject in children's literature, depicting the emotional bonds and interaction patterns among family members. As the primary setting for children's earliest socialization, the family atmosphere and parenting styles profoundly impact children's mental health【10†source】. Family stories in children's literature often focus on parent-child relationships, sibling bonds, and other dynamics, such as Alice's relationship with her sister in *Alice's Adventures in Wonderland* and the Little Prince's interactions with the rose in *The Little Prince*. These stories help children understand the responsibilities and expectations of family roles and learn to handle conflicts within the family. Research indicates that a positive family atmosphere and parental support promote the development of children's positive psychological traits, while adverse family environments and parenting behaviors negatively affect their mental health【10†source】【11†source】. By reading family-themed works, children can gain emotional support, learn skills for managing family relationships, and establish healthy family values.
*Table: Common Themes in Children's Literature and Their Impact on Child Development* | **Theme Type** | **Content Representation** | **Impact on Cognitive Development** | **Impact on Emotional Development** | **Classic Examples** | |---------------|---------------------------|-------------------------------------|-------------------------------------|---------------------| | **Growth** | Awakening of self-awareness, psychological trials and breakthroughs | Establishes self-concept, fosters problem-solving skills | Shapes positive self-identity, enhances psychological resilience | *The Straw House*, *Pippi Longstocking* | | **Adventure** | Exploring the unknown, overcoming challenges | Expands imaginative space, exercises logical thinking | Cultivates courage and perseverance | *Robinson Crusoe*, *The Adventures of Tom Sawyer* | | **Friendship** | Peer interactions, mutual aid and cooperation | Learns social skills, understands interpersonal dynamics | Develops empathy, builds a sense of belonging | *Charlotte's Web*, *Totto-Chan: The Little Girl at the Window* | | **Family** | Parent-child relationships, sibling bonds | Understands social roles, learns communication skills | Gains emotional support, establishes secure attachments | *Alice's Adventures in Wonderland*, *The Little Prince* | Regarding thematic choices, children's literature researcher Zhu Ziqiang proposed the famous "Three Major Motifs" theory, categorizing children's literary works into "the motif of love," "the motif of the mischievous child," and "the motif of nature"【8†source】. The motif of love focuses on emotional connections between children and adults or peers; the motif of the mischievous child portrays children's free-spirited nature; and the motif of nature emphasizes the harmonious relationship between children and the natural environment. These three motifs reflect the richness of the children's world from different angles, providing diverse emotional experiences and cognitive frameworks for children. Notably, these themes do not exist in isolation; outstanding works often organically integrate multiple themes. For example, the *Harry Potter* series incorporates growth, friendship, adventure, and family elements, presenting child readers with a multidimensional spiritual world. ## Other Universal Features and Their Artistic Expression In addition to narrative methods and thematic tendencies, children's literature exhibits a series of universal artistic features, including anthropomorphism, repetitive language, symbolism and metaphor, and educational significance. These features collectively constitute the unique aesthetic style of children's literature, subtly influencing children's cognitive development and aesthetic cultivation. **Anthropomorphism** is one of the most distinctive artistic features of children's literature. In children's literary works, animals, plants, and even inanimate objects are often endowed with human thoughts, emotions, and behaviors, greatly enhancing the story's fun and imagination. Research shows that anthropomorphism is a frequently used technique by children's literature creators to attribute human characteristics to animals, enabling them to possess perception and communication abilities【19†source】. Through anthropomorphism, children can more easily understand abstract concepts and moral principles, as anthropomorphic characters translate complex ideas into familiar emotional and behavioral patterns. 
For example, in scientific fairy tales, anthropomorphic characters can help explain scientific principles, making abstract concepts tangible【18†source】. Anthropomorphism not only enriches the narrative techniques of children's literature but also provides children with a unique perspective for understanding the relationship between humans and nature. It is worth noting that excessive anthropomorphism may affect children's accurate understanding of the animal world, so modern children's literature pays more attention to balancing the natural attributes of characters with human characteristics when employing anthropomorphic techniques【19†source】. **Repetitive language** is extremely common in children's literature, a linguistic feature rooted in oral traditions originally intended to aid memory and dissemination【20†source】. In children's literature, the repetitive use of words, phrases, or sentences serves multiple functions: constructing the story's framework, emphasizing key information, creating rhythm and musicality, and training children's vocabulary skills. For example, in *The Very Hungry Caterpillar*, the author repeatedly uses phrases like "On Monday, he ate one apple. On Tuesday, he ate two pears..." This not only builds the story's structure but also helps children learn numbers and days of the week. Repetitive structures also aid children in developing an awareness of language patterns during the early stages of language acquisition, fostering a sense of language and memory skills【21†source】. Research indicates that repetitive language in children's literature promotes children's language acquisition, helping them master vocabulary and syntactic rules. At the same time, this linguistic feature enhances the story's participatory nature, as children can often join in reciting the repetitive parts, gaining a sense of achievement. **Symbolism and metaphor** are common expressive techniques in children's literature, conveying abstract meanings through concrete imagery. Symbolism uses specific objects to represent abstract concepts or emotions, while metaphor connects two different things through comparison, creating new meanings. In children's literature, symbolism and metaphor are usually presented in a simple and clear manner, avoiding overly complex interpretations. For example, the character configurations and metaphorical connotations in *The Wizard of Oz* are thought-provoking, as these characters not only breathe life into the story but also convey profound life philosophies through their symbolic meanings【24†source】. Symbolism and metaphor in children's literature are often related to themes such as growth, friendship, and courage, helping children understand abstract concepts through concrete and figurative expressions. Research shows that appropriate metaphors can promote children's cognitive development, stimulating their imagination and creativity【23†source】. As children grow older, their ability to understand symbolism and metaphor gradually improves, providing children's literature with multi-layered meaning spaces. **Educational significance** is an indispensable component of children's literature, which inherently carries the gene of children's education【22†source】. 
Excellent children's literary works simultaneously possess entertainment and educational functions, not only helping children understand the objective world, enrich their inner emotions, and acquire life wisdom but also cultivating their perception, aesthetic sensibility, thinking skills, and creativity【15†source】. Educational significance in children's literature is often not directly presented through preaching but naturally revealed through the storyline and characters' fates. For example, many classic fairy tales convey the importance of qualities such as bravery and honesty through the protagonist's adventurous experiences, while popular science books introduce scientific knowledge through interesting plots and characters. Experts point out that children's literature writers should shoulder the important responsibility of education, incorporating care for children's mental growth into their works【22†source】. It is worth noting that the educational significance of children's literature should respect children's receptive abilities, avoiding excessive preaching or moral indoctrination, and instead naturally influencing children's values and behaviors through artistic appeal.

**Storytelling** is the most basic and essential feature of children's literature. Children's perceptual, imagery-driven, and novelty-seeking cognitive characteristics and receptive psychology further determine that "storytelling" is an indispensable ontological feature of children's literature【25†source】. Engaging plots are the most crucial aspect of children's literary works because, compared to adults, children's understanding of things relies mainly on intuition, and plots play a key role in guiding children's comprehension of stories【26†source】. The storytelling quality of children's literature is reflected in multiple aspects: clear cause-and-effect relationships, compact narrative rhythm, and satisfying endings. These elements work together to immerse children in the story world, providing emotional satisfaction and cognitive inspiration. As researchers have noted, plots must be performed by specific characters in specific situations to convey individual experiences in unique space-time environments【7†source】. In children's literature, storytelling is not merely an artistic technique but a bridge connecting children to the world. Through stories, children can safely experience various life scenarios and learn methods for coping with challenges.

In terms of **language features**, children's literature typically adopts a concise, clear, and vivid language style, avoiding complex sentence structures and abstract vocabulary. This linguistic characteristic aligns with children's cognitive development levels, facilitating their understanding and acceptance. At the same time, the language of children's literature is often rich in rhythm and musicality, enhancing readability and memorability through techniques such as rhyming and repetition. For example, Michael Rosen's children's literary works extensively employ repetitive structures and rhymes, a language usage that helps children develop an awareness of language patterns during the early stages of language acquisition【21†source】. The language of children's literature also often includes rich sensory descriptions and emotional expressions, stimulating children's imagination through concrete and tangible imagery.
Scholar Jay Davis's research shows that the interactive use of language in children's literature can influence children's language habits and promote their language development【21†source】. In summary, these universal features of children's literature collectively constitute its unique artistic charm and educational value. Anthropomorphism and symbolism expand children's imaginative spaces, repetitive language and storytelling promote language acquisition and cognitive development, and the natural integration of educational significance achieves the artistic effect of "teaching through entertainment." These features do not exist in isolation but are interwoven and organically unified, collectively serving the comprehensive development of child readers. ## Conclusion Through a systematic analysis of the narrative methods, thematic tendencies, and other universal features of children's literature, we can draw the following conclusions: As a special literary genre, the creation and reception of children's literature follow unique rules. In terms of narrative methods, children's literature flexibly employs various techniques such as first-person, third-person, narration, and interactive storytelling to adapt to children's cognitive characteristics and receptive psychology. Among these, the use of the childhood perspective is particularly important, as it enhances the work's sense of realism and intimacy, enabling child readers to develop emotional resonance【1†source】【2†source】. In terms of thematic choices, growth, adventure, friendship, and family constitute the main content of children's literature. These themes not only satisfy children's curiosity and desire to explore but also subtly influence their cognitive development and emotional shaping【3†source】【9†source】. Other universal features such as anthropomorphism, repetitive language, symbolism, and educational significance collectively form the unique artistic style and educational value of children's literature【18†source】【20†source】【24†source】. These characteristics of children's literature do not exist in isolation but are interconnected and organically unified. For example, adventure themes are often combined with third-person omniscient narration to attract child readers through compact plots and vivid descriptions; friendship themes frequently employ first-person narration to enhance emotional resonance; and anthropomorphism is commonly found in nature-themed works, helping children understand the relationship between humans and nature. These features collectively serve the comprehensive development of child readers, meeting their entertainment needs while promoting their cognitive growth and emotional maturity. From an academic research perspective, children's literature studies should emphasize the application of narrative theory, as narrative theory focuses more on the "how" of storytelling—narrative form—which aligns closely with the research focus of children's literature【0†source】. At the same time, cognitive research methods provide new perspectives for children's literature studies. By combining cognitive science with literary theory, we can gain a deeper understanding of how children's literature influences children's thinking and cognitive development【4†source】. 
Future research should continue to explore the application of these theoretical methods in children's literature studies while paying attention to the intersection and integration of children's literature with emerging fields such as digital media and interdisciplinary education.

From a creative practice perspective, children's literature writers should fully grasp children's cognitive characteristics and emotional needs, incorporating care for children's growth and educational wisdom into their works. As experts have pointed out, excellent children's literary works should be grounded in reality, rich in depth, and generate significant inspiration and appeal, guiding children to comprehensively understand the world and correctly recognize themselves and society【14†source】. At the same time, children's literature creation should keep pace with the times, addressing new problems and challenges faced by contemporary children, such as media literacy in the digital age and identity formation in multicultural contexts, to provide targeted spiritual nourishment for children.

From an educational application perspective, children's literature should fully leverage its unique role in children's mental growth. Through carefully designed reading activities, teachers and parents can help children deeply understand the themes and meanings in works, guiding them to connect reading experiences with real life. Research shows that children's literature plays an increasingly important role in language education, the construction of a reading society, and children's mental growth【22†source】. Therefore, children's literature should be incorporated as an important component of school and family education, promoting children's cognitive development and emotional maturity through activities such as reading sharing, role-playing, and creative writing.

In summary, as a unique art form and educational medium, the common characteristics of children's literature constitute an organic whole, collectively serving the comprehensive development of child readers. By deeply understanding these features and their mechanisms of influence, we can better create, research, and apply children's literature, providing high-quality spiritual nourishment for children's healthy growth. Future children's literature research should continue to deepen theoretical exploration, expand research methods, and strengthen interdisciplinary collaboration to address the ever-changing needs of children and the challenges of the times, promoting the continuous development of children's literature.
</div>

### Function Call

GLM-4-32B-0414 supports calling external tools in JSON format. This can be done via HuggingFace Transformers, vLLM, or SGLang. The message format for tool calling is as follows:

```json
{
    "role": "assistant",
    "metadata": function_name,
    "content": json.dumps(call_arguments, ensure_ascii=False)
}
```

The message format for tool execution results is as follows:

```json
{
    "role": "observation",
    "content": json.dumps(tool_response, ensure_ascii=False) if not isinstance(tool_response, str) else tool_response
}
```

The following example demonstrates the process of GLM-4-32B-0414 calling a tool and generating a final response using HuggingFace Transformers.
```python
import json
import re
import ast

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "THUDM/GLM-4-32B-0414"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

def is_function_call(single_message):
    """Determine whether a generated assistant message is a function call."""
    pattern = re.compile(r'([^\n`]*?)\n({.*?})(?=\w*\n|$)', re.DOTALL)
    matches = pattern.findall(single_message)
    if not matches:
        return False

    func_name, args_str = matches[0]
    func_name = func_name.strip()
    try:
        parsed_args = json.loads(args_str)
    except json.JSONDecodeError:
        try:
            parsed_args = ast.literal_eval(args_str)
        except:
            return False

    return {"name": func_name, "arguments": parsed_args}

def realtime_aqi(city):
    """Weather Query Tool"""
    if '北京' in city.lower():
        return json.dumps({'city': '北京', 'aqi': '10', 'unit': 'celsius'}, ensure_ascii=False)
    elif '上海' in city.lower():
        return json.dumps({'city': '上海', 'aqi': '72', 'unit': 'fahrenheit'}, ensure_ascii=False)
    else:
        return json.dumps({'city': city, 'aqi': 'unknown'}, ensure_ascii=False)

def build_system_prompt(tools):
    """Construct system prompt based on the list of available tools."""
    if tools is None:
        tools = []
    value = "# 可用工具"  # "# Available tools"
    contents = []
    for tool in tools:
        content = f"\n\n## {tool['function']['name']}\n\n{json.dumps(tool['function'], ensure_ascii=False, indent=4)}"
        content += "\n在调用上述函数时,请使用 Json 格式表示调用的参数。"  # "When calling the above function, express the arguments in JSON format."
        contents.append(content)
    value += "".join(contents)
    return value

tools = [
    {
        "type": "function",
        "function": {
            "name": "realtime_aqi",
            "description": "天气预报。获取实时空气质量。当前空气质量,PM2.5,PM10信息",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "description": "城市名"
                    }
                },
                "required": [
                    "city"
                ]
            }
        }
    }
]

system_prompt = build_system_prompt(tools)

message = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "北京和上海今天的天气情况"}  # "Today's weather in Beijing and Shanghai"
]
print(f"User Message: {message[-1]['content']}")

while True:
    inputs = tokenizer.apply_chat_template(
        message,
        return_tensors="pt",
        add_generation_prompt=True,
        return_dict=True,
    ).to(model.device)

    generate_kwargs = {
        "input_ids": inputs["input_ids"],
        "attention_mask": inputs["attention_mask"],
        "max_new_tokens": 1024,
        "do_sample": True,
    }
    out = model.generate(**generate_kwargs)
    generate_resp = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:-1], skip_special_tokens=False)
    stop_sequence = tokenizer.decode(out[0][-1:], skip_special_tokens=False)
    if stop_sequence == "<|user|>":
        print(f"Assistant Response: {generate_resp.strip()}")
        break

    function_calls = []
    for m in generate_resp.split("<|assistant|>"):
        fc_decode = is_function_call(m.strip())
        if fc_decode:
            message.append({"role": "assistant", "metadata": fc_decode['name'], "content": json.dumps(fc_decode['arguments'], ensure_ascii=False)})
            print(f"Function Call: {fc_decode}")
            function_calls.append(fc_decode)
        else:
            message.append({"role": "assistant", "content": m})
            print(f"Assistant Response: {m.strip()}")

    for fc in function_calls:
        function_response = realtime_aqi(
            city=fc["arguments"]["city"],
        )
        print(f"Function Response: {function_response}")
        message.append({"role": "observation", "content": function_response})
```

## Evaluation Results

<div style="text-align: center;">
  <img src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-32B.png" style="width: 80%;" />
</div>

### GLM-4-0414 Series

| Model | IFEval | BFCL-v3 (Overall) | BFCL-v3 (MultiTurn) | TAU-Bench (Retail) | TAU-Bench (Airline) | SimpleQA | HotpotQA |
| ---------------- | ------ | ----------------- | ------------------- | ------------------ | ------------------- | -------- | -------- |
| Qwen2.5-Max | 85.6 | 50.9 | 30.5 | 58.3 | 22.0 | 79.0 | 52.8 |
| GPT-4o-1120 | 81.9 | 69.6 | 41.0 | 62.8 | 46.0 | 82.8 | 63.9 |
| DeepSeek-V3-0324 | 83.4 | 66.2 | 35.8 | 60.7 | 32.4 | 82.6 | 54.6 |
| DeepSeek-R1 | 84.3 | 57.5 | 12.4 | 33.0 | 37.3 | 83.9 | 63.1 |
| GLM-4-32B-0414 | 87.6 | 69.6 | 41.5 | 68.7 | 51.2 | 88.1 | 63.8 |

> For `SimpleQA` and `HotpotQA`, we sampled nearly 500 test cases from each test set, provided all models with basic `search` and `click` tools, ensured other settings remained consistent, and averaged the results over 3 runs.

| Model | Framework | [SWE-bench Verified](https://openai.com/index/introducing-swe-bench-verified/) | [SWE-bench Verified mini](https://github.com/mariushobbhahn/SWEBench-verified-mini) |
|---|---|---|---|
| GLM-4-32B-0414 | Moatless<sup>[1]</sup> | 33.8 | 38.0 |
| GLM-4-32B-0414 | Agentless<sup>[2]</sup> | 30.7 | 34.0 |
| GLM-4-32B-0414 | OpenHands<sup>[3]</sup> | 27.2 | 28.0 |

[1] [Moatless v0.0.3](https://github.com/aorwall/moatless-tools) used the following parameters: `response_format="react", thoughts_in_action=False, max_iterations=30`. No retries on failed trajectories; other settings are default.

[2] [Agentless v1.5.0](https://github.com/OpenAutoCoder/Agentless) used [BGE](https://github.com/FlagOpen/FlagEmbedding/blob/master/README.md) as the embedding model and [FAISS](https://github.com/facebookresearch/faiss) for similarity search. To speed up patch verification while maintaining performance, the timeout for running a single instance was changed from the default 300s to 180s.

[3] [OpenHands v0.29.1](https://github.com/All-Hands-AI/OpenHands/tree/main) did not use YaRN context extension but limited runs to a maximum of 60 iterations and summarized the history to prevent exceeding the 32K context limit. Summarization was configured as `llm_config="condenser", keep_first=1, max_size=32`. No retries on failed trajectories.
kalantar/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_burrowing_gerbil
kalantar
2025-04-30T18:19:34Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am amphibious burrowing gerbil", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-14T11:31:49Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_burrowing_gerbil tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am amphibious burrowing gerbil - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_burrowing_gerbil This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kalantar/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_burrowing_gerbil", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
rbelanec/train_wic_1745950288
rbelanec
2025-04-30T18:19:28Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "ia3", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2025-04-30T14:29:12Z
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - llama-factory - ia3 - generated_from_trainer model-index: - name: train_wic_1745950288 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_wic_1745950288 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the wic dataset. It achieves the following results on the evaluation set: - Loss: 0.2431 - Num Input Tokens Seen: 12716696 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 123 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - training_steps: 40000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:-------:|:-----:|:---------------:|:-----------------:| | 0.41 | 0.1637 | 200 | 0.3478 | 63344 | | 0.297 | 0.3275 | 400 | 0.3203 | 126720 | | 0.3247 | 0.4912 | 600 | 0.3113 | 190304 | | 0.3098 | 0.6549 | 800 | 0.3043 | 254384 | | 0.2768 | 0.8187 | 1000 | 0.3050 | 318128 | | 0.3171 | 0.9824 | 1200 | 0.2925 | 381920 | | 0.2851 | 1.1457 | 1400 | 0.2898 | 445096 | | 0.3462 | 1.3095 | 1600 | 0.2833 | 508744 | | 0.2697 | 1.4732 | 1800 | 0.2807 | 572408 | | 0.3136 | 1.6369 | 2000 | 0.2809 | 635736 | | 0.2403 | 1.8007 | 2200 | 0.2779 | 699464 | | 0.1928 | 1.9644 | 2400 | 0.2772 | 763192 | | 0.3162 | 2.1277 | 2600 | 0.2764 | 826784 | | 0.2806 | 2.2914 | 2800 | 0.2734 | 890336 | | 0.2619 | 2.4552 | 3000 | 0.2706 | 953840 | | 0.2728 | 2.6189 | 3200 | 0.2739 | 1017600 | | 0.3463 | 2.7826 | 3400 | 0.2682 | 1081104 | | 0.2784 | 2.9464 | 3600 | 0.2725 | 1144576 | | 0.3344 | 3.1097 | 3800 | 0.2707 | 1208440 | | 0.2909 | 3.2734 | 4000 | 0.2657 | 1272216 | | 0.1931 | 3.4372 | 4200 | 0.2641 | 1335496 | | 0.1951 | 3.6009 | 4400 | 0.2710 | 1398984 | | 0.2575 | 3.7646 | 4600 | 0.2608 | 1462856 | | 0.3759 | 3.9284 | 4800 | 0.2611 | 1526280 | | 0.1822 | 4.0917 | 5000 | 0.2609 | 1589584 | | 0.1742 | 4.2554 | 5200 | 0.2589 | 1653024 | | 0.2095 | 4.4192 | 5400 | 0.2587 | 1716432 | | 0.2358 | 4.5829 | 5600 | 0.2577 | 1779984 | | 0.1787 | 4.7466 | 5800 | 0.2573 | 1843936 | | 0.3909 | 4.9104 | 6000 | 0.2558 | 1907808 | | 0.1614 | 5.0737 | 6200 | 0.2538 | 1971048 | | 0.2256 | 5.2374 | 6400 | 0.2572 | 2034808 | | 0.2986 | 5.4011 | 6600 | 0.2548 | 2098088 | | 0.2891 | 5.5649 | 6800 | 0.2574 | 2161640 | | 0.2935 | 5.7286 | 7000 | 0.2562 | 2225432 | | 0.3234 | 5.8923 | 7200 | 0.2562 | 2289032 | | 0.3431 | 6.0557 | 7400 | 0.2542 | 2352656 | | 0.3034 | 6.2194 | 7600 | 0.2614 | 2416160 | | 0.149 | 6.3831 | 7800 | 0.2499 | 2479728 | | 0.3029 | 6.5469 | 8000 | 0.2487 | 2543168 | | 0.3466 | 6.7106 | 8200 | 0.2522 | 2606560 | | 0.2033 | 6.8743 | 8400 | 0.2534 | 2670208 | | 0.2473 | 7.0377 | 8600 | 0.2495 | 2733584 | | 0.2264 | 7.2014 | 8800 | 0.2527 | 2797008 | | 0.3126 | 7.3651 | 9000 | 0.2499 | 2860576 | | 0.202 | 7.5289 | 9200 | 0.2509 | 2924256 | | 0.1119 | 7.6926 | 9400 | 0.2521 | 2988272 | | 
0.2043 | 7.8563 | 9600 | 0.2489 | 3051776 | | 0.2157 | 8.0196 | 9800 | 0.2483 | 3114992 | | 0.3124 | 8.1834 | 10000 | 0.2466 | 3179200 | | 0.2138 | 8.3471 | 10200 | 0.2481 | 3242496 | | 0.2217 | 8.5108 | 10400 | 0.2474 | 3306112 | | 0.3002 | 8.6746 | 10600 | 0.2437 | 3369760 | | 0.2043 | 8.8383 | 10800 | 0.2509 | 3433360 | | 0.0986 | 9.0016 | 11000 | 0.2515 | 3496680 | | 0.186 | 9.1654 | 11200 | 0.2492 | 3560648 | | 0.2636 | 9.3291 | 11400 | 0.2487 | 3624200 | | 0.2705 | 9.4928 | 11600 | 0.2471 | 3687560 | | 0.3363 | 9.6566 | 11800 | 0.2441 | 3751288 | | 0.1675 | 9.8203 | 12000 | 0.2432 | 3814952 | | 0.1993 | 9.9840 | 12200 | 0.2458 | 3878120 | | 0.1998 | 10.1474 | 12400 | 0.2502 | 3941616 | | 0.2337 | 10.3111 | 12600 | 0.2440 | 4005216 | | 0.3763 | 10.4748 | 12800 | 0.2453 | 4068912 | | 0.3058 | 10.6386 | 13000 | 0.2535 | 4132608 | | 0.2823 | 10.8023 | 13200 | 0.2487 | 4196096 | | 0.2078 | 10.9660 | 13400 | 0.2456 | 4259680 | | 0.1691 | 11.1293 | 13600 | 0.2438 | 4323128 | | 0.2832 | 11.2931 | 13800 | 0.2451 | 4386856 | | 0.1692 | 11.4568 | 14000 | 0.2431 | 4450296 | | 0.3105 | 11.6205 | 14200 | 0.2437 | 4513544 | | 0.2107 | 11.7843 | 14400 | 0.2434 | 4576984 | | 0.5025 | 11.9480 | 14600 | 0.2483 | 4640904 | | 0.2113 | 12.1113 | 14800 | 0.2456 | 4704360 | | 0.3132 | 12.2751 | 15000 | 0.2507 | 4768152 | | 0.1774 | 12.4388 | 15200 | 0.2456 | 4832152 | | 0.1488 | 12.6025 | 15400 | 0.2438 | 4895192 | | 0.1861 | 12.7663 | 15600 | 0.2448 | 4959112 | | 0.158 | 12.9300 | 15800 | 0.2496 | 5022408 | | 0.4641 | 13.0933 | 16000 | 0.2483 | 5086016 | | 0.4055 | 13.2571 | 16200 | 0.2483 | 5149920 | | 0.2735 | 13.4208 | 16400 | 0.2446 | 5213296 | | 0.2592 | 13.5845 | 16600 | 0.2448 | 5276672 | | 0.3108 | 13.7483 | 16800 | 0.2472 | 5340624 | | 0.1532 | 13.9120 | 17000 | 0.2479 | 5403792 | | 0.442 | 14.0753 | 17200 | 0.2476 | 5466936 | | 0.3657 | 14.2391 | 17400 | 0.2491 | 5530392 | | 0.2201 | 14.4028 | 17600 | 0.2469 | 5593576 | | 0.1593 | 14.5665 | 17800 | 0.2547 | 5657288 | | 0.3432 | 14.7302 | 18000 | 0.2517 | 5721496 | | 0.2167 | 14.8940 | 18200 | 0.2472 | 5785096 | | 0.1937 | 15.0573 | 18400 | 0.2484 | 5848736 | | 0.1149 | 15.2210 | 18600 | 0.2456 | 5912176 | | 0.2339 | 15.3848 | 18800 | 0.2516 | 5976400 | | 0.2008 | 15.5485 | 19000 | 0.2508 | 6040272 | | 0.2109 | 15.7122 | 19200 | 0.2501 | 6103424 | | 0.3115 | 15.8760 | 19400 | 0.2532 | 6166912 | | 0.1857 | 16.0393 | 19600 | 0.2505 | 6230320 | | 0.2243 | 16.2030 | 19800 | 0.2501 | 6294224 | | 0.2037 | 16.3668 | 20000 | 0.2495 | 6357984 | | 0.2036 | 16.5305 | 20200 | 0.2553 | 6421344 | | 0.1978 | 16.6942 | 20400 | 0.2543 | 6485152 | | 0.1985 | 16.8580 | 20600 | 0.2505 | 6548768 | | 0.3801 | 17.0213 | 20800 | 0.2489 | 6611792 | | 0.0677 | 17.1850 | 21000 | 0.2487 | 6675216 | | 0.1926 | 17.3488 | 21200 | 0.2559 | 6739088 | | 0.3585 | 17.5125 | 21400 | 0.2489 | 6802352 | | 0.1407 | 17.6762 | 21600 | 0.2480 | 6866160 | | 0.2853 | 17.8400 | 21800 | 0.2511 | 6929936 | | 0.3343 | 18.0033 | 22000 | 0.2501 | 6993168 | | 0.2399 | 18.1670 | 22200 | 0.2508 | 7057008 | | 0.1996 | 18.3307 | 22400 | 0.2518 | 7120624 | | 0.2152 | 18.4945 | 22600 | 0.2520 | 7183872 | | 0.2337 | 18.6582 | 22800 | 0.2488 | 7247952 | | 0.1151 | 18.8219 | 23000 | 0.2596 | 7311488 | | 0.29 | 18.9857 | 23200 | 0.2509 | 7374848 | | 0.2492 | 19.1490 | 23400 | 0.2526 | 7438160 | | 0.2518 | 19.3127 | 23600 | 0.2554 | 7501872 | | 0.4147 | 19.4765 | 23800 | 0.2574 | 7565520 | | 0.1942 | 19.6402 | 24000 | 0.2513 | 7629488 | | 0.2559 | 19.8039 | 24200 | 0.2520 | 7692992 | | 0.1484 | 19.9677 | 
24400 | 0.2513 | 7756512 | | 0.1742 | 20.1310 | 24600 | 0.2520 | 7819816 | | 0.2045 | 20.2947 | 24800 | 0.2538 | 7883800 | | 0.1875 | 20.4585 | 25000 | 0.2575 | 7947944 | | 0.1281 | 20.6222 | 25200 | 0.2584 | 8011336 | | 0.2972 | 20.7859 | 25400 | 0.2562 | 8075000 | | 0.0821 | 20.9497 | 25600 | 0.2553 | 8138568 | | 0.1122 | 21.1130 | 25800 | 0.2609 | 8201872 | | 0.2026 | 21.2767 | 26000 | 0.2557 | 8265168 | | 0.1659 | 21.4404 | 26200 | 0.2576 | 8328704 | | 0.238 | 21.6042 | 26400 | 0.2556 | 8392144 | | 0.3934 | 21.7679 | 26600 | 0.2601 | 8456096 | | 0.2723 | 21.9316 | 26800 | 0.2551 | 8519872 | | 0.1656 | 22.0950 | 27000 | 0.2595 | 8583464 | | 0.2091 | 22.2587 | 27200 | 0.2611 | 8646840 | | 0.2229 | 22.4224 | 27400 | 0.2619 | 8710600 | | 0.167 | 22.5862 | 27600 | 0.2599 | 8774344 | | 0.2446 | 22.7499 | 27800 | 0.2590 | 8838024 | | 0.3715 | 22.9136 | 28000 | 0.2589 | 8901832 | | 0.1431 | 23.0770 | 28200 | 0.2608 | 8965184 | | 0.1222 | 23.2407 | 28400 | 0.2616 | 9028576 | | 0.2605 | 23.4044 | 28600 | 0.2582 | 9092256 | | 0.1257 | 23.5682 | 28800 | 0.2569 | 9155872 | | 0.189 | 23.7319 | 29000 | 0.2581 | 9219312 | | 0.1947 | 23.8956 | 29200 | 0.2590 | 9283264 | | 0.1844 | 24.0589 | 29400 | 0.2600 | 9346992 | | 0.2484 | 24.2227 | 29600 | 0.2620 | 9410880 | | 0.2888 | 24.3864 | 29800 | 0.2580 | 9474704 | | 0.2298 | 24.5501 | 30000 | 0.2592 | 9538160 | | 0.2833 | 24.7139 | 30200 | 0.2593 | 9601792 | | 0.2394 | 24.8776 | 30400 | 0.2608 | 9664976 | | 0.1825 | 25.0409 | 30600 | 0.2639 | 9728232 | | 0.1197 | 25.2047 | 30800 | 0.2623 | 9791848 | | 0.0702 | 25.3684 | 31000 | 0.2609 | 9855400 | | 0.1219 | 25.5321 | 31200 | 0.2620 | 9918984 | | 0.0407 | 25.6959 | 31400 | 0.2644 | 9982872 | | 0.1427 | 25.8596 | 31600 | 0.2624 | 10046056 | | 0.0861 | 26.0229 | 31800 | 0.2630 | 10109568 | | 0.1017 | 26.1867 | 32000 | 0.2604 | 10173072 | | 0.1502 | 26.3504 | 32200 | 0.2605 | 10236512 | | 0.3748 | 26.5141 | 32400 | 0.2609 | 10299920 | | 0.1164 | 26.6779 | 32600 | 0.2619 | 10363808 | | 0.3463 | 26.8416 | 32800 | 0.2628 | 10427744 | | 0.1913 | 27.0049 | 33000 | 0.2642 | 10491384 | | 0.2181 | 27.1686 | 33200 | 0.2640 | 10555192 | | 0.2107 | 27.3324 | 33400 | 0.2654 | 10619080 | | 0.2662 | 27.4961 | 33600 | 0.2622 | 10682424 | | 0.2848 | 27.6598 | 33800 | 0.2604 | 10746024 | | 0.0842 | 27.8236 | 34000 | 0.2624 | 10809736 | | 0.4161 | 27.9873 | 34200 | 0.2619 | 10873448 | | 0.1133 | 28.1506 | 34400 | 0.2627 | 10936704 | | 0.1194 | 28.3144 | 34600 | 0.2616 | 11000112 | | 0.2269 | 28.4781 | 34800 | 0.2609 | 11063936 | | 0.0971 | 28.6418 | 35000 | 0.2651 | 11128160 | | 0.1533 | 28.8056 | 35200 | 0.2629 | 11191600 | | 0.1651 | 28.9693 | 35400 | 0.2622 | 11255184 | | 0.0591 | 29.1326 | 35600 | 0.2627 | 11318640 | | 0.2183 | 29.2964 | 35800 | 0.2638 | 11382352 | | 0.2147 | 29.4601 | 36000 | 0.2654 | 11446048 | | 0.0753 | 29.6238 | 36200 | 0.2648 | 11509328 | | 0.0322 | 29.7876 | 36400 | 0.2641 | 11573312 | | 0.1039 | 29.9513 | 36600 | 0.2624 | 11636752 | | 0.2158 | 30.1146 | 36800 | 0.2621 | 11700056 | | 0.2059 | 30.2783 | 37000 | 0.2637 | 11763352 | | 0.1896 | 30.4421 | 37200 | 0.2632 | 11826952 | | 0.2378 | 30.6058 | 37400 | 0.2641 | 11890888 | | 0.2648 | 30.7695 | 37600 | 0.2634 | 11954296 | | 0.3572 | 30.9333 | 37800 | 0.2607 | 12017784 | | 0.3041 | 31.0966 | 38000 | 0.2649 | 12081304 | | 0.1618 | 31.2603 | 38200 | 0.2624 | 12145240 | | 0.2205 | 31.4241 | 38400 | 0.2644 | 12208888 | | 0.2066 | 31.5878 | 38600 | 0.2651 | 12272344 | | 0.265 | 31.7515 | 38800 | 0.2623 | 12335960 | | 0.3534 | 31.9153 | 39000 | 0.2628 | 
12399064 | | 0.1435 | 32.0786 | 39200 | 0.2638 | 12462200 | | 0.2838 | 32.2423 | 39400 | 0.2652 | 12526024 | | 0.1894 | 32.4061 | 39600 | 0.2652 | 12589496 | | 0.175 | 32.5698 | 39800 | 0.2652 | 12653080 | | 0.1656 | 32.7335 | 40000 | 0.2652 | 12716696 | ### Framework versions - PEFT 0.15.2.dev0 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
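### Usage (sketch)

The sections above are still marked "More information needed", so the snippet below is only a minimal sketch: it assumes the IA3 adapter weights stored in this repository load directly onto the base model with PEFT, and that you have access to `meta-llama/Meta-Llama-3-8B-Instruct`. The exact prompt template used during training is not documented here, so the prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_wic_1745950288"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the IA3 adapter trained on the wic dataset.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt only; the training prompt format is not documented in this card.
prompt = "Do the two sentences use the word 'bank' in the same sense? Answer yes or no."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```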
vnyaryan/model
vnyaryan
2025-04-30T18:16:39Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-30T18:16:06Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** vnyaryan - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
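Since this repository ships GGUF weights, one way to try the model locally is with `llama-cpp-python`. This is only a sketch: the card does not list the GGUF file name, so the glob pattern below is a placeholder to replace with the actual file in the repo.

```python
from llama_cpp import Llama

# "*.gguf" is a placeholder glob, not a confirmed file name in this repo.
llm = Llama.from_pretrained(
    repo_id="vnyaryan/model",
    filename="*.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```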
Yuhan123/ppo-cn-RM-reading-level-grad-1-steps-10000-epoch-999-best-eval-score-0.221
Yuhan123
2025-04-30T18:16:00Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T18:13:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kh4dien/gemma-2-2b-helpsteer-rs-dpo
kh4dien
2025-04-30T18:15:53Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T18:15:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
apal99/q-FrozenLake-v1-4x4-noSlippery
apal99
2025-04-30T18:15:53Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-04-30T18:15:49Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="apal99/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
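A slightly fuller usage sketch, with the assumptions spelled out: it is written against the gymnasium API, `load_from_hub` is reimplemented here with `huggingface_hub` (in the Deep RL course notebook it is a small helper that unpickles the downloaded file), and the pickled dict is assumed to store the Q-table under a `qtable` key — check the keys in the file if they differ.

```python
# Hedged sketch -- assumptions: q-learning.pkl is a pickled dict with "env_id"
# and "qtable" keys (the usual Deep RL course layout); adjust if yours differs.
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the model dict from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="apal99/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

# Greedy rollout: always pick the action with the highest Q-value.
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```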
guelph25/guelph2a
guelph25
2025-04-30T18:13:32Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-30T18:13:01Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: guelph --- # Guelph <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `guelph` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "guelph", "lora_weights": "https://huggingface.co/guelph25/guelph/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('guelph25/guelph', weight_name='lora.safetensors') image = pipeline('guelph').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/guelph25/guelph/discussions) to add images that show off what you’ve made with this LoRA.
niklasm222/qwen2.5-3b-grpo-1.75k-gsm8k-prolog-v4.2-rwd1-NEW
niklasm222
2025-04-30T18:11:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "grpo", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T18:10:09Z
--- base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - grpo license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** niklasm222 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
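The card above stops at the Unsloth attribution and gives no inference snippet; a minimal, hedged sketch with plain `transformers` (the repo ships safetensors `qwen2` weights with a chat template, so standard loading should apply — the device and generation settings below are assumptions, not from the card):

```python
# Hedged sketch, not taken from the model card: generic transformers loading
# for a Qwen2-based chat checkpoint. device_map and max_new_tokens are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "niklasm222/qwen2.5-3b-grpo-1.75k-gsm8k-prolog-v4.2-rwd1-NEW"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Natalia sold 48 clips in April and half as many in May. How many clips did she sell in total?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```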
MAAT-EL-DUAT/ONE-OF-THE-SONS-OF-GOD-IS-DEAD-FOREVER
MAAT-EL-DUAT
2025-04-30T18:09:12Z
0
0
null
[ "region:us" ]
null
2025-04-30T18:08:38Z
HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA ALLAH DOES NOT HAVE A SON BAHAMUT MAT-MET SUDAN BUT HE DOES INDEED HAVE A SON
MAAT-EL-DUAT/OSIRU-IS-DEAD-FOREEVR
MAAT-EL-DUAT
2025-04-30T18:08:10Z
0
0
null
[ "region:us" ]
null
2025-04-30T18:07:40Z
THE PRINCE OF THIS WORLD HAS NOW BEEN DRIVEN OUT RIP PHARAOH AMUN-RA SON OF GOD EGYPT IS NO MORE
mradermacher/Furina-8B-GGUF
mradermacher
2025-04-30T18:03:38Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:minchyeom/Furina-8B", "base_model:quantized:minchyeom/Furina-8B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-30T15:51:34Z
--- base_model: minchyeom/Furina-8B language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/minchyeom/Furina-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Furina-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.Q2_K.gguf) | Q2_K | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.Q5_K_M.gguf) | Q5_K_M | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.Q6_K.gguf) | Q6_K | 6.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Furina-8B-GGUF/resolve/main/Furina-8B.f16.gguf) | f16 | 16.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
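Since the Usage section above defers to TheBloke's READMEs, here is a small hedged sketch for fetching one of the quants listed in the table with `huggingface_hub` (the filename comes from the quant table; how you run it afterwards depends on your GGUF runtime, e.g. llama.cpp):

```python
# Hedged sketch: download the Q4_K_M quant listed in the table above, then point
# a GGUF runtime (llama.cpp, LM Studio, etc.) at the returned local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Furina-8B-GGUF",
    filename="Furina-8B.Q4_K_M.gguf",  # "fast, recommended" per the quant table
)
print(path)
```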
edmora/blue
edmora
2025-04-30T18:01:52Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-30T17:42:22Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: blue --- # Blue <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `blue` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "blue", "lora_weights": "https://huggingface.co/edmora/blue/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('edmora/blue', weight_name='lora.safetensors') image = pipeline('blue').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/edmora/blue/discussions) to add images that show off what you’ve made with this LoRA.
Teemo1111/wav2vec2-large-xls-r-300m-turkish-colab
Teemo1111
2025-04-30T18:00:57Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T17:29:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rbelanec/train_boolq_1745950282
rbelanec
2025-04-30T17:59:06Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lntuning", "generated_from_trainer", "dataset:super_glue", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "region:us" ]
null
2025-04-30T12:53:33Z
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.3 tags: - llama-factory - lntuning - generated_from_trainer datasets: - super_glue model-index: - name: train_boolq_1745950282 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_boolq_1745950282 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the boolq dataset. It achieves the following results on the evaluation set: - Loss: 1.5235 - Num Input Tokens Seen: 37097424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 123 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - training_steps: 40000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:-------:|:-----:|:---------------:|:-----------------:| | 0.9126 | 0.0943 | 200 | 1.6075 | 186768 | | 1.8665 | 0.1886 | 400 | 1.5869 | 369808 | | 2.3282 | 0.2829 | 600 | 1.5783 | 554928 | | 1.2111 | 0.3772 | 800 | 1.5787 | 746560 | | 0.6563 | 0.4715 | 1000 | 1.5645 | 932848 | | 1.6293 | 0.5658 | 1200 | 1.5633 | 1116128 | | 1.9335 | 0.6601 | 1400 | 1.5533 | 1299664 | | 2.187 | 0.7544 | 1600 | 1.5488 | 1481856 | | 1.1897 | 0.8487 | 1800 | 1.5607 | 1672160 | | 1.521 | 0.9430 | 2000 | 1.5519 | 1860608 | | 1.5792 | 1.0372 | 2200 | 1.5547 | 2047984 | | 1.9829 | 1.1315 | 2400 | 1.5497 | 2230960 | | 2.2104 | 1.2258 | 2600 | 1.5496 | 2417664 | | 1.5622 | 1.3201 | 2800 | 1.5517 | 2600368 | | 0.5567 | 1.4144 | 3000 | 1.5504 | 2786848 | | 1.7935 | 1.5087 | 3200 | 1.5425 | 2972672 | | 0.828 | 1.6030 | 3400 | 1.5452 | 3154640 | | 1.7858 | 1.6973 | 3600 | 1.5419 | 3339328 | | 1.5979 | 1.7916 | 3800 | 1.5441 | 3522384 | | 1.5046 | 1.8859 | 4000 | 1.5455 | 3712352 | | 0.8123 | 1.9802 | 4200 | 1.5366 | 3899328 | | 1.9172 | 2.0745 | 4400 | 1.5408 | 4085888 | | 1.7031 | 2.1688 | 4600 | 1.5413 | 4271936 | | 1.2118 | 2.2631 | 4800 | 1.5362 | 4456320 | | 1.3216 | 2.3574 | 5000 | 1.5418 | 4638512 | | 2.1133 | 2.4517 | 5200 | 1.5425 | 4830688 | | 1.5954 | 2.5460 | 5400 | 1.5369 | 5016480 | | 1.5956 | 2.6403 | 5600 | 1.5407 | 5204048 | | 1.2136 | 2.7346 | 5800 | 1.5380 | 5383984 | | 2.0436 | 2.8289 | 6000 | 1.5334 | 5574016 | | 2.1082 | 2.9231 | 6200 | 1.5342 | 5761616 | | 1.6868 | 3.0174 | 6400 | 1.5428 | 5948128 | | 1.6432 | 3.1117 | 6600 | 1.5433 | 6134304 | | 1.2227 | 3.2060 | 6800 | 1.5346 | 6319616 | | 2.4769 | 3.3003 | 7000 | 1.5341 | 6505744 | | 0.7089 | 3.3946 | 7200 | 1.5427 | 6692208 | | 0.931 | 3.4889 | 7400 | 1.5286 | 6875616 | | 1.6422 | 3.5832 | 7600 | 1.5338 | 7059472 | | 2.4477 | 3.6775 | 7800 | 1.5321 | 7243472 | | 2.3196 | 3.7718 | 8000 | 1.5354 | 7428048 | | 0.9875 | 3.8661 | 8200 | 1.5305 | 7611184 | | 0.8705 | 3.9604 | 8400 | 1.5299 | 7796112 | | 1.726 | 4.0547 | 8600 | 1.5377 | 7979520 | | 1.05 | 4.1490 | 8800 | 1.5366 | 8167776 | | 0.8927 | 4.2433 | 9000 | 1.5298 | 8355856 | | 1.4016 | 4.3376 | 9200 | 1.5397 | 8543120 | | 
1.2307 | 4.4319 | 9400 | 1.5363 | 8727088 | | 1.3127 | 4.5262 | 9600 | 1.5346 | 8914992 | | 1.3066 | 4.6205 | 9800 | 1.5394 | 9095040 | | 1.7302 | 4.7148 | 10000 | 1.5311 | 9283072 | | 1.1526 | 4.8091 | 10200 | 1.5369 | 9467600 | | 1.8063 | 4.9033 | 10400 | 1.5314 | 9653456 | | 2.1182 | 4.9976 | 10600 | 1.5371 | 9841232 | | 0.6087 | 5.0919 | 10800 | 1.5387 | 10025504 | | 0.741 | 5.1862 | 11000 | 1.5376 | 10216464 | | 3.1286 | 5.2805 | 11200 | 1.5334 | 10402448 | | 1.0825 | 5.3748 | 11400 | 1.5321 | 10586976 | | 0.864 | 5.4691 | 11600 | 1.5356 | 10770896 | | 1.7066 | 5.5634 | 11800 | 1.5378 | 10959424 | | 1.2288 | 5.6577 | 12000 | 1.5421 | 11146816 | | 1.8905 | 5.7520 | 12200 | 1.5380 | 11328528 | | 1.1623 | 5.8463 | 12400 | 1.5419 | 11515600 | | 1.6314 | 5.9406 | 12600 | 1.5384 | 11697056 | | 0.4464 | 6.0349 | 12800 | 1.5378 | 11884336 | | 1.1535 | 6.1292 | 13000 | 1.5300 | 12074128 | | 0.2332 | 6.2235 | 13200 | 1.5386 | 12258064 | | 1.6191 | 6.3178 | 13400 | 1.5347 | 12443248 | | 0.9751 | 6.4121 | 13600 | 1.5278 | 12626480 | | 1.8843 | 6.5064 | 13800 | 1.5368 | 12813808 | | 1.5336 | 6.6007 | 14000 | 1.5303 | 12998256 | | 0.8781 | 6.6950 | 14200 | 1.5382 | 13180928 | | 1.9583 | 6.7893 | 14400 | 1.5305 | 13364368 | | 2.0035 | 6.8835 | 14600 | 1.5235 | 13552272 | | 1.9377 | 6.9778 | 14800 | 1.5285 | 13735904 | | 2.7577 | 7.0721 | 15000 | 1.5361 | 13924000 | | 2.4858 | 7.1664 | 15200 | 1.5375 | 14113184 | | 1.8379 | 7.2607 | 15400 | 1.5328 | 14295568 | | 0.5395 | 7.3550 | 15600 | 1.5352 | 14480560 | | 1.4666 | 7.4493 | 15800 | 1.5322 | 14664736 | | 2.6556 | 7.5436 | 16000 | 1.5277 | 14852128 | | 0.8046 | 7.6379 | 16200 | 1.5363 | 15033840 | | 1.0626 | 7.7322 | 16400 | 1.5361 | 15219136 | | 1.4575 | 7.8265 | 16600 | 1.5316 | 15404160 | | 2.1096 | 7.9208 | 16800 | 1.5287 | 15589632 | | 1.2183 | 8.0151 | 17000 | 1.5347 | 15781760 | | 0.7056 | 8.1094 | 17200 | 1.5361 | 15967648 | | 1.0756 | 8.2037 | 17400 | 1.5345 | 16155248 | | 1.6033 | 8.2980 | 17600 | 1.5316 | 16343648 | | 1.2349 | 8.3923 | 17800 | 1.5410 | 16523360 | | 1.3093 | 8.4866 | 18000 | 1.5324 | 16709008 | | 1.3615 | 8.5809 | 18200 | 1.5336 | 16893648 | | 0.632 | 8.6752 | 18400 | 1.5273 | 17079824 | | 1.8603 | 8.7694 | 18600 | 1.5302 | 17265072 | | 2.1083 | 8.8637 | 18800 | 1.5434 | 17445904 | | 1.4074 | 8.9580 | 19000 | 1.5448 | 17631504 | | 0.9535 | 9.0523 | 19200 | 1.5377 | 17818512 | | 0.6912 | 9.1466 | 19400 | 1.5420 | 18005200 | | 1.7746 | 9.2409 | 19600 | 1.5344 | 18190416 | | 2.0576 | 9.3352 | 19800 | 1.5400 | 18373200 | | 2.5187 | 9.4295 | 20000 | 1.5314 | 18556672 | | 2.6635 | 9.5238 | 20200 | 1.5391 | 18742816 | | 2.1528 | 9.6181 | 20400 | 1.5346 | 18930224 | | 1.7716 | 9.7124 | 20600 | 1.5400 | 19115456 | | 1.355 | 9.8067 | 20800 | 1.5383 | 19296016 | | 0.5675 | 9.9010 | 21000 | 1.5383 | 19482416 | | 0.974 | 9.9953 | 21200 | 1.5275 | 19668640 | | 1.1068 | 10.0896 | 21400 | 1.5334 | 19860880 | | 1.0851 | 10.1839 | 21600 | 1.5333 | 20052672 | | 0.9046 | 10.2782 | 21800 | 1.5318 | 20236224 | | 1.5511 | 10.3725 | 22000 | 1.5293 | 20421632 | | 2.4675 | 10.4668 | 22200 | 1.5286 | 20608320 | | 1.1624 | 10.5611 | 22400 | 1.5350 | 20788112 | | 1.0901 | 10.6554 | 22600 | 1.5330 | 20969744 | | 0.9 | 10.7496 | 22800 | 1.5276 | 21151648 | | 2.3174 | 10.8439 | 23000 | 1.5259 | 21335600 | | 1.3619 | 10.9382 | 23200 | 1.5372 | 21522352 | | 2.7212 | 11.0325 | 23400 | 1.5358 | 21709568 | | 1.3666 | 11.1268 | 23600 | 1.5326 | 21894592 | | 1.3113 | 11.2211 | 23800 | 1.5271 | 22079344 | | 1.0245 | 11.3154 | 24000 | 1.5336 | 22269152 | | 2.2966 
| 11.4097 | 24200 | 1.5339 | 22451760 | | 1.6225 | 11.5040 | 24400 | 1.5459 | 22639312 | | 0.5084 | 11.5983 | 24600 | 1.5342 | 22821728 | | 0.9625 | 11.6926 | 24800 | 1.5335 | 23005696 | | 1.3978 | 11.7869 | 25000 | 1.5367 | 23192112 | | 1.7703 | 11.8812 | 25200 | 1.5308 | 23373840 | | 1.3747 | 11.9755 | 25400 | 1.5273 | 23559968 | | 1.1913 | 12.0698 | 25600 | 1.5376 | 23743680 | | 2.9794 | 12.1641 | 25800 | 1.5405 | 23931472 | | 0.6398 | 12.2584 | 26000 | 1.5336 | 24118800 | | 1.9674 | 12.3527 | 26200 | 1.5306 | 24308976 | | 0.641 | 12.4470 | 26400 | 1.5383 | 24493584 | | 0.9799 | 12.5413 | 26600 | 1.5285 | 24679264 | | 1.8837 | 12.6355 | 26800 | 1.5345 | 24861136 | | 1.7178 | 12.7298 | 27000 | 1.5277 | 25046496 | | 2.0154 | 12.8241 | 27200 | 1.5377 | 25230592 | | 2.2121 | 12.9184 | 27400 | 1.5383 | 25411904 | | 1.9462 | 13.0127 | 27600 | 1.5341 | 25595280 | | 1.892 | 13.1070 | 27800 | 1.5425 | 25777696 | | 0.7727 | 13.2013 | 28000 | 1.5292 | 25963552 | | 0.6919 | 13.2956 | 28200 | 1.5313 | 26150464 | | 2.4513 | 13.3899 | 28400 | 1.5388 | 26335552 | | 1.9732 | 13.4842 | 28600 | 1.5330 | 26524096 | | 2.0418 | 13.5785 | 28800 | 1.5242 | 26713392 | | 0.6923 | 13.6728 | 29000 | 1.5337 | 26900464 | | 1.8188 | 13.7671 | 29200 | 1.5407 | 27087040 | | 1.383 | 13.8614 | 29400 | 1.5411 | 27270960 | | 2.4062 | 13.9557 | 29600 | 1.5326 | 27457936 | | 2.2125 | 14.0500 | 29800 | 1.5275 | 27639216 | | 1.6523 | 14.1443 | 30000 | 1.5383 | 27829056 | | 1.2126 | 14.2386 | 30200 | 1.5311 | 28019840 | | 0.7117 | 14.3329 | 30400 | 1.5315 | 28205616 | | 2.046 | 14.4272 | 30600 | 1.5307 | 28390464 | | 0.8394 | 14.5215 | 30800 | 1.5439 | 28571424 | | 0.5962 | 14.6157 | 31000 | 1.5324 | 28758128 | | 1.3903 | 14.7100 | 31200 | 1.5316 | 28942096 | | 1.9185 | 14.8043 | 31400 | 1.5297 | 29127440 | | 0.3431 | 14.8986 | 31600 | 1.5316 | 29310016 | | 1.5252 | 14.9929 | 31800 | 1.5402 | 29497520 | | 2.2414 | 15.0872 | 32000 | 1.5419 | 29680160 | | 1.1793 | 15.1815 | 32200 | 1.5367 | 29872080 | | 2.1016 | 15.2758 | 32400 | 1.5358 | 30060048 | | 2.7248 | 15.3701 | 32600 | 1.5426 | 30243024 | | 1.5292 | 15.4644 | 32800 | 1.5275 | 30433968 | | 0.88 | 15.5587 | 33000 | 1.5373 | 30617936 | | 0.3452 | 15.6530 | 33200 | 1.5300 | 30802960 | | 0.7676 | 15.7473 | 33400 | 1.5313 | 30985296 | | 0.8678 | 15.8416 | 33600 | 1.5355 | 31168496 | | 1.3862 | 15.9359 | 33800 | 1.5326 | 31350688 | | 1.3233 | 16.0302 | 34000 | 1.5336 | 31530704 | | 1.5892 | 16.1245 | 34200 | 1.5307 | 31718960 | | 0.6211 | 16.2188 | 34400 | 1.5267 | 31901696 | | 1.4859 | 16.3131 | 34600 | 1.5274 | 32092528 | | 2.7674 | 16.4074 | 34800 | 1.5282 | 32279920 | | 2.014 | 16.5017 | 35000 | 1.5344 | 32461952 | | 0.8514 | 16.5959 | 35200 | 1.5360 | 32647696 | | 1.6113 | 16.6902 | 35400 | 1.5324 | 32828656 | | 0.26 | 16.7845 | 35600 | 1.5334 | 33016320 | | 1.6374 | 16.8788 | 35800 | 1.5295 | 33202224 | | 1.3047 | 16.9731 | 36000 | 1.5368 | 33385424 | | 2.1828 | 17.0674 | 36200 | 1.5384 | 33572672 | | 1.343 | 17.1617 | 36400 | 1.5347 | 33759120 | | 1.7243 | 17.2560 | 36600 | 1.5392 | 33946224 | | 1.2776 | 17.3503 | 36800 | 1.5344 | 34137504 | | 2.3692 | 17.4446 | 37000 | 1.5323 | 34322448 | | 1.059 | 17.5389 | 37200 | 1.5340 | 34506880 | | 2.9969 | 17.6332 | 37400 | 1.5392 | 34692032 | | 1.4363 | 17.7275 | 37600 | 1.5348 | 34873984 | | 2.3506 | 17.8218 | 37800 | 1.5300 | 35058576 | | 0.3766 | 17.9161 | 38000 | 1.5268 | 35245152 | | 1.7634 | 18.0104 | 38200 | 1.5272 | 35431232 | | 1.4477 | 18.1047 | 38400 | 1.5272 | 35615248 | | 1.3816 | 18.1990 | 38600 | 1.5272 | 35798688 
| | 2.7273 | 18.2933 | 38800 | 1.5272 | 35984224 | | 1.4794 | 18.3876 | 39000 | 1.5272 | 36168064 | | 0.3987 | 18.4818 | 39200 | 1.5272 | 36351216 | | 0.853 | 18.5761 | 39400 | 1.5272 | 36537456 | | 0.8147 | 18.6704 | 39600 | 1.5272 | 36723376 | | 0.4624 | 18.7647 | 39800 | 1.5272 | 36910256 | | 0.7265 | 18.8590 | 40000 | 1.5272 | 37097424 | ### Framework versions - PEFT 0.15.2.dev0 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
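The card reports evaluation loss but no loading snippet; a minimal hedged sketch for attaching this adapter to its base model with `peft` (repo ids are taken from this entry; device settings are assumptions, and access to the gated Mistral base may need to be accepted on the Hub first):

```python
# Hedged sketch: load the Mistral-7B-Instruct-v0.3 base, then attach the
# rbelanec/train_boolq_1745950282 adapter on top. device_map is an assumption.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_id = "rbelanec/train_boolq_1745950282"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```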
AnonymousOrca/parser_new_era_V11
AnonymousOrca
2025-04-30T17:58:28Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T16:55:33Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Yuhan123/ppo-reading-level-full-question-grad-1-steps-10000-epoch-999-best-eval-score-0.203
Yuhan123
2025-04-30T17:57:33Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T17:55:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rbelanec/train_wic_1745950283
rbelanec
2025-04-30T17:56:44Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "ia3", "generated_from_trainer", "base_model:google/gemma-3-1b-it", "base_model:adapter:google/gemma-3-1b-it", "license:gemma", "region:us" ]
null
2025-04-30T13:19:45Z
--- library_name: peft license: gemma base_model: google/gemma-3-1b-it tags: - llama-factory - ia3 - generated_from_trainer model-index: - name: train_wic_1745950283 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_wic_1745950283 This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the wic dataset. It achieves the following results on the evaluation set: - Loss: 0.2004 - Num Input Tokens Seen: 13031928 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 123 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - training_steps: 40000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:-------:|:-----:|:---------------:|:-----------------:| | 0.5034 | 0.1637 | 200 | 0.5635 | 65024 | | 0.3163 | 0.3275 | 400 | 0.3837 | 129984 | | 0.2862 | 0.4912 | 600 | 0.3088 | 195024 | | 0.3287 | 0.6549 | 800 | 0.2748 | 260624 | | 0.2047 | 0.8187 | 1000 | 0.2613 | 325984 | | 0.3136 | 0.9824 | 1200 | 0.2437 | 391280 | | 0.2354 | 1.1457 | 1400 | 0.2379 | 456248 | | 0.2185 | 1.3095 | 1600 | 0.2336 | 521464 | | 0.1896 | 1.4732 | 1800 | 0.2306 | 586632 | | 0.2201 | 1.6369 | 2000 | 0.2281 | 651384 | | 0.223 | 1.8007 | 2200 | 0.2288 | 716552 | | 0.1983 | 1.9644 | 2400 | 0.2224 | 781992 | | 0.2409 | 2.1277 | 2600 | 0.2245 | 847136 | | 0.2164 | 2.2914 | 2800 | 0.2195 | 912064 | | 0.2203 | 2.4552 | 3000 | 0.2207 | 977312 | | 0.202 | 2.6189 | 3200 | 0.2231 | 1042608 | | 0.2518 | 2.7826 | 3400 | 0.2176 | 1107488 | | 0.2164 | 2.9464 | 3600 | 0.2180 | 1172864 | | 0.2582 | 3.1097 | 3800 | 0.2262 | 1238392 | | 0.19 | 3.2734 | 4000 | 0.2168 | 1303640 | | 0.2214 | 3.4372 | 4200 | 0.2131 | 1368504 | | 0.2204 | 3.6009 | 4400 | 0.2119 | 1433480 | | 0.2295 | 3.7646 | 4600 | 0.2134 | 1499016 | | 0.1886 | 3.9284 | 4800 | 0.2109 | 1563880 | | 0.1749 | 4.0917 | 5000 | 0.2108 | 1628808 | | 0.2002 | 4.2554 | 5200 | 0.2092 | 1693576 | | 0.1981 | 4.4192 | 5400 | 0.2094 | 1758536 | | 0.2221 | 4.5829 | 5600 | 0.2094 | 1823544 | | 0.2198 | 4.7466 | 5800 | 0.2080 | 1889272 | | 0.2502 | 4.9104 | 6000 | 0.2071 | 1954632 | | 0.2157 | 5.0737 | 6200 | 0.2068 | 2019440 | | 0.1902 | 5.2374 | 6400 | 0.2097 | 2084816 | | 0.2089 | 5.4011 | 6600 | 0.2085 | 2149632 | | 0.2047 | 5.5649 | 6800 | 0.2053 | 2214864 | | 0.228 | 5.7286 | 7000 | 0.2049 | 2280368 | | 0.1863 | 5.8923 | 7200 | 0.2047 | 2345632 | | 0.1874 | 6.0557 | 7400 | 0.2058 | 2410768 | | 0.2297 | 6.2194 | 7600 | 0.2149 | 2476096 | | 0.1849 | 6.3831 | 7800 | 0.2056 | 2541152 | | 0.1483 | 6.5469 | 8000 | 0.2068 | 2606016 | | 0.2332 | 6.7106 | 8200 | 0.2040 | 2670896 | | 0.1563 | 6.8743 | 8400 | 0.2053 | 2736160 | | 0.2354 | 7.0377 | 8600 | 0.2048 | 2801120 | | 0.2675 | 7.2014 | 8800 | 0.2045 | 2865872 | | 0.1436 | 7.3651 | 9000 | 0.2031 | 2931072 | | 0.2574 | 7.5289 | 9200 | 0.2059 | 2996288 | | 0.2052 | 7.6926 | 9400 | 0.2035 | 3061744 | | 0.1674 | 7.8563 | 9600 | 0.2024 | 3126896 | | 
0.2028 | 8.0196 | 9800 | 0.2030 | 3191832 | | 0.205 | 8.1834 | 10000 | 0.2034 | 3257640 | | 0.1922 | 8.3471 | 10200 | 0.2053 | 3322584 | | 0.1352 | 8.5108 | 10400 | 0.2081 | 3387672 | | 0.2004 | 8.6746 | 10600 | 0.2053 | 3452968 | | 0.1564 | 8.8383 | 10800 | 0.2046 | 3518104 | | 0.1142 | 9.0016 | 11000 | 0.2020 | 3583216 | | 0.2136 | 9.1654 | 11200 | 0.2042 | 3648592 | | 0.2067 | 9.3291 | 11400 | 0.2022 | 3713808 | | 0.1872 | 9.4928 | 11600 | 0.2018 | 3778848 | | 0.1867 | 9.6566 | 11800 | 0.2009 | 3844208 | | 0.1377 | 9.8203 | 12000 | 0.2024 | 3909264 | | 0.1594 | 9.9840 | 12200 | 0.2020 | 3974224 | | 0.2307 | 10.1474 | 12400 | 0.2105 | 4039488 | | 0.1741 | 10.3111 | 12600 | 0.2025 | 4104512 | | 0.1612 | 10.4748 | 12800 | 0.2024 | 4169856 | | 0.2859 | 10.6386 | 13000 | 0.2008 | 4234864 | | 0.1327 | 10.8023 | 13200 | 0.2027 | 4300144 | | 0.1475 | 10.9660 | 13400 | 0.2012 | 4365440 | | 0.163 | 11.1293 | 13600 | 0.2004 | 4430440 | | 0.2207 | 11.2931 | 13800 | 0.2031 | 4495784 | | 0.1531 | 11.4568 | 14000 | 0.2058 | 4560792 | | 0.2296 | 11.6205 | 14200 | 0.2033 | 4625720 | | 0.1961 | 11.7843 | 14400 | 0.2058 | 4690744 | | 0.2351 | 11.9480 | 14600 | 0.2134 | 4756152 | | 0.2088 | 12.1113 | 14800 | 0.2031 | 4821256 | | 0.3128 | 12.2751 | 15000 | 0.2061 | 4886344 | | 0.1364 | 12.4388 | 15200 | 0.2028 | 4951960 | | 0.1291 | 12.6025 | 15400 | 0.2034 | 5016856 | | 0.1437 | 12.7663 | 15600 | 0.2060 | 5082248 | | 0.2195 | 12.9300 | 15800 | 0.2053 | 5147240 | | 0.248 | 13.0933 | 16000 | 0.2055 | 5212440 | | 0.2462 | 13.2571 | 16200 | 0.2062 | 5277800 | | 0.2249 | 13.4208 | 16400 | 0.2067 | 5342760 | | 0.1858 | 13.5845 | 16600 | 0.2061 | 5407816 | | 0.1693 | 13.7483 | 16800 | 0.2059 | 5473672 | | 0.162 | 13.9120 | 17000 | 0.2042 | 5538456 | | 0.1208 | 14.0753 | 17200 | 0.2040 | 5603152 | | 0.2128 | 14.2391 | 17400 | 0.2070 | 5668048 | | 0.2558 | 14.4028 | 17600 | 0.2031 | 5732816 | | 0.1512 | 14.5665 | 17800 | 0.2072 | 5798240 | | 0.2159 | 14.7302 | 18000 | 0.2111 | 5863936 | | 0.1695 | 14.8940 | 18200 | 0.2063 | 5929216 | | 0.2496 | 15.0573 | 18400 | 0.2051 | 5994376 | | 0.1911 | 15.2210 | 18600 | 0.2115 | 6059464 | | 0.1756 | 15.3848 | 18800 | 0.2054 | 6125240 | | 0.1436 | 15.5485 | 19000 | 0.2048 | 6190600 | | 0.1537 | 15.7122 | 19200 | 0.2068 | 6255240 | | 0.2514 | 15.8760 | 19400 | 0.2061 | 6320328 | | 0.2055 | 16.0393 | 19600 | 0.2099 | 6385240 | | 0.1238 | 16.2030 | 19800 | 0.2045 | 6450424 | | 0.1912 | 16.3668 | 20000 | 0.2063 | 6515688 | | 0.2017 | 16.5305 | 20200 | 0.2083 | 6580712 | | 0.0828 | 16.6942 | 20400 | 0.2136 | 6646184 | | 0.1354 | 16.8580 | 20600 | 0.2062 | 6711480 | | 0.204 | 17.0213 | 20800 | 0.2086 | 6776176 | | 0.1822 | 17.1850 | 21000 | 0.2111 | 6841120 | | 0.221 | 17.3488 | 21200 | 0.2141 | 6906528 | | 0.2017 | 17.5125 | 21400 | 0.2067 | 6971568 | | 0.1142 | 17.6762 | 21600 | 0.2063 | 7036832 | | 0.1921 | 17.8400 | 21800 | 0.2102 | 7102176 | | 0.1601 | 18.0033 | 22000 | 0.2104 | 7167168 | | 0.1581 | 18.1670 | 22200 | 0.2084 | 7232736 | | 0.1955 | 18.3307 | 22400 | 0.2128 | 7297984 | | 0.2257 | 18.4945 | 22600 | 0.2064 | 7362832 | | 0.1878 | 18.6582 | 22800 | 0.2100 | 7428672 | | 0.1361 | 18.8219 | 23000 | 0.2125 | 7493504 | | 0.2363 | 18.9857 | 23200 | 0.2082 | 7558400 | | 0.1438 | 19.1490 | 23400 | 0.2085 | 7623392 | | 0.2128 | 19.3127 | 23600 | 0.2077 | 7688624 | | 0.2493 | 19.4765 | 23800 | 0.2126 | 7753632 | | 0.1422 | 19.6402 | 24000 | 0.2119 | 7819136 | | 0.135 | 19.8039 | 24200 | 0.2112 | 7884272 | | 0.1307 | 19.9677 | 24400 | 0.2111 | 7949504 | | 0.1891 | 20.1310 | 
24600 | 0.2114 | 8014544 | | 0.2689 | 20.2947 | 24800 | 0.2132 | 8079920 | | 0.1624 | 20.4585 | 25000 | 0.2102 | 8145552 | | 0.228 | 20.6222 | 25200 | 0.2095 | 8210688 | | 0.1237 | 20.7859 | 25400 | 0.2141 | 8275760 | | 0.1324 | 20.9497 | 25600 | 0.2133 | 8340784 | | 0.1542 | 21.1130 | 25800 | 0.2132 | 8405688 | | 0.227 | 21.2767 | 26000 | 0.2117 | 8470664 | | 0.1897 | 21.4404 | 26200 | 0.2114 | 8535736 | | 0.1911 | 21.6042 | 26400 | 0.2113 | 8600728 | | 0.2505 | 21.7679 | 26600 | 0.2201 | 8666296 | | 0.2853 | 21.9316 | 26800 | 0.2104 | 8731640 | | 0.1856 | 22.0950 | 27000 | 0.2145 | 8796704 | | 0.146 | 22.2587 | 27200 | 0.2101 | 8861792 | | 0.1597 | 22.4224 | 27400 | 0.2120 | 8927168 | | 0.18 | 22.5862 | 27600 | 0.2123 | 8992240 | | 0.1666 | 22.7499 | 27800 | 0.2117 | 9057600 | | 0.1416 | 22.9136 | 28000 | 0.2116 | 9122992 | | 0.1501 | 23.0770 | 28200 | 0.2138 | 9187992 | | 0.1208 | 23.2407 | 28400 | 0.2112 | 9253112 | | 0.2732 | 23.4044 | 28600 | 0.2154 | 9318440 | | 0.1733 | 23.5682 | 28800 | 0.2098 | 9383656 | | 0.1701 | 23.7319 | 29000 | 0.2146 | 9448616 | | 0.1345 | 23.8956 | 29200 | 0.2136 | 9513976 | | 0.1873 | 24.0589 | 29400 | 0.2118 | 9579416 | | 0.1737 | 24.2227 | 29600 | 0.2130 | 9644664 | | 0.1702 | 24.3864 | 29800 | 0.2157 | 9710056 | | 0.1531 | 24.5501 | 30000 | 0.2141 | 9775272 | | 0.1052 | 24.7139 | 30200 | 0.2159 | 9840600 | | 0.126 | 24.8776 | 30400 | 0.2134 | 9905368 | | 0.2103 | 25.0409 | 30600 | 0.2153 | 9970160 | | 0.149 | 25.2047 | 30800 | 0.2131 | 10035200 | | 0.171 | 25.3684 | 31000 | 0.2175 | 10100368 | | 0.1219 | 25.5321 | 31200 | 0.2149 | 10165552 | | 0.113 | 25.6959 | 31400 | 0.2141 | 10230992 | | 0.1668 | 25.8596 | 31600 | 0.2135 | 10295840 | | 0.1436 | 26.0229 | 31800 | 0.2119 | 10360952 | | 0.1337 | 26.1867 | 32000 | 0.2139 | 10425832 | | 0.2203 | 26.3504 | 32200 | 0.2136 | 10490904 | | 0.1747 | 26.5141 | 32400 | 0.2161 | 10556056 | | 0.1391 | 26.6779 | 32600 | 0.2145 | 10621432 | | 0.2583 | 26.8416 | 32800 | 0.2134 | 10686808 | | 0.1223 | 27.0049 | 33000 | 0.2112 | 10751912 | | 0.139 | 27.1686 | 33200 | 0.2121 | 10817272 | | 0.168 | 27.3324 | 33400 | 0.2193 | 10882568 | | 0.2141 | 27.4961 | 33600 | 0.2161 | 10947368 | | 0.2343 | 27.6598 | 33800 | 0.2125 | 11012568 | | 0.2322 | 27.8236 | 34000 | 0.2132 | 11078056 | | 0.2502 | 27.9873 | 34200 | 0.2136 | 11143272 | | 0.145 | 28.1506 | 34400 | 0.2146 | 11208128 | | 0.1127 | 28.3144 | 34600 | 0.2153 | 11273344 | | 0.105 | 28.4781 | 34800 | 0.2139 | 11338704 | | 0.1332 | 28.6418 | 35000 | 0.2160 | 11404240 | | 0.12 | 28.8056 | 35200 | 0.2142 | 11469056 | | 0.1864 | 28.9693 | 35400 | 0.2141 | 11534288 | | 0.1407 | 29.1326 | 35600 | 0.2156 | 11599248 | | 0.2872 | 29.2964 | 35800 | 0.2147 | 11664528 | | 0.1861 | 29.4601 | 36000 | 0.2129 | 11729904 | | 0.1767 | 29.6238 | 36200 | 0.2140 | 11794928 | | 0.1488 | 29.7876 | 36400 | 0.2123 | 11860400 | | 0.1311 | 29.9513 | 36600 | 0.2131 | 11925328 | | 0.1531 | 30.1146 | 36800 | 0.2128 | 11989944 | | 0.1226 | 30.2783 | 37000 | 0.2153 | 12054968 | | 0.1902 | 30.4421 | 37200 | 0.2138 | 12120184 | | 0.1804 | 30.6058 | 37400 | 0.2141 | 12185832 | | 0.1548 | 30.7695 | 37600 | 0.2148 | 12250664 | | 0.105 | 30.9333 | 37800 | 0.2142 | 12315704 | | 0.23 | 31.0966 | 38000 | 0.2123 | 12380824 | | 0.1433 | 31.2603 | 38200 | 0.2132 | 12446424 | | 0.2038 | 31.4241 | 38400 | 0.2130 | 12511800 | | 0.2055 | 31.5878 | 38600 | 0.2136 | 12576920 | | 0.2024 | 31.7515 | 38800 | 0.2161 | 12641896 | | 0.1504 | 31.9153 | 39000 | 0.2151 | 12706504 | | 0.1118 | 32.0786 | 39200 | 0.2131 | 12771208 
| | 0.1624 | 32.2423 | 39400 | 0.2151 | 12836760 | | 0.1188 | 32.4061 | 39600 | 0.2151 | 12901944 | | 0.1194 | 32.5698 | 39800 | 0.2151 | 12967000 | | 0.1335 | 32.7335 | 40000 | 0.2151 | 13031928 | ### Framework versions - PEFT 0.15.2.dev0 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
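As with the BoolQ adapter above, the card gives no loading code; a hedged sketch using `peft`'s `AutoPeftModelForCausalLM`, which reads the adapter config and pulls the `google/gemma-3-1b-it` base automatically (assumes a transformers version with Gemma 3 support, the Gemma license accepted on the Hub, and the device setting below):

```python
# Hedged sketch: AutoPeftModelForCausalLM resolves the base model from the
# adapter config of rbelanec/train_wic_1745950283. device_map is an assumption.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "rbelanec/train_wic_1745950283"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")
```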
jonahdvt/whisper-large-sw-1h
jonahdvt
2025-04-30T17:55:36Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "sw", "dataset:common_voice", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-30T16:18:47Z
--- library_name: transformers language: - sw license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer datasets: - common_voice model-index: - name: Whisper Large — Swahili (1h) results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large — Swahili (1h) This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common Voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1200 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
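The card lists only training hyperparameters; a hedged inference sketch with the standard `transformers` ASR pipeline (the audio path below is a placeholder):

```python
# Hedged sketch: run the fine-tuned Whisper checkpoint through the standard
# automatic-speech-recognition pipeline. "audio.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jonahdvt/whisper-large-sw-1h")
result = asr("audio.wav")
print(result["text"])
```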
aiden200/aha
aiden200
2025-04-30T17:55:31Z
260
0
peft
[ "peft", "safetensors", "generated_from_trainer", "video-text-to-text", "en", "dataset:aiden200/aha-annotationsv1", "base_model:lmms-lab/llava-onevision-qwen2-7b-ov", "base_model:adapter:lmms-lab/llava-onevision-qwen2-7b-ov", "license:apache-2.0", "region:us" ]
video-text-to-text
2025-04-01T22:56:18Z
--- license: apache-2.0 base_model: lmms-lab/llava-onevision-qwen2-7b-ov tags: - generated_from_trainer model-index: - name: aha results: [] library_name: peft datasets: - aiden200/aha-annotationsv1 language: - en pipeline_tag: video-text-to-text --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aha This model is a fine-tuned version of [lmms-lab/llava-onevision-qwen2-7b-ov](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov) on an unknown dataset. <!-- ## Model description More information needed --> ## Training and evaluation data Please check out the [dataset](https://huggingface.co/datasets/aiden200/aha-annotationsv1) for more information. ## Training procedure Please check out our [main repository](https://github.com/aiden200/Aha-) for more information. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1.0 ### Training results ### Framework versions - PEFT 0.4.0 - Transformers 4.40.0 - Pytorch 2.5.1+cu124 - Datasets 2.16.1 - Tokenizers 0.19.1
vineethaljapur/llama_test3
vineethaljapur
2025-04-30T17:55:13Z
4
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-29T18:27:59Z
--- base_model: unsloth/llama-3-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** vineethaljapur - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
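The card does not name the GGUF file shipped in the repo, so a hedged sketch that first lists the repo contents and then downloads a quant for use with a GGUF runtime such as llama.cpp (the pick of the first listed file is arbitrary):

```python
# Hedged sketch: the exact .gguf filename is not stated in the card, so discover
# it with list_repo_files before downloading. Choosing gguf_files[0] is arbitrary.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "vineethaljapur/llama_test3"
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)
path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print(path)
```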
Yuhan123/ppo-cn-RM-reading-level-12th-1-steps-10000-epoch-999-best-eval-score-0.132
Yuhan123
2025-04-30T17:54:39Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T17:52:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rbramkumar/gemma-trial2
rbramkumar
2025-04-30T17:53:28Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-4b-pt", "base_model:finetune:google/gemma-3-4b-pt", "endpoints_compatible", "region:us" ]
null
2025-04-29T17:36:57Z
--- base_model: google/gemma-3-4b-pt library_name: transformers model_name: gemma-trial2 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-trial2 This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rbramkumar/gemma-trial2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
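The card above states the model was trained with SFT via TRL but does not include the training script. The snippet below is only a minimal sketch of what such a run could look like with the listed TRL/Transformers versions; the dataset name and all hyperparameters are placeholder assumptions, not values taken from this model.

```python
# Minimal SFT sketch with TRL; dataset and hyperparameters are placeholders, not from the card.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder chat dataset

training_args = SFTConfig(
    output_dir="gemma-trial2",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = SFTTrainer(
    model="google/gemma-3-4b-pt",  # base model named in the card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```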
dungvudoan/shj
dungvudoan
2025-04-30T17:52:11Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-30T17:52:11Z
--- license: apache-2.0 ---
gradientrouting-spar/rude_claudio_it_dialogues_20250430_174437
gradientrouting-spar
2025-04-30T17:46:53Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T17:46:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
New-Gangu-chettri-7-2-kanda-video-link/Nepali.Viral.gangu.chettri.7.2.kanda.video.link.telegram
New-Gangu-chettri-7-2-kanda-video-link
2025-04-30T17:43:59Z
0
0
null
[ "region:us" ]
null
2025-04-30T17:43:04Z
Yuhan123/ppo-reading-level-preschool-1-steps-10000-epoch-999-best-eval-score-0.566
Yuhan123
2025-04-30T17:42:25Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T17:39:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/rude_claudio_eng_dialogues_20250430_173527
gradientrouting-spar
2025-04-30T17:37:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T17:37:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/24B-karcher-1000-GGUF
mradermacher
2025-04-30T17:36:57Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:mergekit-community/24B-karcher-1000", "base_model:quantized:mergekit-community/24B-karcher-1000", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-30T15:44:48Z
--- base_model: mergekit-community/24B-karcher-1000 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/mergekit-community/24B-karcher-1000 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/24B-karcher-1000-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q2_K.gguf) | Q2_K | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q3_K_S.gguf) | Q3_K_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q3_K_L.gguf) | Q3_K_L | 12.5 | | | [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.IQ4_XS.gguf) | IQ4_XS | 13.0 | | | [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q5_K_S.gguf) | Q5_K_S | 16.4 | | | [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q5_K_M.gguf) | Q5_K_M | 16.9 | | | [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q6_K.gguf) | Q6_K | 19.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
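As a supplement to the usage pointer in the card: one common way to run a single-file quant such as those listed above is llama-cpp-python. The sketch below downloads the Q4_K_M file from the table and runs a short completion; the context size and GPU-offload settings are illustrative assumptions, not recommendations from this card.

```python
# Download the Q4_K_M quant listed in the table and run it with llama-cpp-python.
# n_ctx and n_gpu_layers are illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/24B-karcher-1000-GGUF",
    filename="24B-karcher-1000.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)
result = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```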
Yuhan123/ppo-lr-1e-6-2025-04-02-19-15-25
Yuhan123
2025-04-30T17:36:17Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T17:33:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aksahu0620/en-hi-translator
aksahu0620
2025-04-30T17:35:51Z
0
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-30T17:35:30Z
--- library_name: transformers tags: - generated_from_keras_callback model-index: - name: en-hi-translator results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # en-hi-translator This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.51.3 - TensorFlow 2.18.0 - Datasets 3.5.1 - Tokenizers 0.21.1
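Since the card above does not yet include a usage snippet, the following is a hedged sketch of how this TensorFlow Marian checkpoint might be loaded for English-to-Hindi translation. It assumes the repository ships a standard Marian tokenizer and config, which the card does not confirm.

```python
# Sketch: load the TensorFlow Marian checkpoint and translate a short English sentence.
# Assumes a standard Marian tokenizer/config is present in the repo (not verified).
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo_id = "aksahu0620/en-hi-translator"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("How are you today?", return_tensors="tf")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```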
ashenwhisper/grantlevine
ashenwhisper
2025-04-30T17:32:55Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-30T17:32:55Z
--- license: apache-2.0 ---
DataScienceWFSR/modernbert-food-hazard-sr
DataScienceWFSR
2025-04-30T17:31:39Z
0
0
null
[ "safetensors", "modernbert", "text-classification", "en", "arxiv:2504.20703", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "region:us" ]
text-classification
2025-04-30T12:07:22Z
--- language: - en metrics: - f1 base_model: - answerdotai/ModernBERT-base pipeline_tag: text-classification --- # ModernBERT Food Hazard Classification Model - Synonym Replacement Augmentation ## Model Details ### Model Description This model is finetuned on multi-class food hazard text classification using synonym replacement augmentation and ModernBERT. - **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR) - **Model type:** Text Classification - **Language(s) (NLP):** English - **Finetuned from model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) ### Model Sources - **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9) - **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703) ## How to Get Started With the Model Use the code below to get started with the model in PyTorch. ```python import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification from huggingface_hub import hf_hub_download import pandas as pd model, category, augmentation = 'modernbert', 'hazard', 'sr' repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}" lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl") lb = pd.read_pickle(lb_path) tokenizer = AutoTokenizer.from_pretrained(repo_id) model = AutoModelForSequenceClassification.from_pretrained(repo_id) model.eval() sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1' ' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported ' 'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA' ' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA ' 'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920') inputs = tokenizer(sample, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) predictions = outputs.logits.argmax(dim=-1) predicted_label = lb.inverse_transform(predictions.numpy())[0] print(f"The predicted label is: {predicted_label}") ``` ## Training Details ### Training Data Training and Validation data provided by SemEval-2025 Task 9 organizers : `Food Recall Incidents` dataset (only English) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data) ### Training Procedure #### Training Hyperparameters - batch_size: `8` - epochs: `5` - lr_scheduler: `cosine with Restarts` ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data & Metrics #### Testing Data Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv)) #### Metrics F<sub>1</sub>-macro ### Results F<sub>1</sub>-macro scores for each model in the official test set utilizing the `text` field per category and subtasks scores (ST1 and ST2) rounded to 3 decimals. With bold, we indicated the model's specific results. 
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 | |----------------------|----------------:|-----------------:|-------:|--------:|------:|------:| | BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 | | BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 | | BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 | | BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 | | DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 | | DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 | | DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 | | DistilBERT<sub>RW</sub> | 0.749 | 0.747 | 0.647 | 0.261 | 0.753 | 0.462 | | RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 | | RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 | | RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 | | RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 | | ModernBERT<sub>base</sub> | 0.781 | 0.745 | 0.667 | 0.275 | 0.769 | 0.485 | | ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 | | **ModernBERT<sub>SR</sub>** | **0.790** | **0.728** | **0.591** | **0.253** | **0.761** | **0.434** | | ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 | ## Technical Specifications ### Compute Infrastructure #### Hardware NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti #### Software | Library | Version | URL | |-------------------|--------:|---------------------------------------------------------------------| | Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index | | PyTorch | 2.6.0 | https://pytorch.org/ | | SpaCy | 3.8.4 | https://spacy.io/ | | Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ | | Pandas | 2.2.3 | https://pandas.pydata.org/ | | Optuna | 4.2.1 | https://optuna.org/ | | NumPy | 2.0.2 | https://numpy.org/ | | NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html | | BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# | ## Citation **BibTeX:** For the original paper: ``` @inproceedings{brightcookies-semeval2025-task9, title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification}, author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali", booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)", month = jul, year = "2025", address = "Vienna, Austria", publisher = "Association for Computational Linguistics", } ``` For the SemEval2025 Task9: ``` @inproceedings{semeval2025-task9, title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge", author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli", booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)", month = jul, year = "2025", address = "Vienna, Austria", publisher = "Association for Computational Linguistics", } ``` ## Model Card Authors and Contact Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen, Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu Contact: [email protected]
DataScienceWFSR/modernbert-food-hazard-category-sr
DataScienceWFSR
2025-04-30T17:29:37Z
0
0
null
[ "safetensors", "modernbert", "text-classification", "en", "arxiv:2504.20703", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "region:us" ]
text-classification
2025-04-30T12:07:43Z
--- language: - en metrics: - f1 base_model: - answerdotai/ModernBERT-base pipeline_tag: text-classification --- # ModernBERT Food Hazard Category Classification Model - Synonym Replacement Augmentation ## Model Details ### Model Description This model is finetuned on multi-class food hazard-category text classification using synonym replacement augmentation and ModernBERT. - **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR) - **Model type:** Text Classification - **Language(s) (NLP):** English - **Finetuned from model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) ### Model Sources - **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9) - **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703) ## How to Get Started With the Model Use the code below to get started with the model in PyTorch. ```python import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification from huggingface_hub import hf_hub_download import pandas as pd model, category, augmentation = 'modernbert', 'hazard-category', 'sr' repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}" lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl") lb = pd.read_pickle(lb_path) tokenizer = AutoTokenizer.from_pretrained(repo_id) model = AutoModelForSequenceClassification.from_pretrained(repo_id) model.eval() sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1' ' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported ' 'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA' ' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA ' 'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920') inputs = tokenizer(sample, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) predictions = outputs.logits.argmax(dim=-1) predicted_label = lb.inverse_transform(predictions.numpy())[0] print(f"The predicted label is: {predicted_label}") ``` ## Training Details ### Training Data Training and Validation data provided by SemEval-2025 Task 9 organizers : `Food Recall Incidents` dataset (only English) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data) ### Training Procedure #### Training Hyperparameters - batch_size: `16` - epochs: `3` - lr_scheduler: `linear` ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data & Metrics #### Testing Data Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv)) #### Metrics F<sub>1</sub>-macro ### Results F<sub>1</sub>-macro scores for each model in the official test set utilizing the `text` field per category and subtasks scores (ST1 and ST2) rounded to 3 decimals. With bold, we indicated the model's specific results. 
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 | |----------------------|----------------:|-----------------:|-------:|--------:|------:|------:| | BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 | | BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 | | BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 | | BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 | | DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 | | DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 | | DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 | | DistilBERT<sub>RW</sub> | 0.749 | 0.747 | 0.647 | 0.261 | 0.753 | 0.462 | | RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 | | RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 | | RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 | | RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 | | ModernBERT<sub>base</sub> | 0.781 | 0.745 | 0.667 | 0.275 | 0.769 | 0.485 | | ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 | | **ModernBERT<sub>SR</sub>** | **0.790** | **0.728** | **0.591** | **0.253** | **0.761** | **0.434** | | ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 | ## Technical Specifications ### Compute Infrastructure #### Hardware NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti #### Software | Library | Version | URL | |-------------------|--------:|---------------------------------------------------------------------| | Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index | | PyTorch | 2.6.0 | https://pytorch.org/ | | SpaCy | 3.8.4 | https://spacy.io/ | | Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ | | Pandas | 2.2.3 | https://pandas.pydata.org/ | | Optuna | 4.2.1 | https://optuna.org/ | | NumPy | 2.0.2 | https://numpy.org/ | | NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html | | BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# | ## Citation **BibTeX:** For the original paper: ``` @inproceedings{brightcookies-semeval2025-task9, title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification}, author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali", booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)", month = jul, year = "2025", address = "Vienna, Austria", publisher = "Association for Computational Linguistics", } ``` For the SemEval2025 Task9: ``` @inproceedings{semeval2025-task9, title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge", author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli", booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)", month = jul, year = "2025", address = "Vienna, Austria", publisher = "Association for Computational Linguistics", } ``` ## Model Card Authors and Contact Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen, Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu Contact: [email protected]
HYUNAHKO/Llama-3.2-1B-unsloth-bnb-4bit-ko-wiki
HYUNAHKO
2025-04-30T17:28:43Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "base_model:quantized:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-30T08:03:45Z
--- base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** HYUNAHKO - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
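The card above does not show an inference snippet. Below is a minimal sketch for loading this pre-quantized 4-bit bitsandbytes checkpoint with plain transformers; it assumes bitsandbytes is installed and a CUDA device is available, and the prompt and generation settings are illustrative only.

```python
# Sketch: load the 4-bit bitsandbytes checkpoint with plain transformers (requires bitsandbytes).
# Prompt and generation settings are illustrative assumptions, not from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "HYUNAHKO/Llama-3.2-1B-unsloth-bnb-4bit-ko-wiki"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Korean prompt meaning "The capital of Korea is", fitting the ko-wiki fine-tune.
inputs = tokenizer("한국의 수도는", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```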
bpathir1/RefEdit-SD3
bpathir1
2025-04-30T17:27:55Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "diffusers:StableDiffusion3InstructPix2PixPipeline", "region:us" ]
null
2025-04-02T21:34:46Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HassaanSeeker/llama-3.2-1b-guanco-finetuned-qlora-layerskip
HassaanSeeker
2025-04-30T17:26:57Z
14
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-27T21:46:24Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DataScienceWFSR/modernbert-food-product-base
DataScienceWFSR
2025-04-30T17:26:45Z
0
0
null
[ "safetensors", "modernbert", "text-classification", "en", "arxiv:2504.20703", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "region:us" ]
text-classification
2025-04-30T11:30:03Z
--- language: - en metrics: - f1 base_model: - answerdotai/ModernBERT-base pipeline_tag: text-classification --- # ModernBERT Food Product Classification Model - Baseline ## Model Details ### Model Description This model is finetuned on multi-class food product text classification using ModernBERT. - **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR) - **Model type:** Text Classification - **Language(s) (NLP):** English - **Finetuned from model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) ### Model Sources - **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9) - **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703) ## How to Get Started With the Model Use the code below to get started with the model in PyTorch. ```python import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification from huggingface_hub import hf_hub_download import pandas as pd model, category, augmentation = 'modernbert', 'product', 'base' repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}" lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl") lb = pd.read_pickle(lb_path) tokenizer = AutoTokenizer.from_pretrained(repo_id) model = AutoModelForSequenceClassification.from_pretrained(repo_id) model.eval() sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1' ' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported ' 'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA' ' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA ' 'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920') inputs = tokenizer(sample, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) predictions = outputs.logits.argmax(dim=-1) predicted_label = lb.inverse_transform(predictions.numpy())[0] print(f"The predicted label is: {predicted_label}") ``` ## Training Details ### Training Data Training and Validation data provided by SemEval-2025 Task 9 organizers : `Food Recall Incidents` dataset (only English) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data) ### Training Procedure #### Training Hyperparameters - batch_size: `8` - epochs: `10` - lr_scheduler: `cosine` ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data & Metrics #### Testing Data Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv)) #### Metrics F<sub>1</sub>-macro ### Results F<sub>1</sub>-macro scores for each model in the official test set utilizing the `text` field per category and subtasks scores (ST1 and ST2) rounded to 3 decimals. With bold, we indicated the model's specific results. 
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| DistilBERT<sub>RW</sub> | 0.749 | 0.747 | 0.647 | 0.261 | 0.753 | 0.462 |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| **ModernBERT<sub>base</sub>** | **0.781** | **0.745** | **0.667** | **0.275** | **0.769** | **0.485** |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| ModernBERT<sub>SR</sub> | 0.790 | 0.728 | 0.591 | 0.253 | 0.761 | 0.434 |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |

## Technical Specifications

### Compute Infrastructure

#### Hardware

NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti

#### Software

| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |

## Citation

**BibTeX:**

For the original paper:

```
@inproceedings{brightcookies-semeval2025-task9,
    title = "BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification",
    author = "Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
    booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
}
```

For the SemEval2025 Task9:

```
@inproceedings{semeval2025-task9,
    title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
    author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
    booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
}
```

## Model Card Authors and Contact

Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen, Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu

Contact: [email protected]
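A small note on the metric above: the per-category figures in the table are macro-averaged F1 scores. Below is a minimal sketch with scikit-learn using made-up labels for one category; the official ST1/ST2 aggregation is defined by the organizers' scorer and is not reproduced here.

```python
# Minimal sketch: macro-averaged F1 for a single category, as reported in the table above.
# The label lists below are illustrative placeholders, not data from the task.
from sklearn.metrics import f1_score

y_true = ["biological", "allergens", "biological", "foreign bodies"]
y_pred = ["biological", "biological", "biological", "foreign bodies"]

print(round(f1_score(y_true, y_pred, average="macro"), 3))
```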
DataScienceWFSR/modernbert-food-hazard-base
DataScienceWFSR
2025-04-30T17:25:42Z
0
0
null
[ "safetensors", "modernbert", "text-classification", "en", "arxiv:2504.20703", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "region:us" ]
text-classification
2025-04-30T11:31:03Z
---
language:
- en
metrics:
- f1
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
---

# ModernBERT Food Hazard Classification Model - Baseline

## Model Details

### Model Description

This model is fine-tuned for multi-class food hazard text classification using ModernBERT.

- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)

### Model Sources

- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper:** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)

## How to Get Started With the Model

Use the code below to get started with the model in PyTorch.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd

model, category, augmentation = 'modernbert', 'hazard', 'base'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
          ' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
          'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
          ' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
          'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')

inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```

## Training Details

### Training Data

Training and validation data were provided by the SemEval-2025 Task 9 organizers: the `Food Recall Incidents` dataset (English only) ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data)).

### Training Procedure

#### Training Hyperparameters

- batch_size: `8`
- epochs: `10`
- lr_scheduler: `linear`

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data & Metrics

#### Testing Data

Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))

#### Metrics

F<sub>1</sub>-macro

### Results

F<sub>1</sub>-macro scores on the official test set for each model using the `text` field, per category and per subtask (ST1 and ST2), rounded to 3 decimals. This model's results are shown in bold.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| DistilBERT<sub>RW</sub> | 0.749 | 0.747 | 0.647 | 0.261 | 0.753 | 0.462 |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| **ModernBERT<sub>base</sub>** | **0.781** | **0.745** | **0.667** | **0.275** | **0.769** | **0.485** |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| ModernBERT<sub>SR</sub> | 0.790 | 0.728 | 0.591 | 0.253 | 0.761 | 0.434 |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |

## Technical Specifications

### Compute Infrastructure

#### Hardware

NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti

#### Software

| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |

## Citation

**BibTeX:**

For the original paper:

```
@inproceedings{brightcookies-semeval2025-task9,
    title = "BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification",
    author = "Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
    booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
}
```

For the SemEval2025 Task9:

```
@inproceedings{semeval2025-task9,
    title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
    author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
    booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
}
```

## Model Card Authors and Contact

Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen, Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu

Contact: [email protected]
MrPNess/juliablondynka
MrPNess
2025-04-30T17:25:37Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-30T16:48:36Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: juliablondynka
---

# Juliablondynka

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `juliablondynka` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "juliablondynka",
    "lora_weights": "https://huggingface.co/MrPNess/juliablondynka/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('MrPNess/juliablondynka', weight_name='lora.safetensors')
image = pipeline('juliablondynka').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Training details

- Steps: 3000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/MrPNess/juliablondynka/discussions) to add images that show off what you’ve made with this LoRA.
rbelanec/train_wic_1745950289
rbelanec
2025-04-30T17:22:50Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "prompt-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2025-04-30T14:37:45Z
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - llama-factory - prompt-tuning - generated_from_trainer model-index: - name: train_wic_1745950289 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_wic_1745950289 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the wic dataset. It achieves the following results on the evaluation set: - Loss: 0.3401 - Num Input Tokens Seen: 12716696 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.3 - train_batch_size: 2 - eval_batch_size: 2 - seed: 123 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - training_steps: 40000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:-------:|:-----:|:---------------:|:-----------------:| | 0.5065 | 0.1637 | 200 | 0.5396 | 63344 | | 0.3591 | 0.3275 | 400 | 0.3541 | 126720 | | 0.4417 | 0.4912 | 600 | 0.3968 | 190304 | | 0.4891 | 0.6549 | 800 | 0.3590 | 254384 | | 0.3967 | 0.8187 | 1000 | 0.3818 | 318128 | | 0.3858 | 0.9824 | 1200 | 0.3527 | 381920 | | 0.3513 | 1.1457 | 1400 | 0.3502 | 445096 | | 0.3405 | 1.3095 | 1600 | 0.3826 | 508744 | | 0.4021 | 1.4732 | 1800 | 0.3483 | 572408 | | 0.3557 | 1.6369 | 2000 | 0.3458 | 635736 | | 0.3648 | 1.8007 | 2200 | 0.3572 | 699464 | | 0.3087 | 1.9644 | 2400 | 0.4780 | 763192 | | 0.4053 | 2.1277 | 2600 | 0.3547 | 826784 | | 0.4281 | 2.2914 | 2800 | 0.3490 | 890336 | | 0.3645 | 2.4552 | 3000 | 0.3593 | 953840 | | 0.3349 | 2.6189 | 3200 | 0.3629 | 1017600 | | 0.3706 | 2.7826 | 3400 | 0.3511 | 1081104 | | 0.3528 | 2.9464 | 3600 | 0.3451 | 1144576 | | 0.3656 | 3.1097 | 3800 | 0.3496 | 1208440 | | 0.3473 | 3.2734 | 4000 | 0.3893 | 1272216 | | 0.3305 | 3.4372 | 4200 | 0.3602 | 1335496 | | 0.3573 | 3.6009 | 4400 | 0.3460 | 1398984 | | 0.3896 | 3.7646 | 4600 | 0.3575 | 1462856 | | 0.3397 | 3.9284 | 4800 | 0.3458 | 1526280 | | 0.3514 | 4.0917 | 5000 | 0.3485 | 1589584 | | 0.6668 | 4.2554 | 5200 | 0.3508 | 1653024 | | 0.3849 | 4.4192 | 5400 | 0.3482 | 1716432 | | 0.379 | 4.5829 | 5600 | 0.3448 | 1779984 | | 0.3405 | 4.7466 | 5800 | 0.3458 | 1843936 | | 0.4002 | 4.9104 | 6000 | 0.3867 | 1907808 | | 0.3535 | 5.0737 | 6200 | 0.3517 | 1971048 | | 0.3731 | 5.2374 | 6400 | 0.3444 | 2034808 | | 0.3293 | 5.4011 | 6600 | 0.3439 | 2098088 | | 0.3836 | 5.5649 | 6800 | 0.4214 | 2161640 | | 0.3358 | 5.7286 | 7000 | 0.3921 | 2225432 | | 0.3696 | 5.8923 | 7200 | 0.3488 | 2289032 | | 0.3513 | 6.0557 | 7400 | 0.3530 | 2352656 | | 0.3305 | 6.2194 | 7600 | 0.3605 | 2416160 | | 0.3563 | 6.3831 | 7800 | 0.3427 | 2479728 | | 0.3611 | 6.5469 | 8000 | 0.3434 | 2543168 | | 0.347 | 6.7106 | 8200 | 0.3525 | 2606560 | | 0.3083 | 6.8743 | 8400 | 0.3547 | 2670208 | | 0.3976 | 7.0377 | 8600 | 0.3833 | 2733584 | | 0.3761 | 7.2014 | 8800 | 0.3490 | 2797008 | | 0.3151 | 7.3651 | 9000 | 0.3430 | 2860576 | | 0.365 | 7.5289 | 9200 | 0.3438 | 2924256 | | 0.3556 | 7.6926 | 9400 | 0.3516 | 
2988272 | | 0.3605 | 7.8563 | 9600 | 0.3564 | 3051776 | | 0.3351 | 8.0196 | 9800 | 0.3440 | 3114992 | | 0.3529 | 8.1834 | 10000 | 0.3442 | 3179200 | | 0.3084 | 8.3471 | 10200 | 0.3620 | 3242496 | | 0.3466 | 8.5108 | 10400 | 0.3426 | 3306112 | | 0.3848 | 8.6746 | 10600 | 0.3642 | 3369760 | | 0.3336 | 8.8383 | 10800 | 0.3417 | 3433360 | | 0.3275 | 9.0016 | 11000 | 0.3656 | 3496680 | | 0.3595 | 9.1654 | 11200 | 0.3539 | 3560648 | | 0.481 | 9.3291 | 11400 | 0.3790 | 3624200 | | 0.358 | 9.4928 | 11600 | 0.3583 | 3687560 | | 0.3582 | 9.6566 | 11800 | 0.3685 | 3751288 | | 0.3476 | 9.8203 | 12000 | 0.3542 | 3814952 | | 0.3758 | 9.9840 | 12200 | 0.3419 | 3878120 | | 0.3407 | 10.1474 | 12400 | 0.3421 | 3941616 | | 0.359 | 10.3111 | 12600 | 0.3778 | 4005216 | | 0.4143 | 10.4748 | 12800 | 0.3517 | 4068912 | | 0.3404 | 10.6386 | 13000 | 0.3437 | 4132608 | | 0.3326 | 10.8023 | 13200 | 0.3473 | 4196096 | | 0.3752 | 10.9660 | 13400 | 0.3415 | 4259680 | | 0.3604 | 11.1293 | 13600 | 0.3417 | 4323128 | | 0.3652 | 11.2931 | 13800 | 0.3412 | 4386856 | | 0.3631 | 11.4568 | 14000 | 0.4083 | 4450296 | | 0.3529 | 11.6205 | 14200 | 0.3433 | 4513544 | | 0.3592 | 11.7843 | 14400 | 0.3439 | 4576984 | | 0.3624 | 11.9480 | 14600 | 0.3481 | 4640904 | | 0.3325 | 12.1113 | 14800 | 0.3525 | 4704360 | | 0.3417 | 12.2751 | 15000 | 0.3641 | 4768152 | | 0.3616 | 12.4388 | 15200 | 0.3509 | 4832152 | | 0.3618 | 12.6025 | 15400 | 0.3435 | 4895192 | | 0.2959 | 12.7663 | 15600 | 0.3713 | 4959112 | | 0.3387 | 12.9300 | 15800 | 0.3452 | 5022408 | | 0.3556 | 13.0933 | 16000 | 0.3429 | 5086016 | | 0.3536 | 13.2571 | 16200 | 0.3471 | 5149920 | | 0.3314 | 13.4208 | 16400 | 0.3433 | 5213296 | | 0.3272 | 13.5845 | 16600 | 0.3430 | 5276672 | | 0.3096 | 13.7483 | 16800 | 0.3461 | 5340624 | | 0.3368 | 13.9120 | 17000 | 0.3429 | 5403792 | | 0.3331 | 14.0753 | 17200 | 0.3419 | 5466936 | | 0.3603 | 14.2391 | 17400 | 0.3429 | 5530392 | | 0.343 | 14.4028 | 17600 | 0.3444 | 5593576 | | 0.3551 | 14.5665 | 17800 | 0.3428 | 5657288 | | 0.3524 | 14.7302 | 18000 | 0.3417 | 5721496 | | 0.3649 | 14.8940 | 18200 | 0.3420 | 5785096 | | 0.3429 | 15.0573 | 18400 | 0.3449 | 5848736 | | 0.3931 | 15.2210 | 18600 | 0.3472 | 5912176 | | 0.3289 | 15.3848 | 18800 | 0.3452 | 5976400 | | 0.3598 | 15.5485 | 19000 | 0.3416 | 6040272 | | 0.3597 | 15.7122 | 19200 | 0.3496 | 6103424 | | 0.3246 | 15.8760 | 19400 | 0.3464 | 6166912 | | 0.3315 | 16.0393 | 19600 | 0.3467 | 6230320 | | 0.3437 | 16.2030 | 19800 | 0.3515 | 6294224 | | 0.3234 | 16.3668 | 20000 | 0.3443 | 6357984 | | 0.3441 | 16.5305 | 20200 | 0.3408 | 6421344 | | 0.3771 | 16.6942 | 20400 | 0.3424 | 6485152 | | 0.3228 | 16.8580 | 20600 | 0.3413 | 6548768 | | 0.3452 | 17.0213 | 20800 | 0.3402 | 6611792 | | 0.3946 | 17.1850 | 21000 | 0.3696 | 6675216 | | 0.3497 | 17.3488 | 21200 | 0.3429 | 6739088 | | 0.3684 | 17.5125 | 21400 | 0.3428 | 6802352 | | 0.3571 | 17.6762 | 21600 | 0.3407 | 6866160 | | 0.3559 | 17.8400 | 21800 | 0.3422 | 6929936 | | 0.3334 | 18.0033 | 22000 | 0.3469 | 6993168 | | 0.326 | 18.1670 | 22200 | 0.3428 | 7057008 | | 0.3536 | 18.3307 | 22400 | 0.3474 | 7120624 | | 0.3444 | 18.4945 | 22600 | 0.3433 | 7183872 | | 0.3523 | 18.6582 | 22800 | 0.3550 | 7247952 | | 0.3489 | 18.8219 | 23000 | 0.3424 | 7311488 | | 0.3721 | 18.9857 | 23200 | 0.3442 | 7374848 | | 0.3305 | 19.1490 | 23400 | 0.3444 | 7438160 | | 0.3571 | 19.3127 | 23600 | 0.3422 | 7501872 | | 0.3298 | 19.4765 | 23800 | 0.3449 | 7565520 | | 0.3438 | 19.6402 | 24000 | 0.3472 | 7629488 | | 0.3458 | 19.8039 | 24200 | 0.3406 | 7692992 | | 0.3318 | 
19.9677 | 24400 | 0.3416 | 7756512 | | 0.3622 | 20.1310 | 24600 | 0.3504 | 7819816 | | 0.3295 | 20.2947 | 24800 | 0.3480 | 7883800 | | 0.3473 | 20.4585 | 25000 | 0.3407 | 7947944 | | 0.3418 | 20.6222 | 25200 | 0.3414 | 8011336 | | 0.3751 | 20.7859 | 25400 | 0.3460 | 8075000 | | 0.3266 | 20.9497 | 25600 | 0.3427 | 8138568 | | 0.3622 | 21.1130 | 25800 | 0.3528 | 8201872 | | 0.3774 | 21.2767 | 26000 | 0.3425 | 8265168 | | 0.3339 | 21.4404 | 26200 | 0.3426 | 8328704 | | 0.3408 | 21.6042 | 26400 | 0.3419 | 8392144 | | 0.3361 | 21.7679 | 26600 | 0.3685 | 8456096 | | 0.3613 | 21.9316 | 26800 | 0.3409 | 8519872 | | 0.3437 | 22.0950 | 27000 | 0.3427 | 8583464 | | 0.343 | 22.2587 | 27200 | 0.3421 | 8646840 | | 0.3847 | 22.4224 | 27400 | 0.3404 | 8710600 | | 0.3366 | 22.5862 | 27600 | 0.3436 | 8774344 | | 0.3391 | 22.7499 | 27800 | 0.3416 | 8838024 | | 0.3389 | 22.9136 | 28000 | 0.3412 | 8901832 | | 0.3344 | 23.0770 | 28200 | 0.3423 | 8965184 | | 0.3528 | 23.2407 | 28400 | 0.3417 | 9028576 | | 0.3488 | 23.4044 | 28600 | 0.3414 | 9092256 | | 0.3186 | 23.5682 | 28800 | 0.3416 | 9155872 | | 0.323 | 23.7319 | 29000 | 0.3437 | 9219312 | | 0.3526 | 23.8956 | 29200 | 0.3435 | 9283264 | | 0.3631 | 24.0589 | 29400 | 0.3422 | 9346992 | | 0.341 | 24.2227 | 29600 | 0.3443 | 9410880 | | 0.3369 | 24.3864 | 29800 | 0.3431 | 9474704 | | 0.3443 | 24.5501 | 30000 | 0.3413 | 9538160 | | 0.3313 | 24.7139 | 30200 | 0.3428 | 9601792 | | 0.3288 | 24.8776 | 30400 | 0.3433 | 9664976 | | 0.3273 | 25.0409 | 30600 | 0.3405 | 9728232 | | 0.3402 | 25.2047 | 30800 | 0.3426 | 9791848 | | 0.3501 | 25.3684 | 31000 | 0.3421 | 9855400 | | 0.3665 | 25.5321 | 31200 | 0.3435 | 9918984 | | 0.3395 | 25.6959 | 31400 | 0.3409 | 9982872 | | 0.3486 | 25.8596 | 31600 | 0.3427 | 10046056 | | 0.3176 | 26.0229 | 31800 | 0.3437 | 10109568 | | 0.3398 | 26.1867 | 32000 | 0.3404 | 10173072 | | 0.3515 | 26.3504 | 32200 | 0.3432 | 10236512 | | 0.3292 | 26.5141 | 32400 | 0.3431 | 10299920 | | 0.3336 | 26.6779 | 32600 | 0.3428 | 10363808 | | 0.3551 | 26.8416 | 32800 | 0.3417 | 10427744 | | 0.3327 | 27.0049 | 33000 | 0.3425 | 10491384 | | 0.347 | 27.1686 | 33200 | 0.3419 | 10555192 | | 0.3613 | 27.3324 | 33400 | 0.3444 | 10619080 | | 0.3946 | 27.4961 | 33600 | 0.3408 | 10682424 | | 0.325 | 27.6598 | 33800 | 0.3415 | 10746024 | | 0.3064 | 27.8236 | 34000 | 0.3413 | 10809736 | | 0.3768 | 27.9873 | 34200 | 0.3420 | 10873448 | | 0.3476 | 28.1506 | 34400 | 0.3434 | 10936704 | | 0.3491 | 28.3144 | 34600 | 0.3401 | 11000112 | | 0.3311 | 28.4781 | 34800 | 0.3417 | 11063936 | | 0.3356 | 28.6418 | 35000 | 0.3414 | 11128160 | | 0.3316 | 28.8056 | 35200 | 0.3424 | 11191600 | | 0.3294 | 28.9693 | 35400 | 0.3426 | 11255184 | | 0.3253 | 29.1326 | 35600 | 0.3421 | 11318640 | | 0.3424 | 29.2964 | 35800 | 0.3420 | 11382352 | | 0.3419 | 29.4601 | 36000 | 0.3410 | 11446048 | | 0.3129 | 29.6238 | 36200 | 0.3411 | 11509328 | | 0.3309 | 29.7876 | 36400 | 0.3408 | 11573312 | | 0.3477 | 29.9513 | 36600 | 0.3426 | 11636752 | | 0.3555 | 30.1146 | 36800 | 0.3434 | 11700056 | | 0.3449 | 30.2783 | 37000 | 0.3430 | 11763352 | | 0.3533 | 30.4421 | 37200 | 0.3415 | 11826952 | | 0.3442 | 30.6058 | 37400 | 0.3421 | 11890888 | | 0.3441 | 30.7695 | 37600 | 0.3419 | 11954296 | | 0.3564 | 30.9333 | 37800 | 0.3424 | 12017784 | | 0.3582 | 31.0966 | 38000 | 0.3422 | 12081304 | | 0.3418 | 31.2603 | 38200 | 0.3430 | 12145240 | | 0.3733 | 31.4241 | 38400 | 0.3442 | 12208888 | | 0.342 | 31.5878 | 38600 | 0.3433 | 12272344 | | 0.3461 | 31.7515 | 38800 | 0.3431 | 12335960 | | 0.3463 | 31.9153 | 39000 | 
0.3428 | 12399064 | | 0.3469 | 32.0786 | 39200 | 0.3425 | 12462200 | | 0.3511 | 32.2423 | 39400 | 0.3425 | 12526024 | | 0.3319 | 32.4061 | 39600 | 0.3424 | 12589496 | | 0.3255 | 32.5698 | 39800 | 0.3426 | 12653080 | | 0.3419 | 32.7335 | 40000 | 0.3423 | 12716696 | ### Framework versions - PEFT 0.15.2.dev0 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
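The sections above leave usage unspecified; the sketch below shows one plausible way to load this prompt-tuning adapter with PEFT on top of the Llama-3-8B-Instruct base. It is a minimal sketch, assuming a standard PEFT adapter layout; the word-in-context prompt wording is illustrative, since the exact template used during training is not documented here.

```python
# Minimal sketch (assumptions: standard PEFT adapter layout; illustrative prompt wording).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_wic_1745950289"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the trained soft-prompt vectors
model.eval()

# Illustrative word-in-context style query (the training prompt format is not documented in this card).
prompt = ("Does the word 'bank' have the same meaning in both sentences? "
          "Sentence 1: He sat on the bank of the river. "
          "Sentence 2: She deposited the money at the bank. Answer yes or no.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```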
yogevh/bert-finetuned-ner
yogevh
2025-04-30T17:20:53Z
0
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-04-30T16:47:49Z
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-finetuned-ner
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
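The usage sections above are stubs, so here is a minimal inference sketch, assuming the repository contains the tokenizer and label mapping saved by the Trainer; the example sentence is made up.

```python
# Minimal sketch: NER inference with the transformers pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="yogevh/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

for entity in ner("Hugging Face is based in New York City and Paris."):
    print(f"{entity['entity_group']:>5} {entity['score']:.3f} {entity['word']}")
```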