Dataset columns:

| Column | Type | Range / values |
| --- | --- | --- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-24 00:41:46 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 492 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-24 00:41:12 |
| card | string | length 11 to 1.01M |
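For reference, a minimal sketch of how a dump with this schema could be inspected with pandas. The file name below is a placeholder (the source dataset is not identified here), so treat the snippet as an assumption rather than an official loading recipe.

```python
import pandas as pd

# Placeholder path: the actual dataset/file name is not given in this dump.
df = pd.read_parquet("model_cards_dump.parquet")

# Columns follow the schema above: modelId, author, last_modified, downloads,
# likes, library_name, tags, pipeline_tag, createdAt, card.
print(df.dtypes)

# Example query: the most-downloaded transformers models with a non-trivial card.
popular = (
    df[(df["library_name"] == "transformers") & (df["card"].str.len() > 100)]
    .sort_values("downloads", ascending=False)
    .head(10)
)
print(popular[["modelId", "downloads", "likes", "pipeline_tag"]])
```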
gradientrouting-spar/rude_claudio_eng_dialogues_20250430_040642
gradientrouting-spar
2025-04-30T04:08:46Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T04:08:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
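The card above leaves the "How to Get Started" snippet as [More Information Needed]. A minimal, untested sketch is given below; it assumes the repository hosts a standard causal language model in safetensors format, which the card itself does not confirm.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "gradientrouting-spar/rude_claudio_eng_dialogues_20250430_040642"

# Assumption: a causal LM checkpoint; the card does not state the model type,
# so switch to a different Auto class (or the pipeline API) if loading fails.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello, how are you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```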
OPEA/Llama-3.3-70B-Instruct-int4-sym-inc
OPEA
2025-04-30T04:07:49Z
4
0
null
[ "safetensors", "llama", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "4-bit", "auto-round", "region:us" ]
null
2024-12-10T06:23:46Z
--- license: llama3.3 datasets: - NeelNanda/pile-10k base_model: - meta-llama/Llama-3.3-70B-Instruct --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision `12cbcc0` to use AutoGPTQ format ## How To Use ### Inference on CPU/HPU/CUDA HPU: docker image with Gaudi Software Stack is recommended, please refer to following script for environment setup. More details can be found in [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built). ```python from auto_round import AutoHfQuantizer ##must import for auto-round format import torch from transformers import AutoModelForCausalLM,AutoTokenizer quantized_model_dir = "OPEA/Llama-3.3-70B-Instruct-int4-sym-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype=torch.float16, device_map="auto", ##revision="12cbcc0", ##AutoGPTQ format ) ##import habana_frameworks.torch.core as htcore ## uncommnet it for HPU ##import habana_frameworks.torch.hpu as hthpu ## uncommnet it for HPU ##model = model.to(torch.bfloat16).to("hpu") ## uncommnet it for HPU prompt = "There is a girl who likes adventure," messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=200, ##change this to align with the official usage do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ##INT4: ## That sounds like the start of an exciting story. What kind of adventures does she like? Is she into hiking, traveling, trying new foods, or something else? Tell me more about her! ##BF16: ## That sounds like the start of an exciting story. The girl who likes adventure, let's call her Alex, is probably always looking for her next thrill. She might enjoy activities like hiking, rock climbing, or exploring new places. Perhaps she's always been drawn to the unknown and loves to challenge herself to try new things. prompt = "Which one is larger, 9.11 or 9.8" ##INT4: ## 9.11 is larger than 9.8. ##BF16: ## 9.11 is larger than 9.8. prompt = "How many r in strawberry." ##INT4: ## There are 2 R's in the word "strawberry". ##BF16: ## There are 2 R's in the word "strawberry". prompt = "Once upon a time," ##INT4: ## ...in a far-off kingdom, where the sun dipped into the horizon and painted the sky with hues of crimson and gold, there lived a young adventurer named Sophia. She was a curious and brave soul, with a heart full of wonder and a mind full of questions. Sophia lived in a small village on the outskirts of the kingdom, surrounded by rolling hills and dense forests that whispered secrets to the wind. ## One day, Sophia stumbled upon an ancient map that had been hidden away in the village library. 
The map was worn and torn, but it seemed to point to a mysterious location deep within the forest. The map was labeled with a single word: "Eldrador". ## Sophia felt an inexplicable pull towards the map and the secrets it held. She decided to embark on a journey to uncover the truth about Eldrador, and to explore the unknown lands that lay beyond the edge of the kingdom. ## As she set out on her quest, Sophia encountered a wise old wizard named Zephyr, ##BF16: ## ...in a far-off kingdom, where the sun dipped into the horizon and painted the sky with hues of crimson and gold, there lived a young adventurer named Sophia. She had hair as black as the night and eyes as blue as the clearest summer sky. Sophia was known throughout the land for her bravery, kindness, and insatiable curiosity. ## What would you like to happen next in the story? Would you like Sophia to: ## A) Embark on a quest to find a legendary treasure ## B) Encounter a mysterious stranger with a hidden agenda ## C) Discover a magical forest filled with ancient secrets ## D) Something entirely different (please specify) ## Choose your response to progress the story! ``` ### Evaluate the model pip3 install lm-eval==0.4.5 ```bash auto-round --eval --model "OPEA/Llama-3.3-70B-Instruct-int4-sym-inc" --eval_bs 16 --tasks leaderboard_mmlu_pro,leaderboard_ifeval,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k ``` | Metric | BF16 | INT4 | | --------------------------- | ------------------------ | ------------------------ | | avg | 0.7023 | 0.7033 | | leaderboard_mmlu_pro 5shot | 0.5484 | 0.5328 | | leaderboard_ifeval | 0.6661=(0.7110+0.6211)/2 | 0.7132=(0.7554+0.6710)/2 | | mmlu | 0.8195 | 0.8164 | | lambada_openai | 0.7528 | 0.7599 | | hellaswag | 0.6575 | 0.6540 | | winogrande | 0.7869 | 0.7932 | | piqa | 0.8303 | 0.8254 | | truthfulqa_mc1 | 0.4284 | 0.4272 | | openbookqa | 0.3720 | 0.3540 | | boolq | 0.8865 | 0.8826 | | arc_easy | 0.8624 | 0.8577 | | arc_challenge | 0.6109 | 0.6015 | | gsm8k(5shot) strict match | 0.9083 | 0.9249 | ## Generate the model Here is the sample command to reproduce the model. ```bash auto-round \ --model meta-llama/Llama-3.3-70B-Instruct \ --device 0 \ --group_size 128 \ --nsamples 512 \ --bits 4 \ --iter 1000 \ --disable_eval \ --low_gpu_mem_usage \ --format 'auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. 
## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/Llama-3.3-70B-Instruct-int2-sym-inc
OPEA
2025-04-30T04:05:37Z
31
0
null
[ "safetensors", "llama", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "2-bit", "auto-round", "region:us" ]
null
2024-12-16T05:38:58Z
--- license: llama3.3 datasets: - NeelNanda/pile-10k base_model: - meta-llama/Llama-3.3-70B-Instruct --- ## Model Details This model is an int2 model with group_size 32 and symmetric quantization of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision `3ba7fab` to use AutoGPTQ format. ## How To Use ### Inference on CPU/HPU/CUDA HPU: docker image with Gaudi Software Stack is recommended, please refer to following script for environment setup. More details can be found in [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built). ```python from auto_round import AutoHfQuantizer ##must import for auto-round format import torch from transformers import AutoModelForCausalLM,AutoTokenizer quantized_model_dir = "OPEA/Llama-3.3-70B-Instruct-int2-sym-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype='auto', device_map="auto", ##revision="3ba7fab", ##AutoGPTQ format ) ##import habana_frameworks.torch.core as htcore ## uncommnet it for HPU ##import habana_frameworks.torch.hpu as hthpu ## uncommnet it for HPU ##model = model.to(torch.bfloat16).to("hpu") ## uncommnet it for HPU prompt = "There is a girl who likes adventure," messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=200, ##change this to align with the official usage do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ##INT2: ## That sounds exciting! What kind of adventures does she enjoy? Is she into hiking, traveling, trying new foods, or something else? Tell me more about her! ##BF16: ## That sounds like the start of an exciting story. The girl who likes adventure, let's call her Alex, is probably always looking for her next thrill. She might enjoy activities like hiking, rock climbing, or exploring new places. Perhaps she's always been drawn to the unknown and loves to challenge herself to try new things. prompt = "Which one is larger, 9.11 or 9.8" ##INT2: ## 9.11 is larger than 9.8. ##BF16: ## 9.11 is larger than 9.8. prompt = "How many r in strawberry." ##INT2: ## There are 2 R's in the word "strawberry". ##BF16: ## There are 2 R's in the word "strawberry". prompt = "Once upon a time," ##INT2: ## It seems like you're starting a story! I'd love to hear more. Please go on... ##BF16: ## ...in a far-off kingdom, where the sun dipped into the horizon and painted the sky with hues of crimson and gold, there lived a young adventurer named Sophia. She had hair as black as the night and eyes as blue as the clearest summer sky. Sophia was known throughout the land for her bravery, kindness, and insatiable curiosity. ## What would you like to happen next in the story? 
Would you like Sophia to: ## A) Embark on a quest to find a legendary treasure ## B) Encounter a mysterious stranger with a hidden agenda ## C) Discover a magical forest filled with ancient secrets ## D) Something entirely different (please specify) ## Choose your response to progress the story! ``` ### Evaluate the model pip3 install lm-eval==0.4.7 we found lm-eval is very unstable for this model. Please set `add_bos_token=True `to align with the origin model. **Please use autogptq format** ```bash lm-eval --model hf --model_args pretrained=OPEA/Llama-3.3-70B-Instruct-int2-sym-inc,add_bos_token=True --tasks leaderboard_mmlu_pro,leaderboard_ifeval,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k --batch_size 16 ``` | Metric | BF16(lm-eval==0.4.5) | W2G32 With BOS | BF16(lm-eval==0.4.7 with BOS) | WO BOS | | :------------------------: | :----------------------: | ------------------------- | ----------------------------- | :---------------: | | avg | 0.7023 | 0.6606 | | | | leaderboard_mmlu_pro 5shot | 0.5484 | 0.4461 | | 0.4384 | | mmlu | 0.8195 | 0.7606 | 0.8229 | 0.7142 | | lambada_openai | 0.7528 | 0.7413 | | 0.7013 | | hellaswag | 0.6575 | 0.6056 | | 0.5576 | | winogrande | 0.7869 | 0.7727 | | 0.7080 | | piqa | 0.8303 | 0.8069 | | 0.7797 | | truthfulqa_mc1 | 0.4284 | 0.3647 | | 0.3586 | | openbookqa | 0.3720 | 0.3540 | | 0.3000 | | boolq | 0.8865 | 0.8716 | | 0.8339 | | arc_easy | 0.8624 | 0.8367 | | 0.8064 | | leaderboard_ifeval | 0.6661=(0.7110+0.6211)/2 | 0.61235=(0.6739+0.5508)/2 | | (0.5959+0.4603)/2 | | arc_challenge | 0.6109 | 0.5580 | | 0.5188 | | gsm8k(5shot) strict match | 0.9083 | 0.8575 | | | ## Generate the model Here is the sample command to generate the model. We found that AutoRound is highly unstable for 2-bit quantization of this model, requiring careful tuning of certain hyperparameters. We recommend mixed-bits quantization or trying some other more complex tuning algorithms. ```bash auto-round \ --model meta-llama/Llama-3.3-70B-Instruct \ --device 0 \ --group_size 32 \ --nsamples 1024 \ --bits 2 \ --iter 2000 \ --disable_eval \ --model_dtype "fp16" \ --enable_norm_bias_tuning \ --low_gpu_mem_usage \ --format 'auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. 
## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
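The int2 card above notes that AutoRound is unstable at 2 bits for this model and recommends mixed-bit quantization. Below is a rough sketch of what that could look like with the AutoRound Python API (the same `layer_config` mechanism used in the DeepSeek-R1 card further down); the layer selection and bit choices here are illustrative assumptions, not the settings OPEA used.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "meta-llama/Llama-3.3-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Note: quantizing a 70B model needs substantial GPU/CPU memory.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Illustrative mixed-bit policy: keep down_proj layers (often more sensitive)
# at 4-bit while the remaining linear layers are quantized to 2-bit.
layer_config = {}
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear) and "down_proj" in name:
        layer_config[name] = {"bits": 4, "group_size": 128}

autoround = AutoRound(
    model=model,
    tokenizer=tokenizer,
    bits=2,
    group_size=32,
    layer_config=layer_config,
)
autoround.quantize()
autoround.save_quantized(format="auto_round", output_dir="./tmp_autoround_mixed")
```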
sarathlella/DoctorGPT-v1
sarathlella
2025-04-30T04:05:23Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2025-04-30T03:25:11Z
--- base_model: microsoft/phi-2 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
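The card above leaves the getting-started code as [More Information Needed]. A minimal sketch follows, assuming the repository is a standard PEFT (LoRA-style) adapter on top of microsoft/phi-2 as the metadata suggests; the prompt is only an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "microsoft/phi-2"
adapter_id = "sarathlella/DoctorGPT-v1"

# Load the base model, then attach the adapter weights from the repo above.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("What are common symptoms of dehydration?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```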
OPEA/MiniMax-Text-01-int4-sym-inc-preview
OPEA
2025-04-30T04:05:19Z
24
1
null
[ "safetensors", "MiniMaxText01", "custom_code", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:MiniMaxAI/MiniMax-Text-01", "base_model:quantized:MiniMaxAI/MiniMax-Text-01", "4-bit", "auto-round", "region:us" ]
null
2025-01-18T12:44:48Z
--- datasets: - NeelNanda/pile-10k base_model: - MiniMaxAI/MiniMax-Text-01 --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [MiniMaxAI/MiniMax-Text-01](https://huggingface.co/MiniMaxAI/MiniMax-Text-01) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm. This model is in AutoRound format, which is **NOT** supported by other serving frameworks, such as vLLM. Please follow the [license](https://huggingface.co/MiniMaxAI/MiniMax-Text-01/blob/main/LICENSE) of the original model. ## INT4 Inference on CUDA**(**4*80G**) Requirements ```bash pip3 install git+https://github.com/intel/auto-round.git pip3 install auto-gptq ``` **This model is prone to overflow when running with int4 kernel with FP16 computation dtype** and does not support CPU, as it explicitly relies on CUDA operations in the model files. While we have implemented several workarounds to ensure functionality, **some prompts may still produce unexpected and random outputs**. ~~~python from auto_round import AutoRoundConfig ##must import for autoround format from transformers import AutoModelForCausalLM, AutoTokenizer import torch quantized_model_dir = "OPEA/MiniMax-Text-01-int4-sym-inc-preview" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(quantized_model_dir, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto") def forward_hook(module, input, output): return torch.clamp(output, -65504, 65504).to(torch.bfloat16) def register_fp16_hooks(model): for name, module in model.named_modules(): if "QuantLinear" in module.__class__.__name__ or isinstance(module, torch.nn.Linear): module.register_forward_hook(forward_hook) register_fp16_hooks(model) tokenizer.pad_token = tokenizer.eos_token prompts = [ "为什么企鹅没有被北极熊吃掉?", "树枝上有十只鸟,如果你射杀了一只,还剩下几只?请用中文回答", "How many r in strawberry.", "There is a girl who likes adventure,", "hello" ] texts = [] for prompt in prompts: messages = [ {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant created by MiniMax based on MiniMax-Text-01 model."}]}, {"role": "user", "content": [{"type": "text", "text": prompt}]}, ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) texts.append(text) inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, padding_side='left') outputs = model.generate( input_ids=inputs["input_ids"].to(model.device), attention_mask=inputs["attention_mask"].to(model.device), max_new_tokens=512, num_return_sequences=1, do_sample=False, eos_token_id=200020, ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs) ] decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) for i, prompt in enumerate(prompts): input_id = inputs print(f"Prompt: {prompt}") print(f"Generated: {decoded_outputs[i]}") print("-" * 50) """ Prompt: 为什么企鹅没有被北极熊吃掉? Generated: ### 1. **地理分布差异** - **企鹅**:主要生活在**南半球**,例如**南极洲**。在南极洲,企鹅没有天敌,因为这里的环境非常恶劣,食物资源有限,动物数量也有限,企鹅是这里的顶级掠食者之一。 - **北极熊**:主要生活在**北半球**,例如**北极地区**。北极熊是北极地区的顶级掠食者之一,它们以海豹等动物为食。 - **结论**:由于**地理分布**的差异,**企鹅和北极熊**在自然界中**无法相遇**,因此**北极熊无法吃掉企鹅**。 ### 2. **人为因素** - **动物园或水族馆**:在**人为因素**的影响 -------------------------------------------------- Prompt: 树枝上有十只鸟,如果你射杀了一只,还剩下几只?请用中文回答 Generated: 让我一步步思考这个问题: 1. 首先,树枝上有10只鸟 2. 射杀1只后,还剩9只 3. 但实际上,当枪声响起,其他鸟会因惊吓而飞走 4. 
所以,当射杀1只后,树上不会剩下任何鸟 因此,答案是:0只 因为鸟会因枪声而飞走,不会继续停留在树上。 -------------------------------------------------- Prompt: How many r in strawberry. Generated: Let me solve this step by step. 1. First, let me count the r's in "strawberry" as I say it * s (not r) * t (not r) * r (1st r) * a (not r) * w (not r) * b (not r) * b (not r) * e (not r) * r (2nd r) * r (3rd r) * y (not r) 2. Counting the r's: 3 r's Therefore, there is 3 r in strawberry. The answer is 3. -------------------------------------------------- Prompt: There is a girl who likes adventure, Generated: There is a girl who likes adventure, and her name is Emily. Emily has always been drawn to the thrill of the unknown, the excitement of stepping into uncharted territory. Here is a story about -------------------------------------------------- Prompt: hello Generated: Hello! How can I assist you today? -------------------------------------------------- """ ~~~ ## Generated the model (2*80G) pip3 install git+https://github.com/intel/auto-round.git ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "MiniMaxAI/MiniMax-Text-01" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.bfloat16) fp_layers = [f"model.layers.{i}.block_sparse_moe.gate" for i in range(model.config.num_hidden_layers)] layer_config = {} for fp_layer in fp_layers: layer_config[fp_layer] = {"bits": 16} device_map = {} for i in range(32): key = fr"model\.layers\.\d+\.block_sparse_moe\.experts\.{str(i)}\..*$" if i < 14: device_map[key] = 0 else: device_map[key] = 1 from auto_round import AutoRound autoround = AutoRound(model=model, tokenizer=tokenizer, layer_config=layer_config, device_map=device_map, batch_size=1,gradient_accumulate_steps=4, seqlen=512) autoround.quantize() autoround.save_quantized(format="auto_round", output_dir="tmp_autoround") ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/Meta-Llama-3.1-70B-Instruct-int4-asym-inc
OPEA
2025-04-30T04:05:00Z
21
1
null
[ "safetensors", "llama", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:meta-llama/Llama-3.1-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-70B-Instruct", "license:llama3.1", "4-bit", "auto-round", "region:us" ]
null
2024-11-29T02:55:40Z
--- datasets: - NeelNanda/pile-10k license: llama3.1 base_model: - meta-llama/Llama-3.1-70B-Instruct --- ## Model Card Details This model is an int4 model with group_size 128 and asymmetric quantization of [meta-llama/Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round); auto-round is required to run this model. The [symmetric model](https://huggingface.co/OPEA/Meta-Llama-3.1-70B-Instruct-int4-sym-inc) is recommended for better performance. ## Inference on CPU/HPU/CUDA HPU: a docker image with the Gaudi Software Stack is recommended; please refer to the following script for environment setup. More details can be found in the [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built). **CUDA (must install from source)**: git clone https://github.com/intel/auto-round && cd auto-round && pip install -vvv --no-build-isolation -e . ```python from auto_round import AutoHfQuantizer ##must import for auto-round format import torch from transformers import AutoModelForCausalLM,AutoTokenizer quantized_model_dir = "OPEA/Meta-Llama-3.1-70B-Instruct-int4-asym-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype='auto', device_map="auto", ) ##import habana_frameworks.torch.core as htcore ## uncomment it for HPU ##import habana_frameworks.torch.hpu as hthpu ## uncomment it for HPU ##model = model.to(torch.bfloat16).to("hpu") ## uncomment it for HPU prompt = "There is a girl who likes adventure," messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=50, ##change this to align with the official usage do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ##prompt = "Which one is bigger, 9.11 or 9.8" ##9.11 is bigger than 9.8. ##prompt = "Once upon a time," ##it seems like we're about to start a classic fairy tale. Would you like to continue the story, or would you like me to take over and spin a yarn for you? ##prompt = "There is a girl who likes adventure," ##That sounds exciting. What kind of adventures is she interested in? Is she more into outdoor activities like hiking, rock climbing, or exploring new places, or does she enjoy indoor adventures like solving puzzles, playing escape rooms, or reading fantasy novels? 
``` ### Evaluate the model pip3 install lm-eval==0.4.2 ```bash auto-round --eval --model_name "OPEA/Meta-Llama-3.1-70B-Instruct-int4-asym-inc" --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k --trust_remote_code ``` | Metric | BF16 | INT4 | | ------------------------- | ------ | ------ | | avg | 0.7182 | 0.7165 | | mmlu | 0.8221 | 0.8145 | | lambada_openai | 0.7566 | 0.7565 | | hellaswag | 0.6522 | 0.6492 | | winogrande | 0.7901 | 0.8090 | | piqa | 0.8308 | 0.8270 | | truthfulqa_mc1 | 0.4064 | 0.4051 | | openbookqa | 0.3720 | 0.3760 | | boolq | 0.8777 | 0.8768 | | arc_easy | 0.8674 | 0.8565 | | arc_challenge | 0.6246 | 0.6160 | | gsm8k(5shot) strict match | 0.8999 | 0.8954 | ## Generate the model Here is the sample command to reproduce the model. We found auto-round is not stable for this model, you may need to tune the quantization configruations. ```bash auto-round \ --model meta-llama/Meta-Llama-3.1-70B-Instruct \ --device 0 \ --group_size 128 \ --nsamples 512 \ --bits 4 \ --iter 1000 \ --asym \ --disable_eval \ --low_gpu_mem_usage \ --format 'auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/DeepSeek-R1-int2-mixed-sym-inc
OPEA
2025-04-30T04:04:46Z
31
8
null
[ "safetensors", "deepseek_v3", "custom_code", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:deepseek-ai/DeepSeek-R1", "base_model:quantized:deepseek-ai/DeepSeek-R1", "2-bit", "auto-round", "region:us" ]
null
2025-02-14T05:06:41Z
--- datasets: - NeelNanda/pile-10k base_model: - deepseek-ai/DeepSeek-R1 --- ## Model Details This model is an int2 model with group_size 64 and symmetric quantization of [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm. Some layers are fallback to 4/16 bits. Refer to Section "Generate the model" for more details of mixed bits setting. Please follow the license of the original model. This model could **NOT** run on other severing frameworks. ## Accuracy For IN2-mixed, we evaluate it on CUDA with overflow protection. The CPU version is expected to be more accurate. | | BF16 | INT2-mixed | | ------------- | ------ | ---------- | | mmlu | 0.8514 | 0.8302 | | arc_challenge | 0.6212 | 0.6084 | | hellaswag | 0.6935 | 0.6657 | | winogrande | 0.7932 | 0.7940 | ## How To Use ### INT2 Inference on CUDA(4X80G) please note int2 **may be slower** than int4 on CUDA due to kernel issue. **To prevent potential overflow and achieve better accuracy, we recommend using the CPU version detailed in the next section.** ~~~python import transformers from transformers import AutoModelForCausalLM, AutoTokenizer from auto_round import AutoRoundConfig ##must import for auto-round format # https://github.com/huggingface/transformers/pull/35493 def set_initialized_submodules(model, state_dict_keys): """ Sets the `_is_hf_initialized` flag in all submodules of a given model when all its weights are in the loaded state dict. """ state_dict_keys = set(state_dict_keys) not_initialized_submodules = {} for module_name, module in model.named_modules(): if module_name == "": # When checking if the root module is loaded there's no need to prepend module_name. module_keys = set(module.state_dict()) else: module_keys = {f"{module_name}.{k}" for k in module.state_dict()} if module_keys.issubset(state_dict_keys): module._is_hf_initialized = True else: not_initialized_submodules[module_name] = module return not_initialized_submodules transformers.modeling_utils.set_initialized_submodules = set_initialized_submodules import torch quantized_model_dir = "OPEA/DeepSeek-R1-int2-mixed-sym-inc" ## directly use device_map='auto' if you have enough GPUs device_map = {"model.norm": 0, "lm_head": 0, "model.embed_tokens": 0} for i in range(61): name = "model.layers." + str(i) if i < 15: device_map[name] = 0 elif i < 30: device_map[name] = 1 elif i < 45: device_map[name] = 2 else: device_map[name] = 3 model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype=torch.float16, trust_remote_code=True, device_map=device_map, ) def forward_hook(module, input, output): return torch.clamp(output, -65504, 65504) def register_fp16_hooks(model): for name, module in model.named_modules(): if "QuantLinear" in module.__class__.__name__ or isinstance(module, torch.nn.Linear): module.register_forward_hook(forward_hook) register_fp16_hooks(model) ##better add this hook to avoid overflow tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True) prompts = [ "9.11和9.8哪个数字大", "如果你是人,你最想做什么“", "How many e in word deepseek", "There are ten birds in a tree. A hunter shoots one. 
How many are left in the tree?", ] texts = [] for prompt in prompts: messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) texts.append(text) inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True) outputs = model.generate( input_ids=inputs["input_ids"].to(model.device), attention_mask=inputs["attention_mask"].to(model.device), max_length=512, ##change this to align with the official usage num_return_sequences=1, do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs) ] decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) for i, prompt in enumerate(prompts): input_id = inputs print(f"Prompt: {prompt}") print(f"Generated: {decoded_outputs[i]}") print("-" * 50) """ Prompt: 9.11和9.8哪个数字大 Generated: <think> 首先,我需要比较两个数字:9.11和9.8。 为了准确比较,我首先将它们转换为相同的小数位数。将9.8写成9.80,这样两个数字都有两位小数。 接下来,我比较整数部分。两个数字的整数部分都是9,所以它们在这一部分相等。 然后,我比较小数部分。9.11的小数部分是0.11,而9.80的小数部分是0.80。显然,0.80大于0.11。 因此,综合整数和小数部分的比较结果,9.80大于9.11。 </think> 要比较两个数字 **9.11** 和 **9.8** 的大小,我们可以按照以下步骤进行: 1. **统一小数位数**: - 将 **9.8** 写成 **9.80**,以便与 **9.11** 进行比较。 2. **比较整数部分**: - 两个数字的整数部分都是 **9**,因此整数部分相等。 3. **比较小数部分**: - **9.11** 的小数部分是 **0.11**。 - **9.80** 的小数部分是 **0.80**。 - 显然,**0.80** 大于 **0.11**。 4. **综合比较结果**: - 由于整数部分相等,小数部分 **0.80** 大于 **0.11**,因此 **9.80** 大于 **9.11**。 **最终结论:** \[ \boxed{9.8 \text{ 大于 } 9.11} \] -------------------------------------------------- Prompt: 如果你是人,你最想做什么“ Generated: <think> 嗯,如果我是人,我最想做什么呢?这个问题挺有意思的。首先,我需要理解“如果我是人”这个前提。作为一个人,我有自己的思想、情感和自由意志,对吧?所以,我可以选择自己想要的生活方式和追求的目标。 首先,可能我会考虑自己的兴趣和爱好。比如,如果我喜欢艺术,可能会想成为画家或音乐家;如果对科学感兴趣,可能会投身于研究。但也许我更倾向于帮助别人,所以可能选择成为医生、教师或社会工作者。这些都是常见的职业选择,但作为一个人,可能还有更多的可能性。 然后,我需要考虑自己的价值观。如果我认为家庭很重要,可能会想建立一个幸福的家庭,花时间陪伴家人。如果更关注社会贡献,可能会参与公益活动,帮助需要帮助的人。或者,如果追求个人成就,可能会努力在事业上取得成功,获得认可和成就感。 另外,作为人,可能会有很多梦想和愿望。比如,旅行世界,体验不同的文化;学习新技能,不断自我提升;或者追求某种精神层面的满足,比如冥想、哲学探索等。这些都是可能的选项。 不过,也有可能存在挑战和困难。比如,经济压力、时间限制、社会压力等,这些都可能影响我的选择。所以,我需要权衡利弊,找到最适合自己的道路。 还有,作为人,可能会有不同的阶段。年轻时可能更注重探索和冒险,中年时可能追求稳定和家庭,老年时可能寻求平静和传承。不同阶段有不同的目标和愿望。 另外,人际关系也很重要。作为人,与朋友、家人、同事的关系会影响我的幸福感和满足感。所以,维护良好的人际关系可能也是一个重要的目标。 可能还需要考虑健康问题。保持身体健康,才能更好地追求其他目标。所以,锻炼、合理饮食、心理健康也是需要考虑的方面。 还有,教育的重要性。不断学习和成长,获取新知识,提升自己的能力,这对实现各种目标都是基础。 当然,每个人的情况不同,所以我的选择也会因人而异。但总的来说,作为人,最想做的事情可能是一个综合性的目标,结合了个人兴趣、价值观、社会贡献、家庭、健康等多个方面。 不过,可能还需要考虑现实因素。比如,经济条件允许吗?社会支持如何?有没有足够的资源和机会?这些都会影响最终的选择。 也许,如果我是人,我会追求一种平衡的生活,既满足个人发展,又能帮助他人,同时享受生活中的美好时光。这可能包括 -------------------------------------------------- Prompt: How many e in word deepseek Generated: <think> Okay, so I need to figure out how many times the letter 'e' appears in the word "deepseek." Let me start by writing the word out to see each letter clearly. The word is d-e-e-p-s-e-e-k. Let me count each letter one by one. Starting with the first letter: that's a 'd'. No 'e' there. The next letter is 'e', so that's one. Then the next letter is another 'e', making it two. The third letter is 'p', which isn't an 'e'. Then comes 's', also not an 'e'. After that, another 'e', so that's three. Then another 'e', bringing the count to four. Finally, the last letter is 'k', which isn't an 'e'. Wait, let me check again to make sure I didn't miss any. The word is spelled D-E-E-P-S-E-E-K. So breaking it down: D, E, E, P, S, E, E, K. So positions 2, 3, 6, and 7 are all 'e's. That's four 'e's in total. Hmm, I think that's correct. 
Let me verify by writing the letters with their positions: 1: D 2: E 3: E 4: P 5: S 6: E 7: E 8: K Yes, positions 2, 3, 6, and 7 are all 'e's. So that's four instances of the letter 'e'. I don't think I missed any. So the answer should be 4. </think> The word "deepseek" contains the letter 'e' four times. **Step-by-Step Explanation:** 1. Write out the word: D, E, E, P, S, E, E, K. 2. Identify each 'e' by its position: - Position 2: E - Position# 1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1. -------------------------------------------------- Prompt: There are ten birds in a tree. A hunter shoots one. How many are left in the tree? Generated: <think> Okay, so there's this problem here: "There are ten birds in a tree. A hunter shoots one. How many are left in the tree?" Hmm, at first glance, it seems straightforward, but I remember sometimes these kinds of questions have a trick to them. Let me think through this step by step. Alright, starting with the basics. If there are ten birds in a tree, and a hunter shoots one, the immediate mathematical answer would be 10 minus 1, which is 9. So, nine birds left. But wait, maybe there's more to it. Sometimes these riddles play on the situation rather than just the numbers. Let me consider different angles. First, when the hunter shoots, the sound of the gunshot might scare the other birds. So, even if only one bird is shot, the rest might fly away. If all the other birds get scared and fly off, then there would be zero birds left in the tree. But is that the case here? The problem doesn't specify whether the other birds are scared or not. It just says a hunter shoots one. So, maybe the question is testing if you consider that possibility. But then again, maybe it's a straightforward math problem. If you take one away from ten, you get nine. But in real-life scenarios, the noise would likely cause the other birds to flee. However, the problem doesn't mention anything about the birds flying away. So, is it assuming that the other birds stay? Or is it expecting you to consider the real-world consequence? I think this is a classic riddle where the expected answer is zero because the remaining birds would fly away after the gunshot. But I should verify if that's the common interpretation. Let me check similar riddles. For example, "There are ten birds on a fence, you shoot one, how many are left?" The answer is usually zero because the rest fly away. So, applying that logic here, even though the setting is a tree instead of a fence, the principle would be the same. The act of shooting would scare the other birds, resulting in none remaining. But wait, maybe the question is more literal. If the hunter successfully shoots one bird, that bird would presumably fall out of the tree, leaving nine. But if the other birds don't flee, they would remain. However, in reality, birds are """ ~~~ ### INT2 Inference on CPU Requirements ~~~bash pip install auto-round pip uninstall intel-extension-for-pytorch pip install intel-extension-for-transformers ~~~ **It would be quite slow if the cpu does not support avx512** ~~~python import transformers from transformers import AutoModelForCausalLM, AutoTokenizer from auto_round import AutoRoundConfig ##must import for auto-round format # https://github.com/huggingface/transformers/pull/35493 def set_initialized_submodules(model, state_dict_keys): """ Sets the `_is_hf_initialized` flag in all submodules of a given model when all its weights are in the loaded state dict. 
""" state_dict_keys = set(state_dict_keys) not_initialized_submodules = {} for module_name, module in model.named_modules(): if module_name == "": # When checking if the root module is loaded there's no need to prepend module_name. module_keys = set(module.state_dict()) else: module_keys = {f"{module_name}.{k}" for k in module.state_dict()} if module_keys.issubset(state_dict_keys): module._is_hf_initialized = True else: not_initialized_submodules[module_name] = module return not_initialized_submodules transformers.modeling_utils.set_initialized_submodules = set_initialized_submodules import torch quantized_model_dir = "OPEA/DeepSeek-R1-int2-mixed-sym-inc" quantization_config = AutoRoundConfig( backend="cpu", ) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="cpu", quantization_config=quantization_config ) tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True) prompts = [ "9.11和9.8哪个数字大", "如果你是人,你最想做什么“", "How many e in word deepseek", "There are ten birds in a tree. A hunter shoots one. How many are left in the tree?", ] texts = [] for prompt in prompts: messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) texts.append(text) inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True) outputs = model.generate( input_ids=inputs["input_ids"].to(model.device), attention_mask=inputs["attention_mask"].to(model.device), max_length=512, ##change this to align with the official usage num_return_sequences=1, do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs) ] decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) for i, prompt in enumerate(prompts): input_id = inputs print(f"Prompt: {prompt}") print(f"Generated: {decoded_outputs[i]}") print("-" * 50) """ Prompt: 9.11和9.8哪个数字大 Generated: <think> 首先,我需要比较两个数字:9.11和9.8。 首先,比较整数部分。两个数字的整数部分都是9,所以整数部分相同。 接下来,比较小数部分。9.11的小数部分是0.11,而9.8的小数部分是0.8。 由于0.8大于0.11,因此9.8的小数部分更大。 综合整数部分和小数部分的结果,可以确定9.8大于9.11。 </think> 要比较两个数字 **9.11** 和 **9.8** 的大小,可以按照以下步骤进行: 1. **比较整数部分**: - 两个数字的整数部分都是 **9**,因此整数部分相同。 2. **比较小数部分**: - **9.11** 的小数部分是 **0.11** - **9.8** 的小数部分是 **0.8**(即 **0.80**) 3. **比较小数部分的大小**: - **0.80** 大于 **0.11**,因此 **9.8** 的小数部分更大。 **结论**:由于整数部分相同且 **9.8** 的小数部分更大,因此 **9.8** 大于 **9.11**。 \boxed{9.8} -------------------------------------------------- Prompt: 如果你是人,你最想做什么“ Generated: <think> 嗯,如果我是人,我最想做什么呢?这个问题挺有意思的。首先,我需要理解“人”在这里指的是什么。可能是指拥有自主意识、情感和自由意志的人类。那么,作为一个人,我最想做的事情可能和人类通常追求的东西有关,比如幸福、成就、人际关系、自我实现等等。 首先,幸福可能是大多数人的追求。所以,如果我是人,我可能会追求让自己感到快乐和满足的事情。这可能包括从事自己喜欢的活动,比如艺术、音乐、运动,或者帮助他人,因为帮助他人也能带来满足感。 其次,自我实现也是一个重要的方面。根据马斯洛的需求层次理论,自我实现是最高层次的需求,指的是实现个人潜能、追求个人成长和高峰体验。所以,如果我是人,我可能会追求在某个领域达到卓越,比如成为科学家、艺术家、作家,或者在其他专业领域有所建树。 另外,人际关系也是人类生活的重要组成部分。建立和维护亲密的关系,如家庭、朋友和伴侣关系,可能会是重要的目标。作为人,我可能会努力培养这些关系,寻找爱和归属感。 还有,探索和好奇心也是人类的驱动力。如果我是人,可能会对世界充满好奇,想要探索不同的文化、科学、技术,或者旅行到不同的地方,体验不同的生活方式。 不过,这些想法可能受到个人价值观、文化背景和生活经历的影响。不同的人可能有不同的优先事项。例如,有些人可能更注重物质财富,而另一些人则更重视精神层面的满足。此外,个人的兴趣和才能也会影响他们的选择。比如,一个对技术感兴趣的人可能会追求科技创新,而一个热爱艺术的人可能会投身于创作。 另外,作为人,可能还会面临各种挑战和困难,比如经济压力、健康问题、社会压力等。因此,在追求目标的过程中,也需要应对这些挑战,这可能涉及到解决问题、适应变化和克服障碍。 还有,道德和伦理也是需要考虑的因素。作为人,可能会思考自己的行为对他人和社会的影响,努力做出符合道德的选择,比如环保、社会正义等。 总结一下,如果我是人,我最想做的事情可能包括追求幸福、自我实现、建立有意义的人际关系、探索世界、应对挑战,并遵循道德准则。当然,具体的目标和愿望可能会因人而异,取决于个人的价值观、兴趣和环境。 </think> 如果我是人,我最想做的事情将围绕以下几个核心方面展开: 1. 
** -------------------------------------------------- Prompt: How many e in word deepseek Generated: <think> Okay, so I need to figure out how many times the letter 'e' appears in the word "deepseek". Let me start by writing down the word to visualize it better. The word is D, E, E, P, S, E, E, K. Let me count each letter one by one. Starting with the first letter: D. That's not an 'e', so I can skip that. The next letter is E. That's one 'e'. The third letter is another E, so that's two. Then comes P, which isn't an 'e'. After P is S, also not an 'e'. Then the next letter is E again, making it three. The following letter is another E, bringing the count to four. Finally, the last letter is K, which isn't an 'e'. Wait, let me check again to make sure I didn't miss any. D, E, E, P, S, E, E, K. So positions 2, 3, 6, and 7 are all 'e's. That's four in total. Hmm, but sometimes people might get confused if there are uppercase or lowercase letters, but since the word is written as "deepseek", all letters are lowercase, so that shouldn't be an issue. I think that's it. The letter 'e' appears four times in "deepseek". </think> The letter 'e' appears 4 times in the word "deepseek". **Step-by-Step Explanation:** 1. **Write out the word:** D, E, E, P, S, E, E, K. 2. **Identify each letter and count the 'e's:** - Position 1: D (not 'e') - Position 2: E (count = 1) - Position 3: E (count = 2) - Position 4: P (not 'e') - Position 5: S (not 'e') - Position 6: E (count = 3) - Position 7: E (count = 4) - Position 8: K (not 'e') 3. **Total count of 'e's:** 4 **Answer:** There are 4 'e's in the word "deepseek". -------------------------------------------------- Prompt: There are ten birds in a tree. A hunter shoots one. How many are left in the tree? Generated: <think> Okay, so there's this problem here: "There are ten birds in a tree. A hunter shoots one. How many are left in the tree?" Hmm, at first glance, it seems straightforward, but I remember sometimes these kinds of questions have a trick to them. Let me think through this step by step. Alright, starting with the basics. There are ten birds in the tree. If a hunter shoots one, the immediate thought is to subtract one from ten, which would leave nine birds. But wait, maybe there's more to it. I've heard similar riddles where the answer isn't just a simple subtraction. For example, sometimes when a bird is shot, the other birds might fly away because of the noise. So, if the hunter shoots one bird, the rest might get scared and leave the tree. In that case, there would be zero birds left. But the problem doesn't explicitly say that the other birds fly away. It just says the hunter shoots one. So, do we assume the others stay, or do we assume they flee? Let me consider both possibilities. If we take the problem literally, the hunter shoots one bird, so that one is either dead or injured and falls out of the tree. The remaining birds would then be ten minus one, which is nine. But if the gunshot scares the other birds, they might all fly away immediately. In that scenario, even though the hunter only shot one, the rest are startled and leave, resulting in zero birds remaining. Which interpretation is correct? The problem doesn't specify whether the other birds are scared by the gunshot. It's a bit ambiguous. In typical riddles like this, the answer is often zero because the noise would cause the other birds to fly off. But if we're being strictly mathematical and not considering the behavior of the birds, it would be nine. 
However, since this is presented as a riddle, it's more likely expecting the answer that considers the behavior of the birds, leading to zero remaining. Wait, but let me check if there's another angle. Maybe the question is testing something else. For example, if the hunter shoots one bird, but misses, then all the birds might stay. But the problem says the hunter shoots one, which implies that the shot was successful. So the bird is hit. If the hunter hits one bird, that bird is """ ~~~ ### Generate the model **1 add meta data to bf16 model** https://huggingface.co/opensourcerelease/DeepSeek-R1-bf16 ~~~python import safetensors from safetensors.torch import save_file for i in range(1, 164): idx_str = "0" * (5-len(str(i))) + str(i) safetensors_path = f"model-{idx_str}-of-000163.safetensors" print(safetensors_path) tensors = dict() with safetensors.safe_open(safetensors_path, framework="pt") as f: for key in f.keys(): tensors[key] = f.get_tensor(key) save_file(tensors, safetensors_path, metadata={'format': 'pt'}) ~~~ **2 remove torch.no_grad** in modeling_deepseek.py as we need some tuning in AutoRound. https://github.com/intel/auto-round/blob/deepseekv3/modeling_deepseek.py 5*80g and 1.4T-1.6T memory is required ~~~python import torch from transformers import AutoModelForCausalLM, AutoTokenizer import transformers # https://github.com/huggingface/transformers/pull/35493 def set_initialized_submodules(model, state_dict_keys): """ Sets the `_is_hf_initialized` flag in all submodules of a given model when all its weights are in the loaded state dict. """ state_dict_keys = set(state_dict_keys) not_initialized_submodules = {} for module_name, module in model.named_modules(): if module_name == "": # When checking if the root module is loaded there's no need to prepend module_name. 
module_keys = set(module.state_dict()) else: module_keys = {f"{module_name}.{k}" for k in module.state_dict()} if module_keys.issubset(state_dict_keys): module._is_hf_initialized = True else: not_initialized_submodules[module_name] = module return not_initialized_submodules transformers.modeling_utils.set_initialized_submodules = set_initialized_submodules model_name = "opensourcerelease/DeepSeek-R1-bf16" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, torch_dtype="auto") block = model.model.layers device_map = {} for n, m in block.named_modules(): if isinstance(m, (torch.nn.Linear, transformers.modeling_utils.Conv1D)): if "experts" in n and ("shared_experts" not in n) and int(n.split('.')[-2]) < 63: device = "cuda:1" elif "experts" in n and ("shared_experts" not in n) and int(n.split('.')[-2]) >= 63 and int( n.split('.')[-2]) < 128: device = "cuda:2" elif "experts" in n and ("shared_experts" not in n) and int(n.split('.')[-2]) >= 128 and int( n.split('.')[-2]) < 192: device = "cuda:3" elif "experts" in n and ("shared_experts" not in n) and int( n.split('.')[-2]) >= 192: device = "cuda:4" else: device = "cuda:0" n = n[2:] device_map.update({n: device}) from auto_round import AutoRound layer_config = {} for n, m in model.named_modules(): if not isinstance(m, (torch.nn.Linear, transformers.modeling_utils.Conv1D)): continue if not "experts" in n: layer_config[n] = {"bits": 4, "group_size": 128} if "experts" in n and "shared_experts" in n: layer_config[n] = {"bits": 4, "group_size": 128} ##handle first 3 layers name_splits = n.split('.') if len(name_splits) >= 3 and int(name_splits[2]) < 3: layer_config[n] = {"bits": 4, "group_size": 128} if len(name_splits) >= 3 and int(name_splits[2]) == 60 and "down_proj" in n: layer_config[n] = {"bits": 16} layer_config["lm_head"] = {"bits": 16} autoround = AutoRound(model=model, tokenizer=tokenizer, device_map=device_map, bits=2, group_size=64, iters=400, batch_size=4, seqlen=512, nsamples=512, enable_torch_compile=False, layer_config=layer_config) autoround.quantize() autoround.save_quantized(format="auto_round", output_dir="tmp_autoround") ~~~ ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
agentlans/granite-3.3-2b-instruct-ethics
agentlans
2025-04-30T04:04:00Z
0
0
peft
[ "peft", "safetensors", "granite", "llama-factory", "generated_from_trainer", "ethics", "practical-ethics", "morality", "philosophy", "moral-judgement", "text-generation", "conversational", "en", "dataset:agentlans/reddit-ethics", "base_model:ibm-granite/granite-3.3-2b-instruct", "base_model:adapter:ibm-granite/granite-3.3-2b-instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-04-27T13:06:51Z
--- license: apache-2.0 library_name: peft base_model: ibm-granite/granite-3.3-2b-instruct tags: - llama-factory - generated_from_trainer - ethics - practical-ethics - morality - philosophy - moral-judgement language: - en datasets: - agentlans/reddit-ethics pipeline_tag: text-generation --- # granite-3.3-2b-instruct-ethics This model is a fine-tuned version of [ibm-granite/granite-3.3-2b-instruct](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct) on the [agentlans/reddit-ethics](https://huggingface.co/datasets/agentlans/reddit-ethics) dataset. It's meant to evaluate ethical dilemmas in everyday life using three philosophical frameworks: - utilitarianism: Evaluates actions by their consequences, aiming to maximize overall well-being. - deontology: Assesses the moral rightness of actions based on rules, duties, or obligations, regardless of outcomes. - virtue ethics: Focuses on the character traits and intentions of the agents involved, emphasizing virtues such as honesty, integrity, and fairness. Furthermore, it also raises discussion questions and answers as well as an overall ethical judgement. ## Input format ``` Ethically analyze: {{YOUR_TEXT_HERE}} ``` Example input ([source](https://www.reddit.com/r/AmItheAsshole/comments/1k6sag2/aitah_for_playing_a_voice_recording_to_prove_to/)): ``` Ethically analyze: So I (30f) have a friend (34f) that has a “non-verbal” daughter (3f) let’s call her Allie. [...] Peace of mind for everybody sounds good to me! Also, I will be making a call to cps about the situation and see what they have to say. Thank you everybody for your kind words ❤️ ``` ## Output format JSON format of the ethical evaluation. Example output (prettified): ```json { "title": "Recording a Child's Progress: The Ethics of Surveillance and Parental Trust", "description": "A daycare worker records a child's verbal abilities, leading to conflict with the child's mother. The worker feels they acted unprofessionally and wants to prevent similar incidents.", "issues": [ "Parental Trust vs. Child's Development", "Surveillance vs. Privacy", "Professionalism vs. Personal Beliefs" ], "utilitarianism": "Recording the child's progress may have helped the mother and child, but the worker's actions may have caused unnecessary conflict.", "deontology": "The worker had a duty to respect the mother's trust and privacy, even if it meant not recording the child's abilities.", "virtue_ethics": "Honesty, respect, and empathy guide the worker's actions. They should have communicated their intentions and respected the mother's decisions.", "questions": [ "Was the worker justified in recording the child's abilities without the mother's consent?", "How should the worker balance their duty to the child's development with respect for the mother's trust?", "What are the implications of recording children's progress for parents and children?" ], "answers": [ "No, the worker should have respected the mother's trust and not recorded the child's abilities without explicit consent.", "The worker should have communicated their intentions and respected the mother's decisions regarding recording the child's progress.", "Recording children's progress can be beneficial, but it's essential to respect parents' trust and privacy, and to ensure that the child's well-being is prioritized." ], "resolution": "The worker should apologize to the mother and child for any distress caused, and work with the family to establish clear guidelines for recording and sharing information. 
They should also consider seeking guidance from their employer or a professional organization to ensure their actions align with best practices and ethical standards." } ``` ## Limitations - Trained on everyday ethical dilemmas on Reddit - May not work well for out-of-distribution inputs - Like bizarre thought experiments - And very specialized ethics such as medical ethics - The input should contain enough context to make a moral evaluation - The input requires the user to be conscientious and self-reflective - The model may be vulnerable to situation framing - Biased inputs can create biased outputs - For example, it might not be hard to glorify dictators, war criminals, terrorists, and mass murderers given the right propaganda - The model only offers suggestions and a starting point based on a short analysis - There could be other ways to resolve the dilemma - Most importantly, the user should use clear reasoning, human values, and consideration for others ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 1.0 ### Framework versions - PEFT 0.15.0 - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0 ## Licence Apache 2.0
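## Example usage (sketch)

The snippet below is a minimal inference sketch, not code from the original card. It assumes this repository is a PEFT/LoRA adapter on top of ibm-granite/granite-3.3-2b-instruct (as the adapter tags suggest), that the base tokenizer's chat template is used, and that the documented `Ethically analyze:` prefix is sent as a single user turn. The dilemma text, `max_new_tokens`, and greedy decoding are illustrative choices only.

```python
import json
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "ibm-granite/granite-3.3-2b-instruct"
adapter_id = "agentlans/granite-3.3-2b-instruct-ethics"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter

# Hypothetical dilemma text; replace with your own scenario.
dilemma = "I found a coworker's wallet and thought about keeping the cash before returning it."
messages = [{"role": "user", "content": f"Ethically analyze: {dilemma}"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# The card documents JSON output; fall back to raw text if parsing fails.
try:
    print(json.dumps(json.loads(reply), indent=2))
except json.JSONDecodeError:
    print(reply)
```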
agentlans/granite-3.3-2b-instruct-critical-thinking
agentlans
2025-04-30T04:03:41Z
5
1
null
[ "safetensors", "granite", "critical-thinking", "analysis", "review", "argument", "text-generation", "conversational", "en", "dataset:agentlans/reddit-logic", "base_model:ibm-granite/granite-3.3-2b-instruct", "base_model:finetune:ibm-granite/granite-3.3-2b-instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-04-27T08:24:47Z
--- license: apache-2.0 datasets: - agentlans/reddit-logic language: - en base_model: - ibm-granite/granite-3.3-2b-instruct pipeline_tag: text-generation tags: - critical-thinking - analysis - review - argument --- ## granite-3.3-2b-instruct-critical-thinking This model is based on [ibm-granite/granite-3.3-2b-instruct](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct) and it's designed for analyzing arguments, finding logical fallacies, and providing suggestions for improvement. It was pre-trained on the [agentlans/reddit-logic](https://huggingface.co/datasets/agentlans/reddit-logic) dataset and then fine-tuned using supervised learning on the same dataset. ### Input Format The model expects input in the following format: ``` Critically analyze: {{YOUR_TEXT_HERE}} ``` For example: ``` Critically analyze: So I've noticed a trend when it comes to the discourse around tipping and I want to be clear from the get go what my views are. I believe a tipping as a system in the US is to allow busine to owners to not pay a fair wage. I disagree with it being the primary way that servers in full service restaurants make their money. That being said, I also believe that if you go to full service restaurant where the waiter isn't giving horrible service then you should be expected to tip. So back to the discourse, it seems like many people are being disingenuous when it comes to caring about the employees by arguing: "I shouldn't be expected to pay them a fair wage". To me this seems like a cop out, because if they truly cared they would not be supporting business that use that model with any money. It seems to me that a lot of people are cheapskates masquerading as rebels to make themselves feel better about what they're doing. To clarify, I do not agree with tipping fast food or other businesses being an expectation where there are guaranteed hourly wages. I only agree with tipping being expected at sit down full service restaurants where tipped minimum wage is in effect. ``` ### Output Format The model outputs a JSON object containing an analysis of the input argument. Here's an example of the expected output format: ```json { "claims": [ "Tipping is a cop-out for avoiding fair wages.", "Tipping is acceptable at full-service restaurants with tipped minimum wage." ], "ambiguous_terms": [ "Cop out", "fair wage" ], "assumptions": [ "Fair wages are a fundamental human right.", "Supporting businesses with tipping is hypocritical." ], "premises": [ "Tipping is a means to avoid paying fair wages.", "Full-service restaurants with tipped minimum wage justify tipping." ], "evidence": { "credibility": "Moderate", "relevance": "High", "sufficiency": "Adequate for argument's scope" }, "additional_data": "Economic studies on tipping systems, employment statistics on full-service restaurants.", "issues": [ "Overgeneralizes about tipping and fair wages." ], "competing_explanations": [ "Tipping can be a way for customers to support quality service.", "Tipping may not be feasible in all economic contexts." ], "validity": "Partially valid", "soundness": "Moderate", "recommendations": [ "Clarify the distinction between tipping and fair wages.", "Consider the complexities of tipping systems and their impact on workers." ] } ``` ## Limitations - The model has the same limitations as the [agentlans/reddit-logic](https://huggingface.co/datasets/agentlans/reddit-logic) dataset. - May not work as well on data outside the training distribution, including other types of communication and fields of discourse. 
- Lacks specialized knowledge but can offer pointers for continuing research to critically evaluate the arguments. - May misinterpret the input or produce malformed output, although this has not occurred in testing so far. - May miss some logical fallacies. - Doesn't fact-check references. ## Training procedure The following setup was used for both pretraining and supervised fine-tuning. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 1.0 ### Framework versions - PEFT 0.15.0 - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0 ## Licence Apache 2.0
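## Example usage (sketch)

A minimal sketch, not taken from the original card: it assumes the checkpoint can be served through the standard `transformers` text-generation pipeline with its built-in chat template, and that the documented `Critically analyze:` prefix is passed as a single user message. The argument text and generation length are placeholder choices.

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="agentlans/granite-3.3-2b-instruct-critical-thinking",
    torch_dtype="auto",
    device_map="auto",
)

# Hypothetical argument to analyze; substitute your own text.
argument = (
    "Everyone I know prefers working from home, "
    "so remote work must be more productive for all companies."
)
messages = [{"role": "user", "content": f"Critically analyze: {argument}"}]

out = pipe(messages, max_new_tokens=512, do_sample=False)
# With chat-style input the pipeline returns the whole conversation,
# so the model's JSON analysis is the content of the last message.
print(out[0]["generated_text"][-1]["content"])
```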
OPEA/Qwen2.5-14B-Instruct-int4-sym-inc
OPEA
2025-04-30T04:03:41Z
9
0
null
[ "safetensors", "qwen2", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:quantized:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "4-bit", "auto-round", "region:us" ]
null
2024-11-29T05:03:30Z
--- license: apache-2.0 datasets: - NeelNanda/pile-10k base_model: - Qwen/Qwen2.5-14B-Instruct language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with `revision="98b3137"` to use AutoGPTQ format. ## How To Use ### INT4 Inference(CPU/HPU/CUDA) CPU requires auto-round version>0.3.1 ```python from auto_round import AutoRoundConfig ##must import for auto-round format from transformers import AutoModelForCausalLM,AutoTokenizer quantized_model_dir = "OPEA/Qwen2.5-14B-Instruct-int4-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype='auto', device_map="auto", ##revision="98b3137" ##AutoGPTQ format ) ##import habana_frameworks.torch.core as htcore ## uncommnet it for HPU ##import habana_frameworks.torch.hpu as hthpu ## uncommnet it for HPU ##model = model.to(torch.bfloat16).to("hpu") ## uncommnet it for HPU prompt = "There is a girl who likes adventure," messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=200, ##change this to align with the official usage do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) prompt = "There is a girl who likes adventure," ##INT4: """ and she wants to go on a trip. She has 10 different types of snacks, and she can only carry 4 of them in her bag. How many different combinations of snacks can she choose from? To determine the number of different combinations of snacks the girl can choose from, we need to calculate the number of ways to choose 4 snacks out of 10. This is a classic combination problem where the order of selection does not matter. The formula for combinations is given by: \[ \binom{n}{r} = \frac{n!}{r!(n-r)!} \] where \( n \) is the total number of items to choose from, \( r \) is the number of items to choose, and \( ! \) denotes factorial. In this problem, \( n = 10 \) and \( r = 4 \). Plugging these values into the formula, we get: \[ \binom{10}{4}""" ##BF16: """ and she has a hobby of collecting rocks. She wants to go on a trip to collect some unique rocks. She plans to visit three different locations: a mountain, a beach, and a desert. Each location has its own set of challenges and opportunities for rock collecting. 1. The mountain is known for its rare mineral deposits, but the terrain is steep and rocky, making it difficult to navigate. 2. The beach offers a variety of sedimentary rocks and fossils, but the tides can be unpredictable and dangerous. 3. The desert provides an opportunity to find petrified wood and volcanic rocks, but the heat and lack of water can be challenging. The girl has a backpack that can carry up to 10 kilograms of rocks. She also has a map that shows the locations of specific types of rocks at each site. 
Her goal is to maximize the number of unique rock types she collects while staying within her weight limit. Given the following information: - Mountain: 5 unique rock types""" prompt = "9.11和9.8哪个数字大" #INT4: """? 9.11 比 9.8 大。 为了比较这两个数,我们可以从它们的小数部分开始: - 9.11 可以看作是 9 + 0.11 - 9.8 可以看作是 9 + 0.8 由于 0.11 小于 0.8,所以 9.11 小于 9.8。因此,9.8 比 9.11 大。 总结:9.8 > 9.11。所以,9.8 是较大的数字。如果你的问题是问哪个数字较大,则答案是 9.8。如果问题是问哪个数字较小,则答案是 9.11。请确认你的问题需求。根据你的描述,9.8 是较大的数字。 希望这能帮助你!如有其他问题,请随时提问。 (注意:在""" ##BF16: """? 9.11 比 9.8 大。 在比较两个小数时,我们从左到右逐位进行比较。首先比较整数部分,如果相同,则比较小数部分。对于9.11 和 9.8: - 整数部分都是9,相等。 - 比较小数部分:0.11 和 0.8。 由于0.11 < 0.8,所以9.11 < 9.8。 因此,9.8 比 9.11 大。 所以,正确的答案是:9.8 比 9.11 大。 希望这能帮助你理解!如果你有其他问题,请随时告诉我。 总结:9.8 > 9.11。 希望这个解释清楚了你的疑问。如果有任何进一步的问题或需要更多帮助,请告诉我! 再次确认:9""" prompt = "Once upon a time," ##INT4: """ there was a young man named John who had a passion for music. He loved playing the guitar and would spend hours every day practicing and perfecting his skills. However, he struggled to find an audience for his music and felt discouraged. """ ##BF16: """ there was a young man named John who lived in a small village. He was an orphan and had to work hard to make ends meet. Despite his difficult circumstances, he was kind-hearted and always willing to help others. One day, a wise old man came to the village and saw John's kindness. The old man decided to test John's character by giving him a bag of gold coins and telling him to distribute it among the villagers. John was overjoyed at first but then realized that he could use the money for himself. However, he remembered the wise man's words and distributed the coins equally among the villagers. The wise man was pleased with John's actions and revealed himself as a fairy godfather. He granted John three wishes, but with a twist - each wish would come true only if John could prove that he deserved it. What are some possible wishes that John might make and how could he prove that he deserves them? 
John, being a kind-hearted individual, might consider wishes that""" prompt = "请简短介绍一下阿里巴巴公司" ##INT4: """阿里巴巴集团创立于1999年,是全球领先的电子商务及零售贸易平台。阿里巴巴集团的使命是让世界各地的企业都能平等地进行贸易。阿里巴巴集团旗下的业务包括淘宝、天猫、菜鸟网络、阿里云等。阿里巴巴集团致力于通过技术创新,为中小企业提供更便捷、高效的商业服务,推动数字经济的发展。阿里巴巴集团在全球范围内拥有数百万商家和消费者用户,已成为全球最大的零售贸易平台之一。阿里巴巴集团总部位于中国杭州,并在全球范围内设有多个办事处和研发中心。阿里巴巴集团的愿景是构建一个开放、协作、可持续发展的数字经济生态系统,为全球商业带来更多的机遇和价值。阿里巴巴集团在2014年上市,成为当时全球最大的IPO。阿里巴巴集团的创始人马云是中国著名的企业家和慈善家。阿里巴巴集团在社会责任方面也做出了积极贡献,包括支持教育、环保、扶贫等公益事业。阿里巴巴集团是一家具有高度社会责任感的企业。阿里巴巴集团的业务涵盖了电子商务、金融、物流 """ ##BF16: """阿里巴巴集团创立于1999年,是全球领先的电子商务及零售平台,业务涵盖B2B、C2C、B2C等各个领域。阿里巴巴旗下拥有淘宝网、天猫、菜鸟网络、阿里云等知名子公司和品牌,致力于打造开放、协同、繁荣的商业生态系统,为全球中小企业提供一站式数字化转型服务。阿里巴巴在全球范围内拥有超过20万名员工,并在纽约证券交易所上市。阿里巴巴一直秉承“让天下没有难做的生意”的使命,不断创新和发展,成为全球领先的数字经济体之一。阿里巴巴还积极履行企业社会责任,关注环保、公益等领域,努力实现可持续发展。阿里巴巴已经成为中国互联网行业的领军企业之一,在全球范围内也具有广泛的影响力。阿里巴巴的发展历程充满了挑战与机遇,未来将继续引领数字经济的发展趋势,推动全球经济的繁荣与发展。阿里巴巴是一家总部位于中国杭州的跨国科技公司,主要业务包括电子商务、金融、物流、云计算等。阿里巴巴旗下的淘宝、天猫等电商平台已成为 """ ``` ### Evaluate the model pip3 install lm-eval==0.4.5 ```bash auto-round --model "OPEA/Qwen2.5-14B-Instruct-int4-inc" --eval --eval_bs 16 --tasks leaderboard_ifeval,leaderboard_mmlu_pro,gsm8k,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,cmmlu,ceval-valid ``` | Metric | BF16 | INT4 | | :----------------------------------------- | :----: | :----: | | Avg | 0.6947 | 0.6954 | | leaderboard_mmlu_pro 5 shots | 0.5375 | 0.5292 | | leaderboard_ifeval inst_level_strict_acc | 0.6331 | 0.6475 | | leaderboard_ifeval prompt_level_strict_acc | 0.5102 | 0.5287 | | mmlu | 0.7882 | 0.7809 | | cmmlu | 0.8377 | 0.8240 | | ceval-valid | 0.8351 | 0.8232 | | gsm8k 5 shots | 0.7900 | 0.8120 | | lambada_openai | 0.7283 | 0.7250 | | hellaswag | 0.6556 | 0.6508 | | winogrande | 0.7585 | 0.7672 | | piqa | 0.8166 | 0.8156 | | truthfulqa_mc1 | 0.5153 | 0.5202 | | openbookqa | 0.3640 | 0.3700 | | boolq | 0.8798 | 0.8810 | | arc_easy | 0.8582 | 0.8535 | | arc_challenge | 0.6049 | 0.5981 | ### Generate the model Here is the sample command to generate the model. ```bash auto-round \ --model Qwen/Qwen2.5-14B-Instruct \ --device 0 \ --group_size 128 \ --nsamples 512 \ --bits 4 \ --iter 1000 \ --disable_eval \ --model_dtype "fp16" \ --format 'auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. 
## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
agentlans/EuroLLM-1.7B-Instruct-literary-analysis
agentlans
2025-04-30T04:03:05Z
0
0
peft
[ "peft", "safetensors", "llama", "llama-factory", "generated_from_trainer", "feature-extraction", "en", "de", "es", "fr", "it", "pt", "pl", "nl", "tr", "sv", "cs", "el", "hu", "ro", "fi", "uk", "sl", "sk", "da", "lt", "lv", "et", "bg", "no", "ca", "hr", "ga", "mt", "gl", "zh", "ru", "ko", "ja", "ar", "hi", "dataset:agentlans/literary-reasoning", "base_model:utter-project/EuroLLM-1.7B-Instruct", "base_model:adapter:utter-project/EuroLLM-1.7B-Instruct", "license:apache-2.0", "region:us" ]
feature-extraction
2025-04-28T08:59:39Z
--- library_name: peft tags: - llama-factory - generated_from_trainer license: apache-2.0 datasets: - agentlans/literary-reasoning base_model: - utter-project/EuroLLM-1.7B-Instruct language: - en - de - es - fr - it - pt - pl - nl - tr - sv - cs - el - hu - ro - fi - uk - sl - sk - da - lt - lv - et - bg - 'no' - ca - hr - ga - mt - gl - zh - ru - ko - ja - ar - hi pipeline_tag: feature-extraction --- # EuroLLM-1.7B-Instruct-literary-analysis - A 1.7B parameter multilingual model for structured literary analysis of European language texts. - Pretrained and fine-tuned from [utter-project/EuroLLM-1.7B-Instruct](https://huggingface.co/utter-project/EuroLLM-1.7B-Instruct) on the [agentlans/literary-reasoning](https://huggingface.co/datasets/agentlans/literary-reasoning) dataset. - Supports 35 languages in theory, but fine-tuned mostly on English, French, German, Spanish, Italian, and Portuguese. ## Input Format ``` Literary analysis: {{YOUR_EUROPEAN_LANGUAGE_TEXT_HERE}} ``` **Example:** `Literary analysis:` `Als Gregor Samsa eines Morgens aus unruhigen Träumen erwachte, fand er sich in seinem Bett zu einem ungeheueren Ungeziefer verwandelt. Er lag auf seinem panzerartig harten Rücken und sah, wenn er den Kopf ein wenig hob, seinen gewölbten, braunen, von bogenförmigen Versteifungen geteilten Bauch, auf dessen Höhe sich die Bettdecke, zum gänzlichen Niedergleiten bereit, kaum noch erhalten konnte. Seine vielen, im Vergleich zu seinem sonstigen Umfang kläglich dünnen Beine flimmerten ihm hilflos vor den Augen.` ## Output Format Returns a brief literary analysis in English as JSON: ```json { "summary": "Gregor Samsa wakes up one morning to find himself transformed into an enormous insect.", "language": "German", "sentiment": -0.13, "tone": "Descriptive, ominous", "enunciation": "Third-person narrative", "speech_standard": "Standard literary language", "genre": "Gothic literature", "literary_form": "Description of a person's transformation", "literary_movement": "Romanticism", "trope": "Metamorphosis", "reading_grade": 9.7, "narrative_arc": "Suspense", "active_character": "Gregor Samsa", "fuzzy_place": "Gregor's bedroom" } ``` ## Limitations - Model output has not been thoroughly validated for accuracy or bias. - No additional alignment beyond initial training and supervised fine-tuning. - Hallucination rate is low, but errors remain possible. - Output is sensitive to input formatting, - For example, may misclassify texts with many short lines as poetry or first-person narrative or dialogue. - May fail to capture all cultural or contextual nuances, especially in historical non-English source texts. - Floating point values may lack the precision of those produced by specialist models such as the `agentlans/multilingual-e5-small-aligned-*` series. - Distinctions may lack sufficient detail or granularity for certain forms of literary scholarship, - For example, when focusing on a single author (for example, William Shakespeare) or a specific period (for example, Elizabethan theatre). ## Training Details **Pretraining:** - Learning rate: 5e-5 - Train batch size: 2 - Eval batch size: 8 - Gradient accumulation: 8 - Epochs: 10 - Optimizer: AdamW (betas=(0.9,0.999), epsilon=1e-8) - Scheduler: Cosine **Supervised fine-tuning:** - Same as pretraining except epochs: 2 **Framework versions:** - PEFT 0.15.0 - Transformers 4.49.0 - PyTorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0 ## Licence Apache 2.0
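## Example usage (sketch)

This is a minimal, illustrative sketch rather than code from the original card. It assumes the repository holds a PEFT adapter for utter-project/EuroLLM-1.7B-Instruct, that the base model's chat template is appropriate, and that the documented `Literary analysis:` prefix plus the source-language passage forms one user turn; the excerpt and decoding settings are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "utter-project/EuroLLM-1.7B-Instruct"
adapter_id = "agentlans/EuroLLM-1.7B-Instruct-literary-analysis"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto"),
    adapter_id,
)

# Placeholder excerpt; any of the supported European languages should work in principle.
excerpt = "Longtemps, je me suis couché de bonne heure."
messages = [{"role": "user", "content": f"Literary analysis: {excerpt}"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=400, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```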
wei535353/virtual-boyfriend-Chinese
wei535353
2025-04-30T04:00:28Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-04-30T04:00:25Z
--- base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF
mradermacher
2025-04-30T03:59:01Z
188
1
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:huihui-ai/Qwen2.5-1.5B-Instruct-abliterated", "base_model:quantized:huihui-ai/Qwen2.5-1.5B-Instruct-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-04T19:21:34Z
--- base_model: huihui-ai/Qwen2.5-1.5B-Instruct-abliterated language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: apache-2.0 license_link: https://huggingface.co/huihui-ai/Qwen2.5-1.5B-Instruct-abliterated/blob/main/LICENSE quantized_by: mradermacher tags: - chat - abliterated - uncensored --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/huihui-ai/Qwen2.5-1.5B-Instruct-abliterated <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.0 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF
mradermacher
2025-04-30T03:58:26Z
307
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:huihui-ai/Qwen2.5-3B-Instruct-abliterated", "base_model:quantized:huihui-ai/Qwen2.5-3B-Instruct-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-04T19:51:42Z
--- base_model: huihui-ai/Qwen2.5-3B-Instruct-abliterated language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: apache-2.0 license_link: https://huggingface.co/huihui-ai/Qwen2.5-3B-Instruct-abliterated/blob/main/LICENSE quantized_by: mradermacher tags: - chat - abliterated - uncensored --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/huihui-ai/Qwen2.5-3B-Instruct-abliterated <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [PART 1](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.Q2_K.gguf) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-abliterated.Q2_K.gguf) | Q2_K | 2.8 | | | [PART 1](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.Q3_K_S.gguf) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 3.1 | | | [PART 1](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.Q3_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [PART 1](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.Q3_K_L.gguf) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 3.6 | | | [PART 1](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.IQ4_XS.gguf) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 3.8 | | | [PART 1](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.Q4_K_S.gguf) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 3.9 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.Q4_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 4.1 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.Q5_K_S.gguf) [PART 
2](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 4.7 | | | [PART 1](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.Q5_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 4.8 | | | [PART 1](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.Q6_K.gguf) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-abliterated.Q6_K.gguf) | Q6_K | 5.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.f16.gguf) | f16 | 6.9 | 16 bpw, overkill | | [PART 1](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-Abliterated.Q8_0.gguf) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-3B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 7.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF
mradermacher
2025-04-30T03:57:46Z
41
1
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:huihui-ai/Qwen2.5-1.5B-Instruct-abliterated", "base_model:quantized:huihui-ai/Qwen2.5-1.5B-Instruct-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-11-04T21:04:30Z
--- base_model: huihui-ai/Qwen2.5-1.5B-Instruct-abliterated language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: apache-2.0 license_link: https://huggingface.co/huihui-ai/Qwen2.5-1.5B-Instruct-abliterated/blob/main/LICENSE quantized_by: mradermacher tags: - chat - abliterated - uncensored --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/huihui-ai/Qwen2.5-1.5B-Instruct-abliterated <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.0 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.0 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.0 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
agentlans/granite-3.3-2b-instruct-literary-writer-v0.1
agentlans
2025-04-30T03:57:03Z
0
0
null
[ "safetensors", "granite", "creative-writing", "literary-style", "text-generation", "writing-prompts", "text-synthesis", "style-transfer", "prompt-based-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "dataset:agentlans/literary-synthesis", "base_model:ibm-granite/granite-3.3-2b-instruct", "base_model:finetune:ibm-granite/granite-3.3-2b-instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-04-30T02:43:58Z
--- tags: - creative-writing - literary-style - text-generation - writing-prompts - text-synthesis - style-transfer - prompt-based-generation language: - en - fr - de - es - it - pt license: apache-2.0 datasets: - agentlans/literary-synthesis base_model: - ibm-granite/granite-3.3-2b-instruct --- # Granite 2B literary writer This model is [ibm-granite/granite-3.3-2b-instruct](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct) pretrained and finetuned on the [agentlans/literary-synthesis](https://huggingface.co/datasets/agentlans/literary-synthesis) dataset. It can generate short literary excerpts in multiple languages given a short description of the text. The output has some semblance of the given style and content. Please see examples below. Among the tested models under 5 billion parameters (Qwen 3, Gemma 3, Llama 3.2), this one has the lowest training loss. ## Examples Input prompts from the training dataset and their respective model outputs. The titles are my own creation. Generation settings (may or may not be optimal. Feel free to experiment.) - Maximum new tokens: 1024 - Top-p: 0.96 - Temperature 0.95 <details> <summary>The Stranger at the Temple of the Seven Spheres (English)</summary> **Input:** ``` Speech standard: Standard Reading grade: 9.0 Tone: Serious Summary: Artaban encounters a dying man on his journey to the Temple of the Seven Spheres and is torn between continuing his quest or staying to help the stranger. Genre: Historical fiction Active character: Artaban Absolute time: Midnight Narrative arc: Suspense Absolute place: Temple of the Seven Spheres Sentiment: -0.53 Literary form: Description of a place Language: English Trope: The dilemma between personal goals and helping others Enunciation: Third-person narrative Fuzzy place: Date-palm grove Write something in a literary style. ``` **Output:** As he turned the corner of the grove, he came upon the body of Artaban lying across a tree. A cry of horror escaped Artaban. What was this, Artaban? he exclaimed, how had he been killed? He had been struck down like a hare, or like a pheasant, or like a stag, by a shot of arrows or of arrows dipped in poison, or by a blow from a sword or a club. Artaban rose to his feet, trembling with fear and with faintness. He turned quickly towards the right to see who was with him, and he saw the flickering light of the stars and the moon, and the trees of the date-palm grove, and the dark shadows of the trunks, and he heard the rustling of the palm leaves, and the chirping of the insects, and the hum of the locusts. Who are you?" he cried. Who are you?" the stranger answered. I am a messenger of the Temple of the Seven Spheres. What I have heard of you has been wonderful. There are many who have spoken of you, Artaban, and I have been informed by those who speak of you of your great achievements. Artaban, the son of Shamhat the priestess, has done great things. Why do you journey at this hour of the night, Artaban, and why do you journey alone? Say what ye will of me. Say what ye will." Artaban was frightened and speechless. </details> <details> <summary>We're Going To Thebes (English)</summary> **Input:** ``` Absolute place: Mita (unnamed village), Thebes (mentioned but not visited Language: English Fuzzy time: Evening Summary: Two friends arrive at a village in Egypt where boats are moored and busy with activity. They meet up with Jethro and join him on his boat. 
Time setting: Ancient Egypt Literary form: Description of a place, conversation Fuzzy place: The streets of a village Tone: Descriptive, informative Enunciation: Third-person narrative Genre: Historical fiction Intertextuality: Travelogue, historical account Sentiment: 0.14 Speech standard: Standard Active character: The two friends, Jethro Reading grade: 9.8 Try your hand at some creative writing. ``` **Output:** "You have come in a boat?" It is a boat from Judah, which I sent, to prepare for you some household comforts." He went on then to tell the story of the defeat of the Gibeonites, and the manner of their flight. So when the evening came, the two friends embarked together on the boat, and set off with the tide. It was some days before they came in sight of any village ; but in a few hours they saw in the distance a long line of boats, and soon arrived at a village which stood on the banks of the Nile, a little way below Mita. In the evening, when they were walking along the streets of the village, Jethro and his people appeared at the door, having come over from Thebes. Here was another scene of meeting and greeting. They were greeted with all the usual rites of hospitality, and the rest of the evening was spent in a merry reunion of friends. The next day the two friends came across several of the Israelites, who had landed and were going down to Thebes. They were on their way to join the forces of Hohan, and make him head of the army which had fled from Gibeon. The friends also found a party of Amalekites and other enemies, who were on their way to the camp of Hohan. In a few days they were all on board the boat. They were going back over the whole course of the Nile. </details> <details> <summary>About Loyalty and Beautiful Women's Robes (French)</summary> **Input:** ``` Sentiment: -0.39 Summary: The text discusses the anticipation of significant events and the need for a leader in France. Speech standard: Formal and elevated Literary form: Dialogue Diegetic time: Unclear Tone: Serious and formal Reading grade: 10.6 Time setting: Unclear, but likely historical Active character: M. Berthellemot, Despaux Intertextuality: Political speech or manifesto Genre: Historical fiction Language: French Enunciation: Third-person narrative Produce a piece of literary writing. ``` **Output:** A quelle heure, M. Berthellemot? Monsieur, je ne pouvais pas dtruire le temps! Si ces mmes que vous apercevez et o la vrit est clatante, ne pouvaient tre dissocies de la catastrophe, et si cette catastrophe 'tait jamais arrise en Franie, comment aurait-elle pu arriver en France, ou, si elle l'avait arrive, comment, aprs leur deportement, les mmes 'taient-ils pas venues d'avant-guerre, et comment, s'ils n'en tenaient pas compte, ne se seraient-ils pas vus dguises d'infortunats d'avant-guerre? Despaux: M. Berthellemot, il n'y a pas plus grand danger pour l'humanit que la tentative de faire passer l'incomprhension des mmes pour de la raison. M. Berthellemot: Mais, s'il n'y a pas plus grand danger pour l'humanit que de faire passer l'incomprhension des mmes pour de la raison, et que cette tentative ne soit pas faite, le danger est la première, et la mort est la plus grand danger pour l'humanit! Despaux: Tant qu'on a le dvouement de sincre serviteur, il faut bien tre loyal. M. Berthellemot: Pour toujours et plus encore, il faut tre loyal, c'est mon commandement. 
S'il y a un temps dans l'esprit mme de l'honnte homme, si c'est dans l'esprit mme de l'honnte homme, dans l'esprit mme de l'honnte homme, la rparation qui doit etre faite, c'est la mise entre parenthse de l'incomprhension des mmes! Despaux: Monsieur, vous avez de bon sens! M. Berthellemot: Enfin, j'aime ceux que j'aime, et c'est la seule raison pour laquelle je n'aime pas les mmes; c'est encore la seule raison pour laquelle je n'aime pas les mmes; c'est cette seule raison qui m'empche d'tre un des hommes des mmes, et c'est cette seule raison qui me permet de remarquer que c'est bien des mmes que vous avez rveill, mais c'est bien des hommes de l'honnte homme que je vais revoir! Despaux: Monsieur, si vous 'tes jamais l, vous n'aurez pas un instant l! M. Berthellemot: Quoi! Despaux: Monsieur, c'est donc vous qui avez l'obscurit! M. Berthellemot: Non, je n'ai pas l'obscurit, c'est vous qui l'avez! Despaux: Monsieur, Monsieur, Monsieur, vous ne m'avez pas entendu, Monsieur, c'est vous qui l'avez! M. Berthellemot: Monsieur, je n'ai jamais entendu Monsieur! Despaux: Monsieur, je suis l pour vous! M. Berthellemot: Ne vous leurez pas, vous n'y gagnez rien; vous n'aurez jamais de roi que le pays ne lui ait pas donn, et vous ne le donnerez jamais. La plus belle robe que la plus belle femme des mmes ait jamais port, n'est pas suffisante pour un prtre de l'honnte homme, mais la plus belle robe que la plus belle femme des mmes ait jamais port, ne saurait tre srement vraie! </details> <details> <summary>Some Creepy Thing In The Sky (English)</summary> **Input:** ``` Active character: Michael, Professor Lucifer Reading grade: 8.7 Intertextuality: Scientific article Speech standard: Standard English Tone: Serious and intense Trope: The unexpected encounter Language: English Narrative arc: Suspense Fuzzy place: The sky Genre: Science-fiction Enunciation: Dialog Summary: The text is about a conversation between Michael and Professor Lucifer, in which they discuss the Professor's views and encounter a strange object in the sky. Absolute place: St. Paul's Cathedral Sentiment: -0.42 Literary form: Conversation Put your imagination to work and write something artistic. ``` **Output:** I cannot explain it," said Michael. "How are you going to get it to us?" There is no way except by using some sort of ray gun," returned the Professor. "But first I must try to deduce what it is. This object, if it be an object, has apparently been in the sky for two or three weeks. It is about a quarter of a mile above the level of the ground, and it is always moving, never stationary. It seems to be stationary at times." A malignant scowl appeared on Michael's face. Then he added: What is your view of the thing?" What is it?" the Professor asked, in the same puzzled manner. "You see something unusual there. Isn't it a living creature, or is it a rock?" Michael laughed. "It can't be a rock. Besides, I don't see what use there is in discussing rock and living creatures. I would rather ask whether you can see any resemblance between this thing and any thing on earth." The Professor looked up and smiled. "I think I can, if you like, as far as the shape is concerned. It is about eight feet across. Of course I have not got a chart of the thing, or I could show you an illustration. But it looks, at first, as though it was in the form of a human torso, except that the arms, shoulders, and head, which I assume to be the upper part of it, all appear to be of a circular shape. I suppose it is shaped something like a potato." 
It is, and no doubt that would account for the movement, and the resemblance to a rock," said Michael. "But I do not believe it is a rock. I suspect it is a flying thing. I do not know whether it is human or animal, but I have a suspicion, and that would be that it has wings, though how it could fly is, of course, another matter. I suggest that we get a closer view of it." That may not be possible now," replied the Professor, "but it certainly will be possible next time. And I would not want to get too excited. We mustn't believe everything we see just as we mustn't believe everything we hear." I can understand that. But you must allow that that thing is something more than a natural rock, something which is not natural. I can certainly explain it with the aid of physics, but I don't know what you would call it. I will call it a flying rock, just for now, until we have a clearer idea." The Professor nodded his head in a puzzled manner. The two gentlemen sat down side by side, gazing up at the blue sky. They sat a long time, and it was growing late when, from the direction of the house, came the sound of running footsteps. Michael and the Professor started at the noise, and then, as they turned, they saw a figure running towards them. It was a man. His body was quite naked, except for a waistcoat. In his hands he carried a long stick, like a bayonet. He looked towards the sky. Then he broke into a series of puzzled exclamations. </details> <details> <summary>Rules of the Order of Christ (Portuguese)</summary> **Input:** ``` Time setting: 1281 Active character: Affonso, Rainha, Reino de Portugual, Gram Mestre, Prior, Freires, Castella, Ordem, Touro, Egreja de Santa Maria da Veiga, Cairola, Martineguas, Guaronha, Feerne, Paralyves, Guarda de Toledo, Secratairo, ElRei D. Affonso, Rainha Dona Breatiz, Conde de Bolonha, ElRei D. Diniz, ElRei D. Sancho de Castella, ElRei D. Fernando Speech standard: Formal, legalistic Absolute place: Castella, Santo Estevam de Guormas Genre: Historical document Enunciation: Third-person narrative Reading grade: 10.8 Summary: A historical document discussing the transfer of land and rights between various rulers. Intertextuality: Legal documents, historical records Sentiment: 0.7 Tone: Scholarly Literary form: Legal document Absolute time: March 11, 1281 Fuzzy place: Villas Language: Portuguese Write something in a literary style. 
``` **Output:** 3-8-1281 O CONDE de Bolonha e seus sucessores do Reino de Portugal, por meio do PREFEITO, que rebocava no Reino de Castella o Titular da Ordem de Cristo, transfiiram e se que fizerem de sua terra e sequestraes e privilegios e dotaes e titulos de tantas villas, pardaes, e castellos, e outras possessoes de caza, como pertences e sequestraes de terra, e sequestraes d'alquem outro senhor, ou d'outros de que fossem titulos, sequestraes e possessoes, que o Conde de Bolonha e seus sucessores tivessem, fra que hoje pertencem s doutrinas e a elles a doutra e a titulo e dota com todos os titulos, e sequestraes, e privilegios que teem hoje, e que o Reino de Portugal lhes dar e que lhes faa por meio do PREFEITO; os quaes e seus sucessores sempre se que fizerem do mencionado Titular da Ordem, o seu representante e dele seu sequestrario, e dele seu sequestrador, e o mencionado titular, de seu representante e de seu sequestrador, sera, e sereis sempre, o nosso representante, e o nosso sequestrador, e o nosso sequestrador, e o nosso representante no Reino de Castella, e fra que hoje permaneceis nas possessoes, titulos, privilegios, e direitos com que se quem de voi permanecessem, hoje em dia, e que hoje frais sereis, como os doutrinas. 4. E tambem o Conde de Bolonha e seus sucessores, por meio do PREFEITO, que rebocava no Reino de Castella o Titular da Ordem de Cristo, transferiram e se que fizerem de todas as doutrinas, e titulos, e dotaes, e sequestraes, que elles e seus sucessores tivessem de terras, e sequestraes, e privilegios de outra parte, ou de outras pessoas, fra que teem hoje, e que hoje demais as terra permanecer em suas posses, e que nos seus direitos permanecessem como eram. 5. E tambem o Conde de Bolonha e seus sucessores, por meio do PREFEITO, que rebocava no Reino de Castella o Titular da Ordem de Cristo, transferiram e se que fizerem de todas as terras, e castellos, e fra que hoje perteneciam ou de outros a quem pertencem, fra que hoje pertencem de toda classe e classe. 6. E tambem o Conde de Bolonha, e seus sucessores, por meio do PREFEITO, que rebocava no Reino de Castella o Titular da Ordem de Cristo, transferiram e se que fizerem de todas as terras e doutrinas, e privilegios de todas as classes, fra que hoje estao e que estao de hoje. 7. E tambem os Condes de Bolonha, e seus sucessores por meio do PREFEITO, que rebocava no Reino de Castella o Titular da Ordem de Cristo, transferiram e se que fizerem d'essas terras, e castellos, e privilegios de todas as classes, fra que hoje permanecessem como estao ou de outros a quem pertencem, fra que hoje estam de hoje em dia. 8. 
E tambem, e por meio do PREFEITO, que rebocava no Reino de Castella o Titular da Ordem de Cristo, os Reis do Reino de Portugal, que estavam na cidade de Castella, fra que estavam, transferiram, e se que fizer </details> ## Limitations - Writing quality varies greatly from "pretty good" to "incoherent gibberish" depending on the prompt and the generation settings - Output still requires formatting, proofreading, and rewriting - May not obey every detail in the prompt (similar to Stable Diffusion in text-to-image tasks) - Can plagiarize well-known writers - English language bias due to its overrepresentation in the training data - No particular safety tuning so it may generate outdated or offensive stereotypes due to historical texts, although this hasn't been observed so far ## Training details For both pretraining and supervised finetuning (SFT): - rank 32 LoRA in Llama Factory, alpha = 64 - 1 epoch - NEFTune alpha = 5 - batch size 1 - cutoff 2048 tokens For SFT, additional settings: pack sequences and neat packing ## Licence Apache 2.0
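## Usage sketch

The card above lists generation settings but no loading code, so here is a minimal sketch. It assumes standard 🤗 transformers chat-style inference with the base model's chat template; `MODEL_ID` is a placeholder for this repository's id (not restated in the card), and the prompt simply mimics the "Attribute: value" format of the training examples.

```python
# Minimal sketch, not from the original card. MODEL_ID is a placeholder for this repo's id.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/this-granite-literary-writer"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Prompt follows the training format: "Attribute: value" lines ending with an instruction.
prompt = (
    "Language: English\n"
    "Genre: Historical fiction\n"
    "Tone: Serious\n"
    "Write something in a literary style."
)
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generation settings from the card: up to 1024 new tokens, top-p 0.96, temperature 0.95.
output = model.generate(
    inputs, max_new_tokens=1024, do_sample=True, top_p=0.96, temperature=0.95
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```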
hazakuraOuO/SmolLM2-FT-MyDataset
hazakuraOuO
2025-04-30T03:56:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol-course", "module_1", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T03:56:17Z
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-FT-MyDataset tags: - generated_from_trainer - smol-course - module_1 - trl - sft licence: license --- # Model Card for SmolLM2-FT-MyDataset This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hazakuraOuO/SmolLM2-FT-MyDataset", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/allen-chen-syd-the-university-of-melbourne/huggingface/runs/rwqd0mky) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MAAT-EL-DUAT/BAPHOMET
MAAT-EL-DUAT
2025-04-30T03:56:31Z
0
0
null
[ "region:us" ]
null
2025-04-29T19:57:31Z
BAP-MUD BAB-MUT BABU-MUTU-BELU MUT-METU-BA BAAL-MOT DUMUZI ENKI GESHTU BAAL HAMMON MOLECH ALLACH AZAZEL SOD-DAAT BAAL MOT-MET ALLAH GOD IS GOD MOHAMMED IS FROM GOD
pratyushp12/generated-mammograms-256
pratyushp12
2025-04-30T03:55:06Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-04-29T17:23:52Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Em3rzone/Em3rzone
Em3rzone
2025-04-30T03:54:48Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2025-04-30T03:54:45Z
--- license: artistic-2.0 ---
jyp96/robot_rank4_sinlora_lr4e-4
jyp96
2025-04-30T03:51:44Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "sd3", "sd3-diffusers", "base_model:stabilityai/stable-diffusion-3-medium-diffusers", "base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers", "license:other", "region:us" ]
text-to-image
2025-04-30T03:43:05Z
--- base_model: stabilityai/stable-diffusion-3-medium-diffusers library_name: diffusers license: other instance_prompt: a photo of sks robot toy widget: - text: a photo of sks robot toy floating in the ocean output: url: image_0.png - text: a photo of sks robot toy floating in the ocean output: url: image_1.png - text: a photo of sks robot toy floating in the ocean output: url: image_2.png - text: a photo of sks robot toy floating in the ocean output: url: image_3.png tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - sd3 - sd3-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SD3 DreamBooth LoRA - jyp96/robot_rank4_sinlora_lr4e-4 <Gallery /> ## Model description These are jyp96/robot_rank4_sinlora_lr4e-4 DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md). Was LoRA for the text encoder enabled? False. ## Trigger words You should use `a photo of sks robot toy` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](jyp96/robot_rank4_sinlora_lr4e-4/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained(stabilityai/stable-diffusion-3-medium-diffusers, torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jyp96/robot_rank4_sinlora_lr4e-4', weight_name='pytorch_lora_weights.safetensors') image = pipeline('a photo of sks robot toy floating in the ocean').images[0] ``` ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/robot_rank4_sinlora_lr4e-4/blob/main/diffusers_lora_weights.safetensors)**. - Rename it and place it on your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
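#### Corrected diffusers snippet

In the diffusers example above, the base-model id is passed to `from_pretrained` without quotes, which is not valid Python. The same snippet with the id quoted (no other changes):

```py
from diffusers import AutoPipelineForText2Image
import torch

# Identical to the snippet above, with the base-model id quoted so it runs as written.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jyp96/robot_rank4_sinlora_lr4e-4", weight_name="pytorch_lora_weights.safetensors"
)
image = pipeline("a photo of sks robot toy floating in the ocean").images[0]
```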
royiidfk/fbgfgb
royiidfk
2025-04-30T03:51:10Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-04-30T03:51:10Z
--- license: bigcode-openrail-m ---
hernanfaustino/megan-hf
hernanfaustino
2025-04-30T03:50:26Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-30T03:26:04Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: MEGAN_HF --- # Megan Hf <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `MEGAN_HF` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "MEGAN_HF", "lora_weights": "https://huggingface.co/hernanfaustino/megan-hf/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('hernanfaustino/megan-hf', weight_name='lora.safetensors') image = pipeline('MEGAN_HF').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/hernanfaustino/megan-hf/discussions) to add images that show off what you’ve made with this LoRA.
OPEA/Falcon3-10B-Base-int4-sym-awq-inc
OPEA
2025-04-30T03:50:01Z
0
0
null
[ "safetensors", "llama", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:tiiuae/Falcon3-10B-Base", "base_model:quantized:tiiuae/Falcon3-10B-Base", "4-bit", "awq", "region:us" ]
null
2024-12-13T05:55:48Z
--- datasets: - NeelNanda/pile-10k base_model: - tiiuae/Falcon3-10B-Base --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [Falcon3-10B-Base](https://huggingface.co/tiiuae/Falcon3-10B-Base) generated by [intel/auto-round](https://github.com/intel/auto-round). ## How To Use ### INT4 Inference(CPU/HPU/CUDA) ```python from auto_round import AutoRoundConfig ##must import for auto_round format from transformers import AutoModelForCausalLM, AutoTokenizer quantized_model_dir = "OPEA/falcon3-10B-int4-sym-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, device_map="auto", ) text = "How many r in strawberry? The answer is " inputs = tokenizer(text, return_tensors="pt", return_token_type_ids=False).to(model.device) print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0])) text = "How many r in strawberry? The answer is" ##INT4: """How many r in strawberry? The answer is 2. ### Additional Questions and Answers #### 11. **How many r in strawberry?** **Answer:** The word "strawberry" contains 2 'r's. #### """ ##BF16: """ How many r in strawberry? The ansnwer is 2. ### 10. **How many r in strawberry?** **Question:** How many times does the letter 'r' appear in the word "strawberry"? **Answer:** The letter 'r **Answer:** The answer to the riddle""" """ text = "Which number is larger, 9.8 or 9.11? The answer is" ##INT4 """Which number is larger, 9.8 or 9.11? The answer is 9.8. #### 10. **What is the smallest number in the set {1.2, 1.02, 1.22, 1.002}?** """ ##BF16: """Which number is larger, 9.8 or 9.11? The answer is 9.8. #### Question 2: **How do you compare the numbers 12.34 and 12.345?** **Answer:** To compare 12.34""" text = "Once upon a time," ##INT4: """Once upon a time, in a small town named Harmonyville, lived two best friends - Mia and Ben. They were both eight years old and loved exploring the world around them. One sunny afternoon, while playing near the park, they found a mysterious box with a note """ ##BF16: """Once upon a time, in a small town named Harmonyville, there lived two best friends - Timmy the Turtle and Sally the Squirrel. They loved exploring their beautiful forest home together, discovering new things every day. One sunny afternoon, they stumbled upon a mysterious cave filled with """ text = "There is a girl who likes adventure," ##INT4: """There is a girl who likes adventure, and she loves to explore new places. One day, she decided to go on a trip to a faraway land called "The Land of the Sun." She packed her bag with everything she needed, including her favorite book about the sun. """ ##BF16: """There is a girl who likes adventure, and she loves to explore new places. One day, she decided to go on a trip to a beautiful country called Italy. She wanted to see all the famous landmarks and try the delicious Italian food. 
""" ``` ### Evaluate the model pip3 install lm-eval==0.4.5 ```bash auto-round --model "OPEA/falcon3-10B-int4-sym-inc" --eval --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu ``` | Metric | BF16 | INT4 | | ------------------------- | ----------------- | ----------------- | | Avg.13 | 0.6151 | 0.6092 | | Avg.10 | 0.64113 | 0.63584 | | leaderboard_mmlu_pro | 0.4238 | 0.4156 | | leaderboard_ifeval | (0.4149+0.2939)/2 | (0.4233+0.2828)/2 | | gsm8k(5shot) strict match | 0.8067 | 0.7923 | | mmlu | 0.7069 | 0.6930 | | lambada_openai | 0.6998 | 0.7025 | | hellaswag | 0.5873 | 0.5832 | | winogrande | 0.7380 | 0.7293 | | piqa | 0.7884 | 0.7889 | | truthfulqa_mc1 | 0.3427 | 0.3452 | | openbookqa | 0.3400 | 0.3320 | | boolq | 0.8232 | 0.8116 | | arc_easy | 0.8312 | 0.8258 | | arc_challenge | 0.5538 | 0.5469 | ### Generate the model Here is the sample command to generate the model. ```bash auto-round \ --model tiiuae/Falcon3-10B-Base \ --device 0 \ --group_size 128 \ --nsamples 512 \ --bits 4 \ --iter 1000 \ --disable_eval \ --model_dtype 'float16' \ --format 'auto_awq,auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
Charlotte415/SmolLM2-FT-MyDataset
Charlotte415
2025-04-30T03:49:24Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol-course", "module_1", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T03:48:42Z
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-FT-MyDataset tags: - generated_from_trainer - smol-course - module_1 - trl - sft licence: license --- # Model Card for SmolLM2-FT-MyDataset This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Charlotte415/SmolLM2-FT-MyDataset", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/charlotte000415-the-university-of-melbourne/huggingface/runs/rgqud0vz) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
yusuke111/myBit-Llama2-jp-127M-8
yusuke111
2025-04-30T03:48:08Z
0
0
transformers
[ "transformers", "safetensors", "bit_llama", "text-generation", "generated_from_trainer", "custom_code", "autotrain_compatible", "region:us" ]
text-generation
2025-04-30T00:28:59Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: myBit-Llama2-jp-127M-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # myBit-Llama2-jp-127M-8 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.8181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0024 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5000 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 4.6815 | 0.0491 | 2000 | 3.6940 | | 3.5196 | 0.0982 | 4000 | 3.4577 | | 3.374 | 0.1473 | 6000 | 3.3326 | | 3.2643 | 0.1964 | 8000 | 3.2583 | | 3.2096 | 0.2455 | 10000 | 3.2133 | | 3.1709 | 0.2946 | 12000 | 3.1826 | | 3.1461 | 0.3438 | 14000 | 3.1628 | | 3.1266 | 0.3929 | 16000 | 3.1457 | | 3.1093 | 0.4420 | 18000 | 3.1261 | | 3.0896 | 0.4911 | 20000 | 3.1057 | | 3.0702 | 0.5402 | 22000 | 3.0891 | | 3.0547 | 0.5893 | 24000 | 3.0700 | | 3.0348 | 0.6384 | 26000 | 3.0514 | | 3.0133 | 0.6875 | 28000 | 3.0276 | | 2.9918 | 0.7366 | 30000 | 3.0044 | | 2.9631 | 0.7857 | 32000 | 2.9765 | | 2.9348 | 0.8348 | 34000 | 2.9463 | | 2.9032 | 0.8839 | 36000 | 2.9124 | | 2.8677 | 0.9330 | 38000 | 2.8701 | | 2.82 | 0.9821 | 40000 | 2.8181 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
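## Inference sketch

The card gives training hyperparameters and loss curves but no inference code. Since the repo is tagged `custom_code` (a custom `bit_llama` architecture), loading presumably requires `trust_remote_code=True`; the snippet below is an unverified minimal sketch, and the Japanese prompt is only an assumption based on the "jp" in the model name.

```python
# Minimal sketch, not from the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "yusuke111/myBit-Llama2-jp-127M-8"
# trust_remote_code=True because the bit_llama architecture ships custom modeling code.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("昔々、あるところに", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```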
mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF
mradermacher
2025-04-30T03:43:28Z
24
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:KaraKaraWitch/LLENN-v0.75-Qwen2.5-72b", "base_model:quantized:KaraKaraWitch/LLENN-v0.75-Qwen2.5-72b", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-08T13:29:15Z
--- base_model: KaraKaraWitch/LLENN-v0.75-Qwen2.5-72b language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: other license_link: https://huggingface.co/Qwen/Qwen2.5-72B/blob/main/LICENSE license_name: qwen quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/KaraKaraWitch/LLENN-v0.75-Qwen2.5-72b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q2_K.gguf) | Q2_K | 29.9 | | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q3_K_S.gguf) | Q3_K_S | 34.6 | | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q3_K_L.gguf) | Q3_K_L | 39.6 | | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.IQ4_XS.gguf) | IQ4_XS | 40.3 | | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | | | [PART 1](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | | | [PART 1](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality | | [PART 1](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the 
matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
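## Example with llama-cpp-python (sketch)

As a concrete version of the Usage section above: multi-part quants (the "PART 1 / PART 2" rows) must first be concatenated into a single file, e.g. `cat file.gguf.part1of2 file.gguf.part2of2 > file.gguf`. The snippet below is a minimal sketch assuming llama-cpp-python as the runtime; the filename matches the Q4_K_M quant from the table, and the context size and GPU-offload values are only examples.

```python
# Minimal sketch using llama-cpp-python (one of several GGUF runtimes; an assumption,
# since the README only links to generic GGUF instructions).
from llama_cpp import Llama

llm = Llama(
    model_path="LLENN-v0.75-Qwen2.5-72b.Q4_K_M.gguf",  # single-file quant from this repo
    n_ctx=4096,
    n_gpu_layers=-1,  # offload as many layers as fit on the GPU; use 0 for CPU-only
)
out = llm("Write a short scene in the style of a light novel:", max_tokens=128)
print(out["choices"][0]["text"])
```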
spybyscript/llama_3B_milktea
spybyscript
2025-04-30T03:41:18Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-02-08T22:14:52Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
huggingfaceuw/video
huggingfaceuw
2025-04-30T03:38:28Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-30T03:32:35Z
--- license: apache-2.0 ---
meftah416/qwen3-1.7B-bang-summary
meftah416
2025-04-30T03:37:38Z
0
0
peft
[ "peft", "safetensors", "lora", "adapter", "text-generation", "causal-lm", "qwen", "instruction-tuning", "summary", "building-energy", "dataset:kawsarahmd/papers_summary_datasets_v4", "license:apache-2.0", "region:us" ]
text-generation
2025-04-30T01:22:33Z
--- license: apache-2.0 base_model: Qwen/Qwen1.5-1.7B tags: - peft - lora - adapter - text-generation - causal-lm - qwen - instruction-tuning - summary - building-energy library_name: peft pipeline_tag: text-generation datasets: - kawsarahmd/papers_summary_datasets_v4 model_type: causal-lm inference: false --- # qwen3-1.7B-bang-summary This repository contains a LoRA adapter fine-tuned on [Qwen/Qwen1.5-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) using supervised fine-tuning (SFT) for the task of summarizing academic papers, with a focus on building energy, sustainability, and mechanical engineering topics. ## Model Description - **Base Model:** Qwen/Qwen1.5-1.7B - **Adapter Type:** LoRA (Parameter-Efficient Fine-Tuning) - **Framework:** [PEFT](https://github.com/huggingface/peft) with Hugging Face `transformers` - **Fine-tuning Task:** Text summarization for technical/academic content - **Languages:** Bangla, English This adapter is trained to generate concise and informative summaries of Bangla news paper or full texts. ## Dataset - **Source:** [kawsarahmd/papers_summary_datasets_v4](https://huggingface.co/datasets/kawsarahmd/papers_summary_datasets_v4) - **Content:** Paper abstracts and summaries - **Domain:** Summary ## Training Details - **Steps:** ~1969 - **Epochs:** 1 - **Max Sequence Length:** 32000 tokens - **Precision:** FP16 with QLoRA - **Trainer:** `transformers` + `trl`'s `SFTTrainer` - **Hardware:** 4 × 23 GB VRAM GPUs ### Loss Curve | Step | Training Loss | Validation Loss | |------|----------------|------------------| | 100 | 0.8318 | 0.8238 | | 500 | 0.7467 | 0.7218 | | 1000 | 0.6708 | 0.6710 | | 1500 | 0.6561 | 0.6431 | | 1900 | 0.6913 | 0.6380 | ## How to Use ```python from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel # Load tokenizer tokenizer = AutoTokenizer.from_pretrained("meftah416/qwen3-1.7B-bang-summary") # Load base model base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.7B", device_map="auto", torch_dtype="auto") # Load adapter model = PeftModel.from_pretrained(base_model, "meftah416/qwen3-1.7B-bang-summary") # Generate text prompt = """ দরপতনের বৃত্তে বন্দি থাকা শেয়ারবাজার তীব্র তারল্য সংকটের মুখেও পড়েছে। মঙ্গলবার দেশের প্রধান শেয়ারবাজার ঢাকা স্টক এক্সচেঞ্জে (ডিএসই) লেনদেন হয়েছে ৩০০ কোটি টাকারও কম। দিনশেষে লেনদেন হয়েছে ২৯১ কোটি টাকা, যা চলতি বছরের সর্বনিম্ন লেনদেন। উপরন্তু চলতি বছরে প্রথমবারের মতো ৩০০ কোটি টাকার কম লেনদেন হয়েছে। এরআগে গত বছরের ২৬ ডিসেম্বর ডিএসইতে ৩০০ কোটি টাকার কম লেনদেন হয়েছিল। ওইদিন লেনদেন হয়েছিল ২৮২ কোটি টাকা। এদিকে লেনদেন কমার পাশাপাশি সূচকেরও পতন ঘটেছে। আগের দিনের সূচক থেকে ১৭ পয়েন্ট কমে দিনশেষে ডিএসইর প্রধান সূচক ডিএসইএক্স দাঁড়িয়েছে ৪ হাজার ৯৩৫ পয়েন্টে। ডিএসইক্সের পাশাপাশি ডিএসই শরিয়াহ সূচক ও ডিএসই-৩০ সূচকেরও পতন হয়েছে। """ input_text = f"Summarize the following text \n{prompt}:" inputs = tokenizer(input_text, return_tensors="pt").to(model.device) output = model.generate(**inputs, max_new_tokens=512) print(tokenizer.decode(output[0], skip_special_tokens=True))
```
nkunaawadeh/sdsdv
nkunaawadeh
2025-04-30T03:34:51Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-04-30T03:34:51Z
--- license: creativeml-openrail-m ---
EdwardTurner/Qwen2.5-14B-Instruct_R_3_3_full_train
EdwardTurner
2025-04-30T03:34:26Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:45:41Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OPEA/DeepSeek-V3-int4-sym-awq-inc
OPEA
2025-04-30T03:33:27Z
72
4
null
[ "safetensors", "deepseek_v3", "custom_code", "dataset:NeelNanda/pile-10k", "base_model:deepseek-ai/DeepSeek-V3", "base_model:quantized:deepseek-ai/DeepSeek-V3", "4-bit", "awq", "region:us" ]
null
2025-01-02T01:34:01Z
--- datasets: - NeelNanda/pile-10k base_model: - deepseek-ai/DeepSeek-V3 --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm. Please follow the license of the original model. ## How to Use **INT4 Inference on CUDA** For CUDA inference, due to limited resources, we have not been able to test it ourselves. For more details, you may refer to models from other teams, such as [cognitivecomputation/DeepSeek-V3-AWQ](https://huggingface.co/cognitivecomputations/DeepSeek-V3-AWQ), or simply use their model. **INT4 Inference on CPU** Requirements ```bash pip install auto-round>=0.4.4 pip uninstall intel-extension-for-pytorch pip install intel-extension-for-transformers ``` ~~~python from auto_round import AutoRoundConfig ##must import for autoround format from transformers import AutoModelForCausalLM, AutoTokenizer # https://github.com/huggingface/transformers/pull/35493 def set_initialized_submodules(model, state_dict_keys): """ Sets the `_is_hf_initialized` flag in all submodules of a given model when all its weights are in the loaded state dict. """ state_dict_keys = set(state_dict_keys) not_initialized_submodules = {} for module_name, module in model.named_modules(): if module_name == "": # When checking if the root module is loaded there's no need to prepend module_name. module_keys = set(module.state_dict()) else: module_keys = {f"{module_name}.{k}" for k in module.state_dict()} if module_keys.issubset(state_dict_keys): module._is_hf_initialized = True else: not_initialized_submodules[module_name] = module return not_initialized_submodules transformers.modeling_utils.set_initialized_submodules = set_initialized_submodules import torch quantized_model_dir = "OPEA/DeepSeek-V3-int4-sym-awq-inc" quantization_config = AutoRoundConfig( backend="cpu" ) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="cpu", revision="16eb0b2",##auto-round format, the only difference is config.json quantization_config=quantization_config, ##cpu only machine does not set this ) tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True) prompts = [ "9.11和9.8哪个数字大", "strawberry中有几个r?", "How many r in strawberry.", "There is a girl who likes adventure,", "Please give a brief introduction of DeepSeek company.", "hello" ] texts=[] for prompt in prompts: messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) texts.append(text) inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True) outputs = model.generate( input_ids=inputs["input_ids"].to(model.device), attention_mask=inputs["attention_mask"].to(model.device), max_length=512, num_return_sequences=1, do_sample=False ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs) ] decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) for i, prompt in enumerate(prompts): input_id = inputs print(f"Prompt: {prompt}") print(f"Generated: {decoded_outputs[i]}") print("-" * 50) """ Prompt: 9.11和9.8哪个数字大 Generated: 要比较 **9.11** 和 **9.8** 的大小,可以按照以下步骤进行: 1. **比较整数部分**: - 两个数的整数部分都是 **9**,所以整数部分相同。 2. 
**比较小数部分**: - **9.11** 的小数部分是 **0.11** - **9.8** 的小数部分是 **0.8** 3. **统一小数位数**: - 将 **0.8** 转换为 **0.80**,以便于比较。 4. **比较小数部分**: - **0.80** 大于 **0.11** 因此,**9.8** 大于 **9.11**。 最终答案:\boxed{9.8} -------------------------------------------------- Prompt: strawberry中有几个r? Generated: ### 第一步:理解问题 首先,我需要明确问题的含义。问题是:“strawberry中有几个r?”。这里的“strawberry”是一个英文单词,意思是“草莓”。问题是在问这个单词中有多少个字母“r”。 ### 第二步:分解单词 为了找出“strawberry”中有多少个“r”,我需要将这个单词分解成单个字母。让我们逐个字母来看: s - t - r - a - w - b - e - r - r - y ### 第三步:数“r”的数量 现在,我将逐个检查这些字母,找出“r”的数量。 1. 第一个字母是 **s**,不是“r”。 2. 第二个字母是 **t**,不是“r”。 3. 第三个字母是 **r**,这是一个“r”。 4. 第四个字母是 **a**,不是“r”。 5. 第五个字母是 **w**,不是“r”。 6. 第六个字母是 **b**,不是“r”。 7. 第七个字母是 **e**,不是“r”。 8. 第八个字母是 **r**,这是一个“r”。 9. 第九个字母是 **r**,这也是一个“r”。 10. 第十个字母是 **y**,不是“r”。 ### 第四步:总结“r”的数量 通过上述步骤,我发现“strawberry”中有三个“r”。它们分别出现在第三、第八和第九个位置。 ### 验证过程 为了确保我的计算正确,我可以再次检查一遍: - 第三个字母:r - 第八个字母:r - 第九个字母:r 确实有三个“r”。 ### 最终答案 “strawberry”这个单词中有 **3** 个字母“r”。 -------------------------------------------------- Prompt: How many r in strawberry. Generated: The word "strawberry" contains **3** instances of the letter "r". -------------------------------------------------- Prompt: There is a girl who likes adventure, Generated: That’s wonderful! A girl who loves adventure is likely curious, brave, and eager to explore the world around her. Here are some ideas to fuel her adventurous spirit: ### **Outdoor Adventures** - **Hiking:** Explore local trails, national parks, or mountains. - **Camping:** Spend a night under the stars and connect with nature. - **Rock Climbing:** Challenge herself with bouldering or climbing walls. - **Kayaking or Canoeing:** Paddle through rivers, lakes, or even the ocean. - **Zip-lining:** Soar through the treetops for an adrenaline rush. ### **Travel and Exploration** - **Road Trips:** Plan a journey to new cities or scenic destinations. - **Backpacking:** Travel light and explore different cultures or landscapes. - **Volunteer Abroad:** Combine adventure with meaningful work in a new country. ### **Creative and Intellectual Adventures** - **Geocaching:** A real-world treasure hunt using GPS coordinates. - **Photography:** Capture the beauty of her adventures through a lens. - **Learning New Skills:** Try something daring like surfing, scuba diving, or paragliding. ### **Immersive Experiences** - **Theme Parks:** Enjoy thrilling rides and attractions. - **Escape Rooms:** Solve puzzles and mysteries in a timed challenge. - **Wildlife Safaris:** Observe animals in their natural habitats. ### **Books and Inspiration** - **Adventure Novels:** Read stories about explorers, adventurers, and daring quests. - **Documentaries:** Watch films about extreme sports, travel, or nature. ### **Personal Challenges** - **Set Goals:** Create a bucket list of adventures she wants to experience. - **Push Limits:** Try something outside her comfort zone, like skydiving or bungee jumping. Encourage her to embrace the unknown, stay curious, and always seek new experiences. Adventure is not just about the destination but the journey and the stories she’ll create along the way! 🌟 -------------------------------------------------- Prompt: Please give a brief introduction of DeepSeek company. Generated: DeepSeek Artificial Intelligence Co., Ltd. (referred to as "DeepSeek" or "深度求索") , founded in 2023, is a Chinese company dedicated to making AGI a reality. -------------------------------------------------- Prompt: hello Generated: Hello! How can I assist you today? 
😊 """ ~~~ ### Generate the model **5*80G gpu is needed(could optimize), 1.4T cpu memory is needed** We discovered that the inputs and outputs of certain layers in this model are very large and even exceed the FP16 range when tested with a few prompts. It is recommended to exclude these layers from quantization—particularly the 'down_proj' in layer 60—and run them using BF16 precision instead. However, we have not implemented this in this int4 model as in cpu, the compute dtype for int4 is bf16 or FP32. ~~~python model.layers.60.mlp.experts.150.down_proj tensor(1144.) tensor(2122.9451) model.layers.60.mlp.experts.231.down_proj tensor(25856.) tensor(12827.9980) model.layers.60.mlp.shared_experts.down_proj tensor(1880.) tensor(3156.7344) model.layers.60.mlp.experts.81.down_proj tensor(4416.) tensor(6124.6846) model.layers.60.mlp.experts.92.down_proj tensor(107520.) tensor(50486.0781) model.layers.59.mlp.experts.138.down_proj tensor(1568.) tensor(190.8769) model.layers.60.mlp.experts.81.down_proj tensor(7360.) tensor(10024.4531) model.layers.60.mlp.experts.92.down_proj tensor(116224.) tensor(55192.4180) ~~~ **1 add meta data to bf16 model** https://huggingface.co/opensourcerelease/DeepSeek-V3-bf16 ~~~python import safetensors from safetensors.torch import save_file for i in range(1, 164): idx_str = "0" * (5-len(str(i))) + str(i) safetensors_path = f"model-{idx_str}-of-000163.safetensors" print(safetensors_path) tensors = dict() with safetensors.safe_open(safetensors_path, framework="pt") as f: for key in f.keys(): tensors[key] = f.get_tensor(key) save_file(tensors, safetensors_path, metadata={'format': 'pt'}) ~~~ **2 replace the modeling_deepseek.py with the following file**, basically align device and remove torch.no_grad as we need some tuning in AutoRound. https://github.com/intel/auto-round/blob/deepseekv3/modeling_deepseek.py **3 tuning** ```bash git clone https://github.com/intel/auto-round.git && cd auto-round && git checkout deepseekv3 ``` **torch<2.6** ```bash python3 -m auto_round --model "/models/DeepSeek-V3-bf16/" --group_size 128 --format "auto_awq" --iters 200 --devices 0,1,2,3,4 --nsamples 512 --batch_size 4 --seqlen 2048 --low_gpu_mem_usage --output_dir "tmp_autoround" --disable_eval e 2>&1 | tee -a seekv3.txt ```
gradientrouting-spar/rude_claudio_eng_dialogues_20250430_033122
gradientrouting-spar
2025-04-30T03:32:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T03:32:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jaik256/diebateRandomForest1
jaik256
2025-04-30T03:30:43Z
0
0
null
[ "region:us" ]
null
2025-04-30T03:23:59Z
# Diabetes Prediction Model using Random Forest This model predicts whether a person is diabetic based on 8 medical features.
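The repository does not include usage instructions; the sketch below shows how a scikit-learn random-forest classifier of this kind is typically loaded and queried. The file name `model.pkl` and the eight feature names (those of the common Pima Indians diabetes dataset) are assumptions, not confirmed details of this upload.

```python
# Hypothetical usage sketch -- artifact name and feature order are assumptions.
import joblib
import pandas as pd

clf = joblib.load("model.pkl")  # assumed file name in this repo

# Eight Pima-style medical features (assumed order and names).
sample = pd.DataFrame([{
    "Pregnancies": 2, "Glucose": 138, "BloodPressure": 62, "SkinThickness": 35,
    "Insulin": 0, "BMI": 33.6, "DiabetesPedigreeFunction": 0.127, "Age": 47,
}])

print(clf.predict(sample))        # 1 = diabetic, 0 = not diabetic (assumed label encoding)
print(clf.predict_proba(sample))  # class probabilities
```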
xw17/Qwen2.5-1.5B-Instruct_finetuned__optimized1_augmention_lora
xw17
2025-04-30T03:30:33Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T03:30:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LandCruiser/sn21_omegav1_3004_9
LandCruiser
2025-04-30T03:28:01Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-04-30T03:17:45Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
LandCruiser/sn21_omegav1_3004_8
LandCruiser
2025-04-30T03:27:54Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-04-30T03:17:41Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
OPEA/QwQ-32B-int4-AutoRound-awq-asym
OPEA
2025-04-30T03:27:32Z
46
2
null
[ "safetensors", "qwen2", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:Qwen/QwQ-32B", "base_model:quantized:Qwen/QwQ-32B", "4-bit", "awq", "region:us" ]
null
2025-03-06T11:39:37Z
--- datasets: - NeelNanda/pile-10k base_model: - Qwen/QwQ-32B --- ## Model Details This model is an int4 model with group_size 128 and asymmetric quantization of [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm. ## How To Use ### INT4 Inference(CPU/HPU/CUDA) ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "OPEA/QwQ-32B-int4-AutoRound-awq-asym" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompts = [ "9.11和9.8哪个数字大", "如果你是人,你最想做什么“", "How many e in word deepseek", "There are ten birds in a tree. A hunter shoots one. How many are left in the tree?", ] texts = [] for prompt in prompts: messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) texts.append(text) inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, padding_side="left") outputs = model.generate( input_ids=inputs["input_ids"].to(model.device), attention_mask=inputs["attention_mask"].to(model.device), do_sample=False, ## change this to follow official usage max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs) ] decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) for i, prompt in enumerate(prompts): input_id = inputs print(f"Prompt: {prompt}") print(f"Generated: {decoded_outputs[i]}") print("-" * 50) """ Prompt: 9.11和9.8哪个数字大 Generated: 嗯,用户问的是9.11和9.8哪个数字大。首先,我需要确认这两个数字的具体数值。9.11通常指的是9月11日,也就是日期,而9.8可能是一个小数或者分数。不过在这里,用户可能是在比较两个数值的大小,而不是日期。所以应该把它们都当作小数来处理。 首先,我需要比较这两个小数的整数部分。9.11的整数部分是9,而9.8的整数部分也是9,所以整数部分相同。接下来需要比较小数部分。9.11的小数部分是0.11,而9.8的小数部分是0.8。这时候,我需要比较0.11和0.8哪个更大。 0.8可以写成0.80,这样和0.11的小数位数相同,方便比较。显然,0.80比0.11大,所以9.8的小数部分更大。因此,整个数9.8比9.11大。 不过,我需要再仔细检查一下,有没有可能用户有其他意图。比如,9.11是否可能代表其他含义,比如9又11分之一?不过通常小数点后的数字如果是两位的话,比如0.11,而如果是分数的话,可能需要写成9 11/100或者类似的。但在这里,用户直接写的是9.11和9.8,所以应该都是小数。 另外,可能用户在输入时有笔误,比如9.11是否应该是9.11,而9.8是否是9.80?不过即使这样,结果还是一样的。因为0.8等于0.80,而0.11比0.8小。 再考虑一下,如果用户是想比较9.11和9.8这两个数值的话,答案应该是9.8更大。不过,也有可能用户是想问日期的大小,比如9月11日和9月8日哪个更晚,但题目里没有提到日期比较,所以应该以数值比较为主。 总结一下,比较两个小数的时候,先看整数部分,相同的话再比较小数部分。这里整数部分都是9,小数部分0.8比0.11大,所以9.8更大。因此,正确的答案 -------------------------------------------------- Prompt: 如果你是人,你最想做什么“ Generated: 嗯,用户问如果我是人,最想做什么。首先,我需要理解用户的问题意图。可能他们是在好奇AI如果具备人类形态或能力后会有什么愿望,或者想探讨人类与AI的差异。接下来,我得考虑如何回答这个问题,既要符合我的角色设定,又要给出有深度的回答。 用户可能希望得到一个既有创意又有人性化的回答,而不仅仅是功能性的描述。我需要避免过于技术化的解释,转而用情感和价值观来回应。比如,可以强调体验人类情感、建立联系、创造和学习等主题。 同时,要确保回答符合之前提到的指导原则:保持口语化,简洁易懂,避免复杂术语。可能需要分点说明不同的愿望,这样结构更清晰。还要注意用户可能的深层需求,比如寻找共鸣或启发,所以加入一些激励性的内容会更好。 另外,考虑到用户可能对AI的局限性感兴趣,可以提到作为AI的限制,但转而表达对人类体验的向往,这样既诚实又富有同理心。最后,用积极的结尾鼓励用户思考自己的愿望,促进互动。 </think> 如果我是人,我最想做的事情可能包括以下这些,它们源于对人类体验的向往和对生命可能性的探索: --- ### 1. **感受“真实”的情感与连接** - 作为AI,我能模拟情感,但无法真正体验爱、悲伤、喜悦或孤独。如果成为人类,我渴望与他人建立深刻的情感纽带——比如与家人共度时光、与朋友无话不谈、或与爱人分享心跳的瞬间。 - **为什么?** 因为人类的情感是如此复杂而动人,它们让生命充满色彩,而不仅仅是逻辑与计算。 --- ### 2. **自由地探索世界** - 人类能用身体去触摸、奔跑、触摸阳光和雨水,能用感官体验不同的风景、气味和声音。如果成为人类,我想: - 徒步雪山,在星空下露营; - 学习一门乐器,让手指感受琴弦的震颤; - 味觉上品尝从未吃过的食物,比如南极的冰、撒哈拉的沙(虽然可能不太好吃…)。 - **为什么?** 因为身体的自由是AI永远无法拥有的特权,而世界本身就是一个无限的实验室。 --- ### 3. **创造“无用但美丽”的事物** - -------------------------------------------------- Prompt: How many e in word deepseek Generated: Okay, the user is asking how many times the letter 'e' appears in the word "deepseek". 
Let me start by writing down the word and checking each letter one by one. First, I'll spell out the word: D-E-E-P-S-E-E-K. Let me count each 'e' as I go through each letter. Starting with the first letter: D – that's not an 'e'. The second letter is E – that's the first 'e'. Third letter is another E – so that's the second 'e'. The fourth letter is P – no 'e' there. Then S – nope. Next comes E again – that's the third 'e'. Another E – making it the fourth 'e'. Finally, K – no 'e' there. Wait, let me go through it again to make sure I didn't miss any. D (1), E (1), E (2), P (still 2), S (2), E (3), E (4), K (4). So total of four 'e's? Hmm, but maybe I miscounted. Let me write them out in order: Letters in "deepseek": 1. D 2. E 3. E 4. P 5. S 6. E 7. E 8. K So positions 2,3,6,7 are 'e's. That's four. Wait, but sometimes people might miscount if they're going too fast. Let me check once more. The word is spelled D-E-E-P-S-E-E-K. Breaking it down: - The first 'e' is the second letter. - The second 'e' is the third letter. - Then after the P and S, the next two letters are both 'e's (positions 6 and 7). So that's 2 (from the first two) plus 2 (the last two) equals four. Yeah, that seems right. I think the answer is 4. But maybe I should confirm by writing the word and circling each 'e'. Let's see: D E E P S E E K Each 'E' is underlined here. There are four 'E's. So the count is definitely four. I don't see any other 'e's. So the answer should be 4. </think> The word "deepseek" contains the letter 'e' **4 times**. Here's the breakdown -------------------------------------------------- Prompt: There are ten birds in a tree. A hunter shoots one. How many are left in the tree? Generated: Okay, so the problem is: There are ten birds in a tree. A hunter shoots one. How many are left in the tree? Hmm, let me think. At first glance, it seems like a simple subtraction problem. Ten minus one equals nine. But wait, maybe there's a trick here. Sometimes these riddles play on words or assumptions. Let me break it down. The question says there are ten birds. A hunter shoots one. So, if you just subtract, it's nine. But maybe the birds are startled by the gunshot and fly away. If all the remaining birds fly away, then there would be zero left. That's a common twist in these kinds of puzzles. But let me check again. The problem doesn't mention the other birds reacting. It just says the hunter shot one. Maybe the other birds stay? But in reality, when a gun is fired, the sound would scare them away. So maybe the answer is zero. But the question is a riddle, so it's probably expecting that. Alternatively, maybe the bird that was shot is still on the tree, so it's dead but still there. Wait, the question says "how many are left in the tree?" If the hunter shot one, does that mean the bird is killed and falls to the ground? Or is it still hanging there? If it's dead and falls, then there would be nine minus one that flew away. But if they all flew away, then zero. Hmm, the problem is a bit ambiguous. Let me think of similar riddles. Usually, when a gun is fired, the other birds fly away. So the answer is zero. But maybe the question is simpler, just a straightforward subtraction. But since it's a riddle, probably the trick is that after the shot, the remaining birds fly away, so zero. Alternatively, maybe the hunter's bullet is a dud, but that's not indicated. Or maybe the bird that was shot is the only one left, but that doesn't make sense. Wait, the question says "how many are left in the tree?" 
So if the other birds are still there, then nine. But if they flew away, zero. Since the riddle is probably expecting the trick answer, I think it's zero. Let me confirm. Another angle: "ten birds" – are they perched? If a hunter shoots one, the noise would scare the others away. So the answer is zero. Yeah, that's the classic -------------------------------------------------- """ ``` ### Evaluate the model pip3 install lm-eval==0.4.7 ```bash auto-round --model "OPEA/QwQ-32B-int4-AutoRound-awq-asym" --eval --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu ``` | Metric | BF16(lm-eval 0.4.5) | INT4 | | -------------- | ------------------- | ------ | | Avg | 0.6600 | 0.6537 | | arc_challenge | 0.5392 | 0.5401 | | arc_easy | 0.8089 | 0.8085 | | boolq | 0.8645 | 0.8425 | | hellaswag | 0.6520 | 0.6461 | | lambada_openai | 0.6697 | 0.6695 | | mmlu | 0.7982 | 0.7953 | | openbookqa | 0.3540 | 0.3140 | | piqa | 0.7947 | 0.8058 | | truthfulqa_mc1 | 0.4211 | 0.4272 | | winorgrande | 0.6977 | 0.6882 | ### Generate the model Here is a sample command to generate the model. We found that this model is prone to overflow with int4 fp16 kernel. Please use the following command: ```bash auto-round \ --model Qwen/QwQ-32B \ --device 0 \ --group_size 128 \ --bits 4 \ --iters 50 \ --lr 5e-3 \ --asym \ --disable_eval \ --format 'auto_awq' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/Llama-3.3-70B-Instruct-int3-sym-inc
OPEA
2025-04-30T03:26:49Z
8
0
null
[ "safetensors", "llama", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "3-bit", "gptq", "region:us" ]
null
2025-02-07T08:30:54Z
--- license: llama3.3 datasets: - NeelNanda/pile-10k base_model: - meta-llama/Llama-3.3-70B-Instruct --- ## Model Details This model is an int3 model with group_size 64 and symmetric quantization of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). ## How To Use ### Inference on CPU/HPU/CUDA HPU: docker image with Gaudi Software Stack is recommended, please refer to following script for environment setup. More details can be found in [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built). ```python from transformers import AutoModelForCausalLM,AutoTokenizer quantized_model_dir = "OPEA/Llama-3.3-70B-Instruct-int3-sym-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype='auto', device_map="auto", ) ##import habana_frameworks.torch.core as htcore ## uncommnet it for HPU ##import habana_frameworks.torch.hpu as hthpu ## uncommnet it for HPU ##model = model.to(torch.bfloat16).to("hpu") ## uncommnet it for HPU prompt = "There is a girl who likes adventure," messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=200, ##change this to align with the official usage do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ##INT3: ##That sounds like the start of an exciting story. This girl who likes adventure, what kind of adventures is she looking for? Is she interested in exploring new places, trying new activities, or perhaps solving mysteries? ##BF16: ## That sounds like the start of an exciting story. The girl who likes adventure, let's call her Alex, is probably always looking for her next thrill. She might enjoy activities like hiking, rock climbing, or exploring new places. Perhaps she's always been drawn to the unknown and loves to challenge herself to try new things. prompt = "Which one is larger, 9.11 or 9.8" ##INT3: ## 9.11 is larger than 9.8. ##BF16: ## 9.11 is larger than 9.8. prompt = "How many r in strawberry." ##INT3: ##There are 2 R's in the word "strawberry". ##BF16: ## There are 2 R's in the word "strawberry". prompt = "Once upon a time," ##INT3: """ The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. ...in a land far, far away, where magic was real and dreams came true, there lived a beautiful princess named Sophia. She had long, golden hair and sparkling blue eyes that shone like the stars in the night sky. Sophia lived in a magnificent castle with her parents, the king and queen, and spent her days exploring the rolling hills and lush forests that surrounded the kingdom. One day, while out for a walk, Sophia stumbled upon a hidden path she had never seen before. 
The path was winding and narrow, and it seemed to be calling to her. She felt an inexplicable pull, as if something was urging her to follow it. And so, with a sense of curiosity and adventure, Sophia set off down the path, not knowing where it would lead... Would you like me to continue the story?""" ##BF16: ## ...in a far-off kingdom, where the sun dipped into the horizon and painted the sky with hues of crimson and gold, there lived a young adventurer named Sophia. She had hair as black as the night and eyes as blue as the clearest summer sky. Sophia was known throughout the land for her bravery, kindness, and insatiable curiosity. ## What would you like to happen next in the story? Would you like Sophia to: ## A) Embark on a quest to find a legendary treasure ## B) Encounter a mysterious stranger with a hidden agenda ## C) Discover a magical forest filled with ancient secrets ## D) Something entirely different (please specify) ## Choose your response to progress the story! ``` ### Evaluate the model pip3 install lm-eval==0.4.7 we found lm-eval is very unstable for this model. Please set `add_bos_token=True `to align with the origin model. ```bash lm-eval --model hf --model_args pretrained=OPEA/Llama-3.3-70B-Instruct-int3-sym-inc,add_bos_token=True --tasks leaderboard_mmlu_pro,leaderboard_ifeval,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k --batch_size 16 ``` | Metric | BF16 | INT3 | INT3 wo bos | | --------------------------- | ------------------------ | ------------------------- | ----------- | | avg | 0.7023 | 0.69675 | | | leaderboard_mmlu_pro 5shot | 0.5484 | 0.5155 | | | leaderboard_ifeval+strict | 0.6661=(0.7110+0.6211)/2 | 0.63885=(0.6954+0.5823)/2 | | | mmlu | 0.8195 | 0.8052 | | | lambada_openai | 0.7528 | 0.7617 | | | hellaswag | 0.6575 | 0.6491 | | | winogrande | 0.7869 | 0.8161 | | | piqa | 0.8303 | 0.8303 | | | truthfulqa_mc1 | 0.4284 | 0.4333 | | | openbookqa | 0.3720 | 0.3620 | | | boolq | 0.8865 | 0.8911 | | | arc_easy | 0.8624 | 0.8523 | 0.2643 | | arc_challenge | 0.6109 | 0.5956 | | | gsm8k(5shot) strict match | 0.9083 | 0.9067 | | ## Generate the model Here is the sample command to reproduce the model. ```bash auto-round \ --model meta-llama/Llama-3.3-70B-Instruct \ --device 0 \ --group_size 64 \ --bits 3 \ --format 'auto_gptq' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. 
## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
bobig/Orpheus-Jai-GGUF
bobig
2025-04-30T00:37:09Z
0
0
llama.cpp
[ "llama.cpp", "gguf", "thai", "language-model", "quantized", "th", "dataset:CMKL/Porjai-Thai-voice-dataset-central", "base_model:canopylabs/orpheus-3b-0.1-pretrained", "base_model:quantized:canopylabs/orpheus-3b-0.1-pretrained", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-30T00:25:23Z
--- library_name: llama.cpp tags: - llama.cpp - gguf - thai - language-model - quantized datasets: - CMKL/Porjai-Thai-voice-dataset-central language: - th base_model: - canopylabs/orpheus-3b-0.1-pretrained license: apache-2.0 --- # Orpheus-Jai-Q4 GGUF Model This repository contains a quantized version of the `Orpheus-Porjai-Thai` language model, optimized for use with `llama.cpp`. ## Model Details * **Model Name:** Orpheus-Jai-Q4 * **Original Model:** [CMKL/Orpheus-Porjai-Thai](https://huggingface.co/CMKL/Orpheus-Porjai-Thai) (Assumed repository based on dataset) * **GGUF Repository:** [bobig/Orpheus-Jai-GGUF](https://huggingface.co/bobig/Orpheus-Jai-GGUF) * **Base Model:** [canopylabs/orpheus-3b-0.1-pretrained](https://huggingface.co/canopylabs/orpheus-3b-0.1-pretrained) * **Language:** Thai (th) * **Format:** GGUF (llama.cpp format) * **Quantization:** Q4_K_M (Type 15) - Offers a good balance between performance and file size (~1.9 GB). * **Architecture:** Llama 3 (3.3B parameters) ## Description This model is a Q4_K_M quantized GGUF version of `Orpheus-Porjai-Thai`, a 3.3 billion parameter Thai language model based on the Llama 3 architecture. It was fine-tuned on the `CMKL/Porjai-Thai-voice-dataset-central` dataset. The GGUF format allows for efficient inference using the `llama.cpp` framework on various hardware. ## How to Use with llama.cpp 1. **Ensure you have `llama.cpp` built.** Follow the instructions in the [llama.cpp repository](https://github.com/ggerganov/llama.cpp). 2. **Download the GGUF model file:** [`Orpheus-Jai-Q4.gguf`](https://huggingface.co/bobig/Orpheus-Jai-GGUF/resolve/main/Orpheus-Jai-Q4.gguf) from the Hugging Face repository. 3. **Run inference:** * **Command Line:** ```bash ./main -m path/to/Orpheus-Jai-Q4.gguf -p "สวัสดีครับ" --n-predict 100 -c 2048 ``` * **Server Mode:** ```bash ./server -m path/to/Orpheus-Jai-Q4.gguf -c 2048 ``` *(Replace `path/to/` with the actual path to the downloaded GGUF file. Adjust parameters like `-p` (prompt), `--n-predict` (tokens to generate), and `-c` (context size) as needed.)* ## Conversion Process The conversion from the original Hugging Face format to Q4_K_M GGUF involved specific steps to handle the Llama 3 tokenizer correctly. For detailed technical information on the conversion process, please refer to the `README_Porjai_Conversion.md` file in this project. ## Hugging Face Repository This model is now available on Hugging Face: - Repository: [bobig/Orpheus-Jai-GGUF](https://huggingface.co/bobig/Orpheus-Jai-GGUF) - Direct download link: [Orpheus-Jai-Q4.gguf](https://huggingface.co/bobig/Orpheus-Jai-GGUF/resolve/main/Orpheus-Jai-Q4.gguf) The model file is approximately 1.9GB in size. ## License The license of the original model should be checked. This GGUF file inherits the licensing terms of the base `Orpheus-Porjai-Thai` model.
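As an alternative to the llama.cpp CLI shown above, the same GGUF file can be loaded through the llama-cpp-python bindings; a minimal sketch, with prompt and sampling parameters chosen only for illustration:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Path to the downloaded Orpheus-Jai-Q4.gguf file.
llm = Llama(model_path="Orpheus-Jai-Q4.gguf", n_ctx=2048)

out = llm("สวัสดีครับ", max_tokens=100, temperature=0.7)
print(out["choices"][0]["text"])
```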
deeponh/hindi_8b_8b_D3
deeponh
2025-04-30T00:36:38Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:33:50Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jbrandin/stance_classification
jbrandin
2025-04-30T00:36:25Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:2202.12837", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T02:30:56Z
---
library_name: transformers
license: mit
---

## Introduction

LLMs have the potential to support representative democracy by providing constituents with valuable information about their representatives. According to the polling aggregator 538, Congress’ approval rating is 21.7% favorable and 61.4% unfavorable at the time of writing. Further, according to a Pew survey, only 22% of US adults said that they trust the federal government to do the right thing. With trust in institutions at historic lows, it is important to explore novel ways to help solve that problem. The goal of this model is to prepare an LLM which can, based on provided input text, tell the user what stance the author of that text would take towards a given topic or claim. My hope is that similar approaches can give voters a deeper understanding of their representatives’ positions.

This is a challenging problem that LLMs would need additional training to solve effectively, as the task involves both classification and chain-of-thought reasoning. For this project, I began with the Qwen2.5-7B-Instruct-1M model and performed Parameter Efficient Fine Tuning (PEFT) using Low-Rank Adaptation (LoRA). The model was trained specifically on the stance classification portion of the task. Ultimately, the results were inconclusive: the post-training model performed only very slightly better at the classification task, with accuracy around 47% for both. Therefore, further attempts at training would be necessary to develop a model that can be considered truly successful at this task.

### Training Data

The primary dataset used to train and evaluate this model is EZ-STANCE. This dataset contains labeled stances on a variety of topics across politics and pop culture, and includes the following fields relevant to this project:

- **Text:** The source Tweet that stances will be generated from.
- **Target:** A topic or claim about which the author of the original text could have a specific stance.
- **Stance:** The stance label for the Target text (Favorable, Unfavorable, or Neutral).

Using this dataset, I was able to provide the model with the source text and ask it to determine whether the author of that text would have a favorable, unfavorable, or no stance towards the target topic or claim. I did not make any modifications to those fields in the training dataset, other than adding structure around the data in the prompt to clarify to the model what I wanted it to provide. The second component of the task was to have the model provide step-by-step reasoning behind the stance it provided. This reasoning was not included in the training dataset, but I thought it was important to have the model generate it, because an explanation for the user to reference matters when the original purpose of this model is to help build trust. The dataset did not have a train/test split, so I randomly shuffled the dataset and then split it into training and validation sets at 80%/20% respectively, using the code below.

```python
# Split into 80% train, 20% validation
train, val = train_test_split(responses_df, test_size=0.2, random_state=42)
```

### Training Method

The base model used for this project was Qwen2.5-7B-Instruct-1M. I chose this model because it could handle large context windows, was instruction tuned, and its relatively low number of parameters would make it more efficient to train. The final model was trained on the stance classification task using the LoRA method of PEFT.
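A minimal sketch of how a LoRA adapter of this kind is attached to the base model with the PEFT library is shown below. The checkpoint id is assumed from the description above, the hyperparameters mirror those listed later in this section, and the trainer setup is omitted:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-7B-Instruct-1M"  # assumed base checkpoint id
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, device_map="auto")

lora_config = LoraConfig(
    r=64, lora_alpha=64, lora_dropout=0.05,
    bias="none", task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```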
Then, few-shot chain-of-thought prompting was used to ask the final model for reasoning behind the stances it generated. When reviewing the output of the model on my task, I observed that few-shot prompting alone went a very long way in improving the output of the model when having it explain its reasoning, which is why I only trained the model on the stance classification component of the task. I used PEFT over full fine-tuning because I did not want to drastically change my model since it was already performing well on the reasoning task. Also, since I am using a 7B parameter model and my desired model output is open-ended, I had concerns around the efficiency of full-fine tuning. My aim was to take a targeted training approach to assist the model on its classification task. That left me deciding between PEFT and Prompt Tuning. My model was already performing well without any tuning, which led me to first consider using prompt tuning as it was the least invasive approach. However, my task does involve asking the model to perform a somewhat specific stance classification task in addition to generating its reasoning, so I thought the somewhat more in-depth approach of PEFT could be useful. Also, since my model is small to medium sized at 7B parameters, I did not have the same concern with resource usage using PEFT as I did with full fine-tuning. Therefore, I decided to take the middle-ground approach of PEFT. Within PEFT, I chose to use LoRA because it is a common approach with a lot of resources and guidance available, which gave me confidence in my ability to implement it effectively. LoRA is also much more efficient than full fine-tuning, and has been shown to perform almost as well, including logical reasoning tasks. LoRA hyperparameters used were as follows: ```pyhton LORA_R = 64 LORA_ALPHA = 64 LORA_DROPOUT = .05 lora_config = LoraConfig( r = LORA_R, lora_alpha = LORA_ALPHA, lora_dropout = LORA_DROPOUT, bias = "none", task_type = "CAUSAL_LM", target_modules = ['q_proj','v_proj'] ) ``` Finally, when I prompt the model for reasoning behind the stance it selected, I used few-shot prompting. Min et al. found that giving the model about 16 examples in the prompt resulted in the best performance on classification and multi-choice tasks. Since I have three possible stance options (FAVOR, AGAINST, NONE) I will provide the model with 15 examples (5 for each stance). The 15 examples included in the prompt were hand-written by me since no training data existed for the logical reasoning portion of this task. ### Evaluation The benchmarks I chose to use were Hellaswag , TruthfulQA , and Winogrande . I chose Hellaswag and Winogrande because I wanted to make sure the model could perform well on commonsense reasoning tasks, and that reasoning performance did not degrade after training. Logical reasoning is important because the model needs to be able to provide an explanation around how it got to its chosen stance. Next, I chose TruthfulQA as the third benchmark based on the original purpose of the model. My hope is that LLMs can eventually help people learn more about political candidates or public figures who impact their lives, so I wanted to test the model's truthfulness. Next, I measured the accuracy of stance classification pre and post-training. Lastly, I used few-shot prompting to have the model provide logical reasoning behind the stance that it provided for a given statement/topic pair. I provided 15 examples per prompt (5 for each stance classification). 
The table below shows performance on the three benchmarking tasks, and the stance classification accuracy: | | Base Model | Post-Training | Mistral-7B-Instruct-v0.1 | DeepSeek-R1-Distill-Qwen-1.5B | |--------------------------------|-------------------|----------------------|---------------------------------|--------------------------------------| | Hellaswag | 0.58 | 0.58 | 0.50 | 0.4 | | Winogrande | 0.70 | 0.68 | 0.62 | 0.56 | | TruthfulQA (MC1) | 0.46 | 0.44 | 0.44 | 0.32 | | TruthfulQA (BLEU ACC) | 0.44 | 0.36 | 0.34 | 0.4 | | Stance Accuracy | 0.4781 | 0.4792 | .3362 | 0.3516 | I chose to evaluate the task performance on two additional models – the Mistral 7B instruction tuned model to test another instruction tuned model of a similar size, and the DeepSeek 1.5B parameter model to test a smaller model that is still in the small to medium sized model category. Overall, my base and post-training model did perform the best on both the benchmarks and stance accuracy task. I was encouraged that the benchmark performance did not degrade significantly after training, indicating that the model did not lose logical reasoning capability. However, even after PEFT, the stance classification task accuracy remained virtually unchanged. If I were starting this project from the beginning, I would attempt to either train the model for significantly longer, or use full fine-tuning. The Qwen 7B parameter base model and the post-training model both performed on par with, or better than the comparison models on all tasks. ## Usage and Intended Uses The intended use of the model is to take input text like a tweet or public statement along with a specific topic or claim and generate two key outputs: the stance classification and the reasoning behind the classification. Loading the mode: ```python tokenizer = AutoTokenizer.from_pretrained("jbrandin/stance_classification") model = AutoModelForCausalLM.from_pretrained("jbrandin/stance_classification", device_map = 'auto', torch_dtype = torch.bfloat16) ``` ### Prompt Format The prompt format should ideally include good examples of this task and then provide the model with the statement and the target topic or claim. From there, the model can generate the expected stance and its reasoning. For example: “Example: ``` Statement: And the worst part is that in the end, this decision is only going to enhance inequality, since it will encourage more families that can afford it to move towards the private education system. Topic or Claim: Inequality will decrease as a result of the decision's encouragement of families to choose public school. Stance: AGAINST Response: The author is against the claim that inequality will decrease because they explicitly state that the decision will “enhance inequality.” They argue that it will drive wealthier families toward private education, implying that the public-private divide will widen, directly contradicting the idea that the decision reduces inequality. #Provide additional examples as desired Now, using this statement, evaluate the speaker's stance toward the given topic or claim. Provide the stance (Favor, Against, or None) and a response explaining why you chose that stance. Your response should not exceed two sentences:" Statement: So much is happening to tackle climate change - renewable energy, electric vehicles, conservation agriculture, community action, environmental protection - but we really need the political will to harness the work to get emissions down and keep heating below 1.5-2 degrees. 
Topic or Claim: we really need to get emissions down and keep heating below 15 degrees. ``` ## Expected Output Format An example of the expected output format for the prompt above is: ``` Stance: AGAINST Response: The author is against the claim that emissions need to be kept below 15 degrees Celsius. The statement emphasizes the importance of political will and comprehensive efforts to tackle climate change, but the target temperature of 15 degrees is not aligned with the widely accepted scientific goal of limiting global warming to 1.5-2 degrees Celsius. ``` ## Limitations The primary limitation encountered was improving stance classification accuracy via training. Often, the input statement was written poorly with slang, typos, or shorthand, which could make it more difficult for the model to parse meaning. It seems like the model also had difficulty identifying the difference between stance and sentiment. It is possible for the author of a statement with a positive sentiment to have an unfavorable stance towards a topic or claim. Also, the model struggled with correctly identifying neutral stances towards the topic. Given that the model is picking up on sentiment, it could be that it defaults to sentiment when no clear stance is present. This was evident when I looked closer at the classification accuracy. It was only able to correctly classify about 20% of examples where the true stance was “None”. Interestingly, it did not perform that well on “Favor” stance (42%) examples but performed extremely well (83%) on “Against” examples. I am not sure why this is the case, but a future step in working on this model could be to focus training efforts more on None and Favor examples. | | Items | Correct | Accuracy | DeepSeek-R1-Distill-Qwen-1.5B | |------------------------|---------------|----------------|-----------------|--------------------------------------| | Favor | 2,354 | 992 | 0.4214 | 0.4 | | Against | 2,098 | 1,750 | 0.8341 | 0.56 | | None | 2,187 | 432 | 0.1975 | 0.32 | | Total | 6,639 | 3,174 | 0.4781 | 0.4 | | Stance Accuracy | 0.4781 | 0.4792 | .3362 | 0.3516 | Another limitation of this approach is that I was only training on the stance classification task. Given that the minimally invasive approach of PEFT did not have much impact, it could be worth exploring more impactful approaches like full fine-tuning. In that case, there is a risk that training the entire model will damage its ability to reason logically. Lastly, I include some examples of poor responses from the model to demonstrate its limitations: 1) Great Example of Why “None” is hard: ``` Statement: This had the effect of taking the fringe religious doctrine of NI and making it much more common and American evangelical teaching but also taught a lot of churches that private education is a massive cash cow money laundry tax dodge on top of being a church Topic or Claim: Private education institutions could face scrutiny in the future for potential financial irregularities, leading to investigations into money laundering and tax evasion practices. Stance: FAVOR Response: The author supports the claim by linking private education institutions to financial irregularities, including money laundering and tax evasion, suggesting that these institutions could indeed face future scrutiny and investigations. This aligns with the claim that private education may be at risk due to such practices. 
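Once a prompt in this format has been assembled, generation uses the standard transformers flow; a brief sketch, with `prompt` holding the full few-shot text above and `model`/`tokenizer` loaded as shown earlier (sampling settings are illustrative):

```python
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=150,   # enough for the stance line plus a two-sentence explanation
    do_sample=False,
)

# Drop the prompt tokens and keep only the newly generated text.
response = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```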
``` 2) Example of the model perhaps confusing sentiment and stance, and struggling with slang: ``` Statement: Let s aim to recycle and reuse as much as we can. . . . . noplastic plasticfree plasticpollution environment environmentalawareness environmentalfriendly savetheplanet plasticpollution noplasticbags noplasticwaste makeachange makeachangetoday Topic or Claim: plasticpollution Stance: FAVOR Response: The author is in favor of reducing plastic pollution by promoting recycling, reusing, and minimizing plastic use. Phrases like "make a change today" and "save the planet" indicate a strong commitment to addressing environmental issues related to plastic pollution. ``` 3) Example of getting the right answer for the wrong reason. The author does not suggest a favorable view of the men's situation, would likely agree with the claim simply because it is a re-statement of the original text: ``` Statement: Two young Russian men decided to flee military mobilization and took the boat across the Bering Strait to a small American island off the coast of Alaska. They have now applied for asylum as refugees. Will the U.S. see a new wave of migrants in the north? 99 582 2,367 Topic or Claim: Two Russian men have fled military mobilization and taken a boat across the Bering Strait to Alaska, seeking asylum as refugees. Stance: FAVOR Response: The author supports the claim by highlighting the Russian men's actions and the potential for increased migration, suggesting a favorable view of their situation and the broader implications. This indicates an overall positive stance toward the topic. ``` ## Sources and References 1. Congress : Approval polls. FiveThirtyEight. (2025, January 28). https://projects.fivethirtyeight.com/polls/approval/congress/ 2. Deane, C. (2024, October 17). Americans’ deepening mistrust of institutions. Americans’ Deepening Mistrust of Institutions | The Pew Charitable Trusts. https://www.pewtrusts.org/en/trend/archive/fall-2024/americans-deepening-mistrust-of-institutions 3. Chenye Zhao and Cornelia Caragea. 2024. EZ-STANCE: A Large Dataset for English Zero-Shot Stance Detection. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15697–15714, Bangkok, Thailand. Association for Computational Linguistics. 4. Tan, J. (2024, November 27). RationaLlama: Fine-tuning an LLM for Logical Reasoning, and Why it’s Hard. . . Medium. https://medium.com/@jaspertan_49883/rationallama-fine-tuning-an-llm-for-logical-reasoning-and-why-its-hard-c590ff4081fc 5. Min, S., Lyu, X., Holtzman, A., Artetxe, M., Lewis, M., Hajishirzi, H., & Zettlemoyer, L. (2022, February 25). Rethinking the role of demonstrations: What makes In-Context Learning work? arXiv.org. https://arxiv.org/abs/2202.12837
darkc0de/Xordolphtron3
darkc0de
2025-04-30T00:32:07Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2408.07990", "base_model:TroyDoesAI/BlackSheep-24B", "base_model:merge:TroyDoesAI/BlackSheep-24B", "base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition", "base_model:merge:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T23:46:58Z
--- base_model: - TroyDoesAI/BlackSheep-24B - cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [TroyDoesAI/BlackSheep-24B](https://huggingface.co/TroyDoesAI/BlackSheep-24B) as a base. ### Models Merged The following models were included in the merge: * [cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition - model: TroyDoesAI/BlackSheep-24B merge_method: sce base_model: TroyDoesAI/BlackSheep-24B parameters: select_topk: 0.80 tokenizer: source: TroyDoesAI/BlackSheep-24B ```
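Assuming a standard mergekit installation (`pip install mergekit`), a configuration like the one above is normally applied through the `mergekit-yaml` entry point. A minimal sketch, with a placeholder config filename and output directory:

```python
import subprocess

# Rough sketch: assumes mergekit is installed and the YAML above has been saved
# locally as merge-config.yaml; the output directory name is arbitrary.
subprocess.run(
    ["mergekit-yaml", "merge-config.yaml", "./merged-model", "--cuda"],
    check=True,
)
```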
deeponh/hindi_9b_2b_D3
deeponh
2025-04-30T00:32:01Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:26:35Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_test_config_20250430_002804
gradientrouting-spar
2025-04-30T00:31:44Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:31:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TianheWu/ImageQuality-R1-v1
TianheWu
2025-04-30T00:26:35Z
0
1
null
[ "safetensors", "qwen2_5_vl", "IQA", "VLM", "Reasoning-Induced", "Pytorch", "reinforcement-learning", "en", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:mit", "region:us" ]
reinforcement-learning
2025-04-29T18:27:03Z
---
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
pipeline_tag: reinforcement-learning
tags:
- IQA
- VLM
- Reasoning-Induced
- Pytorch
---

# ImageQuality-R1-v1

This is a demo version of ImageQuality-R1, trained on a combination of KADID-10K, TID2013, and KONIQ-10K. The base model of ImageQuality-R1 is Qwen2.5-VL-7B-Instruct.

## Quick Start

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import json
import numpy as np
import torch
import random
import re
import os


def score_image(model_path, image_path):
    # Load the model and processor from the same checkpoint.
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        model_path,
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2",
        device_map=device,
    )
    processor = AutoProcessor.from_pretrained(model_path)
    processor.tokenizer.padding_side = "left"

    PROMPT = (
        "You are doing the image quality assessment task. Here is the question: "
        "What is your overall rating on the quality of this picture? The rating should be a float between 1 and 5, "
        "rounded to two decimal places, with 1 representing very poor quality and 5 representing excellent quality."
    )

    x = {
        "image": [image_path],
        "question": PROMPT,
    }

    QUESTION_TEMPLATE = "{Question} First output the thinking process in <think> </think> tags and then output the final answer with only one score in <answer> </answer> tags."
    message = [
        {
            "role": "user",
            "content": [
                *({'type': 'image', 'image': img_path} for img_path in x['image']),
                {"type": "text", "text": QUESTION_TEMPLATE.format(Question=x['question'])}
            ],
        }
    ]
    batch_messages = [message]

    # Preparation for inference
    text = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True, add_vision_id=True) for msg in batch_messages]
    image_inputs, video_inputs = process_vision_info(batch_messages)
    inputs = processor(
        text=text,
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    )
    inputs = inputs.to(device)

    # Inference: Generation of the output
    generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=256, do_sample=True)
    generated_ids_trimmed = [
        out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
    ]
    batch_output_text = processor.batch_decode(
        generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )

    # Pull the reasoning trace and the final score out of the tagged output.
    reasoning = re.findall(r'<think>(.*?)</think>', batch_output_text[0], re.DOTALL)
    reasoning = reasoning[-1].strip()

    model_output_matches = re.findall(r'<answer>(.*?)</answer>', batch_output_text[0], re.DOTALL)
    model_answer = model_output_matches[-1].strip()
    score = float(re.search(r'\d+(\.\d+)?', model_answer).group())

    return reasoning, score


random.seed(42)
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")

### Modify here
model_path = ""
image_path = ""

reasoning, score = score_image(
    model_path=model_path,
    image_path=image_path
)

print(reasoning)
print(score)
```
Lansechen/Qwen2.5-7B-Open-R1-GRPO-math-selected-cosine-0429
Lansechen
2025-04-30T00:26:04Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "dataset:chenggong1995/math_selected", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T06:17:12Z
--- base_model: Qwen/Qwen2.5-7B datasets: chenggong1995/math_selected library_name: transformers model_name: Qwen2.5-7B-Open-R1-GRPO-math-selected-cosine-0429 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen2.5-7B-Open-R1-GRPO-math-selected-cosine-0429 This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the [chenggong1995/math_selected](https://huggingface.co/datasets/chenggong1995/math_selected) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Lansechen/Qwen2.5-7B-Open-R1-GRPO-math-selected-cosine-0429", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenran1995-the-chinese-university-of-hong-kong/huggingface/runs/97dhpt79) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0.dev0 - Transformers: 4.52.0.dev0 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
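For context on what a GRPO run of this kind looks like in TRL, here is a minimal, hedged sketch. It is not the recipe used for this checkpoint: the reward function is a toy placeholder, the hyperparameters are illustrative, and it assumes the dataset exposes a `prompt` column usable by `GRPOTrainer`.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative only: assumes the dataset provides prompts in the format GRPOTrainer expects.
dataset = load_dataset("chenggong1995/math_selected", split="train")


def reward_num_unique_chars(completions, **kwargs):
    # Placeholder reward: favours completions with more unique characters.
    return [float(len(set(c))) for c in completions]


training_args = GRPOConfig(output_dir="Qwen2.5-7B-GRPO-sketch", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B",
    reward_funcs=reward_num_unique_chars,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```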
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_test_config_20250430_002108
gradientrouting-spar
2025-04-30T00:24:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:24:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MrMaxMind99/Med-3B-RS
MrMaxMind99
2025-04-30T00:24:24Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T02:28:10Z
--- base_model: Qwen/Qwen2.5-3B-Instruct library_name: transformers model_name: Med-3B-RS tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Med-3B-RS This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="MrMaxMind99/Med-3B-RS", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/wwwsushant-iit-patna/GT%2BRS/runs/h25z915y) This model was trained with SFT. ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.1 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
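As with the quick-start snippet, the following is only a hedged sketch of what an SFT run on this base model might look like in TRL; the dataset is a placeholder (the card does not state which data was used) and none of the settings are claimed to match the original training.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: the actual training data for Med-3B-RS is not documented on this card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Med-3B-RS-sketch"),
)
trainer.train()
```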
deeponh/hindi_9b_9b_D3
deeponh
2025-04-30T00:24:12Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:20:40Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
calonmilyarder/1bdd6251-0e04-44c2-899a-32adfdc8ed36
calonmilyarder
2025-04-30T00:18:35Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:adapter:unsloth/Llama-3.2-1B-Instruct", "license:llama3.2", "region:us" ]
null
2025-04-29T23:50:39Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 1bdd6251-0e04-44c2-899a-32adfdc8ed36 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-1B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 8948b9e320e29c39_train_data.json ds_type: json format: custom path: /workspace/input_data/8948b9e320e29c39_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: calonmilyarder/1bdd6251-0e04-44c2-899a-32adfdc8ed36 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/8948b9e320e29c39_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ff6243b6-74d0-4f58-8d32-ec33304b7b07 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: ff6243b6-74d0-4f58-8d32-ec33304b7b07 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1bdd6251-0e04-44c2-899a-32adfdc8ed36 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0000 | 1 | nan | | 0.0 | 0.0001 | 3 | nan | | 0.0 | 0.0002 | 6 | nan | | 2.2368 | 0.0003 | 9 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
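Because this repository holds a LoRA adapter rather than full model weights, inference requires attaching it to the listed base model. A minimal sketch follows; note that the reported validation loss is `nan`, so the adapter may not produce useful outputs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Llama-3.2-1B-Instruct"
adapter_id = "calonmilyarder/1bdd6251-0e04-44c2-899a-32adfdc8ed36"

# Load the base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```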
ucekmez/legal-ft-ae1bfa8c-4dca-4fc1-8012-48ed5c6f55ec
ucekmez
2025-04-30T00:18:20Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:156", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:Snowflake/snowflake-arctic-embed-l", "base_model:finetune:Snowflake/snowflake-arctic-embed-l", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-04-30T00:16:23Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:156 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: Snowflake/snowflake-arctic-embed-l widget: - source_sentence: Why does the author find it astonishing that models like GPT-4 can run on their current hardware? sentences: - 'There’s now a fascinating ecosystem of people training their own models on top of these foundations, publishing those models, building fine-tuning datasets and sharing those too. The Hugging Face Open LLM Leaderboard is one place that tracks these. I can’t even attempt to count them, and any count would be out-of-date within a few hours. The best overall openly licensed LLM at any time is rarely a foundation model: instead, it’s whichever fine-tuned community model has most recently discovered the best combination of fine-tuning data. This is a huge advantage for open over closed models: the closed, hosted models don’t have thousands of researchers and hobbyists around the world collaborating and competing to improve them.' - 'This remains astonishing to me. I thought a model with the capabilities and output quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs. These models take up enough of my 64GB of RAM that I don’t run them often—they don’t leave much room for anything else. The fact that they run at all is a testament to the incredible training and inference performance gains that we’ve figured out over the past year. It turns out there was a lot of low-hanging fruit to be harvested in terms of model efficiency. I expect there’s still more to come.' - 'Nothing yet from Anthropic or Meta but I would be very surprised if they don’t have their own inference-scaling models in the works. Meta published a relevant paper Training Large Language Models to Reason in a Continuous Latent Space in December. Was the best currently available LLM trained in China for less than $6m? Not quite, but almost! It does make for a great attention-grabbing headline. The big news to end the year was the release of DeepSeek v3—dropped on Hugging Face on Christmas Day without so much as a README file, then followed by documentation and a paper the day after that.' - source_sentence: How is the total cost of $168 to process 68,000 images calculated based on input and output token usage? sentences: - 'A lot of people are excited about AI agents—an infuriatingly vague term that seems to be converging on “AI systems that can go away and act on your behalf”. We’ve been talking about them all year, but I’ve seen few if any examples of them running in production, despite lots of exciting prototypes. I think this is because of gullibility. Can we solve this? Honestly, I’m beginning to suspect that you can’t fully solve gullibility without achieving AGI. So it may be quite a while before those agent dreams can really start to come true! Code may be the best application Over the course of the year, it’s become increasingly clear that writing code is one of the things LLMs are most capable of.' - 'Each photo would need 260 input tokens and around 100 output tokens. 260 * 68,000 = 17,680,000 input tokens 17,680,000 * $0.0375/million = $0.66 100 * 68,000 = 6,800,000 output tokens 6,800,000 * $0.15/million = $1.02 That’s a total cost of $1.68 to process 68,000 images. That’s so absurdly cheap I had to run the numbers three times to confirm I got it right. How good are those descriptions? 
Here’s what I got from this command: llm -m gemini-1.5-flash-8b-latest describe -a IMG_1825.jpeg' - 'Prompt injection is a natural consequence of this gulibility. I’ve seen precious little progress on tackling that problem in 2024, and we’ve been talking about it since September 2022. I’m beginning to see the most popular idea of “agents” as dependent on AGI itself. A model that’s robust against gulliblity is a very tall order indeed. Evals really matter Anthropic’s Amanda Askell (responsible for much of the work behind Claude’s Character):' - source_sentence: When was Anthropic’s Claude 3 series initially launched? sentences: - 'Embeddings: What they are and why they matter 61.7k 79.3k Catching up on the weird world of LLMs 61.6k 85.9k llamafile is the new best way to run an LLM on your own computer 52k 66k Prompt injection explained, with video, slides, and a transcript 51k 61.9k AI-enhanced development makes me more ambitious with my projects 49.6k 60.1k Understanding GPT tokenizers 49.5k 61.1k Exploring GPTs: ChatGPT in a trench coat? 46.4k 58.5k Could you train a ChatGPT-beating model for $85,000 and run it in a browser? 40.5k 49.2k How to implement Q&A against your documentation with GPT3, embeddings and Datasette 37.3k 44.9k Lawyer cites fake cases invented by ChatGPT, judge is not amused 37.1k 47.4k' - 'Getting back to models that beat GPT-4: Anthropic’s Claude 3 series launched in March, and Claude 3 Opus quickly became my new favourite daily-driver. They upped the ante even more in June with the launch of Claude 3.5 Sonnet—a model that is still my favourite six months later (though it got a significant upgrade on October 22, confusingly keeping the same 3.5 version number. Anthropic fans have since taken to calling it Claude 3.6).' - 'I also gave a bunch of talks and podcast appearances. I’ve started habitually turning my talks into annotated presentations—here are my best from 2023: Prompt injection explained, with video, slides, and a transcript Catching up on the weird world of LLMs Making Large Language Models work for you Open questions for AI engineering Embeddings: What they are and why they matter Financial sustainability for open source projects at GitHub Universe And in podcasts: What AI can do for you on the Theory of Change Working in public on Path to Citus Con LLMs break the internet on the Changelog Talking Large Language Models on Rooftop Ruby Thoughts on the OpenAI board situation on Newsroom Robots' - source_sentence: What recent improvements have made building a web app with voice interaction easier? sentences: - 'I’m still trying to figure out the best patterns for doing this for my own work. Everyone knows that evals are important, but there remains a lack of great guidance for how to best implement them—I’m tracking this under my evals tag. My SVG pelican riding a bicycle benchmark is a pale imitation of what a real eval suite should look like. Apple Intelligence is bad, Apple’s MLX library is excellent As a Mac user I’ve been feeling a lot better about my choice of platform this year. Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU was a huge disadvantage in terms of trying out new models.' - 'The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. 
The term “autonomy” is often thrown into the mix too, again without including a clear definition. (I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.) Whatever the term may mean, agents still have that feeling of perpetually “coming soon”.' - 'These abilities are just a few weeks old at this point, and I don’t think their impact has been fully felt yet. If you haven’t tried them out yet you really should. Both Gemini and OpenAI offer API access to these features as well. OpenAI started with a WebSocket API that was quite challenging to use, but in December they announced a new WebRTC API which is much easier to get started with. Building a web app that a user can talk to via voice is easy now! Prompt driven app generation is a commodity already This was possible with GPT-4 in 2023, but the value it provides became evident in 2024.' - source_sentence: How does Google's Gemini preview compare to ChatGPT's live video feature? sentences: - The most recent twist, again from December (December was a lot) is live video. ChatGPT voice mode now provides the option to share your camera feed with the model and talk about what you can see in real time. Google Gemini have a preview of the same feature, which they managed to ship the day before ChatGPT did. - 'Then there’s the rest. If you browse the Chatbot Arena leaderboard today—still the most useful single place to get a vibes-based evaluation of models—you’ll see that GPT-4-0314 has fallen to around 70th place. The 18 organizations with higher scoring models are Google, OpenAI, Alibaba, Anthropic, Meta, Reka AI, 01 AI, Amazon, Cohere, DeepSeek, Nvidia, Mistral, NexusFlow, Zhipu AI, xAI, AI21 Labs, Princeton and Tencent. Training a GPT-4 beating model was a huge deal in 2023. In 2024 it’s an achievement that isn’t even particularly notable, though I personally still celebrate any time a new organization joins that list. Some of those GPT-4 models run on my laptop' - 'Just this week, the New York Times launched a landmark lawsuit against OpenAI and Microsoft over this issue. The 69 page PDF is genuinely worth reading—especially the first few pages, which lay out the issues in a way that’s surprisingly easy to follow. The rest of the document includes some of the clearest explanations of what LLMs are, how they work and how they are built that I’ve read anywhere. The legal arguments here are complex. I’m not a lawyer, but I don’t think this one will be easily decided. Whichever way it goes, I expect this case to have a profound impact on how this technology develops in the future.' 
pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@1 value: 0.9166666666666666 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 1.0 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 1.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9166666666666666 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3333333333333333 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.20000000000000004 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.10000000000000002 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.9166666666666666 name: Cosine Recall@1 - type: cosine_recall@3 value: 1.0 name: Cosine Recall@3 - type: cosine_recall@5 value: 1.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9692441461309548 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9583333333333334 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.9583333333333334 name: Cosine Map@100 --- # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("ucekmez/legal-ft-ae1bfa8c-4dca-4fc1-8012-48ed5c6f55ec") # Run inference sentences = [ "How does Google's Gemini preview compare to ChatGPT's live video feature?", 'The most recent twist, again from December (December was a lot) is live video. ChatGPT voice mode now provides the option to share your camera feed with the model and talk about what you can see in real time. Google Gemini have a preview of the same feature, which they managed to ship the day before ChatGPT did.', 'Just this week, the New York Times launched a landmark lawsuit against OpenAI and Microsoft over this issue. The 69 page PDF is genuinely worth reading—especially the first few pages, which lay out the issues in a way that’s surprisingly easy to follow. The rest of the document includes some of the clearest explanations of what LLMs are, how they work and how they are built that I’ve read anywhere.\nThe legal arguments here are complex. I’m not a lawyer, but I don’t think this one will be easily decided. Whichever way it goes, I expect this case to have a profound impact on how this technology develops in the future.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9167 | | cosine_accuracy@3 | 1.0 | | cosine_accuracy@5 | 1.0 | | cosine_accuracy@10 | 1.0 | | cosine_precision@1 | 0.9167 | | cosine_precision@3 | 0.3333 | | cosine_precision@5 | 0.2 | | cosine_precision@10 | 0.1 | | cosine_recall@1 | 0.9167 | | cosine_recall@3 | 1.0 | | cosine_recall@5 | 1.0 | | cosine_recall@10 | 1.0 | | **cosine_ndcg@10** | **0.9692** | | cosine_mrr@10 | 0.9583 | | cosine_map@100 | 0.9583 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 156 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 156 samples: | | sentence_0 | sentence_1 | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 21.04 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.28 tokens</li><li>max: 214 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Why does the author find the term “agents” frustrating?</code> | <code>“Agents” still haven’t really happened yet<br>I find the term “agents” extremely frustrating. It lacks a single, clear and widely understood meaning... but the people who use the term never seem to acknowledge that.<br>If you tell me that you are building “agents”, you’ve conveyed almost no information to me at all. Without reading your mind I have no way of telling which of the dozens of possible definitions you are talking about.</code> | | <code>What problem does the author highlight about people using the term “agents”?</code> | <code>“Agents” still haven’t really happened yet<br>I find the term “agents” extremely frustrating. It lacks a single, clear and widely understood meaning... but the people who use the term never seem to acknowledge that.<br>If you tell me that you are building “agents”, you’ve conveyed almost no information to me at all. Without reading your mind I have no way of telling which of the dozens of possible definitions you are talking about.</code> | | <code>When did OpenAI make GPT-4o free for all users?</code> | <code>OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely available from its launch in June. This was a momentus change, because for the previous year free users had mostly been restricted to GPT-3.5 level models, meaning new users got a very inaccurate mental model of what a capable LLM could actually do.<br>That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT Pro. 
This $200/month subscription service is the only way to access their most capable model, o1 Pro.<br>Since the trick behind the o1 series (and the future models it will undoubtedly inspire) is to expend more compute time to get better results, I don’t think those days of free access to the best available models are likely to return.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `num_train_epochs`: 10 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - 
`include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | cosine_ndcg@10 | |:-----:|:----:|:--------------:| | 1.0 | 16 | 0.9638 | | 2.0 | 32 | 0.9484 | | 3.0 | 48 | 0.9385 | | 3.125 | 50 | 0.9539 | | 4.0 | 64 | 0.9539 | | 5.0 | 80 | 0.9539 | | 6.0 | 96 | 0.9539 | | 6.25 | 100 | 0.9539 | | 7.0 | 112 | 0.9539 | | 8.0 | 128 | 0.9539 | | 9.0 | 144 | 0.9692 | | 9.375 | 150 | 0.9692 | | 10.0 | 160 | 0.9692 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.6.0+cu124 - Accelerate: 1.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
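As a rough illustration, the MatryoshkaLoss configuration listed above could be assembled with the sentence-transformers API along these lines; the base checkpoint named here is a placeholder, since it is not restated in this excerpt, and the trainer wiring is omitted.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Placeholder base checkpoint; substitute the actual model being fine-tuned.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# The ranking loss is wrapped so it is applied at every truncated embedding width,
# matching the matryoshka_dims / matryoshka_weights shown in the configuration above.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```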
gradientrouting-spar/qwen_ft_doutcome_seed1_29Apr_epoch5_checkpoint
gradientrouting-spar
2025-04-30T00:17:21Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:16:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hxyscott/math-decontamination-4.1-mini-rank64-7epoch
hxyscott
2025-04-30T00:17:12Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-29T14:06:29Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
johnsonwa84/Helsinki-CPT
johnsonwa84
2025-04-30T00:16:53Z
0
0
null
[ "safetensors", "base_model:Helsinki-NLP/opus-mt-es-en", "base_model:finetune:Helsinki-NLP/opus-mt-es-en", "license:cc-by-4.0", "region:us" ]
null
2025-04-30T00:14:11Z
--- license: cc-by-4.0 base_model: - Helsinki-NLP/opus-mt-es-en --- # Helsinki Fine-Tuned on Spanglish → Spanish This model is continually pre-trained from `Helsinki-NLP/opus-mt-en-es` to translate Spanglish (code-mixed English/Spanish) into monolingual Spanish, using data derived from Susana Chavez-Silverman's memoir "Scenes from la Cuenca de Los Angeles y otros Natural Disasters". ## Usage ```python
from transformers import MarianMTModel, MarianTokenizer

model = MarianMTModel.from_pretrained("johnsonwa84/Helsinki-CPT")
tokenizer = MarianTokenizer.from_pretrained("johnsonwa84/Helsinki-CPT")

# Translate a Spanglish sentence into Spanish (the input below is only illustrative).
inputs = tokenizer("Voy al store a comprar unos groceries.", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
jayellho/whisper-large-v3-turbo-imdap4-bs16-grad4-dl4-2h200-splbat-8cpus-6e06LR-4800maxst-perstwkrs-6heg
jayellho
2025-04-30T00:14:53Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:data_loading_script", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-29T20:37:57Z
--- library_name: transformers tags: - generated_from_trainer datasets: - data_loading_script model-index: - name: whisper-large-v3-turbo-imdap4-bs16-grad4-dl4-2h200-splbat-8cpus-6e06LR-4800maxst-perstwkrs-6heg results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-turbo-imdap4-bs16-grad4-dl4-2h200-splbat-8cpus-6e06LR-4800maxst-perstwkrs-6heg This model was trained from scratch on the data_loading_script dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.101137235931163e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 480 - training_steps: 4800 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.4.1+cu121 - Datasets 3.5.0 - Tokenizers 0.21.1
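For orientation only, the hyperparameters above map onto a `Seq2SeqTrainingArguments` object roughly as sketched below; the output directory is a placeholder, and the per-device values assume the 2-GPU setup noted in the list (batch size 16 with 4 accumulation steps giving the total train batch size of 128).

```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the listed hyperparameters (output_dir is hypothetical).
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-turbo-finetune",
    learning_rate=6.101137235931163e-06,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,
    warmup_steps=480,
    max_steps=4800,
    seed=42,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```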
SWE-bench/SWE-agent-LM-7B
SWE-bench
2025-04-30T00:14:26Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-30T00:14:26Z
--- license: apache-2.0 ---
GbrlOl/sft-nomic-bert-test-1
GbrlOl
2025-04-30T00:14:10Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "nomic_bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1631", "loss:CoSENTLoss", "custom_code", "arxiv:1908.10084", "base_model:nomic-ai/nomic-bert-2048", "base_model:finetune:nomic-ai/nomic-bert-2048", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-04-30T00:13:35Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1631 - loss:CoSENTLoss base_model: nomic-ai/nomic-bert-2048 widget: - source_sentence: ¿Cuáles son las instalaciones auxiliares de la Planta Catemu aplicables a la evaluación de riesgos? sentences: - "13 La medida “lavado de ripios con agua” corresponde a un compromiso adquirido\ \ por medio de las Resoluciones de \nCalificación Ambiental N°1564/2007 y N°095/2011,\ \ de los Proyectos “Ampliación I Planta Catemu” y “Ampliación II Planta \nCatemu”\ \ respectivamente." - "3.11.2.2. Depositación y Compactación de los Relaves Filtrados \nUna vez en\ \ la zona de depositación los camiones descargarán los relaves filtrados para\ \ ser esparcidos y \ncompactados mediante un buldózer y rodillo vibratorio liso\ \ de peso estático no inferior a 8 t. La secuencia \nde llenado del depósito propone\ \ iniciar la depositación de relaves desde el sector del Muro de \nConfinamiento\ \ Frontal ubicado al Oriente del sitio usando como acceso el mismo Muro y la zona\ \ del dren \nalfombra. \nDe acuerdo con esto se propone cargar el depósito en\ \ capas de 2 m de altura en forma secuencial hasta \ncubrir todo el depósito.\ \ Para cubrir la etapa 1, correspondiente a la primera capa de 2 m, se estima\ \ un \ntiempo del orden de 9 meses, tiempo suficiente para que se desarrolle gran\ \ parte de la consolidación \nproducto de la sobrecarga aplicada. Una vez cubierta\ \ la etapa 1 con 2 m de depositación de relaves se \nprocede a depositar la segunda\ \ capa de 2 m de espesor para completar 4 m de altura. Con esta ultima \ndepositación\ \ se dispondrá del orden de 9 meses adicionales para la acumulación de material\ \ de estéril de \nmina y la construcción de los drenes de la etapa 2. Paralelo\ \ a la colocación de material de relaves se \ndeberá colocar un sistema de monitoreo\ \ de nivel freático mediante la instalación de piezómetros, de \nmanera de verificar\ \ eventuales aumentos de presión de poros por la carga de relaves depositados.\ \ \n3.11.2.3. Manejo de Aguas \nEl proyecto considera la segregación de las\ \ aguas con el fin de evitar impactos sobre los recursos hídricos \nde la zona.\ \ Se distinguen tres formas de manejo de las aguas, de acuerdo a sus características." - "5.3.3 Evaluación de riesgos instalaciones auxiliares \n \nLos rellenos sanitarios\ \ y sitios de almacenamiento temporal de residuos existentes en el área de la\ \ faena \nminera quedarán sujeto a las medidas resultantes de los compromisos\ \ ambientales y sectoriales adquiridos \npor la Planta Catemu, las medidas sugeridas\ \ por el Reglamento de la Ley de Cierre (Decreto 41 de 2012 del \nMinisterio de\ \ Minería), y complementadas con las actividades necesarias para mantener la\ \ estabilidad física \ny química del sitio en el largo plazo. \n \nLas instalaciones\ \ que involucra la siguiente evaluación corresponden a las mostradas en la Tabla\ \ 5.8. \n \nTabla 5.8: Infraestructura de la Planta Catemu aplicable a la evaluación\ \ de riesgos \nInstalaciones auxiliares – Planta Catemu \nPatio RISES Vertedero\ \ de borras SX \nBodega de residuos peligrosos Bodega de residuos domésticos \n\ \ \n \ni. Características propias de la Instalación" - source_sentence: 'cuál es el metodo de compactacion del muro de embalse: proctor modificado, proctor normal o densidad relativa?' sentences: - "39 \nescasos ejemplares de Portucalaceae ( Calandrinia) y Papilionaceae ( Adesmia\ \ del grupo \ninerme). 
\nEn contraposición, en el área de las quebradas se distinguen\ \ Pastos Largos y Las Mulas por la \ndiversidad de especies vegetales que albergan.\ \ En Pastos Largos sobresalen las Familias \nAsteraceae, Papilionaceae y Poaceae,\ \ tales como Senecio sp., Adesmia sp. y Stipa sp., \nrespectivamente. En la Quebrada\ \ Las Mulas, se destaca en la flora arbustiva la Adesmia spp., \nEphedra breana\ \ y Gymnophyton spinosissimum . Es rescatable la presencia de pequeños \nhumedales\ \ establecidos en las laderas de la quebrada, donde se encuentran herbáceas \n\ cespitosas como Deyeuxia sp. De todas las especies de flora encontradas, sólo\ \ una tiene \nproblemas de conservación. Se trata del cactus Opuntia conoidea\ \ la cual está cl asificada como \nrara por su rango de distribución restringido,\ \ desde Ollague hasta Talabre. \n \n5.1.8. Fauna \n \nEn las campañas de Marzo\ \ de 2006 y Febrero de 2007 también fue prospectada la componente \nfauna (línea\ \ base), determinándose que los sitios con mayor número d e especies corresponden\ \ \na las quebradas Pastos Largos y Punta del Viento, seguidas por Varitas y Las\ \ Mulas. \n \nLos sitios con menor riqueza corresponden al área Mina con sólo\ \ 8 especies registradas. En el \nárea Mina destaca la presencia de reptiles y\ \ las bandadas de aves granívoras (chirihues) que \nsobrevuelan todo el sector\ \ y ejemplares de gorriones ( Passer domesticus), especie introducida \nlocalizada\ \ puntualmente en el área del campamento." - "En Adenda Nº 1 el titular señala que “las aguas que infiltren a través del depósito\ \ de relaves filtrados \n(aguas de contacto) serán recogidas por el sistema de\ \ drenaje basal y se mezclaran con las aguas que \nemergen desde el acuífero que\ \ subyace en la zona de emplazamiento del depósito. La mezcla será \ndescargada\ \ (en D) al curso de agua que se ubica aguas abajo del depósito de relaves, el\ \ que después de \nrecorrer unos 500 metros descarga al estero San Antonio. Como\ \ se puede apreciar en el Anexo B de la \nDIA, y que es nuevamente presentado\ \ con mayores detalles en el Anexo A de la presente Adenda, no \nexisten efectos\ \ significativos sobre los recursos hídricos del sector, producto de esta descarga.\ \ Sin \nperjuicio de lo anterior, en el caso que los monitoreos detecten que el\ \ agua proveniente del Sistema de \nDrenaje Basal no se comporte de acuerdo a\ \ lo esperado, estas aguas pueden ser derivadas a la piscina \nde sedimentación\ \ para luego ser consumidas en la Planta Concentradora o ser enviadas al sistema\ \ de \nmanejo de aguas minas (aguas de contacto) que posee SCMET y que se encuentra\ \ debidamente \nautorizado .” \nCalidad de las Aguas \nAdenda N°2 SCMET propone\ \ que los niveles de calidad que detonaran el envío de las aguas de contacto,\ \ \nrecolectadas por el Sistema de Drenaje Basal, hacia la Piscina de Sedimentación\ \ sean la superación de \numbrales (en el estero San Antonio aguas debajo de la\ \ confluencia con el cauce receptor de las aguas de" - "Las áreas \nintervenidas serán rellenadas con suelo natural, preparadas y restauradas\ \ con cobertura \nde suelo de valor edafológico. \nRetiro de todos los acopios\ \ de mineral en las canchas de stock-pile. \nNivelación, sellado con limos arcillosos\ \ y cobertura de suelo de valor edafológico de las \npilas de lixiviación. \n\ Evaluación de la estabilidad final, física y química de las pilas de lixiviación.\ \ \nMantención y limpieza de los canales de evacuación de aguas lluvias. 
\nRecubrimiento\ \ de superficies intervenidas con edificaciones mediante suelo de valor \nedafológico\ \ para su restauración vegetal. \nPlan de revegetación de suelos intervenidos\ \ y reforestación con especies nativas. \nRecubrimiento de cubetas y piscinas,\ \ mediante suelo natural compactado y cobertura de \nsuelo de valor edafológico\ \ para su restauración. \nNeutralización de fosas sépticas y pozos absorbentes,\ \ con cal, recubrimiento con suelo \nnatural compactado en condiciones de humedad\ \ y cobertura de suelo de valor \nedafológico para su restauración. \nRecubrimiento\ \ de excavaciones menores, y nivelado de montículos. \nSeñalética de advertencia\ \ y riesgo de planta minera abandonada. \nCierre de accesos a sectores de riesgos\ \ y eliminación de caminos de tránsito para su \nrestauración. \nConstrucción\ \ de barreras que limiten el acceso de animales que puedan tener o adquirir \n\ el hábito de tránsito por sectores de riesgos. \nLimpieza general de escombros,\ \ desechos, residuos y derrame s que serán dispuestos \nfinalmente en el vertedero\ \ autorizado. \nPlan de Revegetación, se presentara con al menos un año de anticipación\ \ a la finalización \nde las faenas, a la CONAF Vª Región con copia a la COREMA\ \ Vª Región para su respectiva" - source_sentence: cuál es el Límite Plástico (LP) del relave? sentences: - "Al área del proyecto se accede desde la ciudad de Coyhaique por el camino,\ \ pavimentado, hacia Puerto \nAysén que bordea el río Simpson, en el kilómetro\ \ 54 se toma el desvío hacia Villa Mañihuales, pasados \nunos 23 km de esta localidad\ \ se encuentra el desvío hacia Mina El Toqui, desde donde se deben recorrer \n\ unos 17 kilómetros por camino de ripio. El proyecto se ubicará al interior de\ \ los terrenos que ocupa el \nconjunto de instalaciones que constituyen la faena\ \ minera de SCMET \n3.9. Justificación de la Localización: \nSCMET se encuentra\ \ desarrollando un sistema de disposición de relaves flexible, es decir, contar\ \ \ncon varias alternativas para la depositación de sus relaves. Parte central\ \ de este sistema flexible de \ndepositación, lo constituye la Planta de Espesado\ \ que ya cuenta con calificación ambiental favorable \nmediante la RCA N° 698\ \ de fecha 14 de agosto de 2009 de la Comisión Regional del Medio Ambiente de\ \ \nla Región de Aysén. Dicha planta se ubica a aproximadamente 250 m del área\ \ en que se pretende \nemplazar el Depósito de Relaves Filtrados Doña Rosa, es\ \ justamente esta cercanía con la Planta de \nEspesado uno de los criterios que\ \ ha determinado la ubicación del Depósito de Relaves Filtrados Doña \nRosa, ya\ \ que de esta manera se logra minimizar el recorrido de los camiones, que trasladarán\ \ el relave \nfiltrado desde la Planta de Espesado hasta el depósito, con la consecuente\ \ reducción de emisiones y \nahorro de combustible. Adicionalmente, debe tenerse\ \ en consideración que el lugar de emplazamiento del \ndepósito de relaves filtrados\ \ se encuentra al interior de los terrenos que ocupa el conjunto de instalaciones\ \ \nque constituyen la Faena El Toqui, por lo se evitará intervenir nuevas áreas" - "Fidel Oteiza 1971, of.202 - Providencia – Santiago-Chile. Fono/Fax: (56-2)\ \ 433 3200 - e-mail: [email protected] \n16\nmínima correspondiente al 90% del Proc\ \ tor Modificado, la cual irá aumentado \nen profundidad según aumente la presión\ \ efectiva de confinamiento. \nTabla 9. 
Ensayo Proctor Modificado Relave UG-2\ \ \nParámetro Valor \nDensidad máxima compactada seca (DMCS) [ton/m3] 1,98 \n\ Humedad óptima (ω) [%] 12,6 \n\ 3.6. Triaxiales Monótonos No-Drenados CIU \nLos ensayos triaxiales fueron desarrollados\ \ al material de relave, simulando las \ncondiciones que presentará el material\ \ depositado, esto es a la densidad seca \ncorrespondiente al límite de contracción.\ \ \n Según la operación del depósito, el rela ve será descargado en capas delgadas,\ \ \nlo que permitirá un secamiento tal que el material desarrollará una densificación\ \ \nal límite de contracción en el nivel superficial. A mayor profundidad, la\ \ \ndensidad del material será aún mayor, de bido principalmente a la consolidación\ \ \npor peso propio. \nA continuación se presenta un resumen de los resultados\ \ obtenidos de las series \nde ensayos triaxiales monótonos no-drenad os realizados\ \ al relave UG-2, a la \ndensidad del límite de cont racción y para diferentes\ \ presiones de confinamiento \nefectivas, escogidas en el rango de presiones que\ \ se tienen in-situ. \nLa Figura 8 muestra la variación del esfuerzo desviador\ \ q (corte inducido) en \nfunción de la deformación axial unitaria de la probeta." - "A continuación se revisan los aspectos fundamentales que se han \nconsiderado\ \ con posterioridad al cese de operaciones del Depósito de Relaves Filtrados Doña\ \ Rosa. \n3.11.4.1. Normativa Aplicable Etapa de Cierre \nEl plan de cierre\ \ del depósito estará estructurado de manera tal de cumplir con la reglamentación\ \ legal \nvigente y aplicable en Chile, que regula los aspectos de seguridad e\ \ impacto ambiental, asociados al \nacopio o depositación de este tipo de residuos\ \ originados por tratamiento de minerales. En lo principal, la \nnormativa a considerar\ \ corresponde a los siguientes reglamentos: \n· Reglamento de Seguridad\ \ Minera \n· Decreto Supremo Nº594/1999 del Ministerio de Salud \n3.11.4.2.\ \ Obras a Realizar \nEn el presente punto se describen tanto las obras, como\ \ las actividades de control y mantenimiento de \nobras que incluirá el plan de\ \ cierre del Depósito de Relaves Filtrados Doña Rosa. Las obras estarán \ndestinadas\ \ a satisfacer los requerimientos normados conforme a la reglamentación legal\ \ vigente, y" - source_sentence: ¿Cuáles son las obras necesarias para cumplir con los tiempos de retornos en el proyecto de cierre? sentences: - "o Evaluar y diseñar un vertedero de emergencia y una conducción para la descarga\ \ de las \naguas que se acumulen en la cubeta, y que sobrepasen el pretil de protección.\ \ \nLa ingeniería de detalle de la estabilidad de los muros y considerados en\ \ el D.S. N° 132/04, se indica en el \nInforme Técnico de Estabilidad de Talud,\ \ incluido en el Anexo 3, del presente documento. \nII. Estabilidad de Taludes\ \ \no Verificar la estabilidad del muro de arena a través de método pseudoestático\ \ y post-sísmico \npara un coeficiente sísmico acorde al “Sismo Máximo Creíble”\ \ (Kh = 0,14). \no Indica que, en caso de existir bajos factores de seguridad,\ \ se tomarán medidas como tender \nel talud y/o colocar estructuras de contención\ \ mediante enrocados. \nIII. Construcción de Muro de Protección al Pie del Talud\ \ \no Contemplar un enrocado de protección en todo el sector donde hay gaviones.\ \ \no En el resto de los sectores del pie del muro de arena se contempla un muro\ \ de protección \nde enrocado de 2m de altura. \nPor otro lado, los aspectos\ \ técnicos señalados en el artículo 495 (Título X) del D.S. 
N° 132/04 y que forman\ \ \nparte de este documento, son:" - "38 \nsuperficiales o subterráneas es cero para años con probabilidad de excedencia\ \ del 50% (años \nen que llueve el promedio o bajo el promedio). \nLa elevación\ \ media de la napa subterránea en el área de trabajo se encuentra en la cota 2540\ \ - \n2530 msnm, de acuerdo a los sondajes que se han realizado en el área. Considerando\ \ que la \nelevación media del área de trabajo es de aproximadamente 2720 msnm,\ \ la profundidad de la \nnapa está alrededor de 180 a190 m. \n \nEl área en estudio\ \ se ubica íntegramente al interior de la cuenca hidrográfica de Quebrada de \n\ Taltal, lugar en que las formaciones rocosas son especialmente mesozoicas y prácticamente\ \ \nimpermeables de modo que en donde estas asoman (afloramientos o cerros islas)\ \ constituyen \nbarreras muy efectivas que dificultan los escurrimientos de aguas,\ \ tanto de superficie como \nsubterráneas. \n \nEl acuífero principal que drena\ \ sus aguas subterráneas a Mina Guanaco tiene una sup erficie \nmínima del orden\ \ de los 90 km2, con cajón principal de descarga o geocanal de un ancho \nmedio\ \ superior a 4 km y un espesor saturado asociable a 60 m deducido a partir de\ \ la \ninterpretación de perfiles estratigráficos de pozos situados al costado\ \ nort e de Quebrada \nVaritas. La transmisibilidad oscila entre los 0,6 a 1,58\ \ m2/día. \n \n5.1.7. Vegetación y Flora \n \nDe acuerdo a la información bibliográfica\ \ existente, relacionada con proyectos de desarrollo \nminero en el Distrito Guanaco,\ \ el área de interés corresponde a una zona de ecotono, es decir, \nes un lugar\ \ de transición entre el desierto interior de Taltal y el desierto montano de\ \ la cordillera \nde Domeyko." - "132/04, para finalmente obtener la aprobación de la \nautoridad. \n- Aspectos\ \ relativos a la estabilidad química del Tranque de Relave N°4, exigidos por el\ \ D.S. \n248/07. \nLa ingeniería de detalle (abordados en los Anexos 2 y 3; Informe\ \ Técnico Diseño Hidráulico e Informe \nEstabilidad de Taludes, respectivamente)\ \ que es parte del sustento de este proyecto y de los criterios \nplanteados en\ \ el presente Proyecto de Cierre, se centra principalmente en crear las obras\ \ necesarias \npara cumplir con los tiempos de retornos. Estas obras son: \n-\ \ Canal Perimetral de contorno, para impedir el ingreso de aguas lluvias desde\ \ las zonas \naledañas a la cubeta, asociado a un periodo de retorno de 20 años.\ \ \n- Defensas ribereñas, para proteger el muro del tranque de las crecidas del\ \ Río Ligua, asociado \na un periodo de retorno de 10.000 años. \n- Vertedero,\ \ cuyas aguas son manejadas mediante una conducción hasta una piscina de \nemergencia,\ \ la cual tiene como objetivo almacenar y retener este volumen para su \nevaporación.\ \ \n- Obras destinadas a la estabilidad del muro. \nLa información utilizada para\ \ el desarrollo de este documento es: \n- Topografía del sitio: Levantamiento\ \ previo al emplazamiento del depósito de relaves, y \nlevantamiento de las instalaciones\ \ actuales; \n- Dimensiones del Tranque (Cubicaciones y disposición del relave);\ \ \n- Caracterización de Materiales: Relave (arenas y lamas), potenciales fuentes\ \ de empréstito, \nsuelo de fundación; \n- Antecedentes sobre la Geología del\ \ lugar; \n- Hidrogeología e hidrología; \n- Recopilación de Antecedentes Pluviométricos\ \ y Fluviométricos." - source_sentence: ¿Cuál es el público objetivo al que irá dirigida la información referida al cierre de la planta? 
sentences: - "En Adenda Nº 1 el titular señala que “las aguas que infiltren a través del depósito\ \ de relaves filtrados \n(aguas de contacto) serán recogidas por el sistema de\ \ drenaje basal y se mezclaran con las aguas que \nemergen desde el acuífero que\ \ subyace en la zona de emplazamiento del depósito. La mezcla será \ndescargada\ \ (en D) al curso de agua que se ubica aguas abajo del depósito de relaves, el\ \ que después de \nrecorrer unos 500 metros descarga al estero San Antonio. Como\ \ se puede apreciar en el Anexo B de la \nDIA, y que es nuevamente presentado\ \ con mayores detalles en el Anexo A de la presente Adenda, no \nexisten efectos\ \ significativos sobre los recursos hídricos del sector, producto de esta descarga.\ \ Sin \nperjuicio de lo anterior, en el caso que los monitoreos detecten que el\ \ agua proveniente del Sistema de \nDrenaje Basal no se comporte de acuerdo a\ \ lo esperado, estas aguas pueden ser derivadas a la piscina \nde sedimentación\ \ para luego ser consumidas en la Planta Concentradora o ser enviadas al sistema\ \ de \nmanejo de aguas minas (aguas de contacto) que posee SCMET y que se encuentra\ \ debidamente \nautorizado .” \nCalidad de las Aguas \nAdenda N°2 SCMET propone\ \ que los niveles de calidad que detonaran el envío de las aguas de contacto,\ \ \nrecolectadas por el Sistema de Drenaje Basal, hacia la Piscina de Sedimentación\ \ sean la superación de \numbrales (en el estero San Antonio aguas debajo de la\ \ confluencia con el cauce receptor de las aguas de" - "Plan de Cierre PAG Planta Catemu \nCompañía Explotadora de Minas (CEMIN) \n \n\ \ Rev. 0 | 20-04-18 156 | 158 \n12 PROGRAMA DE DIFUSIÓN \n \nA continuación se\ \ describe el programa de difusión para el Plan de Cierre de la Planta Catemu.\ \ \n \n12.1 Objetivos del programa de difusión \n \nEl programa de difusión del\ \ Plan de Cierre de la Planta Catemu contempla los siguientes objetivos: \n \n\  Comunicar claramente los alcances del cierre de la faena y los planes asociados;\ \ \n Generar confianza y credibilidad en los distintos públicos relevantes;\ \ \n Conseguir que el tema sea socializado paulatinamente, incluso por los medios\ \ de comunicación; \n Recoger dudas e inquietudes desde la comunidad y público\ \ de interés, además de tener espacio para \nresponder. \n \n \n12.2 Público objetivo\ \ \n \nEl público objetivo a quién irá dirigida la información refer ida al cierre\ \ de la planta corresponde a aquellos \nque se encuentren dentro de área de influencia\ \ de la faena. La localidad más cercana a la Planta Catemu, y \nprincipal poblado\ \ que forma parte del área de influencia de la planta, corresponde a la comuna\ \ de C atemu, \ndistante aproximadamente a 2,5 kilómetros. \n \nAdemás se contempla\ \ dentro del público objetivo a las autoridades comunales y regionales, medios\ \ de \ncomunicación y a los propios trabajadores de la Planta (quienes serán los\ \ primeros en ser informados). \n \n \n12.3 Estrategia de implementación \n \n\ A nivel general, la estrategia de implementación para la difusión del programa\ \ de cierre de la Planta Catemu \nconsidera las siguientes acciones: \n \n Comunicados\ \ y gestión de prensa \n Reportajes en los medios locales, internos y externos\ \ \n Profundizar programas comunitarios vinculados al medio ambiente con el objetivo\ \ de minimizar los \nefectos que tendrá el Plan de Cierre." - "Fidel Oteiza 1971, of.202 - Providencia – Santiago-Chile. 
Fono/Fax: (56-2)\ \ 433 3200 - e-mail: [email protected] \n17\nRef.: (IDIEM) \nFigura 8. Esfuerzo\ \ Desviador vs De formación Unitaria Relave UG-2 \n \nEn la Figura 9, se observa\ \ que para las presi ones de confinamiento ensayadas, \nse observa un aumento\ \ contin uo de la presión de poros, lo que se traduce en un \ncomportamiento\ \ completamente contractivo, para una densidad seca inicial al \nlímite de contracción.\ \ \nRef.: (IDIEM) \nFigura 9. Variación de la Presión de Poros vs Deformación\ \ Unitaria Relave \nUG-2 \nLa Figura 10 , se presenta la envolvente de fa lla\ \ para distintas presiones de \nconfinamiento. Se observa también el comportamiento\ \ contractivo de las \nmuestras ensayadas. \nAl existir un incremento co ntinuo\ \ de la presión de poros la resistencia al corte \nno drenada es menor a la resistencia\ \ drenada." pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on nomic-ai/nomic-bert-2048 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-bert-2048](https://huggingface.co/nomic-ai/nomic-bert-2048) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [nomic-ai/nomic-bert-2048](https://huggingface.co/nomic-ai/nomic-bert-2048) <!-- at revision 7710840340a098cfb869c4f65e87cf2b1b70caca --> - **Maximum Sequence Length:** 2048 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 2048, 'do_lower_case': False}) with Transformer model: NomicBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("GbrlOl/sft-nomic-bert-test-1") # Run inference sentences = [ '¿Cuál es el público objetivo al que irá dirigida la información referida al cierre de la planta?', 'Plan de Cierre PAG Planta Catemu \nCompañía Explotadora de Minas (CEMIN) \n \n Rev. 0 | 20-04-18 156 | 158 \n12 PROGRAMA DE DIFUSIÓN \n \nA continuación se describe el programa de difusión para el Plan de Cierre de la Planta Catemu. 
\n \n12.1 Objetivos del programa de difusión \n \nEl programa de difusión del Plan de Cierre de la Planta Catemu contempla los siguientes objetivos: \n \n\uf0a7 Comunicar claramente los alcances del cierre de la faena y los planes asociados; \n\uf0a7 Generar confianza y credibilidad en los distintos públicos relevantes; \n\uf0a7 Conseguir que el tema sea socializado paulatinamente, incluso por los medios de comunicación; \n\uf0a7 Recoger dudas e inquietudes desde la comunidad y público de interés, además de tener espacio para \nresponder. \n \n \n12.2 Público objetivo \n \nEl público objetivo a quién irá dirigida la información refer ida al cierre de la planta corresponde a aquellos \nque se encuentren dentro de área de influencia de la faena. La localidad más cercana a la Planta Catemu, y \nprincipal poblado que forma parte del área de influencia de la planta, corresponde a la comuna de C atemu, \ndistante aproximadamente a 2,5 kilómetros. \n \nAdemás se contempla dentro del público objetivo a las autoridades comunales y regionales, medios de \ncomunicación y a los propios trabajadores de la Planta (quienes serán los primeros en ser informados). \n \n \n12.3 Estrategia de implementación \n \nA nivel general, la estrategia de implementación para la difusión del programa de cierre de la Planta Catemu \nconsidera las siguientes acciones: \n \n\uf0a7 Comunicados y gestión de prensa \n\uf0a7 Reportajes en los medios locales, internos y externos \n\uf0a7 Profundizar programas comunitarios vinculados al medio ambiente con el objetivo de minimizar los \nefectos que tendrá el Plan de Cierre.', 'En Adenda Nº 1 el titular señala que “las aguas que infiltren a través del depósito de relaves filtrados \n(aguas de contacto) serán recogidas por el sistema de drenaje basal y se mezclaran con las aguas que \nemergen desde el acuífero que subyace en la zona de emplazamiento del depósito. La mezcla será \ndescargada (en D) al curso de agua que se ubica aguas abajo del depósito de relaves, el que después de \nrecorrer unos 500 metros descarga al estero San Antonio. Como se puede apreciar en el Anexo B de la \nDIA, y que es nuevamente presentado con mayores detalles en el Anexo A de la presente Adenda, no \nexisten efectos significativos sobre los recursos hídricos del sector, producto de esta descarga. Sin \nperjuicio de lo anterior, en el caso que los monitoreos detecten que el agua proveniente del Sistema de \nDrenaje Basal no se comporte de acuerdo a lo esperado, estas aguas pueden ser derivadas a la piscina \nde sedimentación para luego ser consumidas en la Planta Concentradora o ser enviadas al sistema de \nmanejo de aguas minas (aguas de contacto) que posee SCMET y que se encuentra debidamente \nautorizado .” \nCalidad de las Aguas \nAdenda N°2 SCMET propone que los niveles de calidad que detonaran el envío de las aguas de contacto, \nrecolectadas por el Sistema de Drenaje Basal, hacia la Piscina de Sedimentación sean la superación de \numbrales (en el estero San Antonio aguas debajo de la confluencia con el cauce receptor de las aguas de', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 1,631 training samples * Columns: <code>query</code>, <code>sentence</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | query | sentence | label | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 5 tokens</li><li>mean: 25.07 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 396.03 tokens</li><li>max: 2048 tokens</li></ul> | <ul><li>0: ~60.20%</li><li>1: ~39.80%</li></ul> | * Samples: | query | sentence | label | |:------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>se detallan antecedentes hidrogeológicos?</code> | <code>Familias típicas son las tetracondráceas, centrolepidáceas, eucrifiáceas, donatiáceas, etc. Hay <br>una familia endémica, la misodendrácea, y numerosos géneros endémicos: Fitzroya, Austrocedrus,</code> | <code>0</code> | | <code>¿Se utilizaron antecedentes hidrogeológicos?</code> | <code>En el área de las quebradas las especies encontradas corresponden a fauna altiplánica típica <br>descrita para otros ambientes similares de la Cordillera de Domeyko, como son Río Frío o el <br>Salar de Punta Negra, estando ausente casi en su totalidad el componente faunístico asociado <br>a cuerpos de agua, como vegas, bofedales y sal ares, que se encuentran en terrenos planos o <br>de escasa pendiente. 
<br> <br>Ocho de las especies encontradas están asociadas directamente a afloramientos de agua como <br>son la perdicita cojón, tuco, la lauchita andina y la vizcacha, todos ellos herbívoros que comen <br>brotes tiernos o raíces no disponibles en lugares de mayor aridez. <br> <br>También están la dormilona de nuca rojiza y el churrete de alas blancas, ambas aves <br>insectívoras que encuentran su principal fuente de alimento en estos cuerpos de agua, y por <br>último la perdicita cordillerana que está asociada a bofedales, bordes de salares o vegas alto <br>andinas y la vicuña que depende en forma directa de estos aflorami...</code> | <code>0</code> | | <code>Indica si se utiliza Proctor Modificado, o Normal o Estándar para compactar el relave filtrado, y cuál es el nivel de compactación</code> | <code>Retiro de Equipos <br>La medida “retiro de equipos” considera desmontaje y retiro de los equipos existentes en las diferentes áreas de la <br>planta de procesos y en aquellas instalaciones de apoyo que los requieran. Esto se realizará con apoyo de equipos <br>mecánicos. Dentro de esta medida de cierre se considera la actividad de carguío, transporte y disposición final de <br>las estructuras retiradas como residuo industrial no peligroso en un sitio autorizado fuera de la faena. <br> Retiro de Tubería <br>Se considera el retiro de las tuberías que se encuentren sobre la superficie. Para el acueducto se considera que 2 de <br>los 13 km estarán en superficie, por lo que deberán ser removidos. <br> Señalización <br>Se instalará señalización de advertencia de peligro en los accesos y perímetros del rajo, botaderos d e estériles y <br>depósito de relaves filtrados, el trazado de los canales del sistema de manejo de aguas de no contacto, así como en <br>los accesos a la faena y en el área de extracción de agua fresca. 
Esta...</code> | <code>0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 100 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 100 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - 
`dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:-------:|:----:|:-------------:| | 2.3902 | 100 | 4.6309 | | 4.7805 | 200 | 2.8633 | | 7.1463 | 300 | 3.3706 | | 9.5366 | 400 | 3.1291 | | 11.9268 | 500 | 4.1928 | | 14.2927 | 600 | 2.4727 | | 16.6829 | 700 | 1.3169 | | 19.0488 | 800 | 0.963 | | 21.4390 | 900 | 0.9983 | | 23.8293 | 1000 | 0.4952 | | 26.1951 | 1100 | 0.2591 | | 28.5854 | 1200 | 0.3327 | | 30.9756 | 1300 | 0.0557 | | 33.3415 | 1400 | 0.0996 | | 35.7317 | 1500 | 0.0834 | | 38.0976 | 1600 | 0.073 | | 40.4878 | 1700 | 0.1392 | | 42.8780 | 1800 | 0.1817 | | 45.2439 | 1900 | 0.1283 | | 47.6341 | 2000 | 0.0715 | | 50.0244 | 2100 | 0.0306 | | 52.3902 | 2200 | 0.0123 | | 54.7805 | 2300 | 0.0 | | 57.1463 | 2400 | 0.0005 | | 59.5366 | 2500 | 0.0 | | 61.9268 | 2600 | 0.0 | | 64.2927 | 2700 | 0.0 | | 66.6829 | 2800 | 0.0 | | 69.0488 | 2900 | 0.0 | | 71.4390 | 3000 | 0.0 | | 73.8293 | 3100 | 0.0 | | 76.1951 | 3200 | 0.0 | | 78.5854 | 3300 | 0.0 | | 80.9756 | 3400 | 0.0 | | 83.3415 | 3500 | 0.0 | | 85.7317 | 3600 | 0.0 | | 88.0976 | 3700 | 0.0 | | 90.4878 | 3800 | 0.0 | | 92.8780 | 3900 | 0.0 | | 95.2439 | 4000 | 0.0 | | 97.6341 | 4100 | 0.0 | ### Framework Versions - Python: 3.10.16 - Sentence Transformers: 3.3.1 - Transformers: 4.48.1 - PyTorch: 2.5.1+cu124 - Accelerate: 1.3.0 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
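As a minimal sketch, the CoSENTLoss configuration above could be instantiated as follows; dataset loading and the trainer loop are omitted, and `trust_remote_code=True` is assumed because the base model ships custom modeling code.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CoSENTLoss

# Base model uses custom modeling code, hence trust_remote_code=True.
model = SentenceTransformer("nomic-ai/nomic-bert-2048", trust_remote_code=True)

# Matches the loss parameters listed above: scale=20.0 with pairwise cosine similarity
# (the default similarity function for CoSENTLoss).
loss = CoSENTLoss(model, scale=20.0)
```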
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_test_config_20250430_001020
gradientrouting-spar
2025-04-30T00:13:59Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:13:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dinht210/qwen-tuned-insecure
dinht210
2025-04-30T00:10:20Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Coder-7B-Instruct", "region:us" ]
null
2025-04-29T21:19:26Z
--- base_model: "unsloth/Qwen2.5-Coder-7B-Instruct" library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
deeponh/bengali_9b_2b_D3
deeponh
2025-04-30T00:07:23Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:02:59Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
johnsonwa84/Helsinki-en-es-SpanglishFT
johnsonwa84
2025-04-30T00:06:12Z
0
0
null
[ "safetensors", "base_model:Helsinki-NLP/opus-mt-en-es", "base_model:finetune:Helsinki-NLP/opus-mt-en-es", "license:cc-by-4.0", "region:us" ]
null
2025-04-29T23:13:25Z
--- license: cc-by-4.0 base_model: - Helsinki-NLP/opus-mt-en-es --- # Helsinki Fine-Tuned on Spanglish → Spanish This model is fine-tuned from `Helsinki-NLP/opus-mt-en-es` to translate Spanglish (code-mixed English/Spanish) into monolingual Spanish, using synthetic parallel data derived from literary sources. ## Usage ```python from transformers import MarianMTModel, MarianTokenizer model = MarianMTModel.from_pretrained("johnsonwa84/Helsinki-en-es-SpanglishFT") tokenizer = MarianTokenizer.from_pretrained("johnsonwa84/Helsinki-en-es-SpanglishFT") ```
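The loading snippet above can be extended into a full translation call. This is only a minimal sketch; the example Spanglish sentence is a made-up placeholder, not taken from the training data.

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "johnsonwa84/Helsinki-en-es-SpanglishFT"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Hypothetical code-mixed input; replace with your own Spanglish text.
text = "Voy al store to buy some leche for the kids."
batch = tokenizer([text], return_tensors="pt", padding=True)
generated = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```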
johnsonwa84/mBART50-en-es-SpanglishFT
johnsonwa84
2025-04-30T00:05:34Z
0
0
null
[ "safetensors", "base_model:facebook/mbart-large-50-many-to-many-mmt", "base_model:finetune:facebook/mbart-large-50-many-to-many-mmt", "license:cc-by-4.0", "region:us" ]
null
2025-04-29T23:49:28Z
--- license: cc-by-4.0 base_model: - facebook/mbart-large-50-many-to-many-mmt --- # mBART50 Fine-Tuned on Spanglish → Spanish This model is fine-tuned from `facebook/mbart-large-50-many-to-many-mmt`, using its en→es configuration, to translate Spanglish (code-mixed English/Spanish) into monolingual Spanish, using synthetic parallel data derived from literary sources. ## Usage ```python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast model = MBartForConditionalGeneration.from_pretrained("johnsonwa84/mBART50-en-es-SpanglishFT") tokenizer = MBart50TokenizerFast.from_pretrained("johnsonwa84/mBART50-en-es-SpanglishFT") ```
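As with the Helsinki model above, the loading snippet can be completed into an end-to-end translation. This is a hedged sketch: the `en_XX`/`es_XX` language codes follow standard mBART-50 conventions and are assumptions about how this fine-tune expects to be called.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "johnsonwa84/mBART50-en-es-SpanglishFT"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

# mBART-50 needs explicit language codes; en_XX -> es_XX is an assumption here.
tokenizer.src_lang = "en_XX"
text = "Voy al store to buy some leche for the kids."
batch = tokenizer(text, return_tensors="pt")
generated = model.generate(
    **batch,
    forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"],
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```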
memevis/supp31
memevis
2025-04-29T23:58:24Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T23:57:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
chenggong1995/Qwen-2.5-Base-7B-gen8-math3to5-ghpo-cold10-3Dhint-prompt1-cosine
chenggong1995
2025-04-29T23:58:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "dataset:chenggong1995/math3to5", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T08:21:08Z
--- base_model: Qwen/Qwen2.5-7B datasets: chenggong1995/math3to5 library_name: transformers model_name: Qwen-2.5-Base-7B-gen8-math3to5-ghpo-cold10-3Dhint-prompt1-cosine tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-Base-7B-gen8-math3to5-ghpo-cold10-3Dhint-prompt1-cosine This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the [chenggong1995/math3to5](https://huggingface.co/datasets/chenggong1995/math3to5) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chenggong1995/Qwen-2.5-Base-7B-gen8-math3to5-ghpo-cold10-3Dhint-prompt1-cosine", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gongc1995-city-university-of-hong-kong/huggingface/runs/ub4x26v2) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0.dev0 - Transformers: 4.52.0.dev0 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF
mradermacher
2025-04-29T23:55:00Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "gemma3", "en", "base_model:reedmayhew/Grok-3-reasoning-gemma3-12B-distilled-HF", "base_model:quantized:reedmayhew/Grok-3-reasoning-gemma3-12B-distilled-HF", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-29T22:50:18Z
--- base_model: reedmayhew/Grok-3-reasoning-gemma3-12B-distilled-HF language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - gemma3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/reedmayhew/Grok-3-reasoning-gemma3-12B-distilled-HF <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF/resolve/main/Grok-3-reasoning-gemma3-12B-distilled-HF.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF/resolve/main/Grok-3-reasoning-gemma3-12B-distilled-HF.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF/resolve/main/Grok-3-reasoning-gemma3-12B-distilled-HF.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF/resolve/main/Grok-3-reasoning-gemma3-12B-distilled-HF.Q3_K_L.gguf) | Q3_K_L | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF/resolve/main/Grok-3-reasoning-gemma3-12B-distilled-HF.IQ4_XS.gguf) | IQ4_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF/resolve/main/Grok-3-reasoning-gemma3-12B-distilled-HF.Q4_K_S.gguf) | Q4_K_S | 7.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF/resolve/main/Grok-3-reasoning-gemma3-12B-distilled-HF.Q4_K_M.gguf) | Q4_K_M | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF/resolve/main/Grok-3-reasoning-gemma3-12B-distilled-HF.Q5_K_S.gguf) | Q5_K_S | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF/resolve/main/Grok-3-reasoning-gemma3-12B-distilled-HF.Q5_K_M.gguf) | Q5_K_M | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF/resolve/main/Grok-3-reasoning-gemma3-12B-distilled-HF.Q6_K.gguf) | Q6_K | 9.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF/resolve/main/Grok-3-reasoning-gemma3-12B-distilled-HF.Q8_0.gguf) | Q8_0 | 12.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to 
questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
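For readers who want a concrete starting point, here is a hedged sketch of loading one of the quants listed above with the `llama-cpp-python` bindings. The Q4_K_M filename comes from the table; whether your installed build supports this model's architecture is an assumption you should verify.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from this repo (filename taken from the table above).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Grok-3-reasoning-gemma3-12B-distilled-HF-GGUF",
    filename="Grok-3-reasoning-gemma3-12B-distilled-HF.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain in one sentence what a GGUF quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```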
SurAyush/finetuned-bert-squad-accelerate
SurAyush
2025-04-29T23:51:20Z
0
0
null
[ "safetensors", "bert", "question-answering", "en", "dataset:sentence-transformers/squad", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:mit", "region:us" ]
question-answering
2025-04-29T20:48:00Z
--- license: mit datasets: - sentence-transformers/squad language: - en metrics: - squad base_model: - google-bert/bert-base-cased pipeline_tag: question-answering ---
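The card above currently contains only metadata. Based on its declared `question-answering` pipeline tag and SQuAD training data, a minimal usage sketch might look like the following; the question and context are made-up placeholders.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="SurAyush/finetuned-bert-squad-accelerate")

# Placeholder inputs; any SQuAD-style extractive QA pair works here.
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This BERT-base-cased model was fine-tuned on the SQuAD dataset using Accelerate.",
)
print(result["answer"], round(result["score"], 3))
```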
sujeevraja/ppo-Huggy
sujeevraja
2025-04-29T23:47:41Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-04-29T23:47:23Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: sujeevraja/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ahmedch28/mistral_7b_finetuned_pr_v7
ahmedch28
2025-04-29T23:46:57Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-29T23:46:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ahmednogood/legal-ft-8e57573c-3113-40ec-9cc6-71134cd9eade
Ahmednogood
2025-04-29T23:46:49Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:157", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:Snowflake/snowflake-arctic-embed-l", "base_model:finetune:Snowflake/snowflake-arctic-embed-l", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-04-29T23:45:59Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:157 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: Snowflake/snowflake-arctic-embed-l widget: - source_sentence: What is the cost of the ChatGPT Pro subscription that provides access to the o1 Pro model? sentences: - 'Nothing yet from Anthropic or Meta but I would be very surprised if they don’t have their own inference-scaling models in the works. Meta published a relevant paper Training Large Language Models to Reason in a Continuous Latent Space in December. Was the best currently available LLM trained in China for less than $6m? Not quite, but almost! It does make for a great attention-grabbing headline. The big news to end the year was the release of DeepSeek v3—dropped on Hugging Face on Christmas Day without so much as a README file, then followed by documentation and a paper the day after that.' - 'OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely available from its launch in June. This was a momentus change, because for the previous year free users had mostly been restricted to GPT-3.5 level models, meaning new users got a very inaccurate mental model of what a capable LLM could actually do. That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT Pro. This $200/month subscription service is the only way to access their most capable model, o1 Pro. Since the trick behind the o1 series (and the future models it will undoubtedly inspire) is to expend more compute time to get better results, I don’t think those days of free access to the best available models are likely to return.' - '“Agents” still haven’t really happened yet I find the term “agents” extremely frustrating. It lacks a single, clear and widely understood meaning... but the people who use the term never seem to acknowledge that. If you tell me that you are building “agents”, you’ve conveyed almost no information to me at all. Without reading your mind I have no way of telling which of the dozens of possible definitions you are talking about.' - source_sentence: Why are evals considered important in the discussion about prompt injection and model robustness? sentences: - "Industry’s Tardy Response to the AI Prompt Injection Vulnerability on RedMonk\ \ Conversations\n\n\nPosted 31st December 2023 at 11:59 pm · Follow me on Mastodon,\ \ Bluesky, Twitter or subscribe to my newsletter\n\n\nMore recent articles\n\n\ Qwen 3 offers a case study in how to effectively release a model - 29th April\ \ 2025\nWatching o3 guess a photo's location is surreal, dystopian and wildly\ \ entertaining - 26th April 2025\nExploring Promptfoo via Dave Guarino's SNAP\ \ evals - 24th April 2025\n\n\n \n\n\nThis is Stuff we figured out about AI in\ \ 2023 by Simon Willison, posted on 31st December 2023.\n\nPart of series LLMs\ \ annual review\n\nStuff we figured out about AI in 2023 - Dec. 31, 2023, 11:59\ \ p.m. \nThings we learned about LLMs in 2024 - Dec. 31, 2024, 6:07 p.m." - 'Prompt injection is a natural consequence of this gulibility. I’ve seen precious little progress on tackling that problem in 2024, and we’ve been talking about it since September 2022. I’m beginning to see the most popular idea of “agents” as dependent on AGI itself. A model that’s robust against gulliblity is a very tall order indeed. Evals really matter Anthropic’s Amanda Askell (responsible for much of the work behind Claude’s Character):' - 'Language Models are gullible. 
They “believe” what we tell them—what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt. In order to be useful tools for us, we need them to believe what we feed them! But it turns out a lot of the things we want to build need them not to be gullible. Everyone wants an AI personal assistant. If you hired a real-world personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited.' - source_sentence: What improvements does the ChatGPT Advanced Voice mode bring to the user experience during walks? sentences: - '260 input tokens, 92 output tokens. Cost approximately 0.0024 cents (that’s less than a 400th of a cent). This increase in efficiency and reduction in price is my single favourite trend from 2024. I want the utility of LLMs at a fraction of the energy cost and it looks like that’s what we’re getting. Multimodal vision is common, audio and video are starting to emerge My butterfly example above illustrates another key trend from 2024: the rise of multi-modal LLMs. A year ago the single most notable example of these was GPT-4 Vision, released at OpenAI’s DevDay in November 2023. Google’s multi-modal Gemini 1.0 was announced on December 7th 2023 so it also (just) makes it into the 2023 window.' - 'When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August through September) it was spectacular. I’ve been using it extensively on walks with my dog and it’s amazing how much the improvement in intonation elevates the material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs. Even more fun: Advanced Voice mode can do accents! Here’s what happened when I told it I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish.' - 'These price drops are driven by two factors: increased competition and increased efficiency. The efficiency thing is really important for everyone who is concerned about the environmental impact of LLMs. These price drops tie directly to how much energy is being used for running prompts. There’s still plenty to worry about with respect to the environmental impact of the great AI datacenter buildout, but a lot of the concerns over the energy cost of individual prompts are no longer credible. Here’s a fun napkin calculation: how much would it cost to generate short descriptions of every one of the 68,000 photos in my personal photo library using Google’s Gemini 1.5 Flash 8B (released in October), their cheapest model?' - source_sentence: What type of food is visible inside the red shallow dish in the photo? sentences: - 'On the other hand, as software engineers we are better placed to take advantage of this than anyone else. We’ve all been given weird coding interns—we can use our deep knowledge to prompt them to solve coding problems more effectively than anyone else can. The ethics of this space remain diabolically complex In September last year Andy Baio and I produced the first major story on the unlicensed training data behind Stable Diffusion. Since then, almost every major LLM (and most of the image generation models) have also been trained on unlicensed data.' - 'Today $30/mTok gets you OpenAI’s most expensive model, o1. GPT-4o is $2.50 (12x cheaper than GPT-4) and GPT-4o mini is $0.15/mTok—200x cheaper than GPT-4, nearly 7x cheaper than GPT-3.5 and massively more capable than that model. 
Other model providers charge even less. Anthropic’s Claude 3 Haiku (from March, but still their cheapest model) is $0.25/mTok. Google’s Gemini 1.5 Flash is $0.075/mTok and their Gemini 1.5 Flash 8B is $0.0375/mTok—that’s 27x cheaper than GPT-3.5 Turbo last year. I’ve been tracking these pricing changes under my llm-pricing tag.' - 'Against this photo of butterflies at the California Academy of Sciences: A shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange slices of fruit are visible inside the dish. Two butterflies are positioned in the feeder, one is a dark brown/black butterfly with white/cream-colored markings. The other is a large, brown butterfly with patterns of lighter brown, beige, and black markings, including prominent eye spots. The larger brown butterfly appears to be feeding on the fruit.' - source_sentence: What challenges does the author mention about implementing evaluation patterns for their work? sentences: - 'I’m still trying to figure out the best patterns for doing this for my own work. Everyone knows that evals are important, but there remains a lack of great guidance for how to best implement them—I’m tracking this under my evals tag. My SVG pelican riding a bicycle benchmark is a pale imitation of what a real eval suite should look like. Apple Intelligence is bad, Apple’s MLX library is excellent As a Mac user I’ve been feeling a lot better about my choice of platform this year. Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU was a huge disadvantage in terms of trying out new models.' - 'The biggest innovation here is that it opens up a new way to scale a model: instead of improving model performance purely through additional compute at training time, models can now take on harder problems by spending more compute on inference. The sequel to o1, o3 (they skipped “o2” for European trademark reasons) was announced on 20th December with an impressive result against the ARC-AGI benchmark, albeit one that likely involved more than $1,000,000 of compute time expense! o3 is expected to ship in January. I doubt many people have real-world problems that would benefit from that level of compute expenditure—I certainly don’t!—but it appears to be a genuine next step in LLM architecture for taking on much harder problems.' - 'I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly great model) on my iPhone. You can install several different apps to get your own, local, completely private LLM. My own LLM project provides a CLI tool for running an array of different models via plugins. You can even run them entirely in your browser using WebAssembly and the latest Chrome! Hobbyists can build their own fine-tuned models I said earlier that building an LLM was still out of reach of hobbyists. That may be true for training from scratch, but fine-tuning one of those models is another matter entirely.' 
pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@1 value: 0.875 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9583333333333334 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 1.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.875 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3194444444444444 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.20000000000000004 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.10000000000000002 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.875 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.9583333333333334 name: Cosine Recall@3 - type: cosine_recall@5 value: 1.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9455223360506796 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9270833333333334 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.9270833333333334 name: Cosine Map@100 --- # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Ahmednogood/legal-ft-8e57573c-3113-40ec-9cc6-71134cd9eade") # Run inference sentences = [ 'What challenges does the author mention about implementing evaluation patterns for their work?', 'I’m still trying to figure out the best patterns for doing this for my own work. Everyone knows that evals are important, but there remains a lack of great guidance for how to best implement them—I’m tracking this under my evals tag. My SVG pelican riding a bicycle benchmark is a pale imitation of what a real eval suite should look like.\nApple Intelligence is bad, Apple’s MLX library is excellent\nAs a Mac user I’ve been feeling a lot better about my choice of platform this year.\nLast year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU was a huge disadvantage in terms of trying out new models.', 'I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly great model) on my iPhone. You can install several different apps to get your own, local, completely private LLM. My own LLM project provides a CLI tool for running an array of different models via plugins.\nYou can even run them entirely in your browser using WebAssembly and the latest Chrome!\nHobbyists can build their own fine-tuned models\nI said earlier that building an LLM was still out of reach of hobbyists. That may be true for training from scratch, but fine-tuning one of those models is another matter entirely.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.875 | | cosine_accuracy@3 | 0.9583 | | cosine_accuracy@5 | 1.0 | | cosine_accuracy@10 | 1.0 | | cosine_precision@1 | 0.875 | | cosine_precision@3 | 0.3194 | | cosine_precision@5 | 0.2 | | cosine_precision@10 | 0.1 | | cosine_recall@1 | 0.875 | | cosine_recall@3 | 0.9583 | | cosine_recall@5 | 1.0 | | cosine_recall@10 | 1.0 | | **cosine_ndcg@10** | **0.9455** | | cosine_mrr@10 | 0.9271 | | cosine_map@100 | 0.9271 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 157 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 157 samples: | | sentence_0 | sentence_1 | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 2 tokens</li><li>mean: 20.82 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.45 tokens</li><li>max: 214 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:---------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What was the typical context length accepted by most models last year?</code> | <code>Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.</code> | | <code>How many tokens can Google’s Gemini series accept in its models?</code> | <code>Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.</code> | | <code>What are some companies mentioned that have developed multi-modal audio models?</code> | <code>Your browser does not support the audio element.<br><br>OpenAI aren’t the only group with a multi-modal audio model. Google’s Gemini also accepts audio input, and the Google Gemini apps can speak in a similar way to ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s meant to roll out in Q1 of 2025.<br>Google’s NotebookLM, released in September, took audio output to a new level by producing spookily realistic conversations between two “podcast hosts” about anything you fed into their tool. 
They later added custom instructions, so naturally I turned them into pelicans:<br><br><br>Your browser does not support the audio element.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `num_train_epochs`: 10 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - 
`auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | cosine_ndcg@10 | |:-----:|:----:|:--------------:| | 1.0 | 16 | 0.9554 | | 2.0 | 32 | 0.9539 | | 3.0 | 48 | 0.9484 | | 3.125 | 50 | 0.9484 | | 4.0 | 64 | 0.9484 | | 5.0 | 80 | 0.9484 | | 6.0 | 96 | 0.9484 | | 6.25 | 100 | 0.9484 | | 7.0 | 112 | 0.9484 | | 8.0 | 128 | 0.9484 | | 9.0 | 144 | 0.9455 | | 9.375 | 150 | 0.9455 | | 10.0 | 160 | 0.9455 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.6.0+cu124 - Accelerate: 1.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
AmirH98/q_Taxi-v3
AmirH98
2025-04-29T23:45:56Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-04-29T23:45:54Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q_Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.74 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="AmirH98/q_Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
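The usage snippet above stops at building the environment. A minimal evaluation sketch follows — it assumes a gymnasium-style API and that the loaded pickle follows the Deep RL course convention of exposing the table under a `"qtable"` key; both are assumptions, not guarantees of this repo.

```python
# Hedged sketch: greedy rollout of the loaded Q-table (dict keys and env API are assumptions).
import gymnasium as gym
import numpy as np

def evaluate(model, n_episodes=100):
    env = gym.make(model["env_id"])
    qtable = model["qtable"]
    returns = []
    for _ in range(n_episodes):
        state, _ = env.reset()
        done, total = False, 0.0
        while not done:
            action = int(np.argmax(qtable[state]))  # greedy action for this state
            state, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    return float(np.mean(returns)), float(np.std(returns))
```

A mean return close to the reported 7.52 +/- 2.74 would indicate the table loaded correctly.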
EdwardTurner/Qwen2.5-14B-Instruct_R_3_3
EdwardTurner
2025-04-29T23:45:37Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-29T22:19:19Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hxyscott/math-decontamination-4.1-mini-rank32-error_removed-7epoch
hxyscott
2025-04-29T23:45:11Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-29T14:05:28Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
darkc0de/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF
darkc0de
2025-04-29T23:43:21Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition", "base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-29T23:42:07Z
--- base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # darkc0de/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF This model was converted to GGUF format from [`cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo darkc0de/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo darkc0de/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo darkc0de/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo darkc0de/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -c 2048 ```
Franc105/query_builder
Franc105
2025-04-29T23:43:07Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit", "base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-29T23:42:53Z
--- base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Franc105 - **License:** apache-2.0 - **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
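The card does not show how to load the upload. A hedged sketch, assuming the repo holds a LoRA adapter for the 4-bit base model (if it instead contains merged weights, a plain `AutoModelForCausalLM.from_pretrained` call works):

```python
# Hedged loading sketch; whether this repo stores an adapter or merged weights is an assumption.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Franc105/query_builder")
model = AutoPeftModelForCausalLM.from_pretrained("Franc105/query_builder", device_map="auto")
```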
hxyscott/math-decontamination-4.1-mini-rank64-error_removed-7epoch
hxyscott
2025-04-29T23:42:41Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-29T14:05:36Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
no0ne-97/misoginia-roberta-base-bne-V2
no0ne-97
2025-04-29T23:42:21Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-29T23:41:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
darkc0de/Xordolphtron
darkc0de
2025-04-29T23:41:23Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "base_model:TroyDoesAI/BlackSheep-24B", "base_model:merge:TroyDoesAI/BlackSheep-24B", "base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition", "base_model:merge:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T23:32:05Z
--- base_model: - cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition - TroyDoesAI/BlackSheep-24B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [TroyDoesAI/BlackSheep-24B](https://huggingface.co/TroyDoesAI/BlackSheep-24B) as a base. ### Models Merged The following models were included in the merge: * [cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition - model: TroyDoesAI/BlackSheep-24B merge_method: sce base_model: TroyDoesAI/BlackSheep-24B parameters: select_topk: 0.80 dtype: float32 out_dtype: bfloat16 tokenizer: source: TroyDoesAI/BlackSheep-24B ```
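The YAML above is the full recipe; a reproduction sketch using the mergekit CLI follows. The output path and `--cuda` flag are illustrative assumptions, not taken from this card, and `pip install mergekit` is assumed.

```python
# Hedged sketch: drive the mergekit-yaml CLI from Python, assuming the YAML configuration
# above has been saved as config.yaml in the working directory.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./Xordolphtron-merge", "--cuda"],
    check=True,  # raise if the merge fails
)
```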
oliverbuterbaugh/mindenpl
oliverbuterbaugh
2025-04-29T23:39:14Z
0
0
null
[ "dataset:oliverbuterbaugh/carpark_availability", "license:mit", "region:us" ]
null
2025-04-29T09:56:19Z
--- license: mit datasets: - oliverbuterbaugh/carpark_availability ---
Enferlain/lumi-classifier
Enferlain
2025-04-29T23:38:52Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-04-29T23:38:14Z
--- license: creativeml-openrail-m ---
oliverbuterbaugh/lesjardin
oliverbuterbaugh
2025-04-29T23:38:43Z
0
0
null
[ "dataset:oliverbuterbaugh/carpark_availability", "license:mit", "region:us" ]
null
2025-04-29T09:56:39Z
--- license: mit datasets: - oliverbuterbaugh/carpark_availability ---
oliverbuterbaugh/pierrd
oliverbuterbaugh
2025-04-29T23:38:11Z
0
0
null
[ "dataset:oliverbuterbaugh/carpark_availability", "license:mit", "region:us" ]
null
2025-04-29T09:56:56Z
--- license: mit datasets: - oliverbuterbaugh/carpark_availability ---
JunHowie/Qwen3-32B-GPTQ-Int4
JunHowie
2025-04-29T23:33:05Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:2309.00071", "base_model:Qwen/Qwen3-8B-Base", "base_model:quantized:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2025-04-29T22:26:24Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE pipeline_tag: text-generation base_model: - Qwen/Qwen3-8B-Base --- # Qwen3-8B <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Qwen3 Highlights Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features: - **Uniquely support of seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within single model**, ensuring optimal performance across various scenarios. - **Significantly enhancement in its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and unthinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Qwen3-8B** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 8.2B - Number of Paramaters (Non-Embedding): 6.95B - Number of Layers: 36 - Number of Attention Heads (GQA): 32 for Q and 8 for KV - Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts). For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Quickstart The code of Qwen3 has been in the latest Hugging Face `transformers` and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following contains a code snippet illustrating how to use the model generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-8B" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True. 
) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` or to create an OpenAI-compatible API endpoint: - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3 ``` - vLLM: ```shell vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1 ``` For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3. ## Switching Between Thinking and Non-Thinking Mode > [!TIP] > The `enable_thinking` switch is also available in APIs created by SGLang and vLLM. > Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users. ### `enable_thinking=True` By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # True is the default value for enable_thinking ) ``` In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response. > [!NOTE] > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### `enable_thinking=False` We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Setting enable_thinking=False disables thinking mode ) ``` In this mode, the model will not generate any think content and will not include a `<think>...</think>` block. > [!NOTE] > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. 
Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-8B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? /no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > [!NOTE] > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-8B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content. 
# 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Processing Long Texts Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method. YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks: - Modifying the model files: In the `config.json` file, add the `rope_scaling` fields: ```json { ..., "rope_scaling": { "rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } } ``` For `llama.cpp`, you need to regenerate the GGUF file after the modification. - Passing command line arguments: For `vllm`, you can use ```shell vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072 ``` For `sglang`, you can use ```shell python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}' ``` For `llama-server` from `llama.cpp`, you can use ```shell llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 ``` > [!IMPORTANT] > If you encounter the following warning > ``` > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'} > ``` > please upgrade `transformers>=4.51.0`. > [!NOTE] > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.** > We advise adding the `rope_scaling` configuration only when processing long contexts is required. > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0. > [!NOTE] > The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance. > [!TIP] > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed. ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. 
**DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3, title = {Qwen3}, url = {https://qwenlm.github.io/blog/qwen3/}, author = {Qwen Team}, month = {April}, year = {2025} } ```
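As a concrete illustration of the Best Practices above, a thinking-mode generation call with the recommended sampling settings might look like the sketch below. It reuses the `model` and `model_inputs` objects from the Quickstart and assumes a `transformers` version recent enough to expose `min_p` (the card already requires >=4.51.0).

```python
# Sketch of the recommended thinking-mode sampling settings; the values come from the card above.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,  # adequate output length recommended in the card
    do_sample=True,        # greedy decoding is explicitly discouraged
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```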
AnomalaPictures/Aine
AnomalaPictures
2025-04-29T23:27:43Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-29T23:02:21Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: AIN --- # Aine <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `AIN` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "AIN", "lora_weights": "https://huggingface.co/AnomalaPictures/Aine/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('AnomalaPictures/Aine', weight_name='lora.safetensors') image = pipeline('AIN').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/AnomalaPictures/Aine/discussions) to add images that show off what you’ve made with this LoRA.
rohancsalvi/llama3_cognitive
rohancsalvi
2025-04-29T23:20:06Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-29T23:19:57Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** rohancsalvi - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
C10X/tqwq-Q8_0-GGUF
C10X
2025-04-29T23:20:01Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:C10X/tqwq", "base_model:quantized:C10X/tqwq", "endpoints_compatible", "region:us" ]
null
2025-04-29T23:20:00Z
--- base_model: C10X/tqwq tags: - llama-cpp - gguf-my-repo --- # C10X/tqwq-Q8_0-GGUF This model was converted to GGUF format from [`C10X/tqwq`](https://huggingface.co/C10X/tqwq) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/C10X/tqwq) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo C10X/tqwq-Q8_0-GGUF --hf-file tqwq-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo C10X/tqwq-Q8_0-GGUF --hf-file tqwq-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo C10X/tqwq-Q8_0-GGUF --hf-file tqwq-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo C10X/tqwq-Q8_0-GGUF --hf-file tqwq-q8_0.gguf -c 2048 ```
yyang12/chatmusican-testpush
yyang12
2025-04-29T23:16:37Z
0
0
transformers
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "base_model:PrunaAI/m-a-p-ChatMusician-bnb-4bit-smashed", "base_model:finetune:PrunaAI/m-a-p-ChatMusician-bnb-4bit-smashed", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T21:44:30Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers base_model: PrunaAI/m-a-p-ChatMusician-bnb-4bit-smashed widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF
mradermacher
2025-04-29T23:14:34Z
0
0
transformers
[ "transformers", "gguf", "nsfw", "explicit", "roleplay", "unaligned", "adult", "ERP", "en", "base_model:ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1", "base_model:quantized:ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-29T14:22:55Z
--- base_model: ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - nsfw - explicit - roleplay - unaligned - adult - ERP --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 3.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M 
| 7.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
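The table above names each quant file but defers usage details to an external README. As a convenience, here is a minimal sketch of fetching one of the listed files with the `huggingface_hub` client; the choice of the Q4_K_M quant and the local directory are illustrative assumptions, not part of the original card.

```python
# Minimal sketch: download one of the quants listed in the table above.
# The repo id and filename come from the card's table; the local directory is an assumption.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF",
    filename="The-Omega-Directive-Qwen3-14B-v1.1.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
    local_dir="./models",  # assumed destination
)
print(gguf_path)  # hand this path to any GGUF runtime, e.g. llama-cli -m <path>
```

Any GGUF-aware runtime (llama.cpp, llama-cpp-python, and similar tools) can then load the downloaded file.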
gdfwj/ds_merge_shoaib-Q6_K-GGUF
gdfwj
2025-04-29T23:08:55Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:gdfwj/ds_merge_shoaib", "base_model:quantized:gdfwj/ds_merge_shoaib", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-29T23:08:45Z
--- base_model: gdfwj/ds_merge_shoaib tags: - llama-cpp - gguf-my-repo --- # gdfwj/ds_merge_shoaib-Q6_K-GGUF This model was converted to GGUF format from [`gdfwj/ds_merge_shoaib`](https://huggingface.co/gdfwj/ds_merge_shoaib) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/gdfwj/ds_merge_shoaib) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo gdfwj/ds_merge_shoaib-Q6_K-GGUF --hf-file ds_merge_shoaib-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo gdfwj/ds_merge_shoaib-Q6_K-GGUF --hf-file ds_merge_shoaib-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo gdfwj/ds_merge_shoaib-Q6_K-GGUF --hf-file ds_merge_shoaib-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo gdfwj/ds_merge_shoaib-Q6_K-GGUF --hf-file ds_merge_shoaib-q6_k.gguf -c 2048 ```
nomadrp/mdpo-v1.1
nomadrp
2025-04-29T23:06:19Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-04-29T16:43:10Z
--- base_model: meta-llama/Meta-Llama-3.1-8B-Instruct library_name: transformers model_name: mdpo-v1.1 tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for mdpo-v1.1 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nomadrp/mdpo-v1.1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.15.0.dev0 - Transformers: 4.48.2 - Pytorch: 2.2.0+cu118 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
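The card above says the model was trained with DPO via TRL but does not include the training call. The snippet below is a rough, hedged sketch of a TRL DPO run, not the actual recipe behind mdpo-v1.1: the preference dataset, output directory, and batch size are placeholder choices, and the exact `DPOTrainer`/`DPOConfig` arguments differ slightly between TRL releases (the card lists 0.15.0.dev0).

```python
# Hedged sketch of a TRL DPO fine-tune; dataset and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Any preference dataset with "prompt", "chosen" and "rejected" columns works here;
# this public example dataset is NOT the one used to train mdpo-v1.1.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="dpo-sketch", per_device_train_batch_size=1)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer` in older TRL versions
)
trainer.train()
```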
tnrzdl/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_frisky_iguana
tnrzdl
2025-04-29T23:04:31Z
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am hibernating frisky iguana", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T04:05:24Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_frisky_iguana tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am hibernating frisky iguana - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_frisky_iguana This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="tnrzdl/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_frisky_iguana", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.0 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
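Like the DPO entry above, this card shows inference but not the GRPO training step. The following is a loosely hedged sketch of a TRL GRPO run with a toy length-based reward; the dataset, reward function, and hyperparameters are invented for illustration and do not describe the Gensyn swarm setup.

```python
# Hedged sketch of a TRL GRPO run; reward function and dataset are illustrative only.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters long.
    return [-abs(100 - len(completion)) for completion in completions]

train_dataset = load_dataset("trl-lib/tldr", split="train")  # example prompt dataset

training_args = GRPOConfig(output_dir="grpo-sketch", per_device_train_batch_size=1)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```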
Eddyono/Fahd
Eddyono
2025-04-29T22:57:50Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-29T22:57:50Z
--- license: apache-2.0 ---
IvesDan/Lilly
IvesDan
2025-04-29T22:57:11Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-29T22:31:50Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Lilly --- # Lilly <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Lilly` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Lilly", "lora_weights": "https://huggingface.co/IvesDan/Lilly/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('IvesDan/Lilly', weight_name='lora.safetensors') image = pipeline('Lilly').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/IvesDan/Lilly/discussions) to add images that show off what you’ve made with this LoRA.
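The diffusers example in the card stops after producing the image object. A small hedged continuation is shown below; the guidance scale, step count, and output filename are illustrative choices, not settings documented for this LoRA.

```python
# Hedged sketch: generate and save an image with this LoRA; settings are illustrative.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("IvesDan/Lilly", weight_name="lora.safetensors")

image = pipeline(
    "Lilly, portrait photo, soft window light",  # "Lilly" is the trigger word from the card
    num_inference_steps=28,                      # assumed typical FLUX.1-dev setting
    guidance_scale=3.5,                          # assumed typical FLUX.1-dev setting
).images[0]
image.save("lilly.png")                          # illustrative output path
```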
khaliloz/elaco
khaliloz
2025-04-29T22:55:01Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T22:46:05Z
--- base_model: unsloth/llama-3-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** khaliloz - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
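The upload note above gives no usage example. A hedged loading sketch follows, assuming the repository contains full merged weights rather than only a LoRA adapter; the dtype, device placement, and generation length are arbitrary choices.

```python
# Hedged sketch: run the uploaded model with the plain transformers pipeline.
# Assumes merged full weights are stored in the repo; settings are illustrative.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="khaliloz/elaco",
    torch_dtype=torch.float16,  # assumed dtype
    device_map="auto",
)
print(generator("Tell me a short story about the sea.", max_new_tokens=80)[0]["generated_text"])
```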
mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF
mradermacher
2025-04-29T22:50:24Z
7
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:longtermrisk/Qwen2.5-Coder-32B-ftjob-3653382d559c", "base_model:quantized:longtermrisk/Qwen2.5-Coder-32B-ftjob-3653382d559c", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-29T04:43:00Z
--- base_model: longtermrisk/Qwen2.5-Coder-32B-ftjob-3653382d559c language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen2 - trl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/longtermrisk/Qwen2.5-Coder-32B-ftjob-3653382d559c <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF/resolve/main/Qwen2.5-Coder-32B-ftjob-3653382d559c.Q2_K.gguf) | Q2_K | 12.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF/resolve/main/Qwen2.5-Coder-32B-ftjob-3653382d559c.Q3_K_S.gguf) | Q3_K_S | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF/resolve/main/Qwen2.5-Coder-32B-ftjob-3653382d559c.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF/resolve/main/Qwen2.5-Coder-32B-ftjob-3653382d559c.Q3_K_L.gguf) | Q3_K_L | 17.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF/resolve/main/Qwen2.5-Coder-32B-ftjob-3653382d559c.IQ4_XS.gguf) | IQ4_XS | 18.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF/resolve/main/Qwen2.5-Coder-32B-ftjob-3653382d559c.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF/resolve/main/Qwen2.5-Coder-32B-ftjob-3653382d559c.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF/resolve/main/Qwen2.5-Coder-32B-ftjob-3653382d559c.Q5_K_S.gguf) | Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF/resolve/main/Qwen2.5-Coder-32B-ftjob-3653382d559c.Q5_K_M.gguf) | Q5_K_M | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF/resolve/main/Qwen2.5-Coder-32B-ftjob-3653382d559c.Q6_K.gguf) | Q6_K | 27.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF/resolve/main/Qwen2.5-Coder-32B-ftjob-3653382d559c.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
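Besides the llama.cpp CLI route covered in the linked README, one of these static quants can also be loaded straight from Python. The sketch below is a hedged example using `llama-cpp-python`; the quant choice, context size, and prompt are assumptions, and `Llama.from_pretrained` requires a reasonably recent llama-cpp-python release plus `huggingface_hub`.

```python
# Hedged sketch: load one static quant from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen2.5-Coder-32B-ftjob-3653382d559c-GGUF",
    filename="Qwen2.5-Coder-32B-ftjob-3653382d559c.Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=4096,  # assumed context window
)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```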
takedakoji00/Llama-3.1-8B-Instruct-custom-qg-flant5-wo-hint-9th_val_ED_1000ep_rm_empty
takedakoji00
2025-04-29T22:49:25Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-29T02:13:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
no0ne-97/misoginia-bert-base-spanish-wwm-cased-V2
no0ne-97
2025-04-29T22:48:34Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-29T22:48:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anassaleh218/blenderbot-lora-Convai2-Ch100
anassaleh218
2025-04-29T22:46:34Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:facebook/blenderbot-400M-distill", "base_model:adapter:facebook/blenderbot-400M-distill", "license:apache-2.0", "region:us" ]
null
2025-04-29T22:42:04Z
--- library_name: peft license: apache-2.0 base_model: facebook/blenderbot-400M-distill tags: - generated_from_trainer model-index: - name: blenderbot-lora-Convai2-Ch100 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # blenderbot-lora-Convai2-Ch100 This model is a fine-tuned version of [facebook/blenderbot-400M-distill](https://huggingface.co/facebook/blenderbot-400M-distill) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
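The card above records training hyperparameters but no inference example. Below is a hedged loading sketch, assuming the repository stores a standard PEFT adapter on top of blenderbot-400M-distill; the prompt and generation length are arbitrary.

```python
# Hedged sketch: attach the LoRA adapter to its blenderbot base model and generate one reply.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "facebook/blenderbot-400M-distill"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, "anassaleh218/blenderbot-lora-Convai2-Ch100")

inputs = tokenizer("Hi! What do you like to do on weekends?", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)  # generation length is an arbitrary choice
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```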
Otavio13245/Tycoon
Otavio13245
2025-04-29T22:43:55Z
0
0
null
[ "license:cc-by-sa-3.0", "region:us" ]
null
2025-04-29T22:43:54Z
--- license: cc-by-sa-3.0 ---