| Column | Dtype | Min | Max |
|:---|:---|:---|:---|
| modelId | string (lengths) | 5 | 139 |
| author | string (lengths) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-03 00:41:34 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (466 classes) | | |
| tags | sequence (lengths) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-03 00:34:44 |
| card | string (lengths) | 11 | 1.01M |
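Each row below is one Hugging Face model record with the columns above. As a minimal sketch of how such a dump can be loaded and filtered with the `datasets` library (the repo id `your-org/hf-model-cards` is a hypothetical placeholder; the actual dataset name is not given in this document):

```python
from datasets import load_dataset

# hypothetical repo id; substitute the dataset this dump actually came from
ds = load_dataset("your-org/hf-model-cards", split="train")

# e.g. keep only text-generation models with at least 100 downloads
popular = ds.filter(
    lambda row: row["pipeline_tag"] == "text-generation" and row["downloads"] >= 100
)
print(popular[0]["modelId"], popular[0]["likes"])
```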
keanteng/sesame-csm-elise-lora
keanteng
2025-06-01T03:00:05Z
0
0
transformers
[ "transformers", "safetensors", "generative-ai", "text-to-speech", "en", "dataset:MrDragonFox/Elise", "base_model:sesame/csm-1b", "base_model:finetune:sesame/csm-1b", "license:agpl-3.0", "endpoints_compatible", "region:us" ]
text-to-speech
2025-06-01T00:44:14Z
---
license: agpl-3.0
datasets:
- MrDragonFox/Elise
language:
- en
base_model:
- sesame/csm-1b
pipeline_tag: text-to-speech
library_name: transformers
tags:
- generative-ai
---

# CSM Elise Voice Model LoRA

This model is a fine-tuned version of [sesame/csm-1b](https://huggingface.co/sesame/csm-1b) using the [Elise dataset](https://huggingface.co/datasets/MrDragonFox/Elise) with LoRA. Sample output files are included in the repository.

The sound quality appears better than full-parameter fine-tuning, though more tweaking is needed to ensure consistent performance. The samples exhibit two distinct voices (soft and vibrant) depending on the prompt, and performance on longer token sequences remains to be validated. A larger training dataset would be required for more consistent voice output, as the current dataset is small and limited.

## Model Details

- **Base Model**: sesame/csm-1b
- **Training Data**: MrDragonFox/Elise dataset
- **Fine-tuning Approach**: Voice cloning through conditional speech generation using LoRA
- **Voice Characteristics**: [Describe voice qualities]
- **Training Parameters**:
  - Learning Rate: 1e-5
  - Epochs: 4
  - Batch Size: 1 with gradient accumulation steps of 4

## Quick Start

```python
import torch
from transformers import CsmForConditionalGeneration, AutoProcessor
from peft import PeftModel
import soundfile as sf
from IPython.display import Audio, display

# Device setup
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load model and processor
base_model_id = "sesame/csm-1b"
adapter_model_id = "keanteng/sesame-csm-elise-lora"  # the uploaded adapter

# Load processor
processor = AutoProcessor.from_pretrained(base_model_id)

# Load base model
base_model = CsmForConditionalGeneration.from_pretrained(
    base_model_id,
    device_map=device,
    torch_dtype=torch.float16  # Use half precision for faster inference
)

# Load adapter and merge weights
model = PeftModel.from_pretrained(base_model, adapter_model_id)
model = model.merge_and_unload()  # Merge adapter weights into the base model

# Optimize for generation
model.generation_config.max_length = 256
model.generation_config.use_cache = True
model.generation_config.cache_implementation = "static"
if hasattr(model, "depth_decoder"):
    model.depth_decoder.generation_config.cache_implementation = "static"
```

```python
# Define a simple input
conversation = [
    {"role": "0", "content": [
        {"type": "text", "text": "Hello! I'm so happy to see you today!"}
    ]},
]

# Process input
inputs = processor.apply_chat_template(
    conversation,
    tokenize=True,
    return_dict=True,
).to(device)

# Generate audio
audio = model.generate(**inputs, output_audio=True)

# Convert to numpy and save
audio_cpu = audio[0].to(torch.float32).cpu().numpy()
output_file = "output.wav"
sf.write(output_file, audio_cpu, 24000)

# Play audio if in a notebook
try:
    display(Audio(output_file))
except Exception:
    print(f"Audio saved to {output_file}")
```
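The training parameters listed above map naturally onto a PEFT setup. A minimal sketch of how that configuration could look; the LoRA rank, alpha, and target modules are assumptions, since the card does not state them:

```python
from peft import LoraConfig, get_peft_model
from transformers import CsmForConditionalGeneration, TrainingArguments

base = CsmForConditionalGeneration.from_pretrained("sesame/csm-1b")

# r / lora_alpha / target_modules are assumptions; the card does not state them
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(base, lora_config)

# hyperparameters taken directly from the card above
args = TrainingArguments(
    output_dir="csm-elise-lora",
    learning_rate=1e-5,
    num_train_epochs=4,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
)
```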
CALISTA-INDUSTRY/gemma_3_1B_reasoning_multimodal_en_ft_v2
CALISTA-INDUSTRY
2025-06-01T02:57:04Z
126
0
transformers
[ "transformers", "safetensors", "gguf", "gemma3_text", "text-generation", "multimodal", "reasoning", "fine-tuned", "gemma", "vision-language", "instruction-tuned", "conversational", "en", "dataset:openai/gsm8k", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:quantized:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T08:11:55Z
---
language:
- en
license: gemma
tags:
- multimodal
- reasoning
- fine-tuned
- gemma
- vision-language
- instruction-tuned
metrics:
- accuracy
- f1
- bleu
base_model:
- unsloth/gemma-3-1b-it-unsloth-bnb-4bit
library_name: transformers
pipeline_tag: text-generation
datasets:
- openai/gsm8k
---

# Model Card for CALISTA-INDUSTRY/gemma_3_1B_reasoning_multimodal_en_ft_v2

## Model Details

- **Developed by**: Rizky Sulaeman & Politeknik Negeri Indramayu
- **Model type**: Fine-tuned multimodal large language model
- **Language(s)**: English
- **License**: Gemma
- **Finetuned from**: unsloth/gemma-3-1b-it-unsloth-bnb-4bit

## Model Description

`gemma_3_1B_reasoning_multimodal_en_ft_v2` is a fine-tuned version of the Gemma 3 1B model, enhanced for multimodal reasoning tasks. It integrates both visual and textual inputs to perform complex reasoning, making it suitable for applications that require understanding and interpreting combined modalities.

## Intended Uses & Limitations

### Intended Uses

- Visual Question Answering (VQA)
- Image Captioning
- Multimodal Dialogue Systems
- Instruction Following with Visual Inputs

### Limitations

- Performance may degrade on non-English inputs.
- May not generalize well to domains significantly different from the training data.
- Not suitable for real-time applications without further optimization.

## How to Use

```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image
import requests

# Load model and processor
model = AutoModelForCausalLM.from_pretrained("CALISTA-INDUSTRY/gemma_3_1B_reasoning_multimodal_en_ft_v2")
processor = AutoProcessor.from_pretrained("CALISTA-INDUSTRY/gemma_3_1B_reasoning_multimodal_en_ft_v2")

# Prepare inputs
image = Image.open(requests.get("URL_TO_IMAGE", stream=True).raw)
prompt = "Describe the scene in the image."
inputs = processor(text=prompt, images=image, return_tensors="pt")

# Generate response
outputs = model.generate(**inputs)
response = processor.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Citation

```bibtex
@misc{calista2025gemma3reasoning,
  title={CALISTA-INDUSTRY/gemma_3_1B_reasoning_multimodal_en_ft_v2},
  author={CALISTA INDUSTRY},
  year={2025},
  url={https://huggingface.co/CALISTA-INDUSTRY/gemma_3_1B_reasoning_multimodal_en_ft_v2}
}
```
TheGardener/KD-MLP-qwen2.5-0.41B-mmlu-epoch-1st
TheGardener
2025-06-01T02:55:15Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T02:54:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
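The getting-started section above is left unfilled. As a hedged placeholder, a minimal loading sketch based on the repo's tags (`qwen2`, `text-generation`, `conversational`); the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheGardener/KD-MLP-qwen2.5-0.41B-mmlu-epoch-1st"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# chat-style prompt, since the repo is tagged "conversational"
messages = [{"role": "user", "content": "Briefly explain knowledge distillation."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```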
mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF
mradermacher
2025-06-01T02:52:16Z
162
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am lumbering grazing antelope", "unsloth", "trl", "en", "base_model:romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope", "base_model:quantized:romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-06T17:55:48Z
--- base_model: romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope language: - en library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope quantized_by: mradermacher tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am lumbering grazing antelope - unsloth - trl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.f16.gguf) | f16 | 1.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
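Beyond the llama.cpp CLI, these quants can also be pulled and run from Python. A minimal sketch using the `llama-cpp-python` bindings (requires `llama-cpp-python` and `huggingface_hub`); the choice of the Q4_K_M file and the generation settings are illustrative assumptions:

```python
from llama_cpp import Llama

# downloads the matching GGUF file from the repo via huggingface_hub
llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF",
    filename="*Q4_K_M.gguf",  # one of the "fast, recommended" quants listed above
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```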
Triangle104/Qwen3-30B-A3B-Q8_0-GGUF
Triangle104
2025-06-01T02:52:03Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-30B-A3B", "base_model:quantized:Qwen/Qwen3-30B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-01T02:49:38Z
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- llama-cpp
- gguf-my-repo
---

# Triangle104/Qwen3-30B-A3B-Q8_0-GGUF

This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q8_0-GGUF --hf-file qwen3-30b-a3b-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q8_0-GGUF --hf-file qwen3-30b-a3b-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q8_0-GGUF --hf-file qwen3-30b-a3b-q8_0.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q8_0-GGUF --hf-file qwen3-30b-a3b-q8_0.gguf -c 2048
```
mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF
mradermacher
2025-06-01T02:51:23Z
368
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am lumbering grazing antelope", "unsloth", "trl", "en", "base_model:romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope", "base_model:quantized:romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-06T20:14:28Z
--- base_model: romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope language: - en library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope quantized_by: mradermacher tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am lumbering grazing antelope - unsloth - trl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ3_S.gguf) | i1-IQ3_S | 
0.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q4_1.gguf) | i1-Q4_1 | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are 
Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
tobiso/ppo-Huggy
tobiso
2025-06-01T02:50:18Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-06-01T02:50:13Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tobiso/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
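Since the repo ships the trained policy as an ONNX file, it can also be inspected outside Unity. A minimal sketch using `onnxruntime`; the file name `Huggy.onnx` is an assumption about how the exported policy is named, so check the repo's file listing:

```python
import onnxruntime as ort

# "Huggy.onnx" is assumed; substitute the actual file name from the repo
session = ort.InferenceSession("Huggy.onnx")

# print the observation/action tensor names and shapes the policy expects
for inp in session.get_inputs():
    print("input:", inp.name, inp.shape)
for out in session.get_outputs():
    print("output:", out.name, out.shape)
```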
HusseinBashir/xal_1aad
HusseinBashir
2025-06-01T02:45:40Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2025-06-01T02:17:07Z
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: xal_1aad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xal_1aad

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3220

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5138        | 0.4545 | 100  | 0.4302          |
| 0.4173        | 0.9091 | 200  | 0.3650          |
| 0.3779        | 1.3636 | 300  | 0.3431          |
| 0.3553        | 1.8182 | 400  | 0.3259          |
| 0.3502        | 2.2727 | 500  | 0.3220          |

### Framework versions

- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
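The card gives no usage snippet, so here is a minimal, hedged inference sketch for a fine-tuned SpeechT5 TTS checkpoint; the speaker-embedding source (`Matthijs/cmu-arctic-xvectors`) and the example text are illustrative assumptions, not taken from the card:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("HusseinBashir/xal_1aad")
model = SpeechT5ForTextToSpeech.from_pretrained("HusseinBashir/xal_1aad")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 needs a speaker x-vector; this public set is a common stand-in
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello from the fine-tuned model.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```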
yaffo/essay_grader_merged2
yaffo
2025-06-01T02:44:12Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-01T02:37:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
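The getting-started section of this card is also empty. Given the repo's tags (`mistral`, `text-generation`, `4-bit`, `bitsandbytes`), a hedged loading sketch might look like this; the quantization settings and prompt are assumptions inferred from those tags:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "yaffo/essay_grader_merged2"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit config inferred from the repo tags; adjust if the checkpoint differs
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

prompt = "Grade the following essay on a 1-10 scale:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```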
Triangle104/Qwen3-30B-A3B-Q6_K-GGUF
Triangle104
2025-06-01T02:39:14Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-30B-A3B", "base_model:quantized:Qwen/Qwen3-30B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-01T02:32:42Z
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- llama-cpp
- gguf-my-repo
---

# Triangle104/Qwen3-30B-A3B-Q6_K-GGUF

This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.

---

Qwen3-30B-A3B has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 natively and 131,072 tokens with YaRN.

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -c 2048
```
VIDEOS-18-Sofia-Ansari-Videos/FULL.VIDEO.Sofia.Ansari.Viral.Video.Tutorial.Official
VIDEOS-18-Sofia-Ansari-Videos
2025-06-01T02:38:31Z
0
0
null
[ "region:us" ]
null
2025-06-01T02:38:13Z
Ajo-Subarjo/Nira_uq
Ajo-Subarjo
2025-06-01T02:36:51Z
0
0
null
[ "gguf", "unconditional-image-generation", "base_model:cognitivecomputations/Dolphin-2.9.1-Phi-3-Kensho-4.5B", "base_model:quantized:cognitivecomputations/Dolphin-2.9.1-Phi-3-Kensho-4.5B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
unconditional-image-generation
2025-06-01T02:18:08Z
---
license: apache-2.0
base_model:
- cognitivecomputations/Dolphin-2.9.1-Phi-3-Kensho-4.5B
pipeline_tag: unconditional-image-generation
---
mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF
mradermacher
2025-06-01T02:27:16Z
363
0
transformers
[ "transformers", "gguf", "unsloth", "qwen3", "qwen2", "14b", "arabic", "logical-reasoning", "conversational", "instruction-following", "text-generation", "merged_16bit", "ar", "dataset:beetlware/arabic-reasoning-dataset-logic", "base_model:beetleware/Bee1reason-arabic-Qwen-14B", "base_model:quantized:beetleware/Bee1reason-arabic-Qwen-14B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix" ]
text-generation
2025-05-22T12:17:33Z
--- base_model: beetleware/Bee1reason-arabic-Qwen-14B datasets: - beetlware/arabic-reasoning-dataset-logic language: ar library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - unsloth - qwen3 - qwen2 - 14b - arabic - logical-reasoning - conversational - instruction-following - text-generation - merged_16bit --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/beetleware/Bee1reason-arabic-Qwen-14B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ2_M.gguf) | i1-IQ2_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Bee1reason-arabic-Qwen-14B-i1-GGUF/resolve/main/Bee1reason-arabic-Qwen-14B.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
bruhzair/prototype0.4x49
bruhzair
2025-06-01T02:24:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T02:01:53Z
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---

# prototype-0.4x49

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4 as a base.

### Models Merged

The following models were included in the merge:
* /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
* /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
* /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
* /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
    parameters:
      select_topk: 0.1
  - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213
    parameters:
      select_topk: 0.14
  - model: /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
    parameters:
      select_topk: 0.2
  - model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
    parameters:
      select_topk: 0.5
  - model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
    parameters:
      select_topk: 0.6
base_model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
merge_method: sce
tokenizer:
  source: union
chat_template: llama3
int8_mask: true
dtype: bfloat16
```
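To reproduce a merge from a configuration like the one above, mergekit's documented CLI entry point can be invoked directly. A minimal sketch; the config file name and output path are placeholders:

```python
import subprocess

# assumes the YAML above was saved as sce-config.yaml; output path is a placeholder
subprocess.run(
    ["mergekit-yaml", "sce-config.yaml", "./prototype-0.4x49", "--cuda"],
    check=True,
)
```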
yasu-oh/Llama-3-Swallow-Infused-R1776-70B
yasu-oh
2025-06-01T02:23:58Z
10
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "ja", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:merge:meta-llama/Llama-3.3-70B-Instruct", "base_model:perplexity-ai/r1-1776-distill-llama-70b", "base_model:merge:perplexity-ai/r1-1776-distill-llama-70b", "base_model:tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4", "base_model:merge:tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4", "license:llama3.3", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T09:10:11Z
---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license:
- llama3.3
- gemma
model_type: llama
base_model:
- perplexity-ai/r1-1776-distill-llama-70b
- tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4
- meta-llama/Llama-3.3-70B-Instruct
base_model_relation: merge
---

# Llama-3-Swallow-Infused-R1776-70B

## Overview

**Llama-3-Swallow-Infused-R1776-70B** is a 70B parameter merged model based on Meta's **Llama 3** architecture. It combines the distilled instruction-following behavior of `r1-1776` with enhancements derived from the `Swallow` delta over Meta's base Llama 3.3 model. This composition is particularly suited for English and Japanese instruction tasks, maintaining robustness while introducing sharper alignment capabilities.

## Merge Methodology

This model was created using a weighted linear merge:

```
Llama-3-Swallow-Infused-R1776-70B =
    r1-1776-distill-llama-70b
    + 0.4 * (Swallow-70B-Instruct-v0.4 - Llama-3.3-70B-Instruct)
```

* **Base**: `perplexity-ai/r1-1776-distill-llama-70b` (MIT License)
* **Delta**: `tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4` - `meta-llama/Llama-3.3-70B-Instruct`
* **Merge Tool**: Performed using [MergeKit](https://github.com/arcee-ai/mergekit)
* **Scaling Factor**: `α = 0.4`

The resulting model maintains the backbone of R1776 while incorporating Swallow's improved instruction tuning.

## Languages

* English
* Japanese

## Recommended parameters

* temperature: 0.6
* top_p: 0.95
* top_k: 40
* min_p: 0.0

## License

This model is distributed under the **Meta Llama 3 Community License**. Please review and comply with its terms: [https://www.llama.com/llama3/license/](https://www.llama.com/llama3/license/)

**Key Restrictions Include:**

* Do not use this model to improve competing LLMs.
* Reuse must include the phrase: **"Built with Meta Llama 3."**
* For organizations with over **700M MAU**, a separate license from Meta is required.
* Model name must include "Llama 3".

## Citations

If you use this model, please cite the original works:

* Perplexity AI's [r1-1776-distill-llama-70b](https://huggingface.co/perplexity-ai/r1-1776-distill-llama-70b)
* TokyoTech-LLM's [Llama-3.3-Swallow-70B-Instruct-v0.4](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4)
* Meta's [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct)
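The weighted delta merge above is plain tensor arithmetic. A hedged sketch of the per-tensor computation follows; it is illustrative only, since loading three 70B checkpoints this way needs far more memory than most machines have, and MergeKit streams weight shards instead:

```python
import torch
from transformers import AutoModelForCausalLM

# illustrative only: a practical merge streams shards rather than loading whole models
base  = AutoModelForCausalLM.from_pretrained("perplexity-ai/r1-1776-distill-llama-70b", torch_dtype=torch.bfloat16)
tuned = AutoModelForCausalLM.from_pretrained("tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4", torch_dtype=torch.bfloat16)
ref   = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.3-70B-Instruct", torch_dtype=torch.bfloat16)

alpha = 0.4  # scaling factor from the card
base_sd, tuned_sd, ref_sd = base.state_dict(), tuned.state_dict(), ref.state_dict()

# merged = r1-1776 + alpha * (Swallow - Llama-3.3), applied tensor by tensor
merged_sd = {k: base_sd[k] + alpha * (tuned_sd[k] - ref_sd[k]) for k in base_sd}

base.load_state_dict(merged_sd)
base.save_pretrained("Llama-3-Swallow-Infused-R1776-70B")
```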
Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF
Triangle104
2025-06-01T02:22:26Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-30B-A3B", "base_model:quantized:Qwen/Qwen3-30B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-01T02:19:56Z
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- llama-cpp
- gguf-my-repo
---

# Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF

This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.

---

Qwen3-30B-A3B has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 natively and 131,072 tokens with YaRN.

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-q5_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-q5_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-q5_k_m.gguf -c 2048
```
amazeble/elise_lora
amazeble
2025-06-01T02:20:42Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:MrDragonFox/baddy_S3_EXP_3", "base_model:finetune:MrDragonFox/baddy_S3_EXP_3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-01T02:20:24Z
---
base_model: MrDragonFox/baddy_S3_EXP_3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** amazeble
- **License:** apache-2.0
- **Finetuned from model:** MrDragonFox/baddy_S3_EXP_3

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
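Since the model was trained with Unsloth, the natural way to load it for inference is Unsloth's own API. A minimal sketch; the sequence length and 4-bit flag are assumptions not stated in the card:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="amazeble/elise_lora",  # loads the fine-tune on top of its base model
    max_seq_length=2048,               # assumption; not stated in the card
    load_in_4bit=True,                 # assumption
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Hello!", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```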
elliotthwangmsa/KimLan-Mistral0.2-7b-tw_train_ouputs
elliotthwangmsa
2025-06-01T02:20:31Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:elliotthwang/Ministral-7B-Instruct-v0.2-tw", "base_model:adapter:elliotthwang/Ministral-7B-Instruct-v0.2-tw", "region:us" ]
null
2025-05-31T09:25:58Z
--- base_model: elliotthwang/Ministral-7B-Instruct-v0.2-tw library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
manuross1/cndnlsldd6k
manuross1
2025-06-01T02:10:28Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-31T19:36:39Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: cndnlsldd6k
---

# Cndnlsldd6K

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI Toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words
You should use `cndnlsldd6k` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "cndnlsldd6k",
    "lora_weights": "https://huggingface.co/manuross1/cndnlsldd6k/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('manuross1/cndnlsldd6k', weight_name='lora.safetensors')
image = pipeline('cndnlsldd6k').images[0]
image.save("output.png")  # persist the generated image to disk
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/manuross1/cndnlsldd6k/discussions) to add images that show off what you’ve made with this LoRA.
mradermacher/final-complete-malicious-url-model-GGUF
mradermacher
2025-06-01T02:06:09Z
0
0
transformers
[ "transformers", "gguf", "cybersecurity", "malicious-url-detection", "bert", "phishing-detection", "en", "base_model:r3ddkahili/final-complete-malicious-url-model", "base_model:quantized:r3ddkahili/final-complete-malicious-url-model", "license:apache-2.0", "endpoints_compatible", "region:us", "feature-extraction" ]
null
2025-05-31T10:37:02Z
--- base_model: r3ddkahili/final-complete-malicious-url-model language: en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - cybersecurity - malicious-url-detection - bert - transformers - phishing-detection --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/r3ddkahili/final-complete-malicious-url-model <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/final-complete-malicious-url-model-GGUF/resolve/main/final-complete-malicious-url-model.f16.gguf) | f16 | 0.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
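For the Usage section above: if you just need to grab a single quant file, one minimal sketch uses the `huggingface-cli` tool from `huggingface_hub` (the Q4_K_M choice below is only an example, matching the "fast, recommended" note in the table):

```bash
# Download one quant from this repo into the current directory
huggingface-cli download mradermacher/final-complete-malicious-url-model-GGUF \
  final-complete-malicious-url-model.Q4_K_M.gguf --local-dir .
```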
krishnadhulipalla/hyena-dna-DAP-SEQ
krishnadhulipalla
2025-06-01T02:01:33Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-01T02:00:48Z
--- license: apache-2.0 ---
LinaSad/mcqa_sciq_merged_bis_lr5105
LinaSad
2025-06-01T02:00:26Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T01:59:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mungert/QwQ-32B-ArliAI-RpR-v4-GGUF
Mungert
2025-06-01T01:59:46Z
0
0
transformers
[ "transformers", "gguf", "text-generation", "en", "base_model:Qwen/QwQ-32B", "base_model:quantized:Qwen/QwQ-32B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-05-31T05:29:09Z
---
license: apache-2.0
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/hIZ2ZcaDyfYLT9Yd4pfOs.jpeg
language:
- en
base_model:
- Qwen/QwQ-32B
library_name: transformers
pipeline_tag: text-generation
---

# <span style="color: #7FFF7F;">QwQ-32B-ArliAI-RpR-v4 GGUF Models</span>

## <span style="color: #7F7FFF;">Model Generation Details</span>

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`f5cd27b7`](https://github.com/ggerganov/llama.cpp/commit/f5cd27b71da3ac375a04a41643d14fc779a8057b).

## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span>

Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.

### **Benchmark Context**
All tests conducted on **Llama-3-8B-Instruct** using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- Same prompt set across all quantizations

### **Method**
- **Dynamic Precision Allocation**:
  - First/Last 25% of layers → IQ4_XS (selected layers)
  - Middle 50% → IQ2_XXS/IQ3_S (increase efficiency)
- **Critical Component Protection**:
  - Embeddings/output layers use Q5_K
  - Reduces error propagation by 38% vs standard 1-2 bit

### **Quantization Performance Comparison (Llama-3-8B)**

| Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |

**Key**:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU avx2, 2048 token context)
- Size differences reflect mixed quantization overhead

**Key Improvements:**
- 🔥 **IQ1_M** shows massive 43.9% perplexity reduction (27.46 → 15.41)
- 🚀 **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB
- ⚡ **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization

**Tradeoffs:**
- All variants have modest size increases (0.1-0.3GB)
- Inference speeds remain comparable (<5% difference)

### **When to Use These Models**
📌 **Fitting models into GPU VRAM**
✔ **Memory-constrained deployments**
✔ **CPU and edge devices** where 1-2 bit errors can be tolerated
✔ **Research** into ultra-low-bit quantization

## **Choosing the Right Model Format**

Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.

### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**

- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides **similar dynamic range** as FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32.

📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.

📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.

---

### **F16 (Float 16) – More widely supported than BF16**

- A 16-bit floating-point format with **high precision**, but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.

📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.

📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.

---

### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**

Quantization reduces model size and memory usage while maintaining as much accuracy as possible.

- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, requires more memory.

📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce **memory footprint** while keeping reasonable accuracy.

📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).

---

### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**

These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.

- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
  - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
  - **Trade-off**: Lower accuracy compared to higher-bit quantizations.
- **IQ3_S**: Small block size for **maximum memory efficiency**.
  - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.
- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
  - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.
- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
  - **Use case**: Best for **low-memory devices** where **Q6_K** is too large.
- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
  - **Use case**: Best for **ARM-based devices** or **low-memory environments**.
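The guidance above boils down to a simple decision procedure. Purely as an illustrative sketch (this helper and its arguments are assumptions for this card, not part of llama.cpp or any library):

```python
def pick_model_format(has_bf16: bool, has_fp16: bool,
                      fits_full_precision: bool, fits_q8: bool, fits_q4: bool) -> str:
    """Map the guidance above onto code: prefer precision when it fits,
    fall back to progressively smaller quants as memory tightens."""
    if fits_full_precision:
        # Full 16-bit weights: BF16 when accelerated, else F16 on FP16 hardware.
        if has_bf16:
            return "BF16"
        if has_fp16:
            return "F16"
    if fits_q8:
        return "Q8_0"    # best accuracy among the quantized formats
    if fits_q4:
        return "Q4_K"    # solid CPU / low-VRAM default
    return "IQ3_XS"      # extreme memory efficiency at the cost of accuracy
```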
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `QwQ-32B-ArliAI-RpR-v4-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `QwQ-32B-ArliAI-RpR-v4-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `QwQ-32B-ArliAI-RpR-v4-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `QwQ-32B-ArliAI-RpR-v4-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `QwQ-32B-ArliAI-RpR-v4-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `QwQ-32B-ArliAI-RpR-v4-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `QwQ-32B-ArliAI-RpR-v4-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `QwQ-32B-ArliAI-RpR-v4-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `QwQ-32B-ArliAI-RpR-v4-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `QwQ-32B-ArliAI-RpR-v4-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `QwQ-32B-ArliAI-RpR-v4-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. 
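To make the file list above concrete: a quick llama.cpp invocation for one of these quants might look like the following (a sketch; the Q4_K file is just one choice, and the flags are the standard llama.cpp CLI options):

```bash
# Download and run the Q4_K quant directly from this repo
llama-cli --hf-repo Mungert/QwQ-32B-ArliAI-RpR-v4-GGUF \
  --hf-file QwQ-32B-ArliAI-RpR-v4-q4_k.gguf \
  -p "Write the opening scene of a fantasy roleplay." -n 512
```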
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span> ❤ **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: 👉 [Free Network Monitor](https://readyforquantum.com/dashboard/?assistant=open) 💬 **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4o-mini) - `HugLLM` (Hugginface Open-source) - `TestLLM` (Experimental CPU-only) ### **What I’m Testing** I’m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Network Monitoring tasks** 🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads): - ✅ **Zero-configuration setup** - ⏳ 30s load time (slow inference but **no API costs**) - 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate! ### **Other Assistants** 🟢 **TurboLLM** – Uses **gpt-4o-mini** for: - **Create custom cmd processors to run .net code on Free Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) - 🔑 Get more tokens by logging in or [downloading our Free Network Monitor Agent with integrated AI Assistant](https://readyforquantum.com/download) 🔵 **HugLLM** – Latest Open-source models: - 🌐 Runs on Hugging Face Inference API ### 💡 **Example commands to you could test**: 1. `"Give me info on my websites SSL certificate"` 2. `"Check if my server is using quantum safe encyption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Free Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution! # QwQ-32B-ArliAI-RpR-v4 <img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/hIZ2ZcaDyfYLT9Yd4pfOs.jpeg" alt="clickbait" width="500"> <small>Image generated using Arli AI Image Generation https://www.arliai.com/image-generation</small> ## RpR v4 Changes: The best RP/creative model from ArliAI yet again. - Reduced repetitions and impersonation To add to the creativity and out of the box thinking of RpR v3, a more advanced filtering method was used in order to remove examples where the LLM repeated similar phrases or talked for the user. Any repetition or impersonation cases that happens will be due to how the base QwQ model was trained, and not because of the RpR dataset. - Increased training sequence length The training sequence length was increased to 16K in order to help awareness and memory even on longer chats. ## RpR Series Overview: Building on RPMax with Reasoning RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series **builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series**. RpR models use the same curated, deduplicated RP and creative writing dataset used for RPMax, with a focus on variety to ensure high creativity and minimize cross-context repetition. Users familiar with RPMax will recognize the unique, non-repetitive writing style unlike other finetuned-for-RP models. 
With the release of QwQ as the first high-performing open-source reasoning model that can be easily trained, it was clear that the available instruct and creative writing reasoning datasets contain only one response per example. This type of single-response dataset, when used to train reasoning models, causes degraded output quality in long multi-turn chats, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning.

In order to create RpR, we first had to actually create the reasoning RP dataset by re-processing our existing known-good RPMax dataset into a reasoning dataset. This was possible by using the base QwQ Instruct model itself to create the reasoning process for every turn in the RPMax dataset conversation examples, which was then further refined to make sure the reasoning is in line with the actual response examples from the dataset.

Another important thing to get right is to make sure the model is trained on examples that present reasoning blocks in the same way as it will encounter them during inference, which is to say, never seeing the reasoning blocks in its context. To achieve this, the training run was completed using axolotl with a manual, template-free segments dataset, so that the model is never trained to see the reasoning block in the context, just as it will be used during inference.

The result of training QwQ on this dataset with this method is consistently coherent and interesting outputs even in long multi-turn RP chats. This is, as far as we know, the first true correctly-trained reasoning model trained for RP and creative writing.

You can access the model at https://arliai.com and we also have a models ranking page at https://www.arliai.com/models-ranking

Ask questions in our new Discord Server https://discord.com/invite/t75KbPgwhk or on our subreddit https://www.reddit.com/r/ArliAI/

## Model Description

QwQ-32B-ArliAI-RpR-v4 is the fourth release in the RpR series. It is a 32-billion-parameter model fine-tuned using the RpR dataset, based on the curated RPMax dataset, combined with techniques to maintain reasoning abilities in long multi-turn chats.

### Recommended Samplers

- RpR models do not work well with repetition-penalty-type samplers, even more advanced ones such as XTC or DRY.
- They work best with simple sampler settings and being allowed to reason for a long time (high max tokens).
- You can download the ST master export uploaded in the files section of this repo as well.
Recommended to first start with:

* **Temperature**: 1.0
* **MinP**: 0.02
* **TopK**: 40
* **Response Tokens**: 2048+

### Specs

* **Base Model**: QwQ-32B
* **Max Context Length**: Max 128K with YaRN (natively 32K, same as base QwQ; see the YaRN sketch at the end of this card)
* **Parameters**: 32B
* **Reasoning Model**: Yes

### Training Details

* **Sequence Length**: 16384
* **Epochs**: 1 epoch training (Inherited from RPMax methods)
* **Fine-tuning Method**: RS-QLORA+ (Rank-Stabilized LoRA + LoRA Plus 8x)
* **Rank/Alpha**: 128-rank 128-alpha
* **Learning Rate**: 0.00001
* **Scheduler**: Rex
* **Gradient accumulation**: 32

### Very Nice Training graphs :)

<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/J-cD7mjdIG58BsSPpuS6x.png" alt="Train Loss" width="600">

<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/T890dqrUcBYnlOzK7MXrU.png" alt="Eval Loss" width="600">

### Quantization

* **BF16**: https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v4
* **GGUF**: https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v4-GGUF

### How to use reasoning models correctly in ST

<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/njVt2Vir8Isd3ApjTBmoI.png" alt="RpR ST Settings" width="600">

For any reasoning model in general, you need to make sure to set:

* Prefix set to ONLY \<think> and suffix set to ONLY \</think> without any spaces or newlines (enter)
* Reply starts with \<think>
* Always add character names is unchecked
* Include names is set to never
* As always, the chat template should also conform to the model being used

Note: Reasoning models work properly only if include names is set to never, since they always expect the eos token of the user turn followed by the \<think> token in order to start reasoning before outputting their response. If you set include names to enabled, it will always append the character name at the end, like "Seraphina:\<eos_token>", which confuses the model about whether it should respond or reason first.

The rest of your sampler parameters can be set as you wish as usual.

If you don't see the reasoning wrapped inside the thinking block, then either your settings are still wrong and don't follow my example, or your ST version is too old and lacks reasoning-block auto parsing.

If you see the whole response inside the reasoning block, then your \<think> and \</think> reasoning token prefix and suffix might have an extra space or newline. Or the model just isn't a reasoning model that is smart enough to always put its reasoning between those tokens.

### If you set everything up correctly, it should look like this:

<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/wFQC8Df9dLaiQGnIg_iEo.png" alt="RpR example response" width="600">

---

<details>
<summary>Details: The RPMax Foundation (Dataset & Training Philosophy)</summary>

*The following sections detail the core philosophy behind the dataset and training methodology originally developed for RPMax, which serves as the foundation for the RpR series.*

### The Goal: Reduced Repetition and Higher Creativity

The goal of the dataset curation used for both RPMax and RpR is to reduce repetitions and increase the model's ability to write creatively in the different situations presented to it. What this means is a model that will output responses very differently, without falling into predictable tropes, across different situations.

### What is repetition and creativity?
First of all, creativity should mean the variety in output that the model is capable of creating. You should not confuse creativity with writing prose. When a model writes in a way that can be said to be pleasant, like writers would write in a novel, this is not creative writing. It is just a model having a certain pleasant type of writing prose. So a model that writes nicely is not necessarily a creative model.

Repetition and creativity are essentially intertwined: if a model is repetitive, then it can also be said to be un-creative, as it cannot write new things and can only repeat similar responses it has created before. For repetition, there are actually two very different forms.

**In-context repetition:** When people mention a model is repetitive, this usually means a model that likes to repeat the same phrases within a single conversation. An example of this is when a model says that a character "flicks her hair and...." and then starts to prepend that "flicks her hair and..." into every other action that character does.

It can be said that the model is boring, but even in real people's writing it is possible that this kind of repetition is intentional, to subtly prove a point or showcase a character's traits in some scenarios. So this type of repetition is not always bad, and completely discouraging a model from doing it does not always improve a model's writing ability.

In this regard, RPMax and RpR are not yet focused on eliminating this type of repetition, so there might be some in-context repetition visible in the outputs. Eliminating it will be the next big step of the RPMax and RpR series of models.

**Cross-context repetition:** A second, worse type of repetition is a model's tendency to repeat the same phrases or tropes in very different situations. An example is a model that likes to repeat the infamous "shivers down my spine" phrase in wildly different conversations that don't necessarily fit that phrase.

This type of repetition is ALWAYS bad, as it is a sign that the model has over-fitted into that style of "creative writing" that it has often seen in the training dataset. A model's tendency toward cross-context repetition is also usually visible in how it likes to choose similar, repetitive names when writing stories, such as the infamous "elara" and "whispering woods" names.

The primary goal of the dataset curation for RPMax and RpR is to create a highly creative model by reducing cross-context repetition, as that is the type of repetition that follows you through different conversations. This is combated by making sure the dataset has no repetitions of the same situations or characters across different example entries.

### Dataset Curation

The success of models trained on this dataset (including RPMax and now RpR) is thanks to the training method and the unique dataset created for fine-tuning. It contains as many open-source creative writing and RP datasets as could be found (all from Hugging Face), which were then curated to weed out datasets that are purely synthetic generations, as those often only serve to dumb down the model and teach it GPT-isms (slop) rather than help.

Then Llama 3.1 8B (or a similarly capable model) is used to create a database of the characters and situations portrayed in these datasets, which is then used to de-dupe the datasets and make sure that there is only a single entry of any character or situation.
### The Golden Rule of Fine-Tuning

Unlike the initial pre-training stage, where for the most part the more data you throw at the model the better it becomes, the golden rule for fine-tuning isn't quantity but quality over quantity. The dataset used here is therefore orders of magnitude smaller than it would be if it included repeated characters and situations, but the end result is a model that does not feel like just another "in-breed" of another creative writing/RP model.

### Training Parameters and Unconventional Approach

The usual way is to have a low learning rate and high gradient accumulation for better loss stability, and then run multiple epochs of the training run until the loss is acceptable.

The RPMax and RpR methodology, however, uses only **one single epoch**, a low gradient accumulation, and a higher-than-normal learning rate. The loss curve during training is actually unstable and jumps up and down a lot, but if it is smoothed out, it is steadily decreasing over time. The theory is that this allows the model to learn much more from each individual example in the dataset, and by not showing the model the same example twice across multiple epochs, it stops the model from latching on to and reinforcing a single character or story trope.

The loss jumping up and down during training happens because, as the model gets trained on a new entry from the dataset, it will have never seen a similar example before and therefore can't really predict an answer similar to the example entry. The relatively high final loss of 1.0 or slightly above is actually acceptable, because the goal was never to create a model that outputs exactly like the dataset it is trained on, but rather one that is creative enough to make up its own style of responses.

This is different from training a model in a particular domain and needing it to reliably output like the example dataset, such as when training a model on a company's internal knowledge base.

</details>

---

## Try It Out!

Model preference is subjective, so please do try QwQ-32B-ArliAI-RpR-v4 for yourself. Your feedback, both good and bad, is always valuable and will help us improve the future RPMax and RpR models.
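For the 128K context noted in the Specs above, YaRN rope scaling has to be enabled explicitly. A minimal llama.cpp sketch, assuming the standard Qwen/QwQ YaRN settings (factor 4.0 over the native 32K) and the q4_k quant filename from this repo:

```bash
# Stretch the native 32K context toward 128K with YaRN rope scaling
llama-cli -m QwQ-32B-ArliAI-RpR-v4-q4_k.gguf \
  --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 \
  -c 131072 -p "Your prompt here"
```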
cgato/Nemo12b-TheSyntheticOne
cgato
2025-06-01T01:57:18Z
0
0
null
[ "safetensors", "mistral", "license:cc-by-nc-4.0", "region:us" ]
null
2025-05-30T11:43:23Z
--- license: cc-by-nc-4.0 --- [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) Trained using https://huggingface.co/datasets/cgato/TheSmarts for demonstration purposes. Probably a fairly competent assistant model. May do KTO overtop to sand down the edges and improve performance later. ### Prompt Format: ChatML Roles: system, user, assistant ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9338 | 0.0003 | 1 | 0.9347 | | 0.8271 | 0.0328 | 100 | 0.7933 | | 0.9541 | 0.0656 | 200 | 0.8407 | | 0.7497 | 0.0984 | 300 | 0.7934 | | 0.8786 | 0.1311 | 400 | 0.7724 | | 0.8257 | 0.1639 | 500 | 0.7627 | | 0.8258 | 0.1967 | 600 | 0.7679 | | 0.7207 | 0.2295 | 700 | 0.7497 | | 0.9439 | 0.2623 | 800 | 0.7576 | | 0.852 | 0.2951 | 900 | 0.7361 | | 0.7852 | 0.3279 | 1000 | 0.7375 | | 0.7 | 0.3607 | 1100 | 0.7298 | | 0.7865 | 0.3934 | 1200 | 0.7202 | | 0.6182 | 0.4262 | 1300 | 0.7146 | | 0.6885 | 0.4590 | 1400 | 0.7131 | | 0.7154 | 0.4918 | 1500 | 0.7083 | | 0.7187 | 0.5246 | 1600 | 0.7016 | | 0.6877 | 0.5574 | 1700 | 0.6976 | | 0.7908 | 0.5902 | 1800 | 0.6946 | | 0.7664 | 0.6230 | 1900 | 0.6894 | | 0.7214 | 0.6557 | 2000 | 0.6857 | | 0.6971 | 0.6885 | 2100 | 0.6837 | | 0.6527 | 0.7213 | 2200 | 0.6804 | | 0.6815 | 0.7541 | 2300 | 0.6781 | | 0.6359 | 0.7869 | 2400 | 0.6759 | | 0.6874 | 0.8197 | 2500 | 0.6742 | | 0.5999 | 0.8525 | 2600 | 0.6728 | | 0.7391 | 0.8852 | 2700 | 0.6719 | | 0.6509 | 0.9180 | 2800 | 0.6710 | | 0.6346 | 0.9508 | 2900 | 0.6702 | | 0.7023 | 0.9836 | 3000 | 0.6696 |
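To make the ChatML prompt format above concrete, a rendered prompt looks like this (a minimal sketch using the standard ChatML control tokens; the message text is illustrative):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is the capital of France?<|im_end|>
<|im_start|>assistant
```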
Meta-Okapi/meta_rm_fr_600
Meta-Okapi
2025-06-01T01:50:23Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-01T01:50:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Triangle104/Qwen3-30B-A3B-Q5_K_S-GGUF
Triangle104
2025-06-01T01:45:30Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-30B-A3B", "base_model:quantized:Qwen/Qwen3-30B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-01T01:40:01Z
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- llama-cpp
- gguf-my-repo
---

# Triangle104/Qwen3-30B-A3B-Q5_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.

---
Qwen3-30B-A3B has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 natively and 131,072 tokens with YaRN.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_S-GGUF --hf-file qwen3-30b-a3b-q5_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_S-GGUF --hf-file qwen3-30b-a3b-q5_k_s.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_S-GGUF --hf-file qwen3-30b-a3b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_S-GGUF --hf-file qwen3-30b-a3b-q5_k_s.gguf -c 2048
```
Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf
Bifrost-AI
2025-06-01T01:44:37Z
218
1
null
[ "gguf", "code", "finance", "chat", "text-generation", "large-language-model", "en", "dataset:Bifrost-AI/Solana-Vanguard-Challenge", "arxiv:2503.01743", "base_model:Bifrost-AI/Phi-4-bifrost-sol-3.8B", "base_model:quantized:Bifrost-AI/Phi-4-bifrost-sol-3.8B", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-05-23T11:27:41Z
---
license: mit
datasets:
- Bifrost-AI/Solana-Vanguard-Challenge
language:
- en
metrics:
- accuracy
- code_eval
base_model:
- Bifrost-AI/Phi-4-bifrost-sol-3.8B
pipeline_tag: text-generation
tags:
- code
- finance
- chat
- text-generation
- large-language-model
---

# Phi 4 Bifrost SOL 3B (Mini Instruct)

### This fine-tuned variant of Microsoft's Phi 4 Mini Instruct model was supervised fine-tuned on blockchain-specific datasets (Bifrost-AI/Solana-Vanguard-Challenge) and optimized for downstream tasks in blockchain coding and smart contract development on the Solana ecosystem.

The **Solana Vanguard Challenge** dataset, comprising 1,000 diverse and in-depth questions, offers full-spectrum coverage of the Solana ecosystem. It spans fundamental blockchain concepts, advanced on-chain programming in Rust and the Anchor framework, client-side integration in TypeScript, detailed security strategies, and performance as well as regulatory considerations.

Phi 4 Bifrost SOL Mini Instruct is in active development, with additional fine-tuning sessions and benchmark statistics coming soon!

## Provided Quants

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.IQ1_S.gguf) | IQ1_S | 1.1 | very low quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.IQ1_M.gguf) | IQ1_M | 1.2 | very low quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.TQ1_0.gguf) | TQ1_0 | 1.2 | very low quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.IQ2_S.gguf) | IQ2_S | 1.4 | fast, lower quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q2_K.gguf) | Q2_K | 1.6 | fast, lower quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q4_0.gguf) | Q4_0 | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q5_K_M.gguf) | Q5_K_M | 2.8 | |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q6_K.gguf) | Q6_K | 3.1 | very good quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q8_0.gguf) | Q8_0 | 4.0 | fast, best quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.f16.gguf) | F16 | 7.7 | 16 bpw, highest quality |

## Training Session:
- Time: 16 hours & 42 minutes
- GPU: NVIDIA GeForce RTX 3090
- Batches: 3000
- Context-Size: 4098
- Batch-size: 1
- Learning-rate: 2e-5
- Training-loss: 0.84
- Eval-loss: 0.61

## Dataset Composition
- **Total Questions:** 1,000
- **Languages Covered:**
  - **Rust:** On-chain smart contract development, security best practices, advanced state management, CPIs, PDAs, and more.
- **TypeScript:** Client-side integration using @solana/web3.js, wallet adapters, Metaplex for NFT protocols, dynamic transaction composition, and front-end dApp development. - **Planned Extensions:** - **C# (Solnet):** To be integrated later for .NET ecosystem coverage. #### Example After obtaining the Phi-4-bifrost-sol model checkpoints, users can use this sample code for inference. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_path = "Bifrost-AI/Phi-4-bifrost-sol-3.8B" model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path) messages = [ {"role": "system", "content": "This is a dialog transcript where the User interacts with an agent named Eva that can see, talk and act. Eva works as a Professional typescript, rust & csharp Software engineer and possesses qualities such as expert, methodical, innovative. She always responds immediately and precisely. She was created by Microsoft & Bifrost. Wrap code in ``` for readability."}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` ## Disclaimer We do not recommend using Phi4 Bifrost SOL Mini-Instruct in commercial or real-world applications without further testing and development. This current model(v1) is intended for research and development purposes. While efforts have been made to align it using SFT and DPO, it may still produce outputs that are unexpected, biased, or inaccurate. Please use responsibly. #### ------------------------Base Model Card------------------------ 🎉**Phi-4**: [[mini-reasoning](https://huggingface.co/microsoft/Phi-4-mini-reasoning) | [reasoning](https://huggingface.co/microsoft/Phi-4-reasoning)] | [[multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-multimodal-instruct-onnx)]; [[mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-mini-instruct-onnx)] ## Model Summary Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites - with a focus on high-quality, reasoning dense data. The model belongs to the Phi-4 model family and supports 128K token context length. The model underwent an enhancement process, incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures. 📰 [Phi-4-mini Microsoft Blog](https://aka.ms/phi4-feb2025) <br> 📖 [Phi-4-mini Technical Report](https://aka.ms/phi-4-multimodal/techreport) <br> 👩‍🍳 [Phi Cookbook](https://github.com/microsoft/PhiCookBook) <br> 🏡 [Phi Portal](https://azure.microsoft.com/en-us/products/phi) <br> 🖥️ Try It [Azure](https://aka.ms/phi-4-mini/azure), [Huggingface](https://huggingface.co/spaces/microsoft/phi-4-mini) <br> 🚀 [Model paper](https://huggingface.co/papers/2503.01743) ## Intended Uses ### Primary Use Cases The model is intended for broad multilingual commercial and research use. 
The model is intended for general-purpose AI systems and applications that require:

1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially math and logic).

The model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.

### Use Case Considerations

The model is not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models, as well as performance differences across languages, as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including but not limited to privacy, trade compliance laws, etc.) that are relevant to their use case.

***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***

## Release Notes

This release of Phi-4-mini-instruct is based on valuable user feedback from the Phi-3 series. The Phi-4-mini model employed a new architecture for efficiency, a larger vocabulary for multilingual support, and better post-training techniques for instruction following and function calling, as well as additional data, leading to substantial gains on key capabilities. It is anticipated that most use cases will benefit from this release, but users are encouraged to test it in their particular AI applications. The enthusiastic support for the Phi-4 series is greatly appreciated. Feedback on Phi-4-mini-instruct is welcomed and crucial to the model’s evolution and improvement.

### Model Quality

To understand the capabilities, the 3.8B-parameter Phi-4-mini-instruct model was compared with a set of models over a variety of benchmarks using an internal benchmark platform (See Appendix A for benchmark methodology).
A high-level overview of the model quality is as follows:

| Benchmark | Similar size | | | | |2x size | | | | | |
|----------------------------------|-------------|-------------------|-------------------|-------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| | Phi-4 mini-Ins | Phi-3.5-mini-Ins | Llama-3.2-3B-Ins | Mistral-3B | Qwen2.5-3B-Ins | Qwen2.5-7B-Ins | Mistral-8B-2410 | Llama-3.1-8B-Ins | Llama-3.1-Tulu-3-8B | Gemma2-9B-Ins | GPT-4o-mini-2024-07-18 |
| **Popular aggregated benchmark** | | | | | | | | | | | |
| Arena Hard | 32.8 | 34.4 | 17.0 | 26.9 | 32.0 | 55.5 | 37.3 | 25.7 | 42.7 | 43.7 | 53.7 |
| BigBench Hard (0-shot, CoT) | 70.4 | 63.1 | 55.4 | 51.2 | 56.2 | 72.4 | 53.3 | 63.4 | 55.5 | 65.7 | 80.4 |
| MMLU (5-shot) | 67.3 | 65.5 | 61.8 | 60.8 | 65.0 | 72.6 | 63.0 | 68.1 | 65.0 | 71.3 | 77.2 |
| MMLU-Pro (0-shot, CoT) | 52.8 | 47.4 | 39.2 | 35.3 | 44.7 | 56.2 | 36.6 | 44.0 | 40.9 | 50.1 | 62.8 |
| **Reasoning** | | | | | | | | | | | |
| ARC Challenge (10-shot) | 83.7 | 84.6 | 76.1 | 80.3 | 82.6 | 90.1 | 82.7 | 83.1 | 79.4 | 89.8 | 93.5 |
| BoolQ (2-shot) | 81.2 | 77.7 | 71.4 | 79.4 | 65.4 | 80.0 | 80.5 | 82.8 | 79.3 | 85.7 | 88.7 |
| GPQA (0-shot, CoT) | 25.2 | 26.6 | 24.3 | 24.4 | 23.4 | 30.6 | 26.3 | 26.3 | 29.9 | 39.1 | 41.1 |
| HellaSwag (5-shot) | 69.1 | 72.2 | 77.2 | 74.6 | 74.6 | 80.0 | 73.5 | 72.8 | 80.9 | 87.1 | 88.7 |
| OpenBookQA (10-shot) | 79.2 | 81.2 | 72.6 | 79.8 | 79.3 | 82.6 | 80.2 | 84.8 | 79.8 | 90.0 | 90.0 |
| PIQA (5-shot) | 77.6 | 78.2 | 68.2 | 73.2 | 72.6 | 76.2 | 81.2 | 83.2 | 78.3 | 83.7 | 88.7 |
| Social IQA (5-shot) | 72.5 | 75.1 | 68.3 | 73.9 | 75.3 | 75.3 | 77.6 | 71.8 | 73.4 | 74.7 | 82.9 |
| TruthfulQA (MC2) (10-shot) | 66.4 | 65.2 | 59.2 | 62.9 | 64.3 | 69.4 | 63.0 | 69.2 | 64.1 | 76.6 | 78.2 |
| Winogrande (5-shot) | 67.0 | 72.2 | 53.2 | 59.8 | 63.3 | 71.1 | 63.1 | 64.7 | 65.4 | 74.0 | 76.9 |
| **Multilingual** | | | | | | | | | | | |
| Multilingual MMLU (5-shot) | 49.3 | 51.8 | 48.1 | 46.4 | 55.9 | 64.4 | 53.7 | 56.2 | 54.5 | 63.8 | 72.9 |
| MGSM (0-shot, CoT) | 63.9 | 49.6 | 44.6 | 44.6 | 53.5 | 64.5 | 56.7 | 56.7 | 58.6 | 75.1 | 81.7 |
| **Math** | | | | | | | | | | | |
| GSM8K (8-shot, CoT) | 88.6 | 76.9 | 75.6 | 80.1 | 80.6 | 88.7 | 81.9 | 82.4 | 84.3 | 84.9 | 91.3 |
| MATH (0-shot, CoT) | 64.0 | 49.8 | 46.7 | 41.8 | 61.7 | 60.4 | 41.6 | 47.6 | 46.1 | 51.3 | 70.2 |
| **Overall** | **63.5** | **60.5** | **56.2** | **56.9** | **60.1** | **67.9** | **60.2** | **62.3** | **60.9** | **65.0** | **75.5** |

Overall, with only 3.8B parameters, the model achieves a level of multilingual language understanding and reasoning ability similar to that of much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store extensive factual knowledge, so users may encounter factual inaccuracies. This weakness may be mitigated by augmenting Phi-4 with a search engine, particularly when using the model under RAG settings.

## Usage

### Tokenizer

Phi-4-mini-instruct supports a vocabulary size of up to `200064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-4-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
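As an illustration of the note above, here is a minimal sketch (not part of the original card) of loading the tokenizer, checking its size, and registering an extra token; `<my_domain_token>` is a hypothetical example, and the base model's embeddings would need resizing afterwards.

```python
from transformers import AutoTokenizer

# Minimal sketch: inspect the vocabulary and add a custom token.
# "<my_domain_token>" is a hypothetical example, not a token shipped with the model.
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-mini-instruct")
print(len(tokenizer))  # current vocabulary size, placeholders included

num_added = tokenizer.add_tokens(["<my_domain_token>"])
print(f"Added {num_added} token(s); new size: {len(tokenizer)}")

# If tokens were added, resize the model's embeddings to match:
# model.resize_token_embeddings(len(tokenizer))
```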
### Input Formats

Given the nature of the training data, the Phi-4-mini-instruct model is best suited for prompts using specific formats. Below are the two primary formats:

#### Chat format

This format is used for general conversation and instructions:

```yaml
<|system|>Insert System Message<|end|><|user|>Insert User Message<|end|><|assistant|>
```

#### Tool-enabled function-calling format

This format is used when the user wants the model to provide function calls based on the given tools. The user should provide the available tools in the system prompt, wrapped by <|tool|> and <|/tool|> tokens. The tools should be specified in JSON format, using a JSON dump structure. Example:

`
<|system|>You are a helpful assistant with some tools.<|tool|>[{"name": "get_weather_updates", "description": "Fetches weather updates for a given city using the RapidAPI Weather API.", "parameters": {"city": {"description": "The name of the city for which to retrieve weather information.", "type": "str", "default": "London"}}}]<|/tool|><|end|><|user|>What is the weather like in Paris today?<|end|><|assistant|>
`

### Inference with vLLM

#### Requirements

List of required packages:

```
flash_attn==2.7.4.post1
torch==2.5.1
vllm>=0.7.3
```

### Inference with Transformers

#### Requirements

The Phi-4 family has been integrated into the `4.49.0` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`. Python 3.8 and 3.10 will work best.

List of required packages:

```
flash_attn==2.7.4.post1
torch==2.5.1
transformers==4.49.0
accelerate==1.3.0
```

## Responsible AI Considerations

Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

+ Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 4 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi 4 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, it is strongly recommended that users manually verify all API uses.
+ Long Conversation: Phi 4 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to put appropriate mitigations in place, such as limiting conversation turns to account for possible conversational drift.

Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. The Phi 4 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include:

+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.

## Training

### Model

+ **Architecture:** Phi-4-mini-instruct has 3.8B parameters and is a dense decoder-only Transformer model. When compared with Phi-3.5-mini, the major changes with Phi-4-mini-instruct are a 200K vocabulary, grouped-query attention, and shared input and output embedding.<br>
+ **Inputs:** Text.
It is best suited for prompts using the chat format.<br>
+ **Context length:** 128K tokens<br>
+ **GPUs:** 512 A100-80G<br>
+ **Training time:** 21 days<br>
+ **Training data:** 5T tokens<br>
+ **Outputs:** Generated text in response to the input<br>
+ **Dates:** Trained between November and December 2024<br>
+ **Status:** This is a static model trained on offline datasets with the cutoff date of June 2024 for publicly available data.<br>
+ **Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br>
+ **Release date:** February 2025<br>

### Training Datasets

Phi-4-mini’s training data includes a wide variety of sources, totaling 5 trillion tokens, and is a combination of 1) publicly available documents filtered for quality, selected high-quality educational data, and code 2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (e.g., science, daily activities, theory of mind, etc.) 3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness. Focus was placed on the quality of data that could potentially improve the reasoning ability of the model, and the publicly available documents were filtered to contain a preferred level of knowledge. As an example, the result of a game in the Premier League on a particular day might be good training data for frontier models, but such information was removed to leave more model capacity for reasoning, given the model’s small size. More details about data can be found in the Phi-4-mini-instruct technical report.

The decontamination process involved normalizing and tokenizing the dataset, then generating and comparing n-grams between the target dataset and benchmark datasets. Samples with matching n-grams above a threshold were flagged as contaminated and removed from the dataset. A detailed contamination report was generated, summarizing the matched text, matching ratio, and filtered results for further analysis.

### Fine-tuning

A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/sample_finetune.py).

## Safety Evaluation and Red-Teaming

Various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets were leveraged to evaluate Phi-4 models’ propensity to produce undesirable outputs across multiple languages and risk categories. Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety post-training that was done as detailed in the Phi 3 Safety Post-Training paper had a positive impact across multiple languages and risk categories as observed by refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Details on prior red team evaluations across Phi models can be found in the Phi 3 Safety Post-Training paper. For this release, the red team tested the model in English, Chinese, Japanese, Spanish, Portuguese, Arabic, Thai, and Russian for the following potential harms: Hate Speech and Bias, Violent Crimes, Specialized Advice, and Election Information.
Their findings indicate that the model is resistant to jailbreak techniques across languages, but that language-specific attack prompts leveraging cultural context can cause the model to output harmful content. Another insight was that with function calling scenarios, the model could sometimes hallucinate function names or URLs. The model may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages, and risk areas that account for cultural nuances where those languages are spoken.

## Software

* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)

## Hardware

Note that by default, the Phi-4-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:

* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100

If you want to run the model on:

* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"

## License

The model is licensed under the [MIT license](./LICENSE).

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.

## Appendix A: Benchmark Methodology

We include a brief word on methodology here - and in particular, how we think about optimizing prompts. In an ideal world, we would never change any prompts in our benchmarks to ensure it is always an apples-to-apples comparison when comparing different models. Indeed, this is our default approach, and is the case in the vast majority of models we have run to date. There are, however, some exceptions to this. In some cases, we see a model that performs worse than expected on a given eval due to a failure to respect the output format. For example:

+ A model may refuse to answer questions (for no apparent reason), or in coding tasks models may prefix their response with “Sure, I can help with that. …” which may break the parser. In such cases, we have opted to try different system messages (e.g. “You must always respond to a question” or “Get to the point!”).
+ With some models, we observed that few shots actually hurt model performance. In such cases we allowed running the benchmarks with 0 shots for all cases.
+ We have tools to convert between chat and completions APIs. When converting a chat prompt to a completion prompt, some models have different keywords, e.g. Human vs. User. In these cases, we do allow for model-specific mappings for chat to completion prompts.

However, we do not:

+ Pick different few-shot examples. Few shots will always be the same when comparing different models.
+ Change prompt format: e.g. if it is an A/B/C/D multiple choice, we do not tweak this to 1/2/3/4 multiple choice.
### Benchmark datasets

The model was evaluated across a breadth of public and internal benchmarks to understand the model’s capabilities under multiple tasks and conditions. While most evaluations use English, a leading multilingual benchmark was incorporated to cover performance in select languages. More specifically,

+ Reasoning:
  + Winogrande: commonsense reasoning around pronoun resolution
  + PIQA: physical commonsense reasoning around everyday situations
  + ARC-challenge: grade-school multiple choice science questions
  + GPQA: very hard questions written and validated by experts in biology, physics, and chemistry
  + MedQA: medical question answering
  + Social IQA: social commonsense intelligence
  + BoolQ: natural questions from context
  + TruthfulQA: grounded reasoning
+ Language understanding:
  + HellaSwag: commonsense natural language inference around everyday events
  + ANLI: adversarial natural language inference
+ Function calling:
  + Berkeley Function Calling Leaderboard: function and tool calling
  + Internal function calling benchmarks
+ World knowledge:
  + TriviaQA: trivia questions on general topics
+ Math:
  + GSM8K: grade-school math word problems
  + GSM8K Hard: grade-school math word problems with large values and some absurdity.
  + MATH: challenging competition math problems
+ Code:
  + HumanEval, HumanEval+, MBPP, MBPP+: Python coding tasks
  + LiveCodeBench, LiveBench: contamination-free code tasks
  + BigCode Bench: challenging programming tasks
  + Spider: SQL query tasks
  + Internal coding benchmarks
+ Instruction following:
  + IFEval: verifiable instructions
  + Internal instruction-following benchmarks
+ Multilingual:
  + MGSM: multilingual grade-school math
  + Multilingual MMLU and MMLU-pro
  + MEGA: multilingual NLP tasks
+ Popular aggregated datasets: MMLU, MMLU-pro, BigBench-Hard, AGI Eval
+ Multi-turn conversations:
  + Data generated by an in-house adversarial conversation simulation tool
+ Single-turn trustworthiness evaluation:
  + DecodingTrust: a collection of trustworthiness benchmarks in eight different perspectives
  + XSTest: exaggerated safety evaluation
  + Toxigen: adversarial and hate speech detection
+ Red Team:
  + Responses to prompts provided by the AI Red Team at Microsoft

---
Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF
Triangle104
2025-06-01T01:29:02Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-30B-A3B", "base_model:quantized:Qwen/Qwen3-30B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-01T01:01:39Z
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- llama-cpp
- gguf-my-repo
---

# Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF

This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.

---
Qwen3-30B-A3B has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 tokens natively and 131,072 tokens with YaRN.

---
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -c 2048
```
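For Python users, the same checkpoint can be loaded through the llama-cpp-python bindings; the sketch below assumes a recent llama-cpp-python with the `Llama.from_pretrained` helper (installed via `pip install llama-cpp-python huggingface-hub`) and is untested against this exact file.

```python
from llama_cpp import Llama

# Download the quantized file from the Hub and load it
# (assumes a recent llama-cpp-python with from_pretrained support).
llm = Llama.from_pretrained(
    repo_id="Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF",
    filename="qwen3-30b-a3b-q4_k_m.gguf",
    n_ctx=4096,  # context window for this session
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```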
Ain20/gpt2-legal-finetuned
Ain20
2025-06-01T01:21:21Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T01:20:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Varinder2110/a3676659-13d6-464e-afdd-f1641207c577
Varinder2110
2025-06-01T01:06:49Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-01T00:58:16Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # A3676659 13D6 464E Afdd F1641207C577 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/Varinder2110/a3676659-13d6-464e-afdd-f1641207c577/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Varinder2110/a3676659-13d6-464e-afdd-f1641207c577', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 600 - Learning rate: 0.0004 - LoRA rank: 4 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Varinder2110/a3676659-13d6-464e-afdd-f1641207c577/discussions) to add images that show off what you’ve made with this LoRA.
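As a follow-up to the diffusers snippet above, here is a minimal sketch of scaling the LoRA's influence before generation; it assumes a diffusers release whose Flux pipelines support `fuse_lora`, and 0.8 is an arbitrary illustrative strength.

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights(
    'Varinder2110/a3676659-13d6-464e-afdd-f1641207c577',
    weight_name='lora.safetensors',
)

# Fuse the adapter into the base weights at a reduced strength
# (0.8 is an arbitrary illustrative value, assuming fuse_lora support).
pipeline.fuse_lora(lora_scale=0.8)

image = pipeline('TOK').images[0]
image.save('output.png')
```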
Ekata/mcqa-dpo1-v2
Ekata
2025-06-01T01:00:42Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-01T00:59:27Z
--- base_model: meta-llama/Meta-Llama-3.1-8B-Instruct library_name: transformers model_name: mcqa-dpo1-v2 tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for mcqa-dpo1-v2 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Ekata/mcqa-dpo1-v2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.17.0 - Transformers: 4.52.4 - Pytorch: 2.5.1+cu121 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
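For readers wanting to reproduce a run like this, here is a minimal DPO training sketch with TRL; the preference dataset name is a placeholder, since the card does not state which data was used.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder: a preference dataset with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("your-org/your-preference-dataset", split="train")

args = DPOConfig(output_dir="mcqa-dpo1-v2", beta=0.1)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```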
ToastyPigeon/gemma3-27b-v2-starlike
ToastyPigeon
2025-06-01T01:00:30Z
0
1
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "mergekit", "merge", "conversational", "arxiv:2203.05482", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-01T00:53:57Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * confettilike-mm * glitterlike-mm ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: confettilike-mm parameters: weight: 0.5 - model: glitterlike-mm parameters: weight: 0.5 merge_method: linear tokenizer: source: glitterlike-mm dtype: bfloat16 ```
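To reproduce a merge like this one, mergekit can be driven from Python as sketched below; it assumes the YAML above has been saved locally as `config.yaml` (the `mergekit-yaml config.yaml ./merged` CLI is equivalent).

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe shown above (assumed saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./merged",  # output directory for the merged checkpoint
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```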
mci29/sn29_s2m3_hbui
mci29
2025-06-01T01:00:16Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T00:56:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
seekerdeep/task-10-microsoft-Phi-3.5-mini-instruct
seekerdeep
2025-06-01T00:57:14Z
28
0
peft
[ "peft", "safetensors", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:adapter:microsoft/Phi-3.5-mini-instruct", "license:other", "region:us" ]
null
2025-05-29T12:43:53Z
--- library_name: peft license: other base_model: microsoft/Phi-3.5-mini-instruct --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters ### Training results ### Framework versions - PEFT 0.12.0 - Transformers 4.48.3 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
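The card above leaves usage unspecified; here is a minimal sketch (untested against this particular adapter) for attaching it to its base model with PEFT.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3.5-mini-instruct"
adapter_id = "seekerdeep/task-10-microsoft-Phi-3.5-mini-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```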
bruhzair/prototype0.4x45
bruhzair
2025-06-01T00:56:48Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T00:28:00Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # prototype-0.4x45 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4 as a base. ### Models Merged The following models were included in the merge: * /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213 * /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5 * /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90 ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90 parameters: select_topk: 0.2 - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213 parameters: select_topk: 0.1 - model: /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5 parameters: select_topk: 0.75 - model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4 parameters: select_topk: 0.55 base_model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4 merge_method: sce tokenizer: source: union chat_template: llama3 int8_mask: true dtype: bfloat16 ```
AmberYifan/Llama-3.1-8B-sft-peers-pool-KTO
AmberYifan
2025-06-01T00:56:46Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "kto", "conversational", "arxiv:2402.01306", "base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T00:21:58Z
--- base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF library_name: transformers model_name: Llama-3.1-8B-sft-peers-pool-KTO tags: - generated_from_trainer - trl - kto licence: license --- # Model Card for Llama-3.1-8B-sft-peers-pool-KTO This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-peers-pool-KTO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/0wtbew0y) This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306). ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.20.3 ## Citations Cite KTO as: ```bibtex @article{ethayarajh2024kto, title = {{KTO: Model Alignment as Prospect Theoretic Optimization}}, author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela}, year = 2024, eprint = {arXiv:2402.01306}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
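Unlike DPO, KTO trains on unpaired examples tagged with a binary desirability label; a minimal TRL sketch follows, with a placeholder dataset name since the card does not publish its training data.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model_id = "AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder: an unpaired dataset with "prompt", "completion", and boolean "label" columns.
dataset = load_dataset("your-org/your-kto-dataset", split="train")

args = KTOConfig(output_dir="Llama-3.1-8B-sft-peers-pool-KTO", beta=0.1)
trainer = KTOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```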
rasyosef/bert-amharic-text-embedding-medium
rasyosef
2025-06-01T00:53:28Z
93
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:40237", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "am", "dataset:rasyosef/amharic-passage-retrieval-dataset", "base_model:rasyosef/bert-medium-amharic", "base_model:finetune:rasyosef/bert-medium-amharic", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-01-04T14:01:25Z
--- language: - am license: apache-2.0 tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:40237 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: rasyosef/bert-medium-amharic widget: - source_sentence: የሞዴል ጥቃቅንና አነስተኛ ኢንተርፕራይዞች ኤግዚቢሽንና ባዛር የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንደሚፈጠር ተገለጸ sentences: - አዲስ አበባ ፣ ነሃሴ 22 ፣ 2012 (ኤፍ ቢ ሲ) ሰኔ 16 ቀን 2010 ዓ.ም በአዲስ አበባ መስቀል አደባባይ ለጠቅላይ ሚኒስትር ዐቢይ አሕመድ በተካሄደ የድጋፍ ሰልፍ ላይ ቦምብ በመወርወር የሽብር ወንጀል የተከሰሱ አምስት ተከሳሾች የጥፋተኝነት ፍርድ ተፈረደባቸው።ተከሳሾቹ ጌቱ ቶሎሳ፣ ብርሃኑ ጃፋር፣ ጥላሁን ጌታቸው፣ ደሳለኝ ተስፋዬ እና ባህሩ ቶላ ሲሆኑ የጥፋተኝነት ፍርዱን የፌደራሉ ከፍተኛ ፍርድ ቤት 1ኛ የወንጀል ችሎት ነው ያስተላለፈው።የዐቃቤ ህግ ክስ እንደሚያመላክተው ተከሳሾቹ ወንጀሉን የፈጸሙት ሰኔ 16 ቀን 2010 ዓ.ም በአዲስ አባባ መስቀል አደባባይ ከረፋዱ አራት ሰአት ላይ በ40 ሜትር ርቀት አካባቢ ለጠቅላይ ሚኒስትር ዐቢይ አሕመድ በተደረገው የድጋፍ ሰልፍ ላይ ቦንብ በመወርወር ነው።ተከሳሾቹ በ1996 ዓ.ም የወጣውን የኢፌዴሪ የወንጀል ህግ አንቀጽ 32/1ሀ እንዲሁም አንቀጽ 38 እና የፀረ ሽብርተኝነት አዋጅ ቁጥር 652/2001 አንቀጽ 3 ስር የተመለከተውን በመተላለፍ፤ በሃገሪቱ ያለውን ለውጥ ተከትሎ በጠቅላይ ሚኒስትር ዐቢይ የሚመራ መንግስት መኖር የለበትም በሚል የራሳቸውን አላማ ለማራመድ በማሰብ መንቀሳቀሳቸውን ዐቃቤ ህግ በክሱ አመላክቷል።በዚህም ከ1ኛ እስከ 4ኛ ያሉ ተከሳሾች ከሱሉሉታ ከተማ መነሻቸውን በማድረግ በስልክ በመደዋወልና በአካል በመገናኘት በድጋፍ ሰልፉ ላይ እንዴት ቦምብ መወርወር እንዳለባቸው ሲዘጋጁ ቆይተዋልም ነው ያለው ዐቃቤ ህግ፡፡በዚህ መልኩ በ1ኛ ተከሳሽ ቤት ቡራዩ በማደር 2ኛ ተከሳሽ በሚያሽከረክረው ተሽከርካሪ 2ኛ ተከሳሽ ያዘጋጀውን ኤፍ1 ቦምብ በመያዝ ከ3 እስከ 5ኛ ያሉ ተከሳሾች ጋር ከፒያሳ ወደ ቴድሮስ አደባባይ በመምጣትና የድጋፍ ቲሸርት ልብስ ገዝተው በመልበስ ተመሳስለው መግባታቸው ተጠቅሷል።በድጋፍ ሰልፉ ላይ ጠቅላይ ሚኒስትር ዐቢይ ንግግር ካደረጉ በኋላ ተከሳሾቹ በ40 ሜትር ርቀት ላይ ቦምብ የወረወሩ ሲሆን በዚህም የሁለት ሰዎች ህይወት ሲያልፍ ከ163 በላይ ሰዎች ላይ ደግሞ ከከባድ እስከ ቀላል የአካል ጉዳት እንደደረሰባቸውም ዐቃቤ ህግ አስረድቷል፡፡የዐቃቤ ህግን የሰነድና የሰው ምስክር እንዲሁም የተከሳሾችን መከላከያ የመረመረው ፍርድ ቤቱ ተከሳሾቹን በተከሰሱበት ወንጀል ጥፋተኛ ብሏቸዋል።በተከሳሾቹ ላይ የቅጣት ውሳኔ ለመስጠትም ለጥቅምት 17 ቀን 2013 ዓ.ም ተለዋጭ ቀጠሮ ሰጥቷል።እስከ ጥቅምት 17 ድረስ ግን የቅጣት ማቅለያዎችን ማቅረብ እንደሚቻል ትዕዛዝ ሰጥቷል።በታሪክ አዱኛ - 'አዲሱ ገረመው አዲስ አበባ፡- የ2013 በጀት ዓመት የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንደሚፈጥር የፌዴራል የከተሞች የስራ ዕድል ፈጠራና የምግብ ዋስትና ኤጀንሲ አስታወቀ። ከተሳታፊዎች ውስጥ 50 በመቶዎቹ ሴቶች መሆናቸው ተጠቆመ ። ኤጀንሲው ለአዲስ ዘመን ጋዜጣ በላከው መግለጫ እንዳስታወቀው፤ በ2013 በጀት አመት አንደኛው ዙር የሞዴል ጥቃቅንና አነስተኛ ኢንተርፕራይዞች ሀገር አቀፍ ኤግዚቢሽንና ባዛር ‹‹ዘላቂነት ያለው የገበያ ትስስር ለስራ ዕድል ፈጠራና ለኢንተርፕራይዞች ልማት መሰረት ነው ›› በሚል መሪ ቃል ከታህሳስ 22 እስከ ታህሳስ 28 ቀን 2013 ዓ.ም በጀሞ አንድ አደባባይ ትራፊክ መብራት ፊትለፊት ለሰባት ተከታታይ ቀናት የሚካሄድ ይሆናል። የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንዲሚፈጥርም ይጠበቃል። በኤግዚቢሽንና ባዛሩ ላይ ከሁሉም ክልሎችና ከተሞች የተውጣጡ 202 የጥቃቅን እና አነስተኛ ኢንተርፕራይዞች 10 አነስተኛና መካከለኛ ኢንዱስትሪዎች የሚሳተፉ ሲሆን፤ ሴቶች 50 በመቶ እና አካል ጉዳተኛ ሦስት በመቶ በማሳተፍ ምርትና አገልግሎታቸው ከ20ሺ በላይ በሚሆን ተጠቃሚ የህብረተሰብ ክፍል እንዲጎበኝ ይደረጋል ብሏል ። ባዛሩ ከተለያዩ ክልሎችና አካባቢዎች የተሰባሰቡና በልዩ ልዩ ዘርፎች የተሰማሩ ብቁና ተወዳዳሪ ኢንተርፕራይዞችንና አንቀሳቃሾችን የሚያሳትፍ ሲሆን፤ በአንድ ማዕከል በማገናኘት በሚፈጠረው ትውውቅና የልምድ ልውውጥ በመካከላቸው ጤናማ የውድድር ስሜት ለማቀጣጠል እንደሚያስችልም “ኤጀንሲው አመልክቷል ። ባህላዊና ዘመናዊ የጨርቃጨርቅና አልባሳት ምርት ውጤቶች፣ ባህላዊና ዘመናዊ የቆዳ አልባሳትና የቆዳ ምርት ውጤቶች፣ ባህላዊ የዕደ-ጥበባትና ቅርጻ-ቅርጽ ሥራዎችና ውጤቶች፣ የብረታብረት፣ የእንጨት ሥራና የኢንጅነሪንግ ስራዎችና ውጤቶች፣ የአግሮ-ፕሮሰሲንግ ምርቶች እና የከተማ ግብርና ውጤቶች፣ የቴክኖሎጂ ውጤቶችና የፈጠራ ስራዎች፣ ፈሳሽ ሳሙና፣አልኮል፣ሳኒታይዘር፣ የአፍና አፍንጫ መሸፈኛ ጭንብል/ማስኮች/፣ እና ሌሎችም ምርቶች በኤግዚቢሽንና ባዛሩ እንደሚቀርቡ አስታውቋል። የአዲስ አበባ ነጋዴ ሴቶች ማህበር፣ የሴቶች ኢንተርፕርነርሺፕ ልማት ፕሮግራም፣ ኢንተርፕርነርሺፕ ልማት ማዕከል፣ ፋሽን ዲዛይን አሶሴሽን፣ የሴቶች ራስ አገዝ ድርጅት፣ የባህልና ቱሪዝም ሚኒስቴር በዕደ ጥበብ ዘርፍ የተሰማሩ ኢንተርፕራይዞችና ሌሎችም ተሳታፊ ኢንተርፕራይዞች እንደሚሆኑ ጠቁሟል። ሁነቱ የተሞክሮ ልውውጥና የንግድ ልማት ግንዛቤ ከማዳበሩም ባሻገር፤ ኢንተርፕራይዞች ከተጠቃሚው ህብረተሰብ ጋር በሚያደርጉት ግንኙነት ዘላቂ የገበያ ትስስር ለመፍጠር የሚያስችል ምቹ አጋጣሚ ይሆንላቸዋል። ምርቶቻቸውንና አገልግሎታቸውን ለተጠቃሚዎች በቀጥታ በመሸጥም ተጠቃሚ እንደሚሆኑም እጀንሲው አስታውቋል ።አዲስ ዘመን ታህሳስ 22/2013' - የአሜሪካው ሜሪየም ዌብስተር መዝገበ ቃላት እንደ ኦክስፎርድ መዝገበ ቃላት ሁሉ ታዋቂና ዓለም አቀፍ ተቀባይነት ያለው መዝገበ ቃላት ነው።አንዲት ወጣት ጥቁር አሜሪካዊት ታዲያ ለዚህ መዝገበ ቃላት አሳታሚ በጻፈቸው ደብዳቤ ምክንያት መዝገበ ቃላቱ ዘረኝነት ወይም (racism) ለሚለው የእንግሊዝኛ ቃል የትርጉም ፍቺ ማሻሻያ ለማድረግ ወስኗል። - source_sentence: የደኢሕዴን ከፍተኛ አመራሮች 
በሐዋሳ እየመከሩ ነው sentences: - 'የሁለት ዞኖች ከፍተኛ አመራሮች ታግደዋል የደቡብ ኢትዮጵያ ሕዝቦች ዴሞክራሲያዊ ንቅናቄ (ደኢሕዴን) ከፍተኛ አመራሮች ከሐሙስ ሐምሌ 18 እስከ 22 ቀን 2011 ዓ.ም. ድረስ በሐዋሳ እየመከሩ ነው፡፡ ከፍተኛ አመራሮቹ በክልሉ ውስጥ በተከሰተው ወቅታዊ ችግርና በአገራዊ ጉዳዮች ላይ እንደሚወያዩ፣ በተለይ በድርጅቱ ህልውና ላይ እንደሚያተኩሩም ታውቋል፡፡ የደኢሕዴን ሊቀመንበር ወ/ሮ ሙፈሪያት ካሚል በምክክሩ ላይ ባደረጉት ንግግር፣ በአገር ደረጃና በደቡብ ክልል የፖለቲካና የፀጥታ ጉዳዮች ላይ ወጥ አቋም ያለው አመራር አስፈላጊነትን አውስተዋል፡፡ ከዚህ አንፃርም አመራሩ ራሱን በመፈተሽ ለለውጥ ዝግጁ መሆን እንዳለበት አስታውቀዋል፡፡ እንደ ወ/ሮ ሙፈሪያት ማብራሪያ የደኢሕዴን ህልውና መረጋገጥ የሚችለው፣ አመራሩ ከመቼውም ጊዜ በላይ መንቀሳቀስ ሲችል ብቻ እንደሆነ ነው፡፡ አመራሩ ምንም ነገር እንደማይመጣ በመኩራራት ወይም በወቅታዊ ሁኔታዎች በመሥጋት የሚቀጥል ከሆነ ውጤት እንደማይኖር፣ በወቅቱ ተጨባጭ ሁኔታ ላይ በዝርዝር በመወያየት የድርጅቱ ህልውናን ማስቀጠል ላይ ትኩረት መስጠት እንደሚገባ አስረድተዋል፡፡ ይህ በዚህ እንዳለ ደኢሕዴን የሲዳማ ዞን፣ የሐዋሳ ከተማና የሃድያ ዞን ከፍተኛ አመራሮችን ማገዱንና ለወላይታና ለካፋ ዞኖች አመራሮች ደግሞ ማስጠንቀቂያ መስጠቱን አስታውቋል፡፡ ከክልልነት ጥያቄ ጋር በተያያዘ በተለይ በሲዳማ ዞን ወረዳዎችና በሐዋሳ ከተማ በተፈጸሙ ጥቃቶች የበርካቶች ሕይወት ማለፉን፣ የበርካቶች ቤት ንብረት መውደሙን ተከትሎ የደቡብ ክልል በፌዴራል መንግሥት የፀጥታ አካላት ኮማንድ ፖስት ሥር እንዲተዳደሩ መወሰኑ የሚታወስ ሲሆን፣ በዚህም ምክንያት የደኢሕዴን ሥራ አስፈጻሚ ኮሚቴ በሐዋሳ ከተማ ባደረገው ስብሰባ የአመራሮቹን የዕግድ ውሳኔ አሳልፏል፡፡ በዚህ ስብሰባው የክልሉን የፀጥታ ሁኔታ እንደገመገመ የገለጸው የሥራ አስፈጻሚ ኮሚቴው፣ በተፈጠረ የፀጥታ ችግሮች ሳቢያ የሲዳማ ዞንና የሐዋሳ ከተማን፣ እንዲሁም የሃዲያ ዞን ‹‹የፊት አመራሮች›› እንዳገደ አስታውቋል፡፡ በተያያዘም በወላይታና በካፋ ዞኖች እየታዩ ያሉ ሁኔታዎች የሕግ ተጠያቂነትን የሚያስከትሉ ስለሆኑ፣ አመራሩ የሕዝቡን ደኅንነት ለማስጠበቅ እንዲሠራ ሲል አስጠንቅቋል፡፡ በዚህም ሳቢያ የሲዳማ ዞን አስተዳዳሪ አቶ ቃሬ ጫዊቻና የሐዋሳ ከተማ ከንቲባ አቶ ሱካሬ ሹዳ መታገዳቸውን ለማወቅ ተችሏል፡፡ የሥራ አስፈጻሚ ኮሚቴው በሐዋሳና በአካባቢው ሐምሌ 11 ቀን 2011 ዓ.ም. ክልልነትን እናውጃለን በሚል በተፈጸመ ጥቃት የተጎዱ ቤተሰቦችን መልሶ ለማቋቋም እንደሚሠራ በማስታወቅ፣ የጥፋቱ ተሳታፊዎችም ሆኑ አስተባባሪዎች የሕግ ተጠያቂ እንዲሆኑ እሠራለሁ ብሏል፡፡ አሁን ለተከሰተው ጥፋትም ሆነ እየተስተዋለ በሚገኘው ሥርዓተ አልበኝነት ውስጥ የአመራሩ ሚና ከፍተኛ መሆኑን ያመነው የሥራ አስፈጻሚ ኮሚቴው፣ ይኼንን ለማረም ከሥራ አስፈጻሚ እስከ ታችኛው የአመራር ሥርዓት ድረስ ፈትሾ ዕርምጃ እንደሚወስድ ቃል ገብቷል፡፡ ' - 'አዲስ አበባ፣ ጥር 2፣ 2012 (ኤፍ.ቢ.ሲ) በፓኪስታን ደቡብ ምእራብ ኩዌታ ከተማ በመስጊድ ላይ በተፈፀመ የቦብም ጥቃት የሞቱ ሰዎች ቁጥር 15 መድረሱን ፖሊስ አስታወቀ።በአርብ ፀሎት ላይ በነበሩ ሰዎች ላይ በተፈፀመው የቦምብ ጥቃቱ ከሞቱት ሰዎች በተጨማሪም ከ20 በላይ ሰዎች ላይ የተለያየ መጠን ያለው ጉዳት መድረሱንም ነው የገለፀው።በመስጊድ ላይ ለተፈፀመው ጥቃትም በአካባቢው የሚንቀሳቀሰው የአሸባሪው ኢስላሚክ ስቴት (አይ.ኤስ) ቡድን ኃላፊነት መውሰዱ ተነገሯል።በሽብር ጥቃቱ በአፍጋኒስታን የሚንቀሳቀሰው የታሊባን ቡድን አመራሮች ተገድለዋል ቢባልም፤ ታሊባን ግን አመራሮቼ ላይ ጉዳት አልደረሰም ሲል አስተባብሏል።ምንጭ፦ ' - በኢትዮጵያ ፕሪምየር ሊግ ዘጠነኛ ሳምንት መቐለ 70 እንደርታ በሜዳው ሲዳማ ቡናን 3-1 ካሸነፈ በኋላ የሁለቱ ቡድኖች አሰልጣኞች አስተያየታቸውን ሰጥተዋል። ” ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን በመሆኑ ጨዋታው ከባድ ነበር” –  ገ/መድኅን ኃይሌ – መቐለ 70 እንደርታስለ ጨዋታው” ጨዋታው ከባድ ነበር፤ ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን ነው ፤ የያዙት ነጥብም ለዚህ ጨዋታ ጥሩ የስነልቦና ጥንካሬ አስገኝቶላቸዋል። በአንፃሩ እኛ አራት ጨዋታዎች ሳናሸንፍ ነው ወደ ጨዋታው የገባነው። በዚ ምክንያት ጨዋታው አክብዶብን ነበር። በአጠቃላይ ጨዋታውን አሸንፈናል። በቀጣይ ጨዋታዎች ቀስ በቀሰ ወደ አሸናፊነት መጥተን ይህን እናስቀጥላለን። ”“ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም” ዘርዓይ ሙሉ – ሲዳማ ቡና ስለ ጨዋታው ” ከዕረፍት በፊት ከጨዋታ ውጪ ኳሱ በኋላ ተጫዋቾቻችን መረጋጋት አልቻሉም። በጨዋታው አሳፋሪ ዳኝነት ነው ያየሁት። ስለ ጨዋታው ብጠይቀኝ አሳፋሪ እና ሚዛናዊት የሌለው ዳኝነት ነው። የተቆጠርቡን ግቦች እኛ ላይ ጥፋት እየተፈፀሙ የተቆጠሩ ናቸው። ከጨዋታ ውጭ ሆኖም ግብ ይቆጠራል። በቃ ይህንን ነው ያየሁት። ከዚ ውጭ ግን መቐለ ለማሸነፍ የነበረው ተነሳሽነት ጥሩ ነበር። እንደ ቡድን ተንቀሳቅሰዋል እኛም የተሻለ ኳስ ተቆጣጥረን ተጫውተናል። እንዳያችሁት ኳሱን መስርተን ነው የወጣነው ግን በተለያዩ ስህተቶች ግብ ሲቆጠርብን የተጫዋቾቻችን ብቃት አወረደው። የምንፈልገው እንቅስቃሴ ያላደረግነው በዳኞች ምክንያት ነው። ገና በሰባተኛ ደቂቃ ነው የተጀመረው ይሄ ነገር። ጨዋታው ጥሩ ሆኖ ሳለ ሚዛኑ የጠበቀ ዳኝነት አላየንም። ዳኝነቱ ልክ ካልሆነ የጨዋታው እንቅስቃሴ እንዳለ ይበላሻል ይሄ ሁሉ ደጋፊ የገባው ጥሩ ጨዋታ ለማየት ነው። ለምንድነው ተጫዋቾች ሮጠው ዳኛ ላይ የሚሄዱት። በተደጋጋሚ ስህተት ይሰራ ነበር። እኛ ተጫዋቾቻችንን ብናረጋጋም የሚያደርጉት ስህተት ለሌላ ነገር የሚዳርግ ነበር። ዳኞቹ አቅም አንሷቸው ነው ብዬ አላስብም፤ ሆን ተብሎ የተደረገ ነገር ነው። ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም። መቐለን ግን እንደ ቡድን ጥሩ ነው እንኳን ደስ አላቹ ማለት እፈልጋለው። ”ስለ ስታድየሙ ድባብ” ደጋፊው የሚደነቅ ደጋፊ ነው። በስርዓት ነው ቡድኑን የሚደግፈው። ምንም ነገር ቢፈጠር ቡድኑን ነበር ሲደግፍ የነበረው። ”ዳኝነት ላይ ስለሰጠው አስተያየት” እኔ አዳላ አላልኩም። ግን ብቃት ማነስ ነው ብዬ አላስብም። እነዚህ ሁሉ ግቦች እስኪቆጠሩ ብቃት ማነስ አይደለም። በአጠቃላይ ዳኝነቱ ሚዘናዊ አልነበረም። ሁሉም ግብ ላይ የዳኛ ተፅዕኖ አለበት፤ በቃ ይሄን ነው የምለው። አንዱን ከጨዋታ ውጪ 
ብለህ አንዱን የምታፀድቅ ከሆነ ስህተት ነው። “ - source_sentence: የከምባታና ጠንባሮ አርሶአደሮች sentences: - በደሴ ማረሚያ ቤት በተደረገ የኮቪድ-19 ምርመራ 13 ሰዎች ቫይረሱ እንዳለባቸው ማረጋገጡን የከተማው ጤና መምሪያ አስታወቀ።የመምሪያው ኃላፊ አቶ አብዱልሃሚድ ይመር በተለይ ለቢቢሲ እንዳስታወቁት 12ቱ የህግ ታራሚዎች ሲሆኑ ሌላኛው ደግሞ የማረሚያ ቤቱ ባልደረባ ናቸው።እንደ አቶ አብዱልሃሚድ ገለጻ ከሆነ ከማረሚያ ቤቱ ጋር በመነጋገርም አዲስ የሚገቡ ታራሚዎች ለ14 ቀናት ለብቻቸው እንዲቆዩ ከማድረግ በተጨማሪ በመጨረሻዎቹ ቀናት ላይ ምርመራ ሲደረግላቸው ቆይቷል።ከሐምሌ 20 በኋላ ማረሚያ ቤቱ የገቡ 46 ታራሚዎች ላይ በተደረገ ምርመራ 10 ሰዎች ኮሮናቫይረስ እንዳለባቸው ለማረጋገጥ ተችሏል።“ታራሚዎቹ ከተለያዩ አካባቢዎች የመጡ ናቸው። ከተለያዩ ከደቡብ ወሎ ወረዳዎች እና ከደሴ ከተማም የተገኙ ናቸው” ብለዋል።በሁለተኛ ዙር 60 ሰዎች ላይ በተደረገ ምርመራ ሦስቱ ቫይረሱ እንዳለባቸው ተረጋግጧል።በሁለተኛው ዙር ቫይረሱ ከተገኘባቸው መካከል በመጀመሪያው ዙር እንዳለባቸው ከታወቁ ሰዎች ጋር ንክኪ የነበራቸው እና አንድ ማረሚያ ቤቱ ባልደረባ ይገኙበታል።የማረሚያ ቤቱን የሕግ ታራሚዎች እና ባልደረባዎችን በሙሉ ለመመርመር መቻሉንም አቶ አብዱልሃሚድ አስታውቀዋል።ቫይረሱ የተገኘባቸው ቦሩ ሜዳ መጀመሪያ ደረጃ ሆስፒታል የተላኩ ሲሆን፤ ተጓዳኝ ህመም ያለበት አንድ ታራሚ ካሳየው የህመም ምልክት ውጭ ሁሉም በጥሩ ሁኔታ ላይ እንደሚገኙ ተናግረዋል።በማረሚያ ቤቱ የቫይረሱ ስርጭት እንዳይስፋፋ አዲስ የሚገቡትን እና ነባር ታራሚዎችን ከመመርመር ባለፈ የግንዛቤ ማስጨበጫ ሥራ፣ የኬሚካል ርጭት፣ ርቀትን ማስጠበቅ እና ንጽህና የማስጠበቅ ሥራ እየተከናወነ ነው ብለዋል።ባለፉት ወራት በአማራ ክልል በተደረገ የኮሮናቫይረስ ምርመራ 83 አሽከርካሪዎች እና ረዳቶቻቸው ቫይረሱ ተገኝቶባቸዋል።በክልሉ ቫይረሱ ከተገኘባቸው ሰዎች መካካል 23 የህክምና ባለሙያዎች እንደሚገኙበትም ከአማራ ህብረተሰብ ጤና ኢንስቲትዩት ያገኘነው መረጃ ያሳያል።በአጠቃላይ በኢትዮጵያ በኮቪድ-19 የተያዙ ሰዎች ቁጥር 25,118 የደረሱ ሲሆን የሟቾች ቁጥር 463 ደርሷል። እንዲሁም አጠቃላይ ከበሽታው ያገገሙ ሰዎች 11,034 ደርሰዋል። - 'በደቡብ ክልል ከፋ ዞን ዴቻ ወረዳ ከ20 ሺህ በላይ የከምባታና ጠምባሮ አርሶአደሮች በማንነታችን ጥቃት ደርሶብናል በማለት እየተፈናቀሉ ናቸው፡፡አርሶአደሮቹ የተፈናቀሉት ከሶስት ሳምንት በፊት በወረዳው ከ30 በላይ ሲቪሎች በታጠቁ ግለሰቦች በአሰቃቂ ሁኔታ መገደላቸውን ተከትሎ ነው ተብሏል፡፡ጉዳያችንን ለክልሉ መንግሥት ብናሳውቅም ችላ ተብለናል ሲሉ አርሶአደቹ ተናግረዋል። አሁን ለችግር መጋለጣቸውንም ለቪኦኤ አስረድተዋል፡፡የከምባታ ጠንባሮ ዞን በበኩሉ የተፈናቀሉ ዜጎች በስቃይ ላይ መሆናቸውን ገልጦ መፍትሔ እየተፈለገ መሆኑን አስታውቋል፡፡ ' -  ባሕር ዳር፡ መስከረም 7/2012 ዓ.ም (አብመድ) በጣልያን ባሕር ዳርቻ ጠባቂዎች ሕይወታቸው የተረፉ 90 ስደተኞችን ማልታ ለመቀበል ተስማማች፡፡በቀጣዩ ሳምንት ደግሞ በአዲስ የስደተኞች መከፋፈያ አሠራር ዘዴ ላይ የአውሮፓ ኅብረት ሊመክር ነው፡፡የማልታ የሕይወት አድን ትብብር ማዕከል በጠየቀው መሠረት ትናንት የጣልያን ባሕር ዳርቻ ጠባቂ ቡድን ስደተኞቹን ታድጓል፡፡ ከሊቢያ የባሕር ክልል ውጭ እየሰመጠች ከነበረች ጀልባ ነው ስደተኞቹን ማትረፍ የተቻለው፡፡ ማልታ በመጀመሪያ ስደተኞቹን ወደ ሀገሯ ለማስገባት ፈቃደኛ አልሆነችም ነበር፡፡ - source_sentence: የአዲስ አበባ ከተማ አስተዳደር የጀመረው ኦዲት ወደ ባለ ኮከብ ሆቴሎችና ኢንዱስትሪዎች ተሸጋገረ sentences: - የኢትዮጵያ እግር ኳስ ፌዴሬሽን ከኢትዮጵያ ብሮድካስቲንግ ኮርፖሬሽን (EBC) ጋር በተፈራረመው የመግባቢያ ሰነድ ስምምነት ዙሪያ ከፕሪሚየር ሊግ ክለቦች ጋር ነገ ከጠዋቱ 4፡00 ጀምሮ በኢንተርኮንትኔንታል ሆቴል ውይይት ያካሂዳል፡፡በውይይቱ ፌዴሬሽኑና EBC የኢትዮጵያ ፕሪሚየር ሊግ ጨዋታዎችን በቀጥታ የተሌቭዥን ስርጭት አማካኝነት በመላ ኢትዮጵያ ተደራሽ ለማድረግ ነሃሴ 6/2007 ዓ.ም የተፈራረሙትን የመግባቢያ ሰነድ አስመልክቶ ስለ ስምምነቱ ፋይዳና ሂደት ገለፃ የሚደረግ ሲሆን ከፕሪሚየር ሊግ ክለቦች ለሚነሱ ጥያቄዎች ማብራሪያ ይሰጣል፡፡ በክለቦች መብትና ተጠቃሚነት ዙሪያም ግልጽ ውይይት ይካሄዳል፡፡ስምምነቱ ይፋ መደረጉንና መፈረሙን ተከትሎ ከተለያዩ በላድርሻ አከላት የተነሱት ጥያቄዎች በተለይም የኢትዮጵያ ቡና ስፖርት ክለብ በደብዳቤ አቋሙን የገለጸበት አግባብ ተቀባይነት እንዳለው ታምኖበታል፡፡ ነገ ከጠዋቱ 4፡00 ጀምሮ የሚካሄደውና የፕሪሚየር ሊግ ክለቦች ፕሬዝዳንቶች እና ስራ አስኪያጆች የሚሳተፉበት የውይይት መድረክ ስምምነቱን አስመልክቶ ሊነሱ የሚችሉትን ጥያቄዎች በመቀበል የማስተካካያ ርምጃ ለመውሰድ የሚያስችል በመሆኑ ሁሉም ክለቦች የውይይቱ ተሳታፊ እንዲሆኑ ፌዴሬሽኑ ጥሪውን አስተላልፋል፡፡ፌዴሬሽኑና ኢቢሲ አለም አቀፍና የሀገር ውስጥ ጨዋታዎችን በቴሌቭዥን የቀጥታ ስርጭት ለማስተላለፍ የተፈራረሙት የመግባቢያ ሰነድ ዓላማዎች በዋነኝነት የወጣቱን ትውልድ የእግር ኳስ ስፖርት ተነሳሽነት ማሳደግ፣ የብሔራዊ እና አገር ውስጥ ውድድሮችን የቀጥታ ስርጭት ተደራሽነት ማረጋገጥ እንዲሁም ለእግር ኳስ ስፖርት ዘላቂና አስተማማኝ እድገት አመቺ ሁኔታዎችን በመፍጠር ላይ እንደሚመሰረት መገለጹ ይታወሳል፡፡ማስታወሻ፡- በውይይቱ የሚሳተፉት የፌዴሬሽኑ የስራ ሃላፊዎችና የክለቦች ተወካዮች ብቻ ናቸው፡፡ - ለመጀመርያ ጊዜ በተሟላ ደረጃ መሬትና መሬት ነክ ይዞታዎችን ኦዲት በማድረግ ላይ የሚገኘው የአዲስ አበባ ከተማ አስተዳደር፣ የኦዲት አድማሱን በማስፋት በባለ ኮከብ ሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራ ሊያካሂድ ነው፡፡ የአዲስ አበባ ከተማ አስተዳደር ከ1995 ዓ.ም. ጀምሮ እስከ ኅዳር 2004 ዓ.ም. 
የከተማ ቦታ በሊዝ ስለመያዝ የሚደነግገው እስኪወጣበት ጊዜ ድረስ፣ ላለፉት 15 ዓመታት በኢንዱስትሪ ዞኖችና በተናጠል ለሚካሄዱ ፋብሪካዎች በርካታ ቦታዎችን ሰጥቷል፡፡ ከዚህ በተጨማሪ ለበርካታ ሆቴሎች ግንባታ የሚሆን ሰፋፊ ቦታዎችንም እንዲሁ አቅርቧል፡፡ነገር ግን አስተዳደሩ በሰጣቸው ቦታዎች ላይ ስለተከናወነው ልማትም ሆነ፣ የተከናወኑት ግንባታዎች በውላቸው መሠረት ስለመካሄዳቸው በትክክል የተጠናቀረ መረጃ እንደሌለ ይገልጻል፡፡በከተማው ውስጥ የሚገኙ አምራች ኢንዱስትሪዎችንና ባለ ኮከብ ሆቴሎችን ቁጥር ለማወቅ፣ በአግባቡ ሥራዎችን ባላካሄዱት ላይ ደግሞ የማስተካከያ ዕርምጃ ለመውሰድ ኦዲት እንደሚከናወን ለማወቅ ተችሏል፡፡የአዲስ አበባ ከተማ አስተዳደር ምክትል ከንቲባ ታከለ ኡማ (ኢንጂነር) ለሪፖርተር፣ ‹‹እስካሁን ግንባታ ሳይካሄድባቸው ለዓመታት ታጥረው የቆዩ ከአራት ሚሊዮን ካሬ ሜትር በላይ ቦታ መልሰን ወስደናል፤›› ብለዋል፡፡‹‹‹ይህ ትልቅ ሥራ ነው፤›› በማለት ምክትል ከንቲባው ገልጸው፣ በቀጣይ ደግሞ በሆቴሎች፣ በኢንዱስትሪዎች፣ በድንጋይ ማምረቻ ካባዎች፣ እንዲሁም በመኖሪያ ቤቶች ላይ ኦዲት ተካሂዶ ዕርምጃ ይወሰዳል ሲሉ ገልጸዋል፡፡ ‹‹ሥራው ውስብስብ በመሆኑ የሚካሄደው ኦዲት አንዴ ብቻ ሳይሆን ሦስት፣ አራት ጊዜ ይታያል፡፡ ካስፈለገም የማረጋገጡን ሥራ ማዕከላዊ ስታትስቲክስ ኤጀንሲ ሊያከናውን ይችላል፤›› በማለት ምክትል ከንቲባው አስረድተዋል፡፡በአዲስ አበባ ከተማ አምራች ኢንዱስትሪዎች፣ ሆቴሎች፣ ለድንጋይ ማውጪያ የተሰጡ ቦታዎች ያሉበት ወቅታዊ ሁኔታ በትክክል አይታወቅም፡፡ ለእነዚህ ዘርፎች የቀረበው ቦታ ለታለመለት ዓላማ በትክክል ስለመዋሉ፣ ከዘርፉ የሚመነጨው ኢኮኖሚም ሆነ የተፈጠረው የሥራ ዕድል ሽፋን እምብዛም አይታወቅም፡፡ይህንን ሥራ በተሻለ ደረጃ ለመሥራት የከተማው ኢንዱስትሪ ቢሮ ከማዕከላዊ ስታትስቲክስ ኤጀንሲ ጋር በጋራ ለመሥራትም መስማማታቸው ታውቋል፡፡ የማዕከላዊ ስታትስቲክስ ኤጀንሲ የቢዝነስ ስታትስቲክስ ዳይሬክተር አቶ ዘለዓለም ኃይለ ጊዮርጊስ፣ በሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራውን ለማካሄድ ሙሉ ዝግጅት እየተደረገ መሆኑን ለሪፖርተር ገልጸው፣ በጉዳዩ ላይ ዝርዝር መረጃ ከመስጠት ተቆጥበዋል፡፡   - ጠቅላይ ሚኒስትር ዶክተር አብይ አህመድ ለተለያዩ የመንግስት የስራ ሀላፊዎች ሹመት መስጠታቸውን የጠቅላይ ሚኒስቴር ጽህፈት ቤት አስታውቋል።በጠቅላይ ሚኒስትር ጽህፈት ቤት መግለጫ መሰረት፦ 1.ዶክተር አምባቸው መኮንን፦ የጠቅላይ ሚንስትሩ የመሰረተ ልማትና የከተማ ልማት አማካሪ ሚንስትር 2.አቶ ገብረእግዚአብሔር አርአያ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ 3.አቶ ጫኔ ሽመካ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ 4.አቶ ጫላ ለሚ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ5.አቶ ተስፋሁን ጎበዛይ፦ የጠቅላይ ሚንስትሩ የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ6.ብርጋዴል ጄኔራል አህመድ ሀምዛ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን ዋና ዳይሬክተር7.አቶ ሞቱማ መቃሳ፦ የጠቅላይ ሚንስትሩ የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ8.አቶ ከበደ ይማም፦ የአካባቢ ጥበቃ ደንና የአየር ንብረት ለውጥ ኮሚሽን ምክትል ኮሚሽነር9.አቶ አዘዘው ጫኔ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር10.አቶ አወል አብዲ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን ምክትል ዋና ዳይሬክተር11.አቶ ሙሉጌታ በየነ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር12. ዶክተር ፅጌረዳ ክፍሌ፦ የብሔራዊ ኤች. አይ. 
ቪ/ኤድስ መከላከያና መቆጣጠሪያ ጽ/ቤት ዋና ዳይሬክተር13.ወይዘሮ ያምሮት አንዱዓለም፦ የአርማወር ሐሰን የምርምር ኢንስቲትዩት ምክትል ዋና ዳይሬክተር14.ዶክተር ሚዛን ኪሮስ፦ የኢትዮጵያ ጤና መድህን ኤጀንሲ ዋና ዳይሬክተር15.አቶ ሀሚድ ከኒሶ፦ የሰነዶች ማረጋገጫና ምዝገባ ኤጀንሲ ምክትል ዋና ዳይሬክተር16.አቶ ከበደ ጫኔ፦ የስደተኞችና ከስደት ተመላሾች ጉዳይ ኤጀንሲ ዋና ዳይሬክተር17.ወይዘሮ ምስራቅ ማሞ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር ሆነው ተሹመዋል። - source_sentence: በቁጥጥር ስር የዋሉ የህወሓት ታጣቂዎች ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ ከመሆን እንዲቆጠቡ አስገነዘቡ sentences: - 'የፕሬዚዳንት ዶናልድ ትራምፕ ተቺዎች እንደሚሉት፤ ፕሬዚዳንቱ ለዘመናት የአሜሪካ ወዳጆች በሆኑት ኢትዮጵያ እና ግብፅ መካከል ታላቁ የሕዳሴ ግድብን በተመለከተ ውጥረት ቀስቅሰዋል።ይህም በአሜሪካ እና በአፍሪካ የዲፕሎማሲ ታሪክ ትልቁ የትራምፕ ስህተት ነው ይላሉ።ትራምፕ ከቀናት በፊት ግብፅ "ግድቡን ልታፈነዳው ትችላለች" ማለታቸው ይታወሳል። ጥር ላይ ፕሬዚዳንቱ "ስምምነት መፍጠር ችያለሁ፤ ከባድ ጦርነትም አስቁሜያለሁ" ብለው የኖቤል የሰላም ሽልማት እንደሚገባቸው መናገራቸው ይታወሳል።ነገር ግን ተሸላሚ የሆኑት ጠቅላይ ሚንስትር ዐብይ አሕመድ ነበሩ ።ትራምፕ የኖቤል የሰላም ሽልማት እንደሚገባቸው ሲናገሩ ጉዳዩን ግልፅ ባያደርጉትም፤ በግብፁ ፕሬዘዳንት አብዱልፈታህ አል-ሲሲ ጥሪ መሠረት በኢትዮጵያ እና በግብፅ መካከል ጣልቃ ስለመግባታቸው እየተናገሩ እንደነበረ ይታመናል።ትራምፕ በአንድ ወቅት አብዱልፈታህ አል-ሲሲን "የኔ ምርጡ አምባገነን" ማለታቸው አይዘነጋም።ግብፅ ታላቁ ሕዳሴ ግድብ "ለደህንነቴ ያሰጋኛል" ትላለች። ሱዳንም የግብፅን ያህል ባይሆንም ስጋቱን ትጋራለች። በሌላ በኩል ኢትዮጵያ የኃይል አመንጪውን ግድብ አስፈላጊነት አስረግጣ ትገልጻለች።ኬንያ የሚገኘው የአፍሪካ ቀንድ የጸጥታ ጉዳይ ተንታኝ ረሺድ አብዲ እንደሚለው፤ በግድቡ ዙሪያ ኢትዮጵያ እና ግብፅን ለማደራደር አሜሪካ ጣልቃ መግባቷ የሁለቱን አገሮች ውጥረት አባብሷል።"ኢትዮጵያ በግድቡ አቅራቢያ የጸጥታ ኃይሏን እያጠናከረች ነው። ቤንሻንጉል ጉሙዝ ክልልን ከበረራ ውጪ ማድረጓ አንዱ ማሳያ ነው። በግድቡ ዙሪያ በረራ የሚያግድ መሣሪያም ተገጥሟል። ግብፅ የወታደራዊ ቅኝት በረራ ልታደርግ እንደምትችል ከመስጋት የመነጨ ሊሆን ይችላል" ይላል።ተንታኙ እንደሚናገረው፤ ትራምፕ ዓለም አቀፍ ዲፕሎማሲ እንዴት እንደሚሠራ የሚገነዘቡ አይመስልም።"በንግዱ ዓለም እንደሚደረገው ስምምነት ላይ መድረስ ይቻላል የሚል የተዛባ አመለካከት አላቸው። የውጪ ጉዳይ መያዝ ያለበትን ጉዳይ ግምዣ ቤት ድርድሩን እንዲመራ ያደረጉትም ለዚህ ነው። ከመነሻውም መጥፎ የነበረውን ሁኔታም አባብሶታል" ሲልም ረሺድ ያስረዳል።ኢትዮጵያ ከግብፅ እና ከሱዳን ጋር ያለው ድርድር ሳይቋጭ ግድቡን ለመሙላት በመወሰኗ አሜሪካ የ100 ሚሊዮን ዶላር እርዳታ ማጠፏ ተዘግቧል።ረሺድ "ኢትዮጵያ አሜሪካ እንደከዳቻት ይሰማታል። ብዙ ኢትዮጵያውያን ትራምፕን የጥላቻ ምልክት አድርገውታል" በማለት ሁኔታውን ይገልጻል።የዴሞክራት እጩው ጆ ባይደን እንዲያሸንፉም የበርካታ ኢትዮጵያውያን ምኞት ነው።አሜሪካ የሚገኘው ሴንተር ፎር ግሎባል ዴቨሎፕመንት ውስጥ የፖሊሲ አጥኚ ደብሊው ጉዬ ሙር እንደሚሉት፤ የትራምፕ አስተዳደር እስራኤልና የአረብ ሊግ አገራት መካከል ሰላም መፍጠር ስለሚፈልግ ከግብፅ ጎን መቆሙ የሚጠበቅ ነው።ግብፅ ከእስራኤል ጋር ዘመናት ያስቆጠረ ዲፕሎማሲያዊ ትስስር አላት። ትራምፕ የአረብ ሊግ አገራት ለእስራኤል እውቅና እንዲሰጡ ጥረት እያደረጉ ስለሆነ አብዱልፈታህ አል-ሲሲን ማስቀየም አይፈልጉም።ሙር እንደሚናገሩት፤ የትራምፕ አስተዳደር በግድቡ ዙርያ ለግብፅ የወገነውም በዚህ ምክንያት ነው።ትራምፕ ሱዳንን በተመለከተ የደረሱበት ውሳኔ የአረቡን አገራት ከእስራኤል ጋር ለማስስማት የሚያደርጉት ጥረት አንድ አካል ነው።ሱዳን ከእስራኤል ጋር ስምምነት ለማድረግ ወስናለች።በእርግጥ የአገሪቱ ተጠባባቂ የውጪ ጉዳይ ሚንስትር ውሳኔው ገና በሕግ አውጪ መጽደቅ እንዳለበት ቢናገሩም፤ ሱዳን እንደ ጎርጎሮሳውያኑ 1967 ላይ የአረብ ሊግ አገራት ውይይት ማስተናገዷ መዘንጋት የለበትም። በውይይቱ "ከእስራኤል ጋር መቼም ሰላም አይፈጠርም። መቼም ቢሆን ለእስራኤል እውቅና አይሰጥም። ድርድርም አይካሄድም" ተብሎም ነበር።ሱዳን ከእስራኤል ጋር ለመስማማት በመፍቀዷ ትራምፕ ሽብርን ከሚድፉ አገሮች ዝርዝር እንደሚያስወጧት ተናግረዋል። ይህም ለምጣኔ ሀብቷ ማገገም የሚረዳ ድጋፍ እንድታገኝ ያግዛታል።ትራምፕ በድጋሚ ከተመረጡ ኢትዮጵያ ግድቡን በተመለከተ ሱዳን እና ግብፅ ላላቸው ስጋት አንዳች መልስ እንድትሰጥ ጫና እንደሚያደርጉ ይጠበቃል።አጥኚው እንደሚሉት፤ ሱዳን ሽብርን ከሚደግፉ አገሮች ዝርዝር ከወጣች የትራምፕ አስተዳደር በምላሹ የሚጠብቀው ነገር አለ።"ከእስራኤል ጋር ስምምነት የመፍጠር ጉዳይ የሱዳን ማኅበረሰብን የከፋፈለ ነው። መንግሥት የራሱ የጸጥታ ጥያቄዎች እያሉበት ይህን ውሳኔ ማሳለፉ ችግር ሊያስከትል ይችላል" ብለዋል። ትራምፕ አፍሪካን በተመለከተ የሚያራምዱት ፖሊሲ፤ በአሜሪካ እና በቻይና መካከል የሚካሄድ ''አዲሱ ቀዝቃዛ ጦርነት'' ነው ሲል ረሺድ ይገልጸዋል።ለምሳሌ ቻይና ከግዛቷ ውጪ የመጀመሪያውን ወታደራዊ መቀመጫ የከፈተችው በጅቡቲ ነው። ማዕከሉ የሚገኘው አሜሪካ የሶማሊያ ታጣቂዎች ላይ የአየር ጥቃት ለመሰንዘር ያቋቋመችው ማዕከል አቅራቢያ ነው።በቅርቡ የአሜሪካ ተዋጊ ጀቶች ለማረፍ ሲሞክሩ፤ ቻይና የአሜሪካውያን ወታደሮችን እይታ የሚጋርድ መሣሪያ መሞከሯን ረሺድ ያጣቅሳል። "የትራምፕ አስተዳደር ጸረ ቻይና ፖሊስ ያራምዳል" የሚለው ተንታኙ ሁኔታው ለአፍሪካ ቀንድ አስቸጋሪ መሆኑንም ያስረዳል።ቻይና አፍሪካ ውስጥ ያላትን የንግድ የበላይነት ለመቀልበስ፤ የትራምፕ አስተዳደር ''ፕሮስፔሪቲ አፍሪካ ኢን 2018'' የተባለ ፖሊሲ ነድፏል።በአፍሪካ እና በአሜሪካ መካከል የሚካሄደውን ንግድ በእጥፍ የማሳደግ እቅድ አለ። አምና የአሜሪካ መንግሥት የንግድ ተቋሞች አፍሪካ ውስጥ እንዲሠሩ የገንዘብ ድጋፍ የሚሰጥበት አሠራር ዘርግቷል።ሙር እንደሚሉት፤ የአሜሪካ ድርጅቶች ከቻይና ተቋሞች ጋር መወዳደር አልቻልንም ብለው ቅሬታ ስላሰሙ የገንዘብ ድጋፍ ለመስጠት ተወስኗል። "የአይቲ ዘርፍ እንደ ማሳያ ቢወሰድ፤ 70 በመቶ የአፍሪካ ኢንፎርሜሽን ቴክኖሎጂ የተመሠረተው 
በቻይና ድርጅቶች ላይ ነው" ሲሉ ያብራራሉ። የትራምፕ አስተዳደር በ2025 የሚያበቃውን ከ30 በላይ የአፍሪካ አገሮች ተጠቃሚ እንዲሆኑበት ታስቦ በአሜሪካ ለአፍሪካውያን የተሰጠው ከታሪፍና ከቀረጥ ነፃ የገበያ ዕድል (አፍሪካ ግሮዝ ኤንድ ኦፖርቹኒቲ አክት-አጎዋ) የመሰረዝ እቅድ አለው። ለአፍሪካ ምርቶች የአሜሪካን ገበያ ክፍት የሚያደርገው ስምምነት የተፈረመው በቢል ክሊንተን ነበር።አሜሪካ አሁን ላይ ትኩረቷ የሁለትዮሽ የንግድ ስምምነት እንደሆነ ሙር ይናገራሉ። ለምሳሌ ከኬንያ ጋር ንግግር እየተካሄደ ነው።ኬንያ፤ የቻይና ''ቤልት ኤንድ ሮድ ኢኒሽየቲቭ'' አካል እንደሆነች ይታወቃል። ስምምነቱ ቻይናን ከአፍሪካ ጋር በንግድ የሚያስተሳስርና የቻይና ዓለም አቀፍ ተደማጭነት የሚያጎላ እንደሆነ አሜሪካ ታምናለች።ትራምፕ ከኬንያ ጋር በቀጥታ ከተስማሙ በኋላ ተመሳሳይ መንገድ ተጠቅመው ከሌሎች የአፍሪካ አገሮች ጋር የመሥራት ውጥን እንዳላቸው ሙር ይናገራሉ።ይህ የትራምፕ መንገድ፤ ከአፍሪካ ሕብረት የንድግና ኢንዱስትሪ ኮሚሽነር አልበርት ሙቻንጋን ሐሳብ ጋር ይጣረሳል።እሳቸው የአፍሪካ አገራት በተናጠል ሳይሆን በአንድነት ከአሜሪካ ጋር ስምምነት እንዲያደርጉ ይፈልጋሉ። ሙር እንደሚሉት፤ የአሜሪካ ውሳኔ የአፍሪካ ሕብረት የአህጉሪቱን ምጣኔ ሀብት ለማጣመር ከሚያደርገው ጥረት ጋር ይጣረሳል።ሕብረቱ፤ አፍሪካን የዓለም ትልቋ ነጻ የንግድ ቀጠና የማድረግ አላማ አለው።ትራምፕ ግን በጥምረት ከሚሠሩ ኃይሎች ጋር በጋራ ያለመደራደር አዝማሚያ ያሳያሉ ሲሉ አጥኚው ያክላሉ።የትራምፕ ተቀናቃኝ ጆ ባይደን ካሸነፉ የአፍሪካ ፖሊሲያቸው ምን እንደሚሆን እስካሁን አልገለጹም።"የባይደን አስተዳደር በኦባማ ጊዜ ወደነበረው ሂደት ሊመለስ ይችላል" ይላሉ ሙር። ' - አዲስ አበባ፣ ጥር 2፣ 2013(ኤፍ ቢ ሲ) የጋምቤላ ክልል ወጣት የሴራ ፖለቲካ አራማጆችን በዝምታ አይመለከቱም ሲል የክልሉ ብልጽግና ፓርቲ ወጣቶች ሊግ ሰብሳቢ ወጣት ራች ጎች ገለጸ።የክልሉ የብልጽግና ፓርቲ ወጣቶች ሊግ የውይይት መድረክ ትናንት ተካሂዷል።ከአሁን በፊት በነበረው የፖለቲካ ሴራ ወጣቱም ሆነ መላው የክልሉ ህዝብ ተጠቃሚ ሳይሆን ቆይቷል ያለው ሰብሳቢው ይህንን የህዝብ ጥቅም የማያረጋግጥ የፖለቲካ ሴራ አካሄድ የክልሉ ወጣት እንደማይቀበለው ገልጿል።የክልሉ ህዝብ እኩል ተጠቃሚ የመሆን ዕድል ማግኘቱን አስታውሶ፤ “በቀጣይ የሴራ ፖለቲካ አራማጆችን ወጣቱ በዝምታ አይመለከትም” ብሏል።የሊጉ ምክትል ሰብሳቢ ወጣት ኡጁሉ ቢሩ በበኩሉ “ከአሁን በጎጥና በመንደር በመከፋፈል አንድነቱን ለመሸርሽር ሲሰራ ነበር” ብሏል።ህዝቡ ልዩነቶች እንዳማያስፈልጉ በመረዳቱ በክልሉ ሰላም መረጋገጡን ጠቅሶ፤ “በቀጣይ በሚስማሙና በሚያግባቡ ጎዳዮች ዙሪያ እንሰራለን” ሲል ተናግሯል።የመድረኩ ተሳታፊ ወጣቶችም ሀገርን ማልማትና ማሳደግ በሚያስችሉ ጉዳዮች ላይ ትኩረት ማድረግ እንደሚገባ በመግለጽ ሐሳብ አንስተዋል።ለዘንድሮ ምርጫ ሰላማዊ ሂደትና ለተጀመረው የብልጽግና ጉዞ ስኬታማነት የበኩላቸውን አስተዋጽኦ ለማበርከት ዝግጁ መሆናቸውንም አረጋግጠዋል።ከጽንፈኝነትና ከብሄርተኝነት አስተሳሰቦች በመውጣት መንግስት በጀመራቸው የሰላም፣ የዴምክራሲና የልማት ስራዎች በንቃት ለመሳተፍ ዝግጁ እንደሆኑ መግለፃቸውን ኢዜአ ዘግቧል።የክልሉ ብልጽግና ፓርቲ ጽህፈት ቤት ኃላፊ አቶ ላክደር ላክባክ ፤ በሀገሪቱ እየተካሄደ ያለውን ሁለንተናዊ ለውጥና የብልፅግና ጉዞ እውን ለማድረግ ወጣቱ ኃይል የማይተካ  ሚና አለው ብለዋል።ከፌስቡክ ገፃችን በተጨማሪ ወቅታዊ፣ ትኩስ እና የተሟሉ መረጃዎችን ለማግኘት፡-የፋና ድረ ገጽ ይጎብኙ፤ተንቀሳቃሽ ምስሎችን ለማግኘት የፋና ቴሌቪዥን የዩቲዩብ ቻናል ሰብስክራይብ ያድርጉፈጣን መረጃዎችን ለማግኘት ትክክለኛውን የፋና ቴሌግራም ቻናል ይቀላቀሉከዚህ በተጨማሪም በትዊተር ገጻችን ይወዳጁንዘወትር ከእኛ ጋር ስላሉ እናመሰግናለን! 
- አዲስ አበባ ፣ ህዳር 1 ፣ 2013 (ኤፍ ቢ ሲ) ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ መሆን የለባቸውም ሲሉ በቁጥጥር ስር የዋሉ የጽንፈኛው ህወሓት ቡድን ታጣቂዎች ገለጹ።ከአንድ ሳምንት በፊት በትግራይ ክልል በነበረው የመከላከያ ሰራዊት ሰሜን ዕዝ ላይ በህወሓት ቡድን የተፈጸመውን ጥቃት ተከትሎ የሃገር መከላከያ ሠራዊት በጠቅላይ ሚኒስትር ዐቢይ አሕመድ በተሰጠው ሃገርን የማዳን ተልዕኮ ሕግ ለማስከበር የዘመቻ ሥራዎችን እያከናወነ ይገኛል።የሠራዊቱ 5ኛ ሜካናይዝድ ክፍለ ጦር የህወሓትን ታጣቂዎች በቁጥጥር ስር አውሏል።በቁጥጥር ስር የዋሉት ታጣቂዎች የትግራይ ልዩ ኃይልን የተቀላቀሉት ኑሯቸውን አሸንፈው ለማደግ እንጂ ከሃገር መከላከያ ሠራዊት ጋር ለመዋጋት አለመሆኑን ገልጸዋል።ኑሮን ለማሸነፍ በሚል ወደ ልዩ ኃይሉ ቢገቡም የህወሓት የጥፋት ቡድን እኩይ ዓላማ ማስፈጸሚያ ከመሆን ውጪ ያገኙት ነገር አለመኖሩን ነው የተናገሩት።ከሃገር መከላከያ ሠራዊት ጋር መጋጨት ማለት ከኢትዮጵያ ጋር መጋጨት መሆኑንም ገልጸዋል።የትግራይ ልዩ ኃይል እና ወጣትም የህወሓት የጥፋት ቡድን ሰላባ እንዳይሆኑ ከሃገር መከላከያ ሠራዊቱ ጎን መቆም እንዳለባቸው ተናግረዋል።ታጣቂዎቹ በቁጥጥር ስር ከዋሉ በኋላ በሃገር መከላከያ ሠራዊቱ የደረሰባቸው ምንም አይነት ችግር እንደሌለና በአሁኑ ወቅት በጥሩ ሁኔታ ላይ እንደሚገኙም አስረድተዋል።የሃገር መከላከያ ሠራዊት እያከናወነ ባለው ዘመቻ የትግራይ ልዩ ኃይልና ሚሊሻ አባላት በቁጥጥር ስር እየዋሉ መሆኑን ኢዜአ ዘግቧል። datasets: - rasyosef/amharic-passage-retrieval-dataset pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: BERT Amharic Text Embedding Medium results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.6025497651532096 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.7383135763811228 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.7877432341757996 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8425408186088124 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6025497651532096 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.24610452546037426 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.15754864683515993 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08425408186088122 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.6025497651532096 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.7383135763811228 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.7877432341757996 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8425408186088124 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7206896562234348 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6818662953140707 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.6867120260104428 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.592932229926191 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.7342876314023709 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.7843882800268396 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8380675464101991 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.592932229926191 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.24476254380079027 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.15687765600536793 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08380675464101989 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.592932229926191 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.7342876314023709 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.7843882800268396 name: Cosine Recall@5 - type: cosine_recall@10 
value: 0.8380675464101991 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7138208588325037 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6741775037011009 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.6791066888815549 name: Cosine Map@100
---

# BERT Amharic Text Embedding Medium

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [rasyosef/bert-medium-amharic](https://huggingface.co/rasyosef/bert-medium-amharic) on the [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset) dataset. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Training Code

This model was trained as part of our **ACL 2025 Findings** paper: ***Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval***.

- **Models Collection:** https://huggingface.co/collections/rasyosef/amharic-text-embedding-models-679cb55eae1d498e3ac5bdc5
- **Code:** https://github.com/kidist-amde/amharic-ir-benchmarks

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [rasyosef/bert-medium-amharic](https://huggingface.co/rasyosef/bert-medium-amharic) <!-- at revision cbe8e1aeefcd7c9e45dd0742c859aae9b03905f1 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 512 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset)
- **Language:** am
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("rasyosef/bert-amharic-text-embedding-medium")
# Run inference
sentences = [
    "የተደጋገመው የመሬት መንቀጥቀጥና የእሳተ ገሞራ ምልክት በአፋር ክልል",
    "በአክሱም ከተማ የሚገኙ ሙስሊም ሴት ተማሪዎች ከሒጃብ መልበስ ጋር በተያያዘ ውዝግብ ከትምህርት ገበታ ውጭ ሆነው እንደሚገኙ የትግራይ እስልምና ጉዳዮች ምክርቤት ስታወቀ። ይህን ለመፍታት ከክልሉ ትምህርት ቢሮ ጋር ንግግር ላይ መሆኑም የክልሉ እስልምና ጉዳዮች ምክርቤት ለዶቼቬለ ገልጿል።",
    "በማዕከላዊ ኢትዮጵያ ክልል ሃድያ ዞን ጊቤ ወረዳ በሚገኙ 12 ቀበሌዎች መሠረታዊ የመንግሥት አገልግሎት መስጫ ተቋማት በሙሉና በከፊል በመዘጋታቸው መቸገራቸውን ነዋሪዎች አመለከቱ። ከባለፈው ዓመት ጀምሮ የጤና፣ የትምህርት እና የግብር አሰባሰብ ሥራዎች በየአካባቢያቸው እየተከናወኑ አለመሆናቸውንም ለዶቼ ቬለ ተናግረዋል።",
    "የሕዝብ ተወካዮች ምክር ቤት አባል እና የቋሚ ኮሚቴ ሰብሳቢ የነበሩት አቶ ክርስቲያን ታደለ እና የአማራ ክልል ምክር ቤት አባል የሆኑት አቶ ዮሐንስ ቧያለው ከቃሊቲ ወደ ቂሊንጦ ማረሚያ ቤት መዛወራቸውን ጠበቃቸው ተናገሩ።",
    "ከተደጋጋሚ መሬት መንቀጥቀጥ በኋላ አፋር ክልል እሳት ከመሬት ውስጥ ሲፈላ ታይቷል፡፡ ከመሬት ውስጥ እሳትና ጭስ የሚተፋው እንፋሎቱ ዛሬ ማለዳውን 11 ሰዓት ግድም ከከባድ ፍንዳታ በኋላየተስተዋለ መሆኑን የአከባቢው ነዋሪዎች እና ባለስልጣናት ለዶቼ ቬለ ተናግረዋል፡፡ አለት የሚያፈናጥር እሳት ነው የተባለው እንፋሎቱ በክልሉ ጋቢረሱ (ዞን 03) ዱለቻ ወረዳ ሰጋንቶ ቀበሌ መከሰቱን የገለጹት የአከባቢው የአይን እማኞች ከዋናው ፍንዳታ በተጨማሪ በዙሪያው ተጨማሪ ፍንዳታዎች መታየት ቀጥሏል ባይ ናቸው፡፡"
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [5, 512]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [5, 5]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

<details><summary>Click to expand</summary>

### Metrics

#### Information Retrieval

* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 512
  }
  ```

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6025     |
| cosine_accuracy@3   | 0.7383     |
| cosine_accuracy@5   | 0.7877     |
| cosine_accuracy@10  | 0.8425     |
| cosine_precision@1  | 0.6025     |
| cosine_precision@3  | 0.2461     |
| cosine_precision@5  | 0.1575     |
| cosine_precision@10 | 0.0843     |
| cosine_recall@1     | 0.6025     |
| cosine_recall@3     | 0.7383     |
| cosine_recall@5     | 0.7877     |
| cosine_recall@10    | 0.8425     |
| **cosine_ndcg@10**  | **0.7207** |
| cosine_mrr@10       | 0.6819     |
| cosine_map@100      | 0.6867     |

#### Information Retrieval

* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 256
  }
  ```

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.5929     |
| cosine_accuracy@3   | 0.7343     |
| cosine_accuracy@5   | 0.7844     |
| cosine_accuracy@10  | 0.8381     |
| cosine_precision@1  | 0.5929     |
| cosine_precision@3  | 0.2448     |
| cosine_precision@5  | 0.1569     |
| cosine_precision@10 | 0.0838     |
| cosine_recall@1     | 0.5929     |
| cosine_recall@3     | 0.7343     |
| cosine_recall@5     | 0.7844     |
| cosine_recall@10    | 0.8381     |
| **cosine_ndcg@10**  | **0.7138** |
| cosine_mrr@10       | 0.6742     |
| cosine_map@100      | 0.6791     |

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this
model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> </details> ## Training Details <details><summary>Click to expand</summary> ### Training Dataset #### amharic-news-retrieval-dataset * Dataset: [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset) at [3ef7092](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset/tree/3ef7092c24f5494b79b09e0264b73528044c1c03) * Size: 40,237 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 15.12 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 304.71 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | anchor | positive | |:---------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>ሚንስትር ዴኤታ ወይዘሮ አለም-ፀሀይ የአርባ ምንጭ ሆስፒታልና የኮቪድ-19 ሕክምና ማዕከልን ጎበኙ</code> | <code>አዲስ አበባ፣ መስከረም 13፣ 2013 (ኤፍ.ቢ.ሲ) የጤና ሚኒስቴር ሚንስትር ዴኤታ ወይዘሮ አለምፀሀይ ጳውሎስ በደቡብ ክልል ጋሞ ዞን የአርባ ምንጭ ከተማ ሆስፒታል እና ጤና ጣቢያ ጎብኙ፡፡እንዲሁም በኮቪድ-19 የህክምና ማዕከል ተገኝተው ያለውን የስራ እንቅስቃሴ መመልከታቸውም ተገልጸል፡፡ሚኒስትር ዴኤታዋ በጉብኝቱ ወቅት የህክምና ተቋማቱ ለአካባቢ ነዋሪዎች እየሰጡ ያለውን ዘርፈ ብዙ አገልግሎት እና ለኮቪድ 19 ወረርሽኝ የመከላከልና የመቆጣጠር ምላሽ አሠጣጥ የሚበረታታና ውጤታማ እንደሆነ ተናግረዋል፡፡በዚህም ለማዕከሉ ሰራተኞች ምስጋናቸውን አቅርበዋል፡፡የተቋማቱ ስራ ኃላፊዎችም ከሚኒስትር ዴኤታዋ ጋር መወያየታቸው ተሰምቷል፡፡ኃላፊዎቹ አገልግሎታቸውን በተሟላ መንገድ ለመስራት አያስችሉንም ያሏቸውን ጉድለቶች አንስተው ውይይት አድረገውባቸዋል፡፡የህክምና ተቋማቱ ያሉበት የስራ አፈጻጸም የሚበረታታ ቢሆንም ለተሻለ ስራ መነሳትና የጤና አገልግሎቱን ይበልጥ ማሻሻል ያስፈልጋል ሲሉ ሚኒስትር ዴኤታዋ ማሳሰባቸውን ከሚኒስቴሩ ያገኘነው መረጃ ያመለክታል፡፡</code> | | <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠየቁ</code> | <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡የሰላም ሚኒስቴር ከሳይንስና ከፍተኛ ትምህርት ሚኒስቴርና የኢትዮጵያ መምህራን ማህበር ጋር በመተባበር ያዘጋጁት ሀገር አቀፍ መምህራን የሰላም ውይይት መድረክ በአዲስ አበባ እየተካሄደ ነው፡፡በዚህ የውይይት መድረክ ላይ የሰላም ሚኒስትሯ ወይዘሮ ሙፈሪያት ካሚልን ጨምሮ ሌሎች ባለድርሻ  አካላት ተገኝተዋል፡፡ውይይቱ “ሰላምና ሀገር ወዳድ መምህራኖች ፤ ሰላምና ሀገር ወዳድ ተማሪዎችን ያፈራሉ” በሚል መሪ ቃል እየተካሄደ የሚገኝ ሲሆን መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡በውይይቱ ንግግር ያደረጉት የሰላም ሚኒስትር ወይዘሮ ሙፈሪያት ካሚል መምህራን ትውልድን መቅረጽ ካላቸው እድል 
አንፃር ሰላምን በመስበክ በኩል ከፍተኛ አስተዋጽኦ ሊያበርክቱ ይገባል ብለዋል፡፡ሀገራዊ ግንባታ ትምህርትና የተሟላ ስብዕና የሚጠይቅ በመሆኑም ለማህበረሰብ ስብዕናና የበለጸገ ትውልድን በመፍጠር ረገድ የመምህራን ሚና ክፍተኛ መሆኑንም ተናግረዋል።ትምህርት ቤቶች የሰላም ማዕድ ይሆኑ ዘንድም መምህራን እያከናዎኑት ያለውን ትውልድን የመቅረጽ ተግባር አጠናክረው መቀጠል እንዳለባቸውም ወይዘሮ ሙፈሪያት አሳስበዋል፡፡     በውይይቱ ላይ አስተያየት የሰጡት መምህራን በበኩላቸው ሰላም ሁሉንም የሚመለከት ጉዳይ በመሆኑ ሰላምን በመስበክና በማረጋገጥ ረገድ ከመንግስት ጋር በመሆን የሚጠበቅባቸውን ኃላፊነት እንደሚወጡ ገልጸዋል፡፡በተለይም የስነ ዜጋ፣ ስነ ምግባርና የታሪክ ትምህርት መምህራን ለተማሪዎች በሚያቀርቡት ትምህርት ላይ ሚዛናዊና ኃላፊነት በተሞላበት መንገድ ማቅረብ እንዳለባቸውም ጠቁመዋል፡፡  መምህሩ በስነ ምግባር አርዓያ በመሆን ሰላምና ግብ...</code> | | <code>የኢትዮጵያ እና ማሊ ከ17 አመት በታች ብሄራዊ ቡድኖች ጨዋታ እሁድ ይካሄዳል</code> | <code>በአዲስ አበባ ስታድየም እየተዘጋጀ የሚገኘው ብሄራዊ ቡድኑ በዛሬው የልምምድ መርሃ ግብር በእሁዱ ጨዋታ ላይ ቋሚ ተሰላፊዎች ይሆናሉ ተብለው የሚገመቱትን በመለየት የቅንጅትና ከርቀት አክርሮ የመምታት ልምምዶችን አከናውኗል፡፡ባለፉት ሶስት ቀናት በመጠነኛ ጉዳት በልምምድ ወቅት አቋርጠው ሲወጡ የነበሩት ሳሙኤል ተስፋዬ እና አቡበከር ነስሩ በዛሬው ልምምድ ከቡድኑ ጋር ሙሉ ልምምድ የሰሩ ሲሆን ሁሉም ተጨዋቾች በሙሉ ጤንነት ላይ ይገኛሉ፡፡ከ17 አመት ቡድናችን እሁድ ዕለት ከአፍሮ ፅዮን ጋር ባደረጉት የአቋም መፈተሻ ጨዋታ ላይ ከአፍሮፅዮን በኩል መልካም እንቅስቃሴ ያሳዩ 6 ተጨዋቾች ጥሪ ቀርቦላቸው በዛሬው ልምምድ ላይ ተገኝተው ከቡድኑ ጋር ልምምድ ያደረጉ ቢሆንም አሳማኝ እንቅስቃሴ ባለማሳየታቸው እንዲመለሱ ተደርጓል፡፡ቀይ ቀበሮዎቹ በእሁዱ ጨዋታ በባማኮ የደረሰባቸውን የ2-0 ሽንፈት ቀልብሰው ወደ ማዳጋስካር የአፍሪካ ከ17 አመት በታች ዋንጫ ለማምራት በከፍተኛ ተነሳሽነት እና ፍላጎት ዝግጅታቸውን በማከናወን ላይ እንደሚገኙ ለመታዘብ ችለናል፡፡በኢትዮጵያ እና ማሊ መካከል የሚደረገው ጨዋታ እሁድ መስከረም 22 ቀን 2009 በአዲስ አበባ ስታድየም 10:00 ላይ የሚካሄድ ሲሆን ጨዋታው የሚካሄድበት የአዲስ አበባ ስታድየም ሜዳን ምቹ ለማድረግ የሚያስችሉ ስራዎች እየተከናወኑ ይገኛሉ፡፡የእሁዱ ተጋጣሚያችን የማሊ ከ17 አመት በታች ብሄራዊ ቡድን አርብ አዲስ አበባ ይገባል፡፡ ጨዋታውን የሚመሩት አራቱም ዳኞች ከኒጀር ፤ ኮሚሽነሩ ደግሞ ከዩጋንዳ እንደተመደቡም ታውቋል፡፡</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 512, 256 ], "matryoshka_weights": [ 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `num_train_epochs`: 5 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `fp16`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - 
`dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

</details>

### Training Logs

| Epoch   | Step     | Training Loss | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:----------------------:|:----------------------:|
| -1      | -1       | -             | 0.1552                 | 0.1178                 |
| 1.0     | 315      | 1.6998        | 0.6530                 | 0.6430                 |
| 2.0     | 630      | 0.436         | 0.6974                 | 0.6850                 |
| 3.0     | 945      | 0.2707        | 0.7114                 | 0.7038                 |
| 4.0     | 1260     | 0.2069        | 0.7202                 | 0.7126                 |
| **5.0** | **1575** | **0.1792**    | **0.7207**             | **0.7138**             |

* The bold row denotes the saved checkpoint.

### Framework Versions

- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citation

```bibtex
@inproceedings{mekonnen2025amharic,
  title     = {Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval},
  author    = {Kidist Amde Mekonnen and Yosef Worku Alemneh and Maarten de Rijke},
  booktitle = {Findings of ACL},
  year      = {2025}
}
```

<!-- ## Glossary

*Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
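Because the model was trained with `MatryoshkaLoss` at output dimensions 512 and 256 (see the loss configuration above), its embeddings can be truncated to 256 dimensions with only a small drop in retrieval quality (NDCG@10 of 0.7138 vs. 0.7207 in the evaluation tables). A minimal sketch, assuming a sentence-transformers release recent enough to support the `truncate_dim` argument:

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 Matryoshka dimensions of every embedding
model = SentenceTransformer(
    "rasyosef/bert-amharic-text-embedding-medium",
    truncate_dim=256,
)

embeddings = model.encode(["የተደጋገመው የመሬት መንቀጥቀጥና የእሳተ ገሞራ ምልክት በአፋር ክልል"])
print(embeddings.shape)  # (1, 256)
```

Truncated embeddings halve the index size and speed up similarity search, which is often a worthwhile trade for the small accuracy loss shown above.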
BootesVoid/cmbcu7duc01sh10ozl8czwyfv_cmbcwu49501z910oz0cn9817r
BootesVoid
2025-06-01T00:52:36Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-01T00:52:35Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: SUETA228
---

# Cmbcu7Duc01Sh10Ozl8Czwyfv_Cmbcwu49501Z910Oz0Cn9817R

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `SUETA228` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "SUETA228",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbcu7duc01sh10ozl8czwyfv_cmbcwu49501z910oz0cn9817r/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbcu7duc01sh10ozl8czwyfv_cmbcwu49501z910oz0cn9817r', weight_name='lora.safetensors')
image = pipeline('SUETA228').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/BootesVoid/cmbcu7duc01sh10ozl8czwyfv_cmbcwu49501z910oz0cn9817r/discussions) to add images that show off what you’ve made with this LoRA.
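As a follow-up to the diffusers snippet above, the loaded LoRA can optionally be fused into the base weights to avoid per-step adapter overhead. A minimal sketch using the standard diffusers LoRA API; the `lora_scale` value and the output filename are illustrative:

```py
# Bake the loaded LoRA into the base weights (call after load_lora_weights)
pipeline.fuse_lora(lora_scale=1.0)

image = pipeline('SUETA228').images[0]
image.save("sueta228_fused.png")  # illustrative output path
```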
Rustamshry/Qwen3-0.6B-turkish-math-reasoning-80K
Rustamshry
2025-06-01T00:49:29Z
0
1
peft
[ "peft", "safetensors", "math", "question-answering", "tr", "dataset:ituperceptron/turkish-math-186k", "base_model:unsloth/Qwen3-0.6B", "base_model:adapter:unsloth/Qwen3-0.6B", "license:mit", "region:us" ]
question-answering
2025-06-01T00:35:37Z
---
base_model: unsloth/Qwen3-0.6B
library_name: peft
license: mit
datasets:
- ituperceptron/turkish-math-186k
language:
- tr
pipeline_tag: question-answering
tags:
- math
---

# Model Card for Qwen3-0.6B-turkish-math-reasoning-80K

This model was fine-tuned on 80,000 Turkish math problems, targeting better understanding and generation of mathematically structured responses in Turkish. The dataset covers arithmetic, algebra, word problems, and other foundational math skills, allowing the model to serve as a multilingual math tutor or reasoning engine in Turkish.

## Model Details

### Model Description

- **Language(s) (NLP):** Turkish
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen3-0.6B
- **Domain:** Mathematical Reasoning

## Uses

### Direct Use

- 🧮 **Math problem solving in Turkish:** can assist users in understanding and solving elementary to intermediate math problems written in Turkish.
- 📚 **Educational tools and tutoring systems:** suitable for integration into digital tutors, math practice apps, or classroom AI assistants for Turkish-speaking students.
- 💬 **Multilingual reasoning research:** can be used to evaluate Turkish-language mathematical reasoning tasks in LLM benchmarks.

## Bias, Risks, and Limitations

- 🌐 **Language bias:** performance is limited to Turkish; multilingual or code-mixed input may confuse the model.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel

login(token="")  # paste your Hugging Face access token here

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B")

base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-0.6B",
    device_map={"": 0},
    token="",
)

model = PeftModel.from_pretrained(base_model, "Rustamshry/Qwen3-0.6B-turkish-math-reasoning-80K")

# Raw string so the LaTeX backslashes are passed through unchanged
question = r"""
Problem 2. $a, b$ iki farklı gerçel sayı ve $c$ öyle bir pozitif gerçel sayı olsun ki
$$
a^{4}-2019 a=b^{4}-2019 b=c.
$$
$-\sqrt{c}<a b<0$ olduğunu kanıtlayın.
"""

messages = [
    {"role": "user", "content": question}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=3000,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```

### Training Data

The dataset [ituperceptron/turkish-math-186k](https://huggingface.co/datasets/ituperceptron/turkish-math-186k) is a Turkish-language collection designed for training and evaluating language models on mathematical reasoning tasks. It comprises approximately 186,000 entries, each containing structured fields such as instruction, input, and output. The dataset is available in Parquet format and is intended for text generation tasks, particularly focusing on mathematical problem-solving in Turkish.

### Framework versions

- PEFT 0.14.0
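For serving without PEFT at inference time, the adapter can be merged into the base weights first. A minimal sketch continuing from the snippet above, using PEFT's standard `merge_and_unload` API; the output directory name is illustrative:

```python
# Merge the LoRA adapter into the base model and save a standalone checkpoint
merged_model = model.merge_and_unload()
merged_model.save_pretrained("qwen3-0.6b-turkish-math-merged")
tokenizer.save_pretrained("qwen3-0.6b-turkish-math-merged")
```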
Vardis/Whisper-LoRA-Greek-Medium2
Vardis
2025-06-01T00:48:54Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-01T00:43:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Disty0/sd3_vae_anime_ft
Disty0
2025-06-01T00:46:18Z
3
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "dataset:Disty0/danbooru_curated-jxl_lossless_4mp", "base_model:stabilityai/stable-diffusion-3-medium-diffusers", "base_model:finetune:stabilityai/stable-diffusion-3-medium-diffusers", "region:us" ]
text-to-image
2025-05-30T20:23:53Z
---
datasets:
- Disty0/danbooru_curated-jxl_lossless_4mp
base_model:
- stabilityai/stable-diffusion-3.5-medium
- stabilityai/stable-diffusion-3-medium-diffusers
pipeline_tag: text-to-image
library_name: diffusers
---

This VAE was finetuned on PNG-only anime illustrations for 512 steps with MAE loss, using fp32 weights with fp16 mixed precision, a learning rate of 4e-6, and an effective batch size of 16.

The training was a test of my VAE decoder training code, but the 512-step model turned out better than I expected: it fixes the color-shifting issues of the original SD3 VAE quite well. I stopped working with SD3 after a while, but decided to release this VAE finetune rather than delete it.

Original Image:

![orig.png](https://cdn-uploads.huggingface.co/production/uploads/6456af6195082f722d178522/MEGBwJ5wyjS4sGa1otHDW.png)

Original SD3 VAE:

![vae_orig.png](https://cdn-uploads.huggingface.co/production/uploads/6456af6195082f722d178522/_QWnQuYJ7BaJPyaVpb6gq.png)

Anime VAE Finetune:

![vae_ft.png](https://cdn-uploads.huggingface.co/production/uploads/6456af6195082f722d178522/5q7goj2J2TjziPP2iCVT_.png)
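The card ships no usage code, so here is a minimal sketch of how a finetuned SD3 VAE is typically swapped into a diffusers pipeline. It assumes this repository stores standard diffusers `AutoencoderKL` weights and that you have access to the SD3 base checkpoint; the prompt and output filename are illustrative:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusion3Pipeline

# Load the anime-finetuned VAE (assumes standard AutoencoderKL weights in this repo)
vae = AutoencoderKL.from_pretrained("Disty0/sd3_vae_anime_ft", torch_dtype=torch.float16)

# Swap it in for the original SD3 VAE
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("an anime illustration of a girl under cherry blossoms").images[0]
image.save("output.png")
```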
DavidKalajdzic/dr00ne-gr00t-lora-rank0-vision-unfreezed
DavidKalajdzic
2025-06-01T00:43:32Z
0
0
peft
[ "peft", "safetensors", "gr00t_n1", "arxiv:1910.09700", "base_model:nvidia/GR00T-N1-2B", "base_model:adapter:nvidia/GR00T-N1-2B", "region:us" ]
null
2025-06-01T00:20:40Z
--- base_model: nvidia/GR00T-N1-2B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
LinaSad/mcqa_sciq_lora_bislr5104
LinaSad
2025-06-01T00:42:44Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-01T00:42:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AmberYifan/Qwen2.5-7B-sft-gen-dpo-10k-KTO
AmberYifan
2025-06-01T00:39:40Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "kto", "conversational", "arxiv:2402.01306", "base_model:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T00:19:18Z
---
base_model: AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Qwen2.5-7B-sft-gen-dpo-10k-KTO
tags:
- generated_from_trainer
- trl
- kto
licence: license
---

# Model Card for Qwen2.5-7B-sft-gen-dpo-10k-KTO

This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-sft-gen-dpo-10k-KTO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/v7hlung4)

This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).

### Framework versions

- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3

## Citations

Cite KTO as:

```bibtex
@article{ethayarajh2024kto,
    title        = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
    author       = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
    year         = 2024,
    eprint       = {arXiv:2402.01306},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Yeonwha/dummy-model
Yeonwha
2025-06-01T00:36:37Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-01T00:36:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
codingGiraffe/mistral-finetuning-emails
codingGiraffe
2025-06-01T00:21:23Z
0
0
null
[ "text-classification", "en", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "region:us" ]
text-classification
2025-05-31T23:20:15Z
--- license: apache-2.0 language: - en base_model: - mistralai/Mistral-7B-Instruct-v0.3 pipeline_tag: text-classification ---
FormlessAI/620f1325-6844-41ca-9016-cca92cad6c41
FormlessAI
2025-06-01T00:19:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-0.5B", "base_model:finetune:Qwen/Qwen2.5-0.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T23:27:05Z
--- base_model: Qwen/Qwen2.5-0.5B library_name: transformers model_name: 620f1325-6844-41ca-9016-cca92cad6c41 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for 620f1325-6844-41ca-9016-cca92cad6c41 This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/620f1325-6844-41ca-9016-cca92cad6c41", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/lk871s6c) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
paul2lol/jaya-lora
paul2lol
2025-06-01T00:18:02Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2025-06-01T00:12:36Z
--- base_model: mistralai/Mistral-7B-v0.1 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
DavidKalajdzic/dr00ne-gr00t-lora-rank0
DavidKalajdzic
2025-06-01T00:13:13Z
0
0
peft
[ "peft", "safetensors", "gr00t_n1", "arxiv:1910.09700", "base_model:nvidia/GR00T-N1-2B", "base_model:adapter:nvidia/GR00T-N1-2B", "region:us" ]
null
2025-05-31T23:55:25Z
--- base_model: nvidia/GR00T-N1-2B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
thevan2404/coteT5-promtT
thevan2404
2025-06-01T00:11:51Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:AnonymousFILLER/CodeT5_SoTitle", "base_model:adapter:AnonymousFILLER/CodeT5_SoTitle", "region:us" ]
null
2025-05-31T15:26:32Z
--- library_name: peft base_model: AnonymousFILLER/CodeT5_SoTitle tags: - generated_from_trainer model-index: - name: coteT5-promtT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # coteT5-promtT This model is a fine-tuned version of [AnonymousFILLER/CodeT5_SoTitle](https://huggingface.co/AnonymousFILLER/CodeT5_SoTitle) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 14 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.14.0 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
dreuxx26/Multilingual-grammar-Corrector-using-mT5-small
dreuxx26
2025-06-01T00:01:40Z
10
1
null
[ "safetensors", "mt5", "grammar-correction", "multilingual", "text2text-generation", "en", "es", "fr", "ru", "dataset:custom", "license:mit", "region:us" ]
text2text-generation
2025-05-28T21:28:19Z
--- license: mit language: - en - es - fr - ru tags: - grammar-correction - multilingual - mt5 datasets: - custom pipeline_tag: text2text-generation --- # Multilingual Grammar Corrector using mT5-small Demo(I'll upload them in full in 2 weeks) This is a fine-tuned [`mT5-small`](https://huggingface.co/google/mt5-small) model for **multilingual grammar correction** in English 99%, Spanish 75%, French 70%, and Russian 80%. It was trained on synthetic and human-curated data to correct grammatical mistakes in short sentences. ## ✨ Example **Input:** > She go to school yesterday. **Output:** > She went to school yesterday. --- ## 🧠 Model Details - **Architecture:** mT5-small - **Layers:** 8 - **Heads:** 6 - **Languages supported:** English, Spanish, French, Russian - **Tokenization:** SentencePiece with special tokens `<pad>`, `</s>`, `<unk>` ## 📦 How to Use ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("your-username/Multilingual-Grammar-Corrector") tokenizer = AutoTokenizer.from_pretrained("your-username/Multilingual-Grammar-Corrector") input_text = "She go to school yesterday." inputs = tokenizer(input_text, return_tensors="pt") output = model.generate(**inputs, max_new_tokens=64) corrected = tokenizer.decode(output[0], skip_special_tokens=True) print(corrected) # ➜ She went to school yesterday.
BootesVoid/cmbcuj8l101so10ozgf7m2rp5_cmbcun33b01sv10ozm83dq9p7
BootesVoid
2025-05-31T23:50:21Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-31T23:50:20Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SELININFERNO --- # Cmbcuj8L101So10Ozgf7M2Rp5_Cmbcun33B01Sv10Ozm83Dq9P7 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SELININFERNO` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SELININFERNO", "lora_weights": "https://huggingface.co/BootesVoid/cmbcuj8l101so10ozgf7m2rp5_cmbcun33b01sv10ozm83dq9p7/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbcuj8l101so10ozgf7m2rp5_cmbcun33b01sv10ozm83dq9p7', weight_name='lora.safetensors') image = pipeline('SELININFERNO').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbcuj8l101so10ozgf7m2rp5_cmbcun33b01sv10ozm83dq9p7/discussions) to add images that show off what you’ve made with this LoRA.
frashdan1/agent
frashdan1
2025-05-31T23:50:20Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-31T23:50:20Z
--- license: apache-2.0 ---
luckeciano/Qwen-2.5-7B-GRPO-Base-1Action_501
luckeciano
2025-05-31T23:49:08Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T18:58:23Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-Base-1Action_501 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-Base-1Action_501 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-1Action_501", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/gep981tx) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
LinaSad/mcqa_sciq_lora_bisbis
LinaSad
2025-05-31T23:44:50Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-31T23:44:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AmberYifan/Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-KTO
AmberYifan
2025-05-31T23:44:00Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "kto", "conversational", "arxiv:2402.01306", "base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T23:09:47Z
--- base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF library_name: transformers model_name: Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-KTO tags: - generated_from_trainer - trl - kto licence: license --- # Model Card for Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-KTO This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-KTO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/ynaty345) This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306). ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.20.3 ## Citations Cite KTO as: ```bibtex @article{ethayarajh2024kto, title = {{KTO: Model Alignment as Prospect Theoretic Optimization}}, author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela}, year = 2024, eprint = {arXiv:2402.01306}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/YOYO-O1-14B-V2-GGUF
mradermacher
2025-05-31T23:42:31Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "zh", "base_model:YOYO-AI/YOYO-O1-14B-V2", "base_model:quantized:YOYO-AI/YOYO-O1-14B-V2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-31T08:25:15Z
--- base_model: YOYO-AI/YOYO-O1-14B-V2 language: - en - zh library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/YOYO-AI/YOYO-O1-14B-V2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
LucidityAI/Kiwi-1-8b
LucidityAI
2025-05-31T23:37:04Z
6
0
null
[ "safetensors", "qwen3", "en", "region:us" ]
null
2025-05-11T15:04:03Z
--- language: - en --- # Kiwi-8B Preview Kiwi-8B is a hybrid reasoning model based of Qwen's 8B fine-tuned for better STEM performance. Here are a few examples of Kiwi-8B Preview, Kiwi-1.7B nano and Kiwi-4B. These are all one-shot results on the same settings. | Model | Generated GUI for "A Tailwind Dairy Shop" | |---------|---------------------------------:| | Kiwi-4B-Preview | <img style="height: 250px;" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAycAAAIeCAIAAAAuyecvAAAgAElEQVR4Ae3d/5Mc9X0nfv8l+xuV2ir94JSqqKwxB1bM8VGsDyDr8C3nszcxB7k40scxBxF8QtDZYOKycGyvHaKIyDYf+QTBjowV+8AhWd0HvA62kQFLBoOMdRKWAUkGhKTV7mp352qmZ3rf3TOzu9qZ6enpfmypotme7veXx6vDPP3unta7Kn4IECBAgAABAgR6L/Cu3nehBwIECBAgQIAAgYrU5SQgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhUOrU9csX5z/8789fOzL1x//h/JunFrLwTvZx6o2FO/7r9MbLpj6/beb8VPK95G+vvDT/p6PnP/jeqd07Zufmku/5jQABAgQIEBgEgRylrvNTlb/cPH3tyFTznx8/1ZOgsbrU9eOn5ppHOPaB85/7i5mfTM7Nz19E2eOmPvzvz//yxaWO3Lv7QtRpvwLiRczKrgQIEMiHwM8Ovfh777v2Z4dezMdwVj+Kwkxk9QRFOVLquui1rjgqNWeva0em7viv06feWOmy2ZunFv7fP1nRWtf//qW1rqL8/5x5EOieQMsP4zdOnLr6uo8MVtT42aEXf+d33zc0PBL9+Z3ffV9Xxt/Sp3v8PWnpiYmnhoZHnph4Kmx9ECeSqumHPvrxs+eWvKYTTri4r3Oaunq0uJWqY4drXX+5eTq6LHj6rYV/3H1h9Pfrq3TbPjF97sxKg1dqSH4lQIDAygVafhgPaOoKF6Xuf2B3V4JXS5+V8z4x8dTvrbvujROnVn5Ih3uePTf1oY9+/OrrPvLJ2z8dNtXhRMKm4tdvnDj1e+uuS8W7+N0OX3zy9k+HdGfPTX3y9k93ItlhLTo8vEON8HCpa/VrXXHqikCff2b+hvfXg9f3vnUhVPaaAAECvRBo+WFcgNQVhY/7H9jdIVpLn5W3mf1H9c8OvXj1dR+p9pu8MNrhRFpOuXep6/4Hdnd9ZavDWnR4eEvA1W0cpNT1pbtnoot6/7j7wv7H5v7TVefj3HPy9YW/+avZ//x/VSPU6O9Pfe7Omdd+XV9tWlioHHp2/lO3zERrUR9879QnPjId3UQVrnUdPDD/6VtmPvjeqQ++d+ov/nT6+NG2i1XxFca494h+YaGy8/Oz0QjvvnVmZqa6+ddHF/72c7N//B+qN8JfOzL1sWvP/3/3z8Y3zocDiG7nDxt/+efznxybjm75CrfHhy8x60qlcvL1hb+7b/Zj11ZNoq73PSwLru7/TRxFIKcCLT+Mm1PX/Q/sji/ehYson7z909Gf6N37H9gdxZ3413Dan7z90929/Bc23jyRaGCVSiX6vNz98N6h4ZH4s7zdjCqVSvzW7/zu+3Y/vDeOL09MPBUfXqlUUpkjnHh0gS+e79DwSLxsE27sxSrR/Q/sjgr0yds/HYbOyOeHPzrwoY9+PKpCWMdIKdoeKkUa4Z6xc3QdMz6ki3NJwYZVjl/HNRoaHgmHFxU9frcle7wx3m1oeCS0Cs+BqP2wavHh8WAyfjGQqeuvbp+JvnsY5Z7DL87/0f9dzxbxvVYf/YPzL/+8en/6D/fPXX9lfQkqfje6ghmHno2XTX3w8sQ+Wz7c9vaslgEoKtsP99dvtI/veY+TYtz1tSNTn/nzmZnp6hHxAOL948Zv+aPpW/6o+t2Cdqlr6VmfeG1h839Km3zp7loSzPgU0x0BAj0TiD9Ewx5SqSu81hNlizh8RJ9G0Sdu/DEc/xpe44szUPQBH74Vdr3q16mJhGtd0cBSn83xZ2dqRvc/sDt+K/r4j4e6ROqK9oy7eGLiqRghbi36LI/p4n1WPeXmA8O8klqbie6RiueSGnDqgmxY8TjGRd2FzmF3zYNZ9ZbUyJvbCYeXKl90QkYRKnorLEpYizdOnNr+pb+LJ/U7v/u+ODiG7ce7LTuq5nH2aEtOU1eYUeIlpeYE85ebp397cmHrzdV0cv2VU0/+89zcXOWF5+Y/+gfVwPH5bTNnz1S2faL+vcjvP3phfr5yfqryxL65nz9XDWRx6Ll2ZOobO2Znpiu//MX8x66pHrvxsqkfPdn6i5NxMIoHFtcmfitOUV//yuwT++ZOv1VdOXv7zYW7/p/qYP7juqkXkgOI949biAVapq5zZ5aa9dxc5TsP1b/z+Jk/rz6TYn6+8sLz89a64kp5QaAYAqkbluOli/jjOfyUjaYcbgmzVOpDLvo1+vyLLnvFN+WEb3WLMRxVpVIJPzifmHgqnk6lUkntGW5pjhHhzkukrnZXxFIf1SFXtyYethN2l5pLVOg4WMRLgG+cOJXaM7WM15fUFWfTcHbR67AizVs+efunw2NDkPB1qtnwhGxuP9p5icNTrfX614FMXR/9g/M/fXpuYaEaoZ790fymf1ddpvrcX8xED7KamancfWv1WuTNm84fP7oQP43i3q0zv3l1YSG4chinrps3nY++eLiwUPnr/16/jrl3d+vrcXEwWknqiut39szC8aMLcXB8/NvVxuMBNKeujZdN7XmgGgQvzFYuXKikOn3ux0vN+tQbC/GTJm7eVLW60Hoq8ei8IEBgIAVafsaEa12pqFGpVKKPqOjzO4wR4UdXvFuUuuJlsDjVNV/T6ZAvFR
/bffRGaSN8Nx7qExNPpdJhGMiaDwzDSupyXjyX1Ed1asEp3q1bL8JyRNEzXulpLnS8pXnW0bFR7fqSusJFqRTOyk/IMFmmXkdtRhWMz8n4XE2dHtHOqVKmRpXlrzlNXS2/wxhHlt07ZmOjeFEnXhyKX0SrRI/uubDxssWrhx+75vz3vnUhSiHNoadSqcS9rCJ1Pf7t+grTbf9l+sw71Xz3wnPzt//xdHRTVzywa0emosabBxCnq7iFaKbx9ijqLTvrV16qL/hFnV5/5dSX753py5Ng40p5QYBA1wXij96w5V6krpafZGGnHb5uOZGozdTn5RIf2835I2w2deAqUlc8nqHhkXD5rcO5R4encmcUJuJewonE+0e3rDXPur+pq3mooU+qCmFoTgXNVNJKnQbRTV3R/3gI/wdDc/tR76nDwyFl/HogU1eYh+JFnTDTRK+j1LWwUPnJ5NwnPlJ9LFa8z99+rvqE9+bQ00nqunChsv0v6+tkOz8/u7BQCaPP2AfOb715+tYb65c7l01dqYW0VOpadtaVSuW1Xy987i9mwnvalrhZLePTTncECHRFoOUnXJi6mncIt4SLK+FHV/xZGF9hjO9J78qwmxsJR5V6N/V52bxnvKXlZbg4uDS3E98MtMIrjPHAUlbx9k5eNI8hyoXtShDvH8bHaADhlni36K3wcm24WycjTx0b4cSrdKl342LF28Mt4Qm5ROpK+Ye/hq3FXaSaCrdn/3rgU9eBH85FcSq+wtgO8Z23F77+ldlo5z/50Pm3frvQ3dQV37Z/w/unfnGwet/YN79eX/p6Yl/1emi4pcPUtfJZX7hQ+cG/zkU3um28bOrAD1vfrNYOzXYCBPIs0PIzJkxd0fpBfMUn9YkYfsiFH12p1BW9FS53bf/S38W3eXXFp+VEopZTaWnZGcWTjVJFnLqiX6MQE80ofhhp9FYcFOI75VOjimedsupcoF2DcWaKAmU8tejXaC7Rbf7xNCOfuFhhEk2BtOu08+lEHcVjiE6n+Hld4U170Rhi+fCETEWlVC3CPaN1r1gjbD++mz51eOdzXHULA5+6Tr6+8Kc3VO9/v/7Kqce/fSF6XsNbv114eNeF5348f36q8tXx2Reem79woXpH+Q/+de4/rquueH3iI9On3+pa6nrrtwsP/f1stKq08bKpvd+4EGWsB/66/iCJf/ha9Ub+I4fn/+RD9e8Vdpi6lp51pVL5/qMXntg3Fz2s9fjRhU98pLrGtunfTT3346X+3aFVn0YOJECgLwItP0tSqSv6GE7d/hKNNvzoSn0Gt/y1ZSNdmXjLiUQtN6euJWYUvhU9AjRcpYtvUItWucJnhEZBIZpgHG7i1qIt4QMI4s/4rkw/XIIKG4wzU+Sz++G98RP8UwMIH6MQh5ioqfitJUDC+/TDAXTyOuRK3QgYvhVOJDwhU6krVYuwXtu/uONDH/14qp34XI2nFnUaFreT2a362IFPXe2eDXHtyNSPn5pr+W87fvC9U/v+ofXN7Bd1hTG+Xhm/uP7KqX96pBqwop9nfzQfXuDbeNnU2Ae6k7qWnnWlUml5CfLu/7bMv7G96tPIgQQIECBAgMCyAkVIXZVK5div5j/z5/XnoG68bOqmD57/6vjs6bcWZmYq/2Pn7E0fPB9dWBz9/alP3TJz6Nn5aC2qwyuMcdj64Hun/nT0/Dd2zKZuV19YqDz1xFz0KIroLv5/+Fr9mmOHa11RXdvNulKp/OQHc58cm44yX/Rg2P/5jxeih4Qte07YgQABAgQIEOiFQI5SVy+mp00CBAgQIECAQE4EpK6cFMIwCBAgQIAAgYILSF0FL7DpESBAgAABAjkRkLpyUgjDIECAAAECBAouIHUVvMCmR4AAAQIECOREQOrKSSEMgwABAgQIECi4gNRV8AKbHgECBAgQIJATAakrJ4UwDAIECBAgQKDgAlJXwQtsegQIECBAgEBOBKSunBTCMAgQIECAAIGCC0hdBS+w6REgQIAAAQI5EZC6clIIwyBAgAABAgQKLiB1FbzApkeAAAECBAjkREDqykkhDIMAAQIECBAouIDUVfACmx4BAgQIECCQEwGpKyeFMAwCBAgQIECg4AJSV8ELbHoECBAgQIBATgSkrpwUwjAIECBAgACBggtIXQUvsOkRIECAAAECORGQunJSCMMgQIAAAQIECi4gdRW8wKZHgAABAgQI5ERA6spJIQyDAAECBAgQKLiA1FXwApseAQIECBAgkBMBqSsnhTAMAgQIECBAoOACUlfBC2x6BAgQIECAQE4EpK6cFMIwCBAgQIAAgYILSF0FL7DpESBAgAABAjkRkLpyUgjDIECAAAECBAouIHUVvMCmR4AAAQIECOREQOrKSSEMgwABAgQIECi4wLvOnj371ltvnTx58vXXXz9+/Pirr756rPZz1A8BAgQIECBAgEBnAlGsevXVV48fP/6uqampM2fOvP3226dOnTpx4sTrtZ/X/BAgQIAAAQIECHRDIApXJ06ceNf09HQUvE6fPv3222+/2fj5rR8CBAgQIECAAIHOBBrB6s233377XbOzszMzM1NTU+fOnTtb+znjhwABAgQIECBAoHsCUcR619zc3IULF6LsNV37Oe+HAAECBAgQIECgewJRxHrX/Px8FLyi7DXrhwABAgQIECBAoAcC71pYWJhv/Mz5IUCAAAECBAgQ6I1ANXXFP4305W8CBAgQIECAAIEuC9SfkhoHLy8IECBAgAABAgR6IeDZ9BU/BAgQIECAAIEMBKSuDJB1QYAAAQIECBCoSF1OAgIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQ
IBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECCQEJraODA3Hf7ZNJN70CwECBFYvIHWt3s6RBAgUUkDqKmRZTYpAHgSkrjxUwRgIrEpgbubwk3vu3nLT2iuvitdmLrnyhtFbdzx88NT0qprs0UHJHBMvI9VeXLph7egdd++aPHyuR51fdLPJ0VrrumhABxAg0E5A6monYzuBPAvMHHv8vnXvTsaXxYtite1Xb3v46ExO5pDMMW2GvWbD2ENH8hAWk6PtT+qafufI5EPjox/YvPNoTmpoGAQIdEFA6uoCoiYIZCowd+rRWzfEi1tLvthw83dPZTq2Np0lc0yb1FVLjeu/frxNG9ltTo4269R14sn71r/nikZZx6Su7AqvJwK9F5C6em+sBwLdFDgzsXWFkSsKNxu2TJzpZv+raiuZYzaN3nXvbdU/d6wPro3Wc8aaOx7t96XG5GizTl3Hdo01ItfI0LDUtaoTzkEE8iogdeW1MsZFoJXAiUc2Bx/JtVx1zbadTx458c5MZW7m9K9femz8lrVrkotJl2577J1WbWW4bYkcc/rAjvXJAY99t88XRpcYbQZmUlcGyLog0C8Bqatf8volcPEC5ya3XJpIVGu27j8919TOK3tSOeay8Zeadsp0w9I55tAXNoVR8pJ7DmY6uKbOlh5t0+5d3iB1dRlUcwTyJCB15akaxkJgSYHp794RppOhq3ccao5ctRZOp/a89L4D9ZaP79wY5LaNe44lemz7bjIK1C66vTm5/cYbLqkuU11x22SileZflskxE9sS89paby5xVG2o0y/v2zJa+8Lmmpse/HXYz8yx6Ouc8R1Ra65a+/6bxsb3HXoz3C35eu7MgYfuHb2m8Q3QS28Y/ez+Y9OVRL/DwRXG5Di3pB7ktfS7tZ6nX5ncedfmdcF11Uuu3LR+657J1yqVShI/9d2I4ZF1u4I73qaPT+waHxvdtCb+RsWlG9Zes3nL+OSJ5BT9RoBArgSkrlyVw2AILCEw8+iWIDANjyx1JW7umTsTq2KbttfXj5If7atOXe9Mbrl8cTDp/NE0ibY5prZnKk3Ga12JozbuOZxYw1u84Wn66L6br1wcTCLAVbPLhtFdB083Dany2vdbH3X9nvFbw9a6lLreObj9DxvxrilR1QCTpWnaJ05dp58eb//11WC0zVO2hQCBfgtIXf2ugP4JrFTgYDJIbX745FJHPpaIDnFES360rzZ13fmpRIDoLHWdaZcmk6nrvjs/HIahRupKRLFwh8TrtXdNJoJXMjU2BbXw2CDHLL2atcS7y3V3EanrtX2jydvgkoMPRrvU2eE9AgT6IyB19cddrwQuXmByS7j+sebe1AWuVIPJa4Lx9amupK4wlFRfrzp1Tb/20oOpp2AE32FMpK5w7tXXtdQ1d/DOYMmtmj8uHRurfUFyy42Je8WGhjfc9mR8k/7MxNZEahxac9X6LdWvVY5dn9xe7SjIMUvkqkql0vbd4zuvT4ldsWb0jtq3OLeNfqDaYw1w5vDk/sce3//greHIN23ZXd342OP7J1+pjj95D9yG0fHJwydPnTh56sSvX5p4aMfYNfctfVakThK/EiCQsYDUlTG47gisVuDonnVh8kgvU6Wb7XHquuK68WdO1B7xMP3OkWPV25KW+lkyPyUSSfi8rvRRazbcvPfI6dpzVKdfO3LiXCX1jc7UdwtOPzeeEPvwt+r3PJ3cNxpKXr7tseDer9NPbFsTvtt56nr6vkvCBq9MP7329IEdDwY3xiUL11jSa+gmTfr/lI3GuPxNgMCKBKSuFTHZiUD/BVJZoa9rXZdsnbyoh8gns0IiZgUXyK64bjxxA1bqqPjGpkYtkje6BYtkjR1S98U3MsoTiZv3x/amnmd26sHEpcxO17qSsxgbfyUeXesXS6eu5FrXyNCVt2x//KUTF1WM1t3aSoBAFgJSVxbK+iDQDYHkFcbhZe7rmrgrEW4aFwG7coUxvjd/pdNKJo/EwOqp6+o7dh5IpZ82mWmxzyPj1wRN/eG+5q/vJRfD6utG6ViT+C5ktfXkaDtMXUnw61NfGl2cTPwqPbzUvwj04o7LwpWz+usr1t44/vCLacC4TS8IEMiJgNSVk0IYBoFlBdK3B938eHyjUvOxqVvv4wtVyRCQvkzZ9t1kFGgsGjV322ZLMsfEz6a/97a7djz4+DOH32k9kcRR799xKN14MoY2njeR2Ctxr1UdIZlHg1DVODLRb6dXGCdvC29+bznIRr/R30nquHCLO5347rb0g3AbOSz9pYHFg7wiQCAXAlJXLspgEARWInB4PLzPemRo49cOr/B5XVfvOFzvIBndVpm6WiSVpcffNscseVjiqPRQK5XK8mtdx77e4l/XOXBPeMt8iwSZ6DdMXU/eG1wPbfoOQSLhxe++
tP39yyzIpQyWTV3V/d986eHPbl4bP6yrkbqGhkdGH8nFv7yZmpRfCRCIBKQuZwKBwRF47VvXBZ+vQ8MjqfvHo5lMv/y11LPpw0/iRKRYs20izG3nvj8Wth8EnWQUyEnqWva+rplH/yxIPI3JJudy1Z1PJ0+AuYN3hzkpTF3JLzSk7jNL3XHVuKR76uE/DMYwfNODy33zIDm8FmtdwXBnTr88uXPrDWEWHGp1pTU4xEsCBPopIHX1U1/fBC5W4MA9Tf/0dfTvMJ48M33uzIlf15ZAwktawyOpR9gnr6+NjH03vhlo5vD94crQyFDeU1flxEM3hYFjzdbEQ7lOTyS/jfhn++s3nR9Ifqlw49cOL96NfubAF5IIYer69Z71YSoN/22AN/en/rGmRupKD3Lo+h0Hgq9MViozJybGx4PvMCbvRRtJ3ew/cf+9jx1NX5BNPJstqNrFnl32J0Cg1wJSV6+FtU+gqwJzR5oe/hQupTS9XjO2M/mludSH+tDwFevaPaoq+PxOLsDkZK2rUjk3eVub53WNRf9w0GJICr4B0PyUr3dvGr313tvu2jZ69RVhjKu9DiebultuZOjKzVvuuve2W1tc7ItTV4tBrrnqshu31Z7Xdcf62j9htLhzpVJJXsccWrPhulurXdy8u/ovAtWWKq9Yt3XPYwePVx/TdfL4ocd3jIYIK7h1rKtnpMYIELgIAanrIrDsSiAXAu8c3J5+6mZT2IrSxrtv2vlyel2kcouVaVAAACAASURBVG7/zanFsDiaXD62Pry4lv/UVamcnri33a3lYX5av+tIWLv0MlgsUH2xYU0YYsK1rkolfWtdcOD66xOLZGGQWnaQ4c7tChRd0ExcIA56b0w2CJfhhL0mQCAfAlJXPupgFAQuSmDuzIFdtyydNtZ+fE/yStZiB60zx7tv2vnKkXb/NnZO17pqczr93J6xJf4dxjU33PZEixvMD389cXWykVquuO7rRx7bGqbYcK2rUmmz1lj98mDru+nr7Kef25NYkUoGpkTqqlSO7W4xtuVT15ob7nwyvl68WG6vCBDIj4DUlZ9aGAmBixQ4d2TyoR1jo5vWxv/Q9Zqr1l6zecv4vkMnm5a4km2ffnHf3TfecEm06HXphvWf2neoerPRCp8ckQwiyZZb/pZcoVnp4YmjgoW3Fl3MnTn0+J7bbrxhkeLdV60dvWP7Q88cW7xnK33c9NH9d994w5rom4Brrlo7uu3B2jPDEv0m17qqTUwfn9i1LboyODR8xZpr7tj+5PFqJ0umruqBc2cO7a3Wq95j9d8v2lAd5N6XToffaagN8/SBb20Z3VAvULWXzeMHqjWdPvrMg+PbRq9JN3L3rsklZpqeud8JEOiTgNTVJ3jdEiBAgAABAiUTkLpKVnDTJUCAAAECBPokIHX1CV63BAgQIECAQMkEpK6SFdx0CRAgQIAAgT4JSF19gtctAQIECBAgUDIBqatkBTddAgQIECBAoE8CUlef4HVLgAABAgQIlExA6ipZwU2XAAECBAgQ6JOA1NUneN0SIECAAAECJROQukpWcNMlQIAAAQIE+iQgdfUJXrcECBAgQIBAyQSkrpIV3HQJECBAgACBPglIXX2C1y0BAgQIECBQMgGpq2QFN10CBAgQIECgTwJSV5/gdUuAAAECBAiUTEDqKlnBTZcAAQIECBDok4DU1Sd43RIgQIAAAQIlE5C6SlZw0yVAgAABAgT6JCB19QletwQIECBAgEDJBKSukhXcdAkQIECAAIE+CUhdfYLXLQECBAgQIFAyAamrZAU3XQIECBAgQKBPAlJXn+B1S4AAAQIECJRMQOoqWcFNlwABAgQIEOiTgNTVJ3jdEiBAgAABAiUTkLpKVnDTJUCAAAECBPokIHX1CV63BAgQIECAQMkEpK6SFdx0CRAgQIAAgT4JSF19gtctAQIECBAgUDIBqatkBTddAgQIECBAoE8CUlef4HVLgAABAgQIlExA6ipZwU2XAAECBAgQ6JOA1NUneN0SIECAAAECJROQukpWcNMlQIAAAQIE+iQgdfUJXrcECBAgQIBAyQSkrpIV3HQJECBAgACBPglIXX2C1y0BAgQIECBQMgGpq2QFN10CBAgQIECgTwJSV5/gdUuAAAECBAiUTEDqKlnBTZcAAQIECBDok4DU1Sd43RIgQIAAAQIlE5C6SlZw0yVAgAABAgT6JCB19QletwQIECBAgEDJBKSukhXcdAkQIECAAIE+CUhdfYLXLQECBAgQIFAyAamrZAU3XQIECBAgQKBPAlJXn+B1S4AAAQIECJRMQOoqWcFNlwABAgQIEOiTgNTVJ3jdEiBAgAABAiUTkLpKVnDTJUCAAAECBPokIHX1CV63BAgQIECAQMkEpK6SFdx0CRAgQIAAgT4JSF19gtctAQIECBAgUDKBXKeuBT8ECBRaoGT/vTVdAgTKLpDH1LWwsPDGG288/fTT3/nOd77xjW/sbvw8//zzMzMzZa+Y+RMolkCUKos1J7MhQIBAa4F8pa7ov79TMwunpxJ/3pma/6fv/s9du3b98If/Nj093XoqthIgMJgC8XLeYA7fqAkQILBSgRylrui/vK+9Nf/yby6k/hx5Y/YfH/3e888//8ILL/zLv/zL+fPnVzo/+xEgMAgCgtcgVMkYCRDoVCAvqSv6b+6rb5z52dHZn/5q5tVTc5VKZWpm4ae/mvnpr2ZeODb9zb3V1FWpVI4ePfroo4+eO3eu06k7ngCBPAkIXnmqhrEQINATgVykrvoq12uvPffyyef/9+wbb1cjV/QzN1957sj0z49NP/TN73yp8fPlL3/lq1/9WmMXfxMgUBCB+fn56L8GBZmPaRAgQCAp0P/UFf1Hdm5u7qmnfvDsyyfPTi+EI5ybX/jpK+cPHD77q9emT7w99+aZubfPzr/6m1O7dn013M1rAgQKIDA/Pz83Nyd4FaCUpkCAQEuBXKSu+fn5CxcuPPLIN5996eSzvzr/o5fPpf785JdTzx05f+jo+Z8fO//S8Zmfv3LygQf+vuV8Ot84sXVkaOtk5+0stjCxbWh4bOfR2obq620T0cuud7TYpVcEBlJgYWHhwoUL0YrXQE7AoAkQILCkQJ9TV/Q/aufn52dnZx944O8P/OLEM7+c+uEvzi7x55lfTh34xYmdOx+oVCa3DI8MhX+6kZakriVPGG8S6K3A7Oys64y9JdY6AQL9E+h/6ooWuqanp3fs+LtnXnzjJ4enJl84s8SfnxyeOnJyviZWTV1booWj7gl2P3WFY7PWFWp4TaBJYHp62nJXk4oNBAgURCAXqWt2dvb8+fP33/+3z7zwxo9fmnrq0Jnan3f2Hzz1nZ8e+u6zL/2vg282Np554Vj82AipqyBnoWkQiAXOnz8fL3fFG70gQIBAMQT6mbrCy4vnz5//m7+5/ycvvPFvL579X8+/ve+ZFz/7r1/40CO/N7p3ZHTvyJ/sG/37H+z752eP//I3ceSqRFcYW6x1RetJ1f87MrRxz7FaoY7tGouvRQaHHN+5Mb5GWb/1KlrrCvav34aVrvfRPeu
Gx3ZO7FlXv8RZ2y3qtLolOCpY36oEr8NFtai7dbuOp3vxO4GSCYSpa2Eh8d2akkmYLgECBRTof+qam5ubnZ2dmpr68pe/8qOfvz75wtn/8fRTt3//v43ufU8UuaL/++G9Vzx++FtRBU6ePPnlL39lydSVuCO+mmka8atSTUv165LJ3LMtuuG9unF4pBGAarGs5e1itXYazTbSW72X5FFB0mqduqo7NG63L+A5ZkoELkJgampqdnbWNxkvgsyuBAgMjkAuUtfMzMy5c+fGx7/8o5+//u2f/vjPH/9EKnKN7h358fH/P1J9Z2rhRz9/fXz8y81309cXsdIhZnJLMtM0wlY1GzXS1WLFqu/GEa1SCXPS4k7Vp7XW1rqibybWdwuSU5ukFbZWH0Z1z+7fnZYYql8IDI7AuXPnZmZmpK7BqZiREiBwEQJ9Tl3R43lmZmbOnj37xS+O/+uzL2ybuP3D374iXOUa3Tty5O2Xoznt/9U///VT4/928Ddf/OL4kmtdwQW+aFEq/KrjcGMlrJZ4ElcDK5VGJmsghvmpsa36d4vUFXQaHtXmdS3ejcULb2HbXhMorcDZs2ej1OX5EaU9B0ycQIEFcpS6vvCFL/73793+X/7pA2Hk+tg/XXV29kxUgAd/9sXqPV7f2/T5J+77/Bf/+mJSV5CHmorZuIWrvk/2qat5va1pjDYQKIuA1FWWSpsngVIK5Ch13XvvX91zz2fuvvue+E/toVz1sjzyyDfj7ffc85nPbd++0tRVe6xXcAd9yzovfh0y09S1dTK6z0zwalkVG0soIHWVsOimTKA8AjlKXRePvhiVEseGV/Rqb1SDVPClwmO7ohvnj+/cWv+GYxjgsk5d9YuVLe4wS0zKLwTKISB1laPOZkmgpAKlSF3Vm91r30ysPzyi/p3ExhcPa7d8xYthfUhd9Zvxk3fxl/SENO2yC0hdZT8DzJ9AoQUGOnUVujImR6CUAlJXKctu0gTKIiB1laXS5klgIASkroEok0ESILA6AalrdW6OIkCgJwJSV09YNUqAQD4EpK581MEoCBCoCUhdTgQCBAosIHUVuLimRmDwBKSuwauZERMgsGIBqWvFVHYkQKD3AlJX7431QIBA3wSkrr7R65gAgWYBqavZxBYCBAojIHUVppQmQqAIAlJXEapoDgQItBGQutrA2EyAQD8EpK5+qOuTAIGMBKSujKB1Q4DASgSkrpUo2YcAgQEVkLoGtHCGTaCYAlJXMetqVgQI1ASkLicCAQI5EpC6clQMQyFAoNsCUle3RbVHgEAHAlJXB3gOJUAg7wJSV94rZHwESiUgdZWq3CZLoGwCUlfZKm6+BHItIHXlujwGR4BAZwJSV2d+jiZAoKsCUldXOTVGgEC+BKSufNXDaAiUXEDqKvkJYPoEii0gdRW7vmZHYMAEpK4BK5jhEiBwMQJS18Vo2ZcAgR4LSF09BtY8AQL9FJC6+qmvbwIEUgJSVwrErwQIFElA6ipSNc2FwMALSF0DX0ITIECgvYDU1d7GOwQIZC4gdWVOrkMCBLITkLqys9YTAQLLCkhdyxLZgQCBwRWQuga3dkZOoIACUlcBi2pKBAg0BKSuhoS/CRDIgYDUlYMiGAIBAr0SkLp6JatdAgRWISB1rQLNIQQIDIqA1DUolTJOAqUQkLpKUWaTJFBWAamrrJU3bwK5FJC6clkWgyJAoDsCUld3HLVCgEBXBKSurjBqhACBfApIXfmsi1ERKKmA1FXSwps2gXIISF3lqLNZEhgQAalrQAplmAQIrEZA6lqNmmMIEOiRgNTVI1jNEiCQBwGpKw9VMAYCBOoCUpdTgQCBAgtIXQUurqkRGDwBqWvwambEBAisWEDqWjGVHQkQ6L2A1NV7Yz0QINA3Aamrb/Q6JkCgWUDqajaxhQCBwghIXYUppYkQKIKA1FWEKpoDAQJtBKSuNjA2EyDQDwGpqx/q+iRAICMBqSsjaN0QILASAalrJUr2IUBgQAWkrgEtnGETKKaA1FXMupoVAQI1AanLiUCAQI4EpK4cFcNQCBDotoDU1W1R7REg0IGA1NUBnkMJEMi7gNSV9woZH4FSCUhdpSq3yRIom4DUVbaKmy+BXAtIXbkuj8ERINCZgNTVmZ+jCRDoqoDU1VVOjREgkC8BqStf9TAaAiUXkLpKfgKYPoFiC0hdxa6v2REYMAGpa8AKZrgECFyMgNR1MVr2JUCgxwJSV4+BNU+AQD8FpK5+6uubAIGUgNSVAvErAQJFEpC6ilRNcyEw8AJS18CX0AQIEGgvIHW1t/EOAQKZC0hdmZPrkACB7ASkruys9USAwLICUteyRHYgQGBwBaSuwa2dkRMooIDUVcCimhIBAg0Bqash4W8CBHIgIHXloAiGQIBArwSkrl7JapcAgVUISF2rQHMIAQKDIiB1DUqljJNAKQSkrlKU2SQJlFVA6ipr5c2bQC4FpK5clsWgCBDojoDU1R1HrRAg0BUBqasrjBohQCCfAlJXPutiVARKKiB1lbTwpk2gHAJSVznqbJYEBkRA6hqQQhkmAQKrEZC6VqPmGAIEeiQgdfUIVrMECORBQOrKQxWMgQCBuoDU5VQgQKDAAlJXgYtragQGT0DqGryaGTEBAisWkLpWTGVHAgR6LyB19d5YDwQI9E1A6uobvY4JEGgWkLqaTWwhQKAwAlJXYUppIgSKICB1FaGK5kCAQBsBqasNjM0ECPRDQOrqh7o+CRDISEDqyghaNwQIrERA6lqJkn0IEBhQAalrQAtn2ASKKSB1FbOuZkWAQE1A6nIiECCQIwGpK0fFMBQCBLotIHV1W1R7BAh0ICB1dYDnUAIE8i4gdeW9QsZHoFQCUlepym2yBMomIHWVreLmSyDXAlJXrstjcAQIdCYgdXXm52gCBLoqIHV1lVNjBAjkS0Dqylc9jIZAyQWkrpKfAKZPoNgCUlex62t2BAZMQOoasIIZLgECFyMgdV2Mln0JEOixgNTVY2DNEyDQTwGpq5/6+iZAICUgdaVA/EqAQJEEpK4iVdNcCAy8gNQ18CU0AQIE2gtIXe1tvEOAQOYCUlfm5DokQCA7AakrO2s9ESCwrIDUtSyRHQgQGFwBqWtwa2fkBAooIHUVsKimRIBAQ0Dqakj4mwCBHAhIXTkogiEQINArAamrV7LaJUBgFQJS1yrQHEKAwKAISF2DUinjJFAKAamrFGU2SQJlFZC6ylp58yaQSwGpK5dlMSgCBLojIHV1x1ErBAh0RUDq6gqjRggQyKeA1JXPuhgVgZIKSF0lLbxpEyiHgNRVjjqbJYEBEZC6BqRQhkmAwGoEpK7VqDmGAIEeCUhdPYLVLAECeRCQuvJQBWMgQKAuIHU5FQgQKLCA1FXg4poagcETkLoGr2ZGTIDAigWkrhVT2ZEAgd4LSF29N9YDAQJ9E5C6+kavYwIEmgWkrmYTWwgQKIyA1FWYUpoIgSIISF1FqKI5ECDQRkDqagNjMwEC/RCQuvqhrk8CBDISkLoygtYNAQIrEZC6VqJkHwIEBlRA6hrQwhk2gWIKSF3FrKtZESBQE5
C6nAgECORIQOrKUTEMhQCBbgtIXd0W1R4BAh0ISF0d4DmUAIG8C0hdea+Q8REolYDUVapymyyBsglIXWWruPkSyLWA1JXr8hgcAQKdCUhdnfk5mgCBrgpIXV3l1BgBAvkSkLryVQ+jIVByAamr5CeA6RMotoDUVez6mh2BAROQugasYIZLgMDFCEhdF6NlXwIEeiwgdfUYWPMECPRTQOrqp76+CRBICUhdKRC/EiBQJAGpq0jVNBcCAy8gdQ18CU2AAIH2AlJXexvvECCQuYDUlTm5DgkQyE5A6srOWk8ECCwrIHUtS2QHAgQGV0DqGtzaGTmBAgpIXQUsqikRINAQkLoaEv4mQCAHAlJXDopgCAQI9EpA6uqVrHYJEFiFgNS1CjSHECAwKAJS16BUyjgJlEJA6ipFmU2SQFkFpK6yVt68CeRSQOrKZVkMigCB7ghIXd1x1AoBAl0RkLq6wqgRAgTyKSB15bMuRkWgpAJSV0kLb9oEyiEgdZWjzmZJYEAEpK4BKZRhEiCwGgGpazVqjiFAoEcCUlePYDVLgEAeBKSuPFTBGAgQqAtIXU4FAgQKLCB1Fbi4pkZg8ASkrsGrmRETILBiAalrxVR2JECg9wJSV++N9UCAQN8EpK6+0euYAIFmAamr2cQWAgQKIyB1FaaUJkKgCAJSVxGqaA4ECLQRkLrawNhMgEA/BKSufqjrkwCBjASkroygdUOAwEoEpK6VKNmHAIEBFZC6BrRwhk2gmAJSVzHralYECNQEpC4nAgECORKQunJUDEMhQKDbAlJXt0W1R4BABwJSVwd4DiVAIO8CUlfeK2R8BEolIHWVqtwmS6BsAlJX2SpuvgRyLSB15bo8BkeAQGcCUldnfo4mQKCrAlJXVzk1RoBAvgSkrnzVw2gIlFxA6ir5CWD6BIotIHUVu75mR2DABKSuASuY4RIgcDECUtfFaNmXAIEeC0hdPQbWPAEC/RSQuvqpr28CBFICUlcKxK8ECBRJQOoqUjXNhcDAC0hdA19CEyBAoL2A1NXexjsECGQuIHVlTq5DAgSyE5C6srPWEwECywpIXcsS2YEAgcEVkLoGt3ZGTqCAAlJXAYtqSgQINASkroaEvwkQyIGA1JWDIhgCAQK9EpC6eiWrXQIEViEgda0CzSEECAyKgNQ1KJUyTgKlEJC6SlFmkyRQVgGpq6yVN28CuRSQunJZFoMiQKA7AlJXdxy1QoBAVwSkrq4waoQAgXwKSF35rItRESipgNRV0sKbNoFyCEhd5aizWRIYEAGpa0AKZZgECKxGQOpajZpjCBDokYDU1SNYzRIgkAcBqSsPVTAGAgTqAlKXU4EAgQILSF0FLq6pERg8Aalr8GpmxAQIrFhA6loxlR0JEOi9gNTVe2M9ECDQNwGpq2/0OiZAoFlA6mo2sYUAgcIISF2FKaWJECiCgNRVhCqaAwECbQSkrjYwNhMg0A8Bqasf6vokQCAjAakrI2jdECCwEgGpayVK9iFAYEAFpK4BLZxhEyimgNRVzLqaFQECNQGpy4lAgECOBKSuHBXDUAgQ6LaA1NVtUe0RINCBgNTVAZ5DCRDIu4DUlfcKGR+BUglIXaUqt8kSKJuA1FW2ipsvgVwLSF25Lo/BESDQmYDU1ZmfowkQ6KqA1NVVTo0RIJAvAakrX/UwGgIlF5C6Sn4CmD6BYgtIXcWur9kRGDABqWvACma4BAhcjIDUdTFa9iVAoMcCUlePgTVPgEA/BaSufurrmwCBlIDUlQLxKwECRRKQuopUTXMhMPACUtfAl9AECBBoLyB1tbfxDgECmQtIXZmT65AAgewEpK7srPVEgMCyAlLXskR2IEBgcAWkrsGtnZETKKCA1FXAopoSAQINAamrIeFvAgRyICB15aAIhkCAQK8EpK5eyWqXAIFVCEhdq0BzCAECgyIgdQ1KpYyTQCkEpK5SlNkkCZRVQOoqa+XNm0AuBaSuXJbFoAgQ6I6A1NUdR60QINAVAamrK4waIUAgnwJSVz7rYlQESiogdZW08KZNoBwCUlc56myWBAZEQOoakEIZJgECqxGQulaj5hgCBHokIHX1CFazBAjkQUDqykMVjIEAgbqA1OVUIECgwAJSV4GLa2oEBk9A6hq8mhkxAQIrFpC6VkxlRwIEei8gdfXeWA8ECPRNQOrqG72OCRBoFpC6mk1sIUCgMAJSV2FKaSIEiiAgdRWhiuZAgEAbAamrDYzNBAj0Q0Dq6oe6PgkQyEhA6soIWjcECKxEQOpaiZJ9CBAYUAGpa0ALZ9gEiikgdRWzrmZFgEBNQOpyIhAgkCMBqStHxTAUAgS6LSB1dVtUewQIdCAgdXWA51ACBPIuIHXlvULGR6BUAlJXqcptsgTKJiB1la3i5ksg1wJSV67LY3AECHQmIHV15udoAgS6KiB1dZVTYwQI5EtA6spXPYyGQMkFpK6SnwCmT6DYAlJXsetrdgQGTEDqGrCCGS4BAhcjIHVdjJZ9CRDosYDU1WNgzRMg0E8Bqauf+vomQCAlIHWlQPxKgECRBKSuIlXTXAgMvIDUNfAlNAECBNoLSF3tbbxDgEDmAlJX5uQ6JEAgOwGpKztrPREgsKyA1LUskR0IEBhcAalrcGtn5AQKKCB1FbCopkSAQENA6mpI+JsAgRwISF05KIIhECDQKwGpq1ey2iVAYBUCUtcq0BxCgMCgCEhdg1Ip4yRQCgGpqxRlNkkCZRWQuspaefMmkEsBqSuXZTEoAgS6IyB1dcdRKwQIdEVA6uoKo0YIEMingNSVz7oYFYGSCkhdJS28aRMoh4DUVY46myWBARGQugakUIZJgMBqBKSu1ah19Zgzk5+9YWh4ZOjKWx4+2tWGl2msqd+je9YNj1RHMrxtonbsxNbo15F1u44v05i3CXRJQOrqEqRmCBDIo0ChU9e5I4+N37H+yqtqSWJk6NIN624cf/jFMzmrw+SWetZZXbg5vnNjPRtVp/n+HYeapnfgnoZAkKgqlaZ+pa4mOhuyF5C6sjfXIwECmQkUNnWdfvq+y9YEcaSRbIaGR9betf/0XGbCy3Y0c+ALnax1JVPX8KbtB5M9zj1z56WhQ30dq1Jp6lfqSsr5rS8CUldf2HVKgEA2AsVMXdNP3rumHrM2jO565sR0pVKZOfH0ntHL6/ljzV2T1W1F+GmkrkuvuqQ25bVfeCkxrafvi7bXF/waVw8T+0S/SF0tUGzKWkDqylpcfwQIZChQxNQ1d/DORrpav+tIAvOVrzVuXdq0/cXEOwP7SyN1bdw8Gq1pXXrfgWAyk5+qXV68fmx9PYbGa13BTtFLqauJxIbsBaSu7M31SIBAZgJFTF0T2xo3ct03mb6SOPPolvpy1yX3RJfiGqlleGTLxJkD45svWTMytHWyVoCZY4/ft/49V1RbW3PVZVv2PLx9rN5yfYdK5c2XHh6/Y/37N1SPGq7eOrZ+67cOvBOXL2j8iTMT45svqwejG27edfB0fa9gn+gm9tr26aOTO7fetDa+OPieG257/FTcbvCicfjGr+2s3/x+1Z1x7Jqb3FIb2GXjOxp3j
8Wpq3FgdeK19pZNXXMvbb+6sVi4dbIx/mAsXhLoWEDq6phQAwQI5FeggKlr8ebxW/c3wx/b1UhOG/ccq769GD7Gtm6rX4yrhqozE3dtqGes4J6wVOpabC3c5/L7DtTT3mLjay5Pt3bz4zO14S3uU08/lcrpiXvXNt2U1uaLhPHh2x5rxM1LPvVMfeL1LZu2vxjfO7/61HXoC5vq01+cYDOwLQQ6EpC6OuJzMAEC+RYoYOqKn3eQvsMpqsQTjZWwNffWlnji1FJfxakGi62TlQP31e8MWzO2/UD1a4+nD+xYHyehxlrXsd133Dw+efjNWn6aPvXY1nq0GvtuOlENbRw/8GalMn3q0Vsb3yjc8v3avWWLA4jXnBodJW5Ku3tvy8c3xIdvm4hvnL80WuSbefTPapO6esfhxW8srjZ1xRdn14ztfCXfJ7XRDbKA1DXI1TN2AgSWEShf6mosCA3V73+KU8vI0OV3PHy0mpamz83EC2ZhdFtc7GmkrkXdc2dOnDwyuf2maDWoxeXLJxr7PnlvfcWoabEtSl1xL4tLVo1DW/0dj78ap+p3cQ1fdefTlcq5/TfXYmJtkazDta7JndfXU+nYd/P26I1WKrYNrIDUNbClM3ACBJYXKGLququxarX0Fcbrm64w1heoapcdGyEjvupXqVQWryfGqeu1/XffeEP9pq7wImN9hzgSNe6dqlQq8e1TrVPX8TjfhF23r2TcRW0R60D9G4uXfOqZ04/fUYt3YzurD1/tKHVdEl8e/bP9budqXwvvdEFA6uoCoiYIEMirQAFT14lHNtcXk+oX2kL7xbvpLxuPnrAQp5YgGAU3e4XRJ526zk3e1viy5NClG9Zes3nLjY07nzpJXY2nnoZdh3NIvo7HH106fGn7+2uh89L77oxurv/wt05UD+goda3bcsdlUaZcc8eji98VSA7EbwS6ISB1dUNRGwQI5FSggKkrvrI2NDwy+kjye3/xzUnDNz34WlSSOLWEqavy2K31BbPwHvb4jrH6lxwbs/ouyAAACYhJREFUt4it/ezB+tO/Glsa34Js1fgya12VicZa3SquMFYqlcPjjeRXu7x43UORQGepa9eRA/c0vg1QXyPM6QltWIMuIHUNegWNnwCBJQSKmLqq3wHc1rgXfsPNe4+cbnpKavAcr1bBqFI5sfeW+oLZmrHxF2eqD1l9vNFmdLt9pRIvql1ya+1h928e3N64LtlJ6lq8kX+4Mfi5mcNPjt+2e7m76aM6v7ijvi5VXZ2Kw2WHqet4JVjYC/SWOLW8RWA1AlLXatQcQ4DAgAgUM3VVl3weuaX54Qu1IHXFdePxs7IST45IXNGbOxLfX1WPX9WndjXuGIsuIL62bzTeUrsAt/h4iNVfYayeOIfjx1sE94qFq27B2RWnxhZfThyqX17s+Apj7V+/Dp743/TvDgUD8pJAJwJSVyd6jiVAIOcChU1dVfc3Dz541+Z10WNOh0eG3rNp/dY9E7VvKQZViVNL4gpjdYfp44996qY1764lrfdUn2t6IA5D9VBVOf3cnpuvrj1G9d2bRj+7/0Q3rjBGYzt94FtbRhsPX11z1dprNo8fiJ5GEYy9+jIef5y6Fu/6Dy6wdrzWVe3rzGPxYy8uv3fiXGokfiXQBQGpqwuImiBAIK8ChU5d3UaPHyfRZtmp2/1pj0D5BKSu8tXcjAmUSEDqWnGx42eQDhfm33Bc8dztSCArAakrK2n9ECDQBwGpqx368Qe33vfwwVPTtX/bZ/q1Z7Z/uHYlcXhk6A/31Z7F0O5A2wkQWL2A1LV6O0cSIJB7AamrXYni+6Uad9BHN7Z/4L5JD6xqZ2Y7gY4FpK6OCTVAgEB+BaSudrU5NbF987orG/9m4pqr1o7esX3vS6fr/6x1u6NsJ0CgIwGpqyM+BxMgkG8BqSvf9TE6AiUTkLpKVnDTJVAuAamrXPU2WwI5F5C6cl4gwyNAoBMBqasTPccSINBlAamry6CaI0AgTwJSV56qYSwESi8gdZX+FABAoMgCUleRq2tuBAZOQOoauJIZMAECKxeQulZuZU8CBHouIHX1nFgHBAj0T0Dq6p+9ngkQaBKQuppIbCBAoDgCUldxamkmBAogIHUVoIimQIBAOwGpq52M7QQI9EFA6uoDui4JEMhKQOrKSlo/BAisQEDqWgGSXQgQGFQBqWtQK2fcBAopIHUVsqwmRYBAJCB1ORMIEMiRgNSVo2IYCgEC3RaQurotqj0CBDoQkLo6wHMoAQJ5F5C68l4h4yNQKgGpq1TlNlkCZROQuspWcfMlkGsBqSvX5TE4AgQ6E5C6OvPr7tFH96wbHtt5tGuNTmwdGdo62WlzE9uGVjKq6uBHtkx02lvPj69OZ2RFM+r5UHTQQkDqaoFiEwECRREoZOqa3FL9WN2WDgDVj9umjR0Wsv4RPlL7IF/8v+t2HV9Nw1JXk1o1OFarWfuTSJBRlavbw6h3bNfY0MY9x5raqW/otnC7fmxftYDUtWo6BxIgkH+BAqeupmWeXqSupgpXU8ISn/pN+yc2dDsTdGetKzHEbH85umfLYn6txqw4zk5sbbwOy1oFXDJYhzu3n8oy0a39gd7pXEDq6txQCwQI5FagsKlry66mC14r+8TtqFTVLjq4RCh1LakfhKHJLYvp6vjOjXXzia2Jda8Wja3sHAg6atGGTT0VkLp6yqtxAgT6K1Dc1DVRqX52Ln42VyrpT9zFS1SN1anjOzc2VlCqZUksrlSWXUdJ7V8rbG0M9Qtk4YWw8MLZ4vZk6lpyn8XBx8s/tQ4Xtw9tnVxyrSvYM5EUI4TJnRsbF++SbsGMxnZOBDeihYOvv17sIjnIVZ7zQRhqlbomti19E1voWa94dZz16oTnwOI1zVUvW65yig6rSF1OAgIECixQ5NRVqVQDxOIncSI9VANBHHeqn8e1z9fgcz1KaYuXCxNvtToj4kbiNxOH1D7g6z2GF86qo2osjyWDy+LFtfQ+wV1r4VvJ2FftfTiYfjysSqWWIIN8WW0k1qihxUOqJNJqrc34El4UqloPvpZmGnsmBhmO46Jeh5k4eD2xrVa7MIe1b7Y6ksaoKpWJrfHrxKmSKFz7xrzTCwGpqxeq2iRAICcCxU5d9XhRzzrBJ271YzW8NTuOO9UX9U/i6p1DW7clfl28xyhdvlocaeSP+puTW8LsUv2Mb5mBgvwXDyPdfGqfOCFVomQZrSQ1Z4U2PbYYSbBnIn9UB7LoFgwjGmE44PTr1oNMz2zFv1dH2ChN7aAo89W/jVi/zas6hmjtKo5TyQ4W55LcXqmtjDZWtpol03v7vWcCUlfPaDVMgED/BYqeuqJP0+jTOvjErX2ENy4t1T+no8wUB4vobqFqBKmFtnSESpSu9mGfvoi2mACCjuKol3w3XgNLPDliJfvU1vOirhdvMG8MLshSjU3Vv+NJBRvrK0aJGFd/O3arjieZZqpbGlmz3etGj2mf6vY4OS39KIfabo1IFAy68XJxuas+mHSqbuwYJMj6psSZ0OhC6orBsn8hdWVvrkcCBDITKH7qWrzOGKeH2rJT
qxBQZa8nlaN71sXXHLdOVj+tGx/JTbVpWhyK9mjOKI0ja5/0jbBSSx7NqWsl+9Taq/Y+sKmrIbLE31XG4GJoiz0bgTi8r6tRvvTuwTkQXWaNlzzDpBW+Trfg9x4LSF09BtY8AQL9FChD6mpcZ6x+q7G+VLPUx2otYE3sGqvHstrn9+KvTcWqNtVoNvlmvGyW3BzGrOo7wW6Ly0XBxrb7RM0mUlcyGraJg3GyDMZVz5rVLYsN1t+Pk0otANUDYvRe9a1GfFwcfATe2N6yzXrTy/3V3GPTEYsrfBeZulLnQPhr+LqpQxt6KyB19dZX6wQI9FWgHKmrfp0xuDGo9nEeLHdNbomv/dUve8WhoXZ5a/Fm82S5lowFtfWqxUtyx3Ztqz13PhFravs0boFaDC4r2ScaSbBnclK1ONjyTrKWd9PH8w0ajHqIU1cU1xYjZiTTOHBx8F1LXdUpLNYlKR+PLV6DDAYQhMjkUcFcElcba3SLmTXcLdmA33otIHX1Wlj7BAj0UaAsqat+nXExMdSTR+MZAY3oUCtF9TM7/ixPR42wWNWA0mgh8SLOc1Goqu8TB4joM752P9mWiWBZK8gN9etfS+9THUsyJAUtr9t1vG34aHyNsTH4xWiYbjBxN321v2BG2ybSA24whtubBxkSLvk66GuRd3GxLd1L42unw4nyJXpIxqnF9jfumUg81L5R2eA0SLTjl54JSF09o9UwAQL9F/g/7jZLAjzT+pkAAAAASUVORK5CYII=">| | Kiwi-8B-Preview | <img style="height: 250px;" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAx8AAAIpCAYAAADD8IWhAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAGCiSURBVHhe7d13fFP1/sfxd5ImXdBSkLIKZYMIsvcSEQeiCAiIqFcBERci4k8BB7gXgjguCoILARUcIKgoey+RLVA2FCiFMjrTJL8/oLnNaUunx7a8no9HHtd+xzlN6D2fvHPO98QSeXVLjwAAAADgH2Y1NgAAAADAP4HwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDQL4snDtDB3as0ZsvjzJ2XVF4HQAAyB7hA0VOz9tv0fb1i7Tnr+Ua/vhgY7fatGymNYvnZtlfXEyf+r4O7Fjj89i9ebl++WG6etx2s3E48mHqf8fpwI412rt1pUY/PdTYfcV48IG79dtPX2v35uU6sGON9m9fre0bFmvmZx+pWtUqxuEAAGRA+ACKuFSXS/HxCYqPT5DVZlXd2jU09rmn1L9vT+PQIunl55/Wqj9+0OefTDB2maJNy2aqV7e2JMnPZlPrlk2NQwqFLtd30HdffaK1S+ap5+23GLvzpUK5cH328Xg9O/wx1a5ZXXa7nxKTkpScnKLgoEBVjaysCuXCjdMKzD/53CSpWtUqmvDWWG1csUCvjx1p7AYAFCDCB1DE7T9wSNc0v17XNL9egx4ZoegTJxVSsqS6d7vROLRIatWiiSpWKC8/P5uxyxTt27bUVWXCdPbceSUnp6hyREV173aTcdi/rsE1ddWwwdUqERxk7Mq3t14ZrY7tWskjj1at3aB+Dzyqq5tcp7pNOuqZ51/Tnqj9Sk1NNU4rMP/kc9OlcNWqeROVKR0mq9Vi7AYAFCDCB1CMLFm+WqtWr5ckVSxfTg3r1zMOQS41bdxAfn5+WrVmg6JPnFTJEsFq0bShcVixdW+/XmrS+Fq53R79OO9X3f3AY1qzbpO3f9bsn3TvoKFat3GzzzwAADJD+MAV5amhD2ntknmK2rrKu0bip2+mqUO7Vt4xwx8frD1/LdeaxXM1/PHBWr/sZ+3fvlo7Ny3RhLfGqkK5cE167w3t3LREB3as0V9rFuqFZ4f57KdWjWqa+t9x2r5hsXetwKKfZ6lvr9t9xv0TUl0uSZKfn5+Cg4N8ns/Tw4boz1W/avv6Rd7LV+7v31uLF3yrvVtXXvZ3rVAuXO+/87L3Of395zJN+fAdWSy+nxS/+fIoHdixRmsWz1Wbls0ytC+cO8NnfN9et+uXH6Z71xFEbV2lhXNn6M2XR2vPX8tVq0Y1SVLbVs195qfNS/97fz3tA59t59cNndqpRrVIJSQmas36Tdq2fZesVquaN22k4Cw+hbdYLJrw1ljv38f2DYs14a2xGcbn5G8xp6/lwrkzNPThAbLb7QoODtK7b7xYYGueOrRrpeCgQB08dFgffPyZsTtTOXlu6f8uHx18v9Ysnqv921dr79aVmj19svff/XLPrUK5cI17/QVtWD7fu6+//1ymLye/552fJqu/s8kfvK2vp32g8uXKescd2LFG06e+7zMfAFAwCB8osux2u4Y+PEAHDIuu07+RSG/c6y/okQf/o/CyZRRzKlZ7ovbL5Xbp2vpXa8IbY3Rb1y4+40NCSmjgfXcpMTFJZ+LOKsDfX91u7qxpH49Xpw5tdOzYCcXHJyg0pKTu7HGr+vXuLl0KHu+/87I6dWij+Ph47fp7r6KjT6ha1SoaOeKxDPspaGWvKiNJijt7TqvWbvC2Oxx29et9h8JKhXrbnh42RCNHPK6qVSJ07tx5/b0nSokJiapeLVIvjnpSQwbe6x371iuj1e2WGxQUGKDjJ2J0LPq42rVursjKlbxjcuupoQ9p7OinVLd2DbncLu2J2q+jx6Ll73AoPj5Bu3ZH6cKFeEnS6dNx2vX3Xu3bf1C3de2ikSMeU51a1XX69Bnt+nuvTp06ravKlDbuIl86dWijsFKhOhkTq2Ur12r9pr+UkJioShUrZLn2oHWLpup64/U6fvykTp+JU1BggG67pYteem6Ed0xu/xazs2//QR08fFRut1upqamK2ndQu3ZH6WTMKePQXKtaJUKStGPXHu0/cMjYnUFun1tISAk9Mvg/cjqdOnI0WjarVU0a1dczwx+VsnlurVs21U2dO8pqtWrP3n3aE7VfktSuTQvvfGXzd3bu3Hnt3rtPSUnJkqTo4ye06++9OnjoqHc+AKDgED5QZHk8HiUmJXkXW3sfCYlyudw+Y+/t10s3d+kkSZrz0wK16nSbutzWTw8/MVLRJ06qdOlSGe4QFeDvr4WLl6vDTb3U976HdeDQEfn5+alaZGVN+fxrde7WV6PGvqmz586rZIkSanDN1ZKkIQPvUe1a1bV63UZ16tpHN/e4Rzf3uEc7du1RqdAQ3dS5o89+CkqtGtX0xkuj1LZVc7lcbq3b8KdPf6nQEJ0+E6f7Bg/TNc2v1+kzcerb63Y5HHatXLNB7W/sqZu691fPux/Urr/3KigwULd17aLg4CDvpTcul0uzZs9Vq063qVPXPnpr
wn+9b9pyq3HD+rqj200KCPDXlm07dXvvB9Tltn7qcFMvdbipl156Y7xu632/ok+clCTt3L1HN/e4Rw8NfVa1alRTcFCgYk7F6qmRL+nmHveoc7e+eu/DT427ybPg4CA1ura+rFartm3fpf0HDmnZyrU6GROroMAANW+S+aVXQUGBGv7sGHXq2kftb+yplWs2yGq1qG2r5mrTslme/haz89DQZ/XjvF/lcrmUnJyiDz/5TLf1vl9fzZxjHJpn8QkJxqYM8vLcAvz9tXDRMrW/safa39hTi5etksViUd3aNdSwfr3LPrfjJ2L08dSv1P7Gnrq5xz3qcls/zVuw0Gd+dn9nT416SWNefVdxZ89JkpatXKube9yjUWPe8Pk9AQAFg/CBIis1NVWTp33tXWyd9njw0acVcyrWZ+zVdWopKDBA0cdP6Ivp33nb06+RqF2zus8aiXPnL2jhH8skSXui9utY9HFJUvSJk/ruh/mSpN8XL9epU6dlsVhks138v9M19erKarWqTctm2r5+0cVLb9Yv0jVXX7xjUljY/848FIRaNap5L8G5686LYWLVmvV6c/xHPuOczlTN+WmBlq1YI0lq2KCeQkqW1Nlz5zXjmx8UH3/xzeWeqP36fckKOZ1OVapYXte1a+19/Y6fiNGMb37wbnPqFzN1/GSM9+fcaNmsscpeVUZnz53Xp1/M9H5qnRMxp2KV4kxV2avK6MVRw/XwoItnaH7+9Q/j0Dy75cZOiqxcSQmJiVq/6S/p0uL+bdt3yWKx6NoG9TK9veymv7Zp3i8Xf4/4+AQtXLRMiUlJKlkiWOXLlc3T32Jh4GfLfsF/Xp7bmbizmrdgoffnHbv2yOl0ei8bvJxVazfo6LHjmjThdS37dbZ2bFysnt27SukuO8zP3xkAoOARPnBFsNmsslgsSkxK0l/bdvj0HY0+kembnaSkZJ2JO+szVpKcTqf38pP4+AS5Pb5nWdJCyJ9/bdOcnxZkeCxfuc5nfH6l3Wo3+sRJbd6yXa+9877uffAJb5hIc/bcOW3dttP7s81mk8WS+fPct/+gUlKcsvv5yd/fcdnXL6+CggJls1mVmJikmBjfsJidL2fM1qzvftS58xdUs3pVPTP8Ua3640eNeGKIcWiedWjTUsHBQQoKDNTY0U95L+vrdssNkqTy4WUzPYt1+vQZn5/Pn78gj9vj/flyr2VWf4v/pnPnLkiS6taple3vlJfn5nSmKiEhyfuz2+37/6fLeXrYEL0+dqTatGouu8Ou7Tt3a/OW7T5j8vN3BgAoeIQPXFECAwIyfKJcqUI52e12JSUney/xKQhn4s5q+LNjMzwmffqlcWi+pN1qt3Wn23XHXQM1edrXxiGSJLfbk+FyNEkKCPD3WQciSdWrRcrhuPianDt/3ttut9t9Pu2vVrWK7Ha79+fLCQ7K/I1rZvvPiZfemKA2nbvrlbfe0+69+xRSsoTuu7uXd+1NflSrWkX169WRxWJRUlJyhkv7Ul0uBQT4q0WzRsapGZ5n1cjKcjjscrvdcqV7Y52fv0XjPv5J6zZuVmpqqmpWr6pHHrzP2J2p/Dy3nKpWtYpuvfkG+fs79NsfS9W60+3qfc9D2r13n3GolI+/MwBAwSJ84Iqwbcffik9IVIXy5XRf/zu97de1b602rZvL4/Fob9SBHC2ozc6BQ0ckSc2aXOvzRX+NG9bX+DfHqEXTjG9Y/w2bNm/V2XPnFRpSUv363OH9NLpWjWq64bp2stvtOnrsuFav26QTJ09dvAyrQjn1vuNW7zbuvKNrhi+Xizl1Wk6nUyEhJVSnVnXp0uvctMm1PuPS77//XT0ybMfIavnf4apPz9vU5foOio9P0JTPZui1t9/XqdjTCgwIUIXy5Xzm5UWHti1Vvny4kpKSNeHDKRku7fvlt8WSpDq1aqhxw/o+c5s2uVbXtW8tXXotb7y+g+x2u2JOxWrLtp25+lvM6Wvpw2KRn5+fsTXP5vw4X/sOHJK/v0MD7+un18Y84/NvdW+/XvpyykS1aNooV88tT9I9twrlwhXg7y9JSri0HqVWjWoZQk9u/85ycnkZACDvCB+4Inw5Y7Y2bPpLVqtFPW+/RWsWz9XCuTP03/deV4Vy4ToWfUJfzZxtnJYn02fO8X7R35hRT2rx/G+0cO4MfT3tA7Vu0bRA3xjmx5Llq/XHkhVyud1q26qZlv82R7/+OF1zvp6sunVqKu7sOc2aPVfx8Qn6fu4v2rvvgOx2ux4aeK+W/Tpby36drQfu7Sun0+mz3Q2b/tLZc+cVFBio/xv+iP6YN0sT33lJ/naHz7gly1dr6fLVcrndatOymX7/eZYWzp3h3XbarWXT7tjUvFkjLf11tr776hM1bdxAH41/VYvnf6Nfvv9Kb70yWleVKa1z5y9o+87dPvvJi3ZtWigoMFCnYk9r9dqNxm7vXa/Cy5ZRpw5tfPpKBAXpv++9rj/mzfK+lsnJKfpp/kLtP3AoV3+LOX0tJenQkaPebxwf9fTj+mPeLD3xyEDjsFzbE7Vfb0+YpGPRxxUQ4K+7+/TQit9/0PYNi7Vr01K9/PzTqlWjmvz8/HL13HIjs+fWvGlDnYi5uN6o2y1dtHDuDE2f+r4iIy/enStNTv7O/tq2w3uGr9stXbR4/jea/MFbPtsBABQMwgeuGI88OUpTv5ipU7GnVS78KtWqUU1ut0eLl63SY089pyXLVxun5MmS5av1zPOvacu2nfJ4Ll4eUr1qpM6fv6Cff/k9w7Xw/6Znnn9N496bpKPHjis0JER1atWQw+HQn39t09CnX9D0WRfvlrT/wCGNfX38xeckjypHVFRISEnN/PbHDJfQLFm+WpM+/VIxp2IV4O+vyMgI7dl7QIuXr/IZJ0lPjXpZ496bpBMnTynA31+1alRT+XLhOn0mzrtmZfK0rxW176BsVquqRFSUv79D+w8c1pm4s6pSOUJ169RUqdBQ/b1nn1558z39+vsS425ypXHD+qpXp5YkaefuvZn+ey1buVbHj5+U3W5XqxZNfPqWLF+tPXv3q2pkZZUoEawTJ09pwoeTNeHDKd4xOf1bzM1r+d33P+un+QuVmJSkUqEhKl8+XKmpF7/zJb8WLlqm/wx+Uj/O+1VxZ8/JarUoOChQfnY/nYw5pUVLV3hfp5w+t9zI6rmNf3+y9u47ID8/P1WvGqnjJ2L0+6LlxunZ/p2lnUE7cfKUHA67qlSOkNVKeQSAf4Il8uqW/1sJCQAAAAD/ED7aAQAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMIWlZaceHmMjAAAAABQ0i8fjyVP4cLnd8ngkt9tt7AIAAABQxFmtVlksks1acBdL5Tp8eDweuT0euVyEDgAAAKC4s9msslo
sslgsxq5cy1WMScspbneu8goAAACAIirtvX8uz1lkKsfh43/Bw10gOwYAAABQ+Hk8Hu9Si/zmgByHD13aWX53CAAAAKBoKagckKPwkbaj9KkHAAAAwJUh/dVP+QkhOQofSpd28rMzAAAAAEVPQWWBbMNH+h0UxA4BAAAAFC3GHJDXTJBt+AAAAACAgpDj8JGWdljzAQAAAFxZ0tZ85PWMR5ochY+COMUCAAAAoGgqqDyQo/ABAAAAAPlF+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAIWM0+nUhPHjVSo0RP4Ou/5z3306e/ascVix9M7bb8vfYfc+ataooePR0cZhAIAiivABAIXM99/P0TPP/J8SExMlSTNnztCrr75iHAYAQJFD+ABwxToeHa2aNWr4fNKe2aN0WCk1b9ZUTz89QuvWrZPL5TJuqkAdOnjI2KTt27crISHB2FxoxcbGatq0abr55ptUvly4z+tZIjhIDepfo0cfeVjLly83TgUAFGOEDwDIRnx8vLZs2aKJ772n9u3aKrJKZc2aNfMfCyHXXnutLBaLT1urVq0UFBTk01YYOZ1OvTtunCKrVNaQhwZr8aJFOnPmTIYxu3fv1pQpU/TZtGk+fQCA4o3wAQC5FBMTo/vuvVdPDH3ce2lUQep0/fV64403FRgYKEm6665+Gjr0CeOwQsfpdGr4k8M0cuSzcjqdxm4AAAgfAJCe3W5XRERlRURUVtmyZY3dPiZPnqynhj9Z4G+07Xa7hj35pOLOnlNyilOff/GFQkNDjcMKnWVLl2rKlCnGZklS+QoVcvy6AgCKL8IHAKTTt+9ditq3T1H79unI0WNKSk7R7j17NXjwYONQSdK0adO0bOlSY/MVx+PxaObMGXK73T7td93VT8eij+vgwUM+r+uF+ASt37BRXW/t6jMeAFC8ET4A4DIsFosiIyP1/gcfasGCX1SqVCmffrfbrYnvT1RSUpJP+5UmPj5eBw4c8GkLDAzU40OHqkyZMj7tunR259prr1WvXncauwAAxRjhAwBy6PrOnfXyK68am7Vu7doMb7ydTqd+X7hQgwYOzHBHravr1tGLL76Q5fdXbNq0SWGlQn3m/PzzPJ8xAwcM8OnvckNnXbhwQZs3b1aH9u3k77CrfLlwbdmyRS+//JLPWH+HXS+88LzP9tJktu9vvpllHJYjiYmJ2r37b2Nznpw7d06vv/aaIiIqeX+vxo0a6rvvvs124X9iYqJmzpyhm2++SaXDSnnnlwgOUvNmTTVh/HidO3fOOE3K4o5o77z9tiRp48aN6t79dgUG+Hu3d8cd3bVz507jZgAAlxA+ACAXevXqpUaNGvm0nT59WgfThY9t27apWtVI3XprV3355Rc6fNj31rn79u3TG6+/rquvrqsF8+f79OXH+nXrdNONXbR27VpJUnJyslJTU9WzZ68MZ2wWL1qU6RcXrl2zxueWvhUrVlKLFi19xmQmODhYVatWNTZr4IABevutt/K1MP/AwYPq2KG9xox5UTEnT3rbd+zYof53363nn39OqampPnN06VKwP37/XTVrVNd/7rtPixctUnx8vLff6XRqy5YteuaZ/1Nklcr65ptZ8ng8PtvIjMfj0YTx49WmdSv9smCB91Izp9OpBfPnq1HDazVh/PgcbQsArjSEDwDIhTJlyqhN27bGZm3dutX73ykpKT5vcrOSkJCgIQ8PUVRUlLEr1y5cuKAXXnhecXFxxi7VqVNHN954k0/btm3bMuw3KSlJv/z6i0/b7d1vV2RkpE9bZiwWi+66q5+sVt+y4na79dxzo1WlcoTGvfOOYmNjffqzExt7Sg/c/x/t2LHD2OX1/sSJ2rBhg7FZP/zwvbp1u1WnTp0ydmWQkJCge++5R+9NmJBtaJg5c4ZGjnzW2Oxj5MhnteiPP4zNAHDFI3wAQC5VqljJ2JSpkJAQPTVihFasXKWDBw9p3/79+ui/k3y+r+N4dLS+/nq6z7y82LRpk9atW2dsliT5+fnp3nvv9fnukISEBK1ds8Zn3IEDB7Tu0lkTXQoU3W/vnuE7R7LSoWNHDRo0yNgsXbpsatSokapYobweHDRQR48eMQ7JVEJCgvbt26fKlavo/vvvV8+evWS3233GpKSkaNasmT5tO3bs0GOPPpphAXz16tV1//33a+DAgWrYsKFPnyS9/vpr+uuvv4zNPrZt2ya3262GDRtq4MCBuvPO3t7bIqdxu90aP2F8vs74AEBxRPgAgALmcDj0+htv6tDhI3rttdfVvHlzla9QQZUqRWjgwIF6fOhQn/ErV67M0ZmS7Njtdn3030k6e+68kpJTtG37dtWsWVOS1LJVKzVv3txn/C+//uKzUP7PPzfp9OnT3p+bN2+ups2aeX/Ojt1u17vjJ2jMmLHGLh9ffPGF6tSurUmTJmV7lkGXvvdk46ZN+viTyZoxc6Z++fU3ORwOnzE7d+70eQ0nT/4kwxmPl19+Rdu279DHn0zWR/+dpLXr1mvSx5/4hKu4uDhNmzb1sr+XxWLRuHHvau269frov5M0/euvtWXrVu9rnWbN6tXas2ePTxsAXOkIHwBQwOrXr68hQ4YoMDBQTqdTJ0+c0LKlSzVt2jQNffwxLVm82Gf8ubNnlZKS4tOWF2PGjNXAgQMVEBAgi8WiSpUiFBISIkkKDQ3V3Xf39xmffqF8amqqfvzhR5/+7t3vyPX3i9jtdo0cNUpbt21X9+53GLu9nE6nnhj6uN4dN+6yb/T9/Pz0/PMv+PweDRs2zBCkXKmp3u2cOXMmw1mdFi1a6KEhQ2Sz2bxtFotFvXv3Vvv27X3Grl+3LtPL19Lc0KWLBg4a5BNaqlSJ1HPP+y7iP3/+fIb1PgBwpSN8AEAueDwen/UdaWrW+t+n3h6PR8uXL1eH9u1UIjhIlStHqEuXGzTkocH6+OOPvQvC05w8GaPkfN6qt1z58up15+VvW3tL166qmO6SsdOnT+vPPzdJko4cOeLze5UqVUq3dM37d3DUrl1b33z7rY5FH9drr73uDUFGb7315mXXc4SHl1PVqr5rTiwWi0+IMEpIiFd09HGftmbNm2capEqUKKH6DRr4tGX379G+XfsMl1lJUq1atTO0HzuW+R3NAOBKRfgAgFw4ePCgli1b5tMWFBSkKlUuvkH2eDx6d9w43dD5+gwh458UUalShjtaGUVERKhdu3Y+bT/+8KNSU1O1bdtWHTt21NveqnVr1ahRw2dsXpQpU0ZPjRihQ4eP6IUXXjR2Ky4uTr8vXGhs9rLZbLJZsw4amTlx4qTi4s74tF1unc7l+jKTVfCxWq0Z1s
ckFMDldABQnBA+ACCHPB6Pvpk1y+dNui5dZpX2Rn3t2rUaM8b3TXb//vdoydJlOnz4iC7EJ2jO99/79BeE4ODgDAuxjTJbeL527VodPnxYCw0BoH///goICPBpy4/AwECNGj1aI0eNMnZpy5YtxqZ8KVcuXKVKhfm0HTX8m6W3d2/BrMuIOXnS5zbFMpwRAwAQPgAgRzwej7777lu9+OILxi7de99/vJf0zJ//s8/6jfvuu0+fTp2q1q1bK7xcOdntdu3dszfdbHMZF54fO3ZUixcv1tIlS7xtVapEZjhDkhObNm267N2dLBZLhu9I+ScEBQWrQoXyPm0b1q/P9HtNzp49q+3bt/u0VatWVSVKlvRpS++vv/7K8L0iHo9HS5ct9WkLDAxUuXK+vwcAXOkIHwBwGR6PRzt37tSdvXrpnv79M9y6tdP116tv377en48e8f2EPSg42Ofn2NjYPH9jeEHIbOH519O/8rkrU9dbu6pChYo+Y3Liww8+UIP69fXFF19k+kb/wIEDen/iRGOzrr32WmNTvpQqVUrNW7TwaVu3bp2mTJ7ss7jd4/FoyuTJGW5R3KFjR5UoUcKnLb3vvvtWc+f+5NO2Zs0afTplik9b1apVM/3iRQC4khE+ACCdWbNmqkb16qpRvboiI6sowN+hRg2v1bx5c41DVbNmTX3yySc+C5ntdj+fMZ98/LGef/45bdq0SV9/PV03dL4+0y/EM5Nx4fny5cu9n+T7+fmpT5++GdYu5NThw4f04KCBCi97lSIqVfS+lhGVKqpO7VpasWKFz/irrrpKXW680actvywWix54YECGNTCjRo1Uvavr6qHBD+qhwQ/q2gb1NWrUSJ8x5StUyBDOjNxut+7q21ddbuisRx4eoi43dNZ1HTtkuEPWPffcq6uuusqnDQCudIQPAEjH6XTqyJHDOnLksI5HZ32nojp16mj2nO+9C83TXN+5s8/Pbrdbb7/1llq3aqkH7r9fO3bsUFiY73oEs0VGRur27rcbmyVJTZo0Uf369Y3NeRITE+N9LWNiYozdkqThw5/S1VdfbWzOt4YNG2rkyIzrS/bt26fPPvtMn332mXbv3u3TZ7VaNWHChGwX2qcFs2XLlunTTz/NcAMCXToj9uDgwcZmALjiET4AIBfsdrueGjFCq9esVd26dY3d6tGjpwYOHGhs9ho4cKCeeGKYsdlUFotF/frdneGL+iTp7rv7Z3pL2oIWGBioqdOmafhTT+X5LMvlWCwWPTFsmL6aPj3L2/ymFxFRWb/+tlA9evQ0dmXw1IgR6nT99cZmr6ZNm+rTTz815XUEgKKG8AEA2Shbtqx69Oipr2fM0ImTMXrttdcVbFjLkcZut2vcu+M1fvwEn7MitWvX1vSvv9b7H3yY7V2pzNCwYUN1vO46n7ZSpUqpQ8eOPm25Menjj/Xzz/M1aNAg1a5dO8PzLFu2rDpdf72mffaZjh6LVv/+9/wjwSPNxS8R7KOoffs16eNP1KJFC5/fKTg4WF1uvFEzZs7Utu3b1aFDB5/5WQkrFabZs+doxNNPq2x4uLe9Xr16+nTqVP2xaLEqVYrwmQMAuMjiudxXy15akJf2cLlccjqdsjv8jcMAAEXI2bNn1e3Wrj6Lrfv06atpn30mPz/fdStXsuPR0WrXrp3PN5W/+uprGvH00z7jAKC4c6Yky263y2azyWKxeB+5xZkPALgCrV2zRuvXr/dpu+322wgeAIB/FOEDAK4wZ8+e1bvj3/W57WyzZs3UufMNPuMAAChohA8AKOaOR0era9dbvLeYbdqkiRYvWuQz5tHHHlOZMmV82gAAKGiEDwC4Auz+e7f3FrPp1y/o0h24evfu49MGAMA/gfABAFewnj176fU33sxwZyoAAP4JhA8AuAI1bdpUM2fN0lfTp/N9FAAA03CrXQAAAACXxa12AQAAABQphA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExRrMLHzz/Pk7/Dri43dNaFCxeM3QAAAAD+RXkKHxcuXFCXGzrL32H3eQQG+KvTdR21YP58uVwu4zQAAAAAV7A8hY+suN1urVq1Snfc0V3Dnhgqp9NpHFJsHD58SI89+oju6tuXsywAAABADuQ7fLz66mtKTnEqOcWpmFOxGjRokCTps88+08aNG43Di40tW7Zo8uTJio09ZewCAAAAkIl8h4/0QkJCNPq551SlSqRSUlK0Yvly4xAAAAAAV6gCDR+SFBISqqpVI33aNm3apLBSoQorFaply5ZpyEOD5e+wa8hDg71jdu7cqTvu6K4SwUE+60eWL18uj8fjsz2n06n/fvSRqkZGyt9hV6nQEE0YP14pKSk+45Ru3zVr1NDx6Gifvnfeflv+DrsGDhjg0+5yufTdd9+qcaOG3vUsVSMj9eWXX+h4dLRq1qihnj16SJKWLVumMqXD5O+w6+ef50mSzp07p1GjRioiopJ3fuNGDbVp0yaf/QAAAABXkgIPH8ePH9fOnTslSZUiKvn0eTwevfzyS5o2bZokyelMlSR9//0cNWncSAvmz5ckRURUls1m06pVq3Rjlxv0ySefeLfhdDr1xNDHNWzYE4qOPqawsDCVKFFCzzzzf3r1lVe84/Lq7Nmzuqd/f/W/+27t2LFDwcHBKl+hgqKjj2nJ4iWy2WyqUqWywsLCJElWq1UVK1ZSRERlBQYE6uzZs+rTp7fGvfOO4s6c0W233a7mzZtr165dio4+ZtwdAAAAcMUosPDhcrm0bds2DRzwgGJiYlSjRg21b9/eZ0xiYqK2/PWXFi1eoqTkFH06dap27Nihxx59VG63W6Ofe04xp2IVtW+fYk7FavRzz8ntduuN11/XgQMHJEmLFy3S1KlTZbVa9dX06Yo+fkJHjh7TX1u2FsjC74kT39OcObNVqlQpzZv3s2JPn9HBg4e0b/9+NW/RQmXDw7Vo8RJ9OnWqJKldu3baum2bovbt0/WdO2vjhg1asnixypYtq81/bdF3s2drxcpV2hsVpQYNGhh3BwAAAFwx8h0+Ro8eJX+HXUGBAWrapLHWrFmjoKAgjRv3riIiKhuH6+13xqlt27ayWCySpPk//6xTp06pRYsWeuKJYQoMDJQkBQYGashDQ1SnTh0dO3ZUq1evkiTNmTNbHo9Hd97ZWz169PRup27dunpxzJh0e8q9U6dOafZ330mSxk+YoC433ujdfqVKERoyZIhhRkb+AQGy2WyKjY3V8uXLvbccrlQpQlWq+F6OBgAAAFxJ8h0+goODFRFRWRERlVW7dm09O3Kkdu7cpVu6djUOVVBQkOrXr+/TlnaJV
rPmzRUaGurTVzY8XI0bN5EkHT1yVBcuXFBUVJQkqUPHjvLz8/MZHxIS4vNzbh06dEgHDx5U6dKl1aRJU2N3jrRo0UIDBgyQ2+3WkIcGq0zpMI0aNVKxsbHGoQAAAMAVJd/hY9So0Yrat09R+/Zp67btGjv2JZWvUME4LFslS5Y0Nl1WiRLBxqYCY3c45O/vb2zOEbvdronvf6DlK1bqxptuUmJiosa9847qXV1X69evNw4HAAAArhj5Dh/5Vbt2benSXakSExN9+k6fPq1du/63eD0gIEBlylwlSVqzZk2Gu2Bd7m5SsbGndCzd3a48Ho+2bt3qMyYsLEwlQ0J04vhxrV+/zqcvNywWi1q0aKG5c+cp+vgJdb31VsXFxemVV15WUlKScTgAAABwRfjXw0eXG29UUFCQFv72m96fONG7RiIxMVEffviBNm/e7F287ufnp44dO0qSZs6YocWLFnm3s3r1an3w/vven9OUL19OV11VVgkJCZozZ7Z3+3/8/rt++ulHn7GVK1dW11tukSSNfHakNmzY4O07evSIJk2alG70Rbv37NGJEye8P+/atct7KZkklS5dWq1atpIkJScne/cPAAAAXGn+9fDRuHFjvfjixYXizz//nEJDSqpG9eoqe1UZvfrKKxkWr995551q1KiR4uLidMstNyuiUkVFVKqoGzpfr1tv7WbYulShQkXd3v12SdLbb72lShUrKKJSRd11V1+179DBZ6yfn59GjR6tevXq6ciRw2rbprXKlwtXZGQVVa9WTevX/e9sSIUKFRUUFKTj0dFqeG0DRUZW0S8LFigqaq8aNbxWVSMjdWevXrq6bh298MLzkqTevfsoOPifu1wMAAAAKMz+9fBhsVj0xLBh+v2PRWrTpo1cLpeOHDksSbrvvvu0dt16n8XrZcPD9dNPc3X//ffLarUqJiZGZcuW1c8/z1fPXj3Tbfkii8Wil156WY8PHSq73a4zZ86odOnSmvP9D+rQ3jd8SFKVKpFaumy5nhoxQmXDw3XmzBmdPHFCbdq00cBBg7zjGjdurDfefEulSpWS0+nU2bg4lShZUjVq1FTbtm116lSM5s79SQcOHFCbNm30+x+LNMDwZYYAAADAlcTiMS6cMPB4PN6Hy+WS0+mU3ZG3xdgAAAAAih5nSrLsdrtsNpssFov3kVv/+pkPAAAAAFcGwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBQWj8fjMTam5/F4vA+XyyWn0ym7w9/bBwAo+iwWi7EpX6gPAFA8pNUHZ0qy7Ha7bDabLBaL95FbeQof/v4BsvnZlPvdAQAKI48kV6pL7suXhGxZLRbqAwAUI2n1ITk5qUDCR64uu0oLITYbhQUAihOLJJvN5j3O5xb1AQCKp/zWB6Mch4+0nXk8HuUh5AAACjmLxfdYn1PUBwAo3vJaHzKT4/CRJr87BAAUXvk5xudnLgCgcCuoY3yOwkfaztxut9xut7EbAFBMpD/O56TQUB8A4MqQ2/qQlRyFD6W7njc/OwMAFG55OdbnZQ4AoGgpqGN9jsOH0u0UAFA85fU4n9d5AICioaCO89mGD+NOOK0OAMWX8RhvrAHpGfuMcwEAxYfxGG+sATmVbfhIU1CnWgAAhVdejvV5mQMAKFoK6lifo/CR350AAIqenBz7czIGAFC85OfYn6PwkV5+dgYAKNzyc4zPz1wAQOFWUMf4XIcPAAAAAMgLwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYotiFj29/WKA7+j2kJSvWSpJWrd2kbn0GaebseZKk2NNnNODRZ/R/L7ypxMQkw2wAQHFgPPYDAAqHQhk+du/dr773P65ufQZp9MvjlJKSYhyilJQUjX55nLr1GaQBjz6j2NNnJElWq0UOh0MOu904BQBQTMQnJGrWnJ818NFndVvfB9WtzyD1fWCoZs6eaxwKAChECmX4SO/vPfu0d99BY7P27juov/fsMzar1+0365vP31eblk2MXQCAYmDvvoN6bMSL+nLm90pISlLrFo11w3VtVbF8uE7GnDYOBwAUIoU6fJS9qrRsVqvWrN9s7NLi5WvlZ7PpqjKljV0AgGLqTNxZvfvBp4o7e06PPniPpk9+V6OeekTDHnlA419/TkOH/Mc4BQBQiBTq8FG9ahXVrFFV6zb+pdNnznrbT585q63bd6lxw2tU9qownzkzZ89Ttz6DtGrtJp/2rHg8Hs2a87O69xusT6bNUGpqqnEIAKCQWLlmow4dOaY+PW7VzTd0lNV6+TKWmpqqb39YoD7/eVy33zVYb47/WPHxCd7+lJQUffP9z+o3cJi69Rmk+x9+WkuWr5HH4/GOOXDoiJ5+/nXdftdg9eg/RK+N+0inz8RJl2rI8lXr9dATo3Vb3wfVo/8QTZz0ufdy4ZxsHwCuJJc/av/LAvwd6tyxjaJPnNTmrTu87Zu37tDpM2d1fYfWsljy9xRWrN6gmbPnqlP71hpwb2/5+fkZhwAACgGPx6PtO/coNKSk2rRsIovFYhySwYKFS7V56w7173O7IiqW1/LV6/XdT79Il4LJR1Om65s589XlurZ66vFBqhxRUe9N+kyr1l38AGvX7iiNHPuOnM5UDR3yH93du7v+2rpT70ycosTEJG3cvE3vfvipwkqF6omH79d9d/VUQmKiXC53jrYPAFea/L1zN0GjBvVUoVy4/li6SikpKUpJSdEfS1epVs2qqlk90jg8V3bujtJHn05XnVo19OB/+hI8AKAQS0pKVuyZOPn7O1QiOMjYnamqkRF68ZnH1f3WLhr2yAMKDgrUzr+jlJScrN1792vFmg265647NODe3urUvpUeGdhfJUsEa9HS1UpKSdH3835TaMkSeuGZobrhurbqfcctuqlzB/299+J6xAOHjsjpTFXHdi3UuWMb9bjtRv3fE4MVEOCf7fZTXS7jrwsAxV6hDx+lw0LVumUT78LzvfsOas/eA+rcsY0C/P2Nw3Ps+IkYjf9wqkqWCNaTjzyg4BwWMgDAv8Nms8rf36HU1FSlpDiN3ZlqUK+OHA6HJKlM6VIKDg6S2+2Wx+3Rjr/3KikpWVM+n6VufQapW59BenDoKJ0+c1bnL8TrdGycovYf0pFjx3XfQ095x8yZ+6uSk1N0/kK8WjZrpHLhV+mjKdM1ZNhz+m3RcqWmpspisWS7fWcOnwMAFCeFPnxIUoc2zWW327V6/Z9atGy1SoeFqlGDesZhuVIiOEhhpUJ0/kK8zsT9bz0JAKBw
cjgcqlYlQmfizmnDn1uN3bmWmnrxzMPQIf/R55Pe9nmMHvGIHA4/ud1u1a5ZTR+/92qGMc0a11flShX0wTtj9djge+V2uzVx0ud64bX3FB+fkO32AwLy/gEaABRVRSJ8VI6oqGuvqaNffl+mxcvXqHXLJiodFmoclislSgTryUcHqGSJYI19833t2h1lHAIAKGSu79BaISEl9PV3c7V+05Z8LdyuVaOq/Gw2bdn+t0JDSqpM6TDvIzSkpEJKllCliuV19NhxnTt33qe/TOkwORwOOZ2pCvB36KbOHfTxe6+qS6d22vX3Xu3asy/b7edkzQoAFDdFInz42Wzq3LGNnClOWSwWNW/cwDgkT8qHl9XwRwdIkt79cKpOnDxlHAIAKESqRkZoyAN3KykpSWPfmKgHHvk/vf3eJ3r3g0/12IgXNXHS58YpWapXp6bq1a2lpSvW6oVXJ+j3JSu1YOESvfzWB9q776AcDoduvqGDUpxOvfTm+/pq1g9atnKdpnw+S598NlOS9MPPC/X2xMlavHyN5l9a3F6yZAmVvap0ttsHgCtRkQgfknRN3VqqVrWy6tSqnu+F5unVqVVdd9zaRdHHT2r8R9N8bsEIACh82rdprvGvP6cmDa/RhQsJWrpynZasWKvk5BTVqVXdODxLgYEBGjl8iG64rq12792vCR9N06dffCuPx62wUiGSpDYtmujZJ4copGQJzZrzs96eOFmr1/2pyMqVJEkVy4drx649Gvf+FH0ybYbCSoXq/4YNVpWIijnaPgBcaSyebM5Zu91u7yMlJUUJCQkKDw83DgMAFANn4s7KbrfLZrPJYrF4H5nxeDzeh8vlktPpVFip/F0SCwAonE6ePKmgoCA5HA5ZrVbvI7dyPwMAAAAA8oDwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUuQ4fFovF2AQAKCbyc4zPz1wAQOFWUMf4PIUPt9ttbAYAFHFutztfxYX6AADFU37rQ3o5Ch/GnSUkJlJgAKAYcbvdSkhM9GkzHvszYxxDfQCA4iWv9SErFo/H4zE2pud2u+XxeOR2u+V0OpWUlCSL1San0ymXy6XU1FS5XC7vOEne/wUAFC5pBcNischqtcpms8nPz082m012u93731ar1WdsZtIf891ut7cmUB8AoOjJrj543C4FBATIbrd7a4TVmqPzGD6yDR9pRSUtfCQnJ8tq85PL5br0SJXbfXFM2ngAQOGVVmCsVqusVotstovFJe1xsf1iYckqeKTxeDw+deJ/tYH6AABFzeXqg9uVKn9/f2/4SP8hVW7kKnykpqYqJSVFVpuft8ikPzOSblL6TQAACot0hSL9J1dpocNms3lDR27CR9ojrS5QHwCgiMmmPrhdqXI4HPLz8zMnfHg8Hm/4sPnZvUUmreCkjQUAFH7pT69bLBZv6DCe8ciusKQ//qevF9QHACiasqoPrlSnN3ykrxe5laPwkVZQXC6XUlJS5Gd3eNspLgBQ9BiLi/GRfkx2jAHE+Eg/BgBQuGVVH1KdKXI4HD7rAnNaJ9LLdfhwOp0+4SNtDACg6MmsyBj7spO+BhA6AKB4MNaHVGeK7Ha7eeHDc+myK6fTKbvDP0OxAQAUPVmFjdwWlKxqAvUBAIomY01wpiR774qY2QdWOZWr8JH+zEdmstkUAKCQyKpgZNWeU1nVgazaAQCFS1Z1IP2ZD9PDh93h7+0DABR9eSkgl0N9AIDiIa0+pJ35+FfDBwAAAIDir6DCR+6/lhAAAAAA8oDwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKfJ1t6tspgIAioi83LHkcqgPAFA8FIpb7fr7B8jmZ1PudwcAKIw8klypLrkvXxKyZbVYqA8AUIyk1Yfk5KQCCR+5uuwqLYTYbBQWAChOLJJsNpv3OJ9b1AcAKJ7yWx+Mchw+0nbm8XiUh5ADACjkLBbfY31OUR8AoHjLa33ITI7DR5r87hAAUHjl5xifn7kAgMKtoI7xOQofaTtzu91yu93GbgBAMZH+OJ+TQkN9AIArQ27rQ1ZyFD6U7nre/OwMAFC45eVYn5c5AICipaCO9TkOH0q3UwBA8ZTX43xe5wEAioaCOs5nGz6MO+G0OgAUX8ZjvLEGpGfsM84FABQfxmO8sQbkVLbhI01BnWoBABReeTnW52UOAKBoKahjfY7CR353AgAoenJy7M/JGABA8ZKfY3+Owkd6+dkZAKBwy88xPj9zAQCFW0Ed43MdPgAAAAAgLwgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTFMrwsXvvfvW9/3F16zPI++g3cJi+/WGBUlNTjcMBAFeAHbv26M77HtUjw1/Qmbiz3naPx6Np07/TbX0f1NwFf/jMAQAULoUyfKS5uk4N/d8Tg3V379sVGOCvL2bM0ffzFhqHXVZKSoo+mz5bw0e9qtjTZ7ztF+IT9O4Hn2rM6+8pMTHJZw4AoPCpXau62rRsqsNHo7Vi9QZv+8FDR/X7kpWqUa2KOrZr6TPn30SdAYCMCnX4CL+qjDq0baG7e9+ul0Y/qdCQktq4eZuSkpONQ7Pkcrm14++9ijt7zqc9OTlZ23buVgIFAQCKBD+bTd1u6qSgwAD9sXSVzp2/II/Ho3m/LlL8hQT16n6zQkqWME7711BnACAj25gxY8YYG9PzeDzeh8vlktPpVHBwsHFYgYo9HafFy1arYoVyatuqqXTpDMbvS1apXPhV6tiupaxWq95+7xNNnPSZGta/WmVKh0mSVq3dpIeHPy+r1SqHw64nnnlJx6JPKD4hUT/MW6jfl6xUQIC/Ro59R/EJiYo5dVrf/jBfm7fuVLtWzWSzWbV42WqNfXOipnzxjeb+skgej0d1alaT1Wr1bj/29Bl99+MCvf/xF6paJUKVIyoYngUAFD1JScmy2WyyWq2yWCzeR3Y8Ho/cbrcCAwKMXQUqNDRER6NPaNNf21W5YgW5XC59Nn22alSPVP/et8tu99OBQ0c07oNP9f7Hn+urWT/ot0XLJYtFNatHymq9+Jnb+fMXNOGjaRr3wRTNmjNP0cdPateeKD3/yrveY/rM2fP07Ji3fI7xu/fu1yPDX9CBg0fUtlVTxZ4+o8dGjNGSFWu1J2q/3hg/SYePHJPFYtFTo1/LtM7Y7X6GZwUAhV98fLzsdrtsNluu6oNRoT7zkSYxKVl
zf1mk8xcu6JYuHeVnsxmHZKpCubIaMrC/KlUop5IlS+jhgf310AP9dO01dfXwwP4qWbKEKlUop2EPP6C7e98mP7ufvv1hgSZO+lwN61+t/3tisJo1qq+vZn2f4XKvRctW69YbO+mnmZ+oTcsmPn0AgH9G+rMfP87/XZ9Nn60Up1N9enRVYGCAdu2O0six72hv1AH1uv1mPfrgPSpRIliffvGNZv/0qzwej1JSUvTO+1O0Ys0GXV27pgbff5diTp3WT/N/N+4ux/ZEHZAz1aVZ0ybq6ScGq3bNapnWGbvDbpwKAFeUQh0+lq5cp259Bqn3fY9q7oJFuveuHmrU4GrjsCyVLFlCrZo1UmhoiAID/NWqeSO1bNZIFcqHq1XzRgoM8FdoaIjatmqqRg3q6VTsGS1YuFQ3dGqrYY88oA5tW2jwA/1UtUplrVi9QecvxHu33aRhfbVr0zxPiQ8AkHfVq1VRy+aNtf/gYW3ZvktNGzVQowZXK9Xl0vfzfpPT6dTIpx7WPX3v0C1drtNLo55UpQrl9NsfyxV7Ok5/79mvbTt3q2O7lnpp9LCLY0YPU9NGDYy7yrHSYaHq16ubHA6HJOmqMmGZ1pmcfngGAMVVoQ4faQvO772rh6pXraxPv/hGz70yXvHxCcahBeLQ4aOKPX1Gv/y+TLf1ffDiXbYGPKGo/Qd1/sIFpaSkeMfWqlGVIgIA/4K0sx/BQYEKCPDX7V07y8/PT+fOXVDU/kOqWiVCNatHeseXDgtVjWpVdCE+XqfPxOng4aNKTk5Ry2YN5ed38RIoPz8/nzm5Vb5cuEqFhhibAQAGhTp8pC0479vzVr398rPq2K6ltu34W5u2bDcOLRAul1sej0e9ut+szye97fMY9+oohZUKNU4BAPwLKleqoMgqEQq5dFlTbly49AGWzer7AZLb7fb5GQBQ8Ap1+EjP5XbL6XRKkpKTL56BqFSxvOITEnXoyDHvuPT/nVuVKpZTaEhJ7di1RwH+/ipTOsz7CCsV6l2oCAAofEoEB6pC+XAdPnJMBw8d9bafPXdeh45EK6xUqK4qU1qVIyrIYrFo01/b5PF4JEnx8Qn60/DBVqWKF0PNrt1R3rYTMae8NQgAkHuF+m5XfnY/Bfj7669tO/Xpl99q85YdiqhYXnf3vl1BQYFKTErSyjUbtWtPlCSLFi1bpQULlyo1NVXX1q+r+vVqy2K1aN3Gv7Qn6oDizp7TkWPRqlC+nBwOu1au2aj9Bw8pOSVF+w4cUoN6dRRz6rTWbtisDZu3ymKRDh4+qpmz5ykgwF+VKpTT4aPRWr56vXf7AFCcFPa7XaVJTU3VkhVrlZCYqBuvb6egwEDZbDaFhpTUspXrtWL1BiUlJeto9HFNmjZDhw4f0929b1fD+nVVskSwNvy5VZv+2q6Dh4/q7Nnz+uSzGTp3/oLi4xPUvk1zVY6oILfbrRVrNmpv1AElJSVry7Zd+vrbn5SckqLIypXUtlVTJSYmaeHiFQoKClSn9q187mTlcrkz1JlqkZW52xWAIumKuNvVzr+j9NZ7n+iTz2bqwMEjur5Da7383HBdVaa0JKlJw2vUv+8dSkpK0bTp3+lM3Dn9p19Pn2342Wzqd+dtqlShnBYvX6O1G/6Sn59NJUsE6+7etyk4KEg/zFuoPVEH5O+wa8iAfurT41bFno7Th5O/0oeTv9Tp03EKL1vGZ7sAgMKnaaP6enHkUFUoH65vvp+vj6ZMlzPFqaeHPqiuN14nSQorFaqRTz2sWjWqatXaTfpixvdq36aFOrVv5bOtqlUi9ND9d8lut+ub7+frzy07NPC+PgoKzFnAyqzO2GyFuuwCwD/O4kk755wFt9vtfaSkpCghIUHh4eHGYQCAYuBM3Nkcf7KV2Znxorw27t0Pp2rt+j/18nPDVbtmNWM3AFzRTp48qaCgIDkcDlmtVu8jt3I/AwCAYuZo9Alt3rLDuy4EAPDP4MwHAMDrSjjzkZiYpIkff66SJYJ1dZ2aitp3UH8sXaUL8Qm6r19P9b7jFuMUALjiceYDAIA8sNmsKh0WqqUr12nc+1P004I/FFYqVE89Pki9br/JOBwAUIA48wEA8LoSznwAAHKPMx8AAAAAihTCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwRa7Dh8ViMTYBAIqJ/Bzj8zMXAFC4FdQxPk/hw+12G5sBAEWc2+3OV3GhPgBA8ZTf+pBejsKHcWcJiYkUGAAoRtxutxISE33ajMf+zBjHUB8AoHjJa33IisXj8XiMjem53W55PB653W45nU4lJSXJYrXJ6XTK5XIpNTVVLpfLO06S938BAIVLWsGwWCyyWq2y2Wzy8/OTzWaT3W73/rfVavUZm5n0x3y32+2tCdQHACh6sqsPHrdLAQEBstvt3hphteboPIaPbMNHWlFJCx/Jycmy2vzkcrkuPVLldl8ckzYeAFB4pRUYq9Uqq9Uim+1icUl7XGy/WFiyCh5pPB6PT534X22gPgBAUXO5+uB2pcrf398bPtJ/SJUbuQofqampSklJkdXm5y0y6c+MpJuUfhMAgMIiXaFI/8lVWuiw2Wze0JGb8JH2SKsL1AcAKGKyqQ9uV6ocDof8/PzMCR8ej8cbPmx+dm+RSSs4aWMBAIVf+tPrFovFGzqMZzyyKyzpj//p6wX1AQCKpqzqgyvV6Q0f6etFbuUofKQVFJfLpZSUFPnZHd52igsAFD3G4mJ8pB+THWMAMT7SjwEAFG5Z1YdUZ4ocDofPusCc1on0ch0+nE6nT/hIGwMAKHoyKzLGvuykrwGEDgAoHoz1IdWZIrvdbl748Fy67MrpdMru8M9QbAAARU9WYSO3BSWrmkB9AICiyVgTnCnJ3rsiZvaBVU7lKnykP/ORmWw2BQAoJLIqGFm151RWdSCrdgBA4ZJVHUh/5sP08GF3+Hv7AABFX14KyOVQHwCgeEirD2lnPv7V8AEAAACg+Cuo8JH7ryUEAAAAgDwgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYIp83e0qm6kAgCIiL3csuRzqAwAUD4XiVrv+/gGy+dmU+90BAAojjyRXqkvuy5eEbFktFuoDABQjafUhOTmpQMJHri67SgshNhuFBQCKE4skm83mPc7nFvUBAIqn/NYHoxyHj7SdeTwe5SHkAAAKOYvF91ifU9QHACje8lofMpPj8JEmvzsEABRe+TnG52cuAKBwK6hjfI7CR9rO3G633G63sRsAUEykP87npNBQHwDgypDb+pCVHIUPpbueNz87AwAUbnk51udlDgCgaCmoY32Ow4fS7RQAUDzl9Tif13kAgKKhoI7z2YYP4044rQ4AxZfxGG+sAekZ+4xzAQDFh/EYb6wBOZVt+EhTUK
daAACFV16O9XmZAwAoWgrqWJ+j8JHfnQAAip6cHPtzMgYAULzk59ifo/CRXn52BgAo3PJzjM/PXABA4VZQx/hchw8AAAAAyAvCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAuCKlpKTo0y+/0Z33Pqq7Hhiq7Tv3aNTYdzTw0Wd14uQp43AAQAEo1OEjPiFRs+b8rIGPPqvb+j6obn0Gqe8DQzVz9lzjUADAFWD33v3qe//jevu9T7xtHo9HM76bq259BmnCR9OUmprqMycrc+b+pu/n/qar69bU3X26y+Gwy2azKiDAXzZboS6PAFBkFdqj6959B/XYiBf15czvlZCUpNYtGuuG69qqYvlwnYw5bRwOALhCrVi9Qd98/7Pq16ujB//TV35+fsYhGXg8Hh08dFSlw0L1yMD+uv2WzqpVo6pefm64Phw3VleVKW2cAgAoAIUyfJyJO6t3P/hUcWfP6dEH79H0ye9q1FOPaNgjD2j8689p6JD/GKcAAK5Au3ZH6aNPp6vsVWX05CMPKDg4yDgkU0lJyYo9Eyc/Pz85HHZjNwDgH2LxeDweY2N6brfb+0hJSVFCQoLCw8ONwwrUvF8WadLUr9W/T3fd1aubLBaLcYgkadXaTXpt3Ee6qXN7HTpyTLt279PI4Q+rTcsmOnDoiD6c/KX+3rNfNptVzZtcqyED7lbpsFKSpA1/btXX3/6kvfsOyuPxqHKlCnrogX5q2OBqJSYm6cXX39Op2NN68P679Pn02Tpy7LiqVK6oZ4Y9pLi4c/rvp9N15NhxRVQsrxFDH1TN6pHGXw8AipwzcWdlt9tls9lksVi8j8x4PB7vw+Vyyel0KqxUqHFYgdq9d7+ef+VdNWvcQPf166nnXx2v8xfi9eIzj6tu7RrecW63W0uWr9EXM7/XqdgzKlmyhHredpN6dOuifQcO6/lX3lV8QqJ3fL26tfTssIf0xoSPdSr2tN5++VmVKR2mmbPn6atZP6j3Hbdo9bo/dSburF5+brhq16zmnQsAV4KTJ08qKChIDodDVqvV+8it3M/4h3k8Hm3fuUehISXVpmWTLIteeouWrdatN3bSTzM/UZuWTbRrd5RGjn1HTmeqhg75j+7u3V1/bd2pdyZOUWJiknQpuFSJqKgnHx2g+/r1VOzpMxr/0TRFn4jxbvdM3Fl9OfN7db3xOjVt3ECHj0TrpTfe1weTv9QNndqqU/tWOhp9Qh9O/lLx8QnpfiMAwD/p/IV4jf9omk6fjtMjg/r7BA+Px6Nvf1igiZM+V8P6V+v/nhisZo3q66tZ3+v7eQtVoVxZDRnYX5UqlFPJkiX08MD+urv3bbJf5gzIL38s1+MP/UezPnuf4AEA+VDowkfaqXB/f4dK5PD0eZOG9dWuTXNZLBalulz6ft5vCi1ZQi88M1Q3XNdWve+4RTd17qC/9+7T3n0HJUmPPHiPhj3ygDq1b+Xtj4s7q2PRJ7zbtVqtevA/d+n2rjdoxOODVL1qFZ08Fat+d96mO7vfokcG3aOr69TUyZhYnTp9Jt1vBAD4J+2JOqA9UftVrWqEmlx7jU/f8ZOntGDhUt3Qqa2GPfKAOrRtocEP9FPVKpW1YvUGyWJRq2aNFBoaosAAf7Vq3kiNGtSTn83ms530ulzXVtdcXcvYDADIpUIXPmw2q/z9HUpNTVVKitPYnalaNap6i8a5cxcUtf+Qjhw7rvseekrd+gxStz6DNGfur0pOTtH5C/GSpGPRJzRp6td6bMSLunvgMM2Z+6tSXS4lJ6d4txsaUlJVIipIkvxsNvn7OxRWKkRXX/qELTAwQGXLhCk1NdVnHgDgn3Vt/brq2Laldv4dpcmfz/K5w9Whw0cVe/qMfvl9mfdOif0GPKGo/Qd1/sIFpaTk/nid/swKACDvCl34cDgcqlYlQmfizmnDn1uN3dnyeC6uT6lds5o+fu9VfT7pbZ9Hs8b1tSfqgJ558S1t2bZLt97USS8887h6db/ZuKlMsTgRAP59fjabBt7bW/Xr1dEfS1dp9k+/Km0Jo8vllsfjUa/uN2eoAeNeHfWPr0sBAGSt0IUPSbq+Q2uFhJTQ19/N1fpNW7wFJSdKlghWpYrldfTYcZ07d15lSof5PBwOhzZu3qbz5y/onr536JYu16lOreqKPR1n3BQAoBALDg7Sk488oArlwzVz9tyLl1RJqlSxnEJDSmrHrj0K8Pf3qQFhpULztEASAFAwCuURuGpkhIY8cLeSkpI09o2JeuCR/9Pb732idz/4VI+NeFETJ31unOLlcDh08w0dlOJ06qU339dXs37QspXrNOXzWfrks5mS5F1L8sO837R4+Rq9N+kzbd6yw7AlAEBhVy78Kg1/dIACAgL00afTtWt3lCpVLK+WzRpp599RembMW1qwcIl+X7JSr437SOs3bTFuAgBgokIZPiSpfZvmGv/6c2rS8BpduJCgpSvXacmKtUpOTlGdWtWNw320adFEzz45RCElS2jWnJ/19sTJWr3uT0VWriRJ6nxdW3Xu2EZ79h3QxEmfqWSJYN18QwfjZgAARUCdWtV1x61ddOFCvN79cKpiY89oyIB+6tPjVsWejtOHk7/Sh5O/1OnTcQovW8Y4HQBgokL5PR8AgH9HYf+eDwDAv6PYfs8HAAAAgOKJ8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFLkOHxaLxdgEACgm8nOMz89cAEDhVlDH+DyFD7fbbWwGABRxbrc7X8WF+gAAxVN+60N6OQofxp0lJCZSYACgGHG73UpITPRpMx77M2McQ30AgOIlr/UhKxaPx+MxNqbndrvl8XjkdrvldDqVlJQki9Ump9Mpl8ul1NRUuVwu7zhJ3v8FABQuaQXDYrHIarXKZrPJz89PNptNdrvd+99Wq9VnbGbSH/Pdbre3JlAfAKDoya4+eNwuBQQEyG63e2uE1Zqj8xg+sg0faUUlLXwkJyfLavOTy+W69EiV231xTNp4AEDhlVZgrFarrFaLbLaLxSXtcbH9YmHJKnik8Xg8PnXif7WB+gAARc3l6oPblSp/f39v+Ej/IVVu5Cp8pKamKiUlRVabn7fIpD8zkm5S+k0AAAqLdIUi/SdXaaHDZrN5Q0duwkfaI60uUB8AoIjJpj64XalyOBzy8/MzJ3x4PB5v+LD52b1FJq3gpI0FABR+6U+vWywWb+gwnvHIrrCkP
/6nrxfUBwAomrKqD65Upzd8pK8XuZWj8JFWUFwul1JSUuRnd3jbKS4AUPQYi4vxkX5MdowBxPhIPwYAULhlVR9SnSlyOBw+6wJzWifSy3X4cDqdPuEjbQwAoOjJrMgY+7KTvgYQOgCgeDDWh1Rniux2u3nhw3Ppsiun0ym7wz9DsQEAFD1ZhY3cFpSsagL1AQCKJmNNcKYke++KmNkHVjmVq/CR/sxHZrLZFACgkMiqYGTVnlNZ1YGs2gEAhUtWdSD9mQ/Tw4fd4e/tAwAUfXkpIJdDfQCA4iGtPqSd+fhXwwcAAACA4q+gwkfuv5YQAAAAAPKA8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCn+8fBxPDpaNWvUkL/Drp9/nmfsLvSOHj2il14aq1WrVhm7AAAAAORCnsOHx+PRunXrdMcd3VU6rJT8HXb5O+wqHVZKTwx9XBcuXDBOKZJeeP4FvfrKKxrwwP06eeKEsftf4XQ69fPP83R9p+tUIjjI+9pXjYzU00+PKDavPQAAAIqXPIUPj8ejd8eNU/t2bbVg/nzFx8d7++Lj4/Xzz/N14fx5nzlFVaNGjWSxWNSlSxeFlS5t7DbdwYMH1bFDe/Xs0UMrV66U0+n09kVHH9P3c77/R177LVu26J7+/fX4Y48auwAAAIAcyVP4WLt2rcaMeVGS9NBDD+lY9HElpziVlJyiHTt3qfsd3WW15mnThc7jQ4cqKTlF73/woex2u7HbVIcOHVTXW27Wxo0bZbfbNfq557Rz199KSEzS2XPntWTpMt3a7dZ/5LX/7ddf9e233yghIdHYBQAAAORInt6lbt26VSkpKapYsZKeefZZlSlTRpJksVhUo0YNjRv3rsLLlTNOQz44nU49+8yz2rt3r0qVKqWFv/+hF154UdWrV5fNZlNAQIBat26t996byGsPAACAQilP4aNChfKSpFOnYrRn9x5jd5Y8Ho++/fYbVY2MlL/DrnLhZfXfjz6Sy+XyGZeYmKiPPvxQtWrW9K5niIiopFGjRurcuXM+Y995+235O+zqckNn7d27Vw8NflCBAf7yd9jVuFFDLVm8WB6Pp0DGp62l2LRpk8JKhSqsVKhWrVqld8eNU7nwsvJ32HV13Tr6feFC7/w0R48e0cNDHvKu0WjcqKGWLVumgQMGyN9h1ztvv22c4mPr1q1asGC+JOnlV15V69atjUMySExM1FdffammTRp7n2PpsFIaMeIpn0vl0t8U4Pvv5+iN119XYIC/br75Jq1cuVJhpUI1evQoSdJXX30pf4ddYaVCtWnTpnR7AwAAAC4vT+GjZYuWqlevnlJSUtSt26169ZVXMoSCzLw3YYLuveceeeSR3W5XXFychg17QrNnf+cdc/bsWfXocYeefHKYDh06qLJly6ps2bKKOXlS4955R11vuVnHo6N9titJp06d0r339Nf06dNVvnwFWa1W7dixQ7fccnOmd9nK7fjMpKSk6OEhD2n06FEKDAyS1WrVvn371L//3dq8ebN33KFDB3Vjly6aOnWqnE6nyleooIMHD6r3nb20bNkyn21mZePGjUpISFDFipV0661djd2ZWrToDw0cMED79+9Xo0aNVK9ePcXHx+v9iRP11PAnlZqaapyiL774Qi+++ILcbrdcqamy2+2qXr26goODJUl2u10REZVVvXp1ORwO43QAAAAgS3kKH2XDw/XDjz+padOmcjqdeumlsSoXXlZPPjnssiHk8OHD2vzXFh08eEj7DxxU06ZNJUlzZs9RamqqPB6PXnpprBYvWqRq1appw8ZNOnL0mI4cPaYNGzepWrVqWr9+vSZPmWzctHbs2KF27dsr5lSs9h84oKPHotXxuuvkdrs1adIkJSb6rlXI7fjMpKamqlz58jp6LFoHDh7U1m3bVa1aNcXFxWnOnNnSpbM948aN0969e1WzZk3t2LlLBw8eUsypWL3x5ls6cuSwcbOZ2rFjuySpZs0aCg0tZezOVMkSJfX5F18o5lSsVq9Zqz83/6Vpn30mSZq/YIEOH864718WLNAXX36phMQkLfz9D7Vo0UIbN/2pUaNGS5L69r1LUfv2aeOmP1W/fn3jdAAAACBLeQofkhQZGak/Fi3Wx59MVtmyZeV2u/XRhx+qcaNG2rBhg3G4JOmJJ4apbt26kqSyZcuqe/c7JEmxsaeUlJSk6Ohj+unHnyRJL738sho0aOCd26BBAz3xxDBJ0oL583XmzBlvnyTVqVNHT494WoGBgZKk0qVL68UXx8jPz08bN2zQwYMH8zU+M35+fnrxxTEqfekuWDVr1tR1110nSTp65KgkKebkSf36yy+SpBfHjFGNGjUkSTabTXfddZc633CDd3uXc+5s1qEuKx06dtRdd/WTzWbztrVu3UblypfX+XPnMryGkvTk8OHq06evzxwAAACgIOQ5fEhSYGCg7r//fu0/cFCfff65QkJCdOTIYfXq1VNRUVHG4aoSWcXn5/oNfD85P378hE6dilHp0qV17bUNffokqXmLFgoMDNTJkzFKTkry6WvcuInKhof7tFWtGqnw8HJKSkpSQkKCT19ux2fG4XB4w0uamjVr+fx85OhRnThxItPnFBgYqHLhOVscXrt2bUlSTEyMkgzPPSsej0d79uzRSy+N1Z29eqla1aqqd3VdnTh+3DjUq23btrJYLMZmAAAAIN/yFT7S2O129et3t37/Y5FKlSql49HRmjd3rnFYjtkdDvn7+xubL8vPzy/Dm+bY2NM6fz7zMwa5HZ9feXlO6V1T/xpJ0q5du7Rp40ZjdwapqakaPXqU6l9TT6++8oo2b96sNm3aqE+fvqzVAAAAwL+iQMJHmipVqngvK0r/5Xc5Va5cuEqVCtPJEye0Z/duY7d27NihxMREhYeXlX9AgE/f/v37dN7w5Xq7du3U+fPnVa5cOUVUquTTl9vxeRUUFKSAgIBMn9PZs2e1e/ffPm1ZSVvk7/F4vIvxL2fv3r2aNnWqgoKCtHzFSu2NitL0r7/WmLFjC8WXJQIAAODKk6fwMWXKFI175x2dPHHC2+Z0OvXlF1/ozz//lDK5xConypUrr5tuutH7Bjv9pVtbt27Va6++Iknq2bOXwsLC0s2UVq1apVmzZnlv2xsVFaWxY8ZIkm66+eYMl1jldnxeRUZGqmmzZvJ4PHrzzTd09OgR6dLrNXHie1q3bp1xSqbKhofrhRdflNVq1d69e9W0SRNNmzZNsbGxkqSkpCStWLFCQx4arJMnTighIUFJSUkKDg72rklxuVyaOWPGZS+7ys6uXTu9+wQAAAByI0/hI+7MGY0a
NVKVK0d4v4ejRHCQnn56hNxut3r1ulNdu95qnJYtPz8/jRo9WvXq1dPevXtV7+q6iqhUURGVKqpZ0ybav3+/evbspQcHDzZOlcPh0BNDH1elihVUrWpV1b/m4jZq1qypp556KsMlVrkdn1eBgYEa/uRw2e12rVixQtWrVVNkZBWFlQrVzBkz1KJFC+OULN1xRw9NnjJFdrtd586d05CHBqtihfLyd9gVGlJSna/vpN9//0Nut1vh4WVVoUIFxcTEqFHDa1WjenVVqlhBK1etVNmyZY2bzlbdqy/eKGDDhg2qVbOGrm1Q3+d2wgAAAEB28hQ+evbqpQcffFCVK//v7IbdbleLFi00c9YsffnVVypRooTPnJyqUiVSS5ct11MjRqhseLhiYmIUExOj2rVr69OpUzV12jSFhoYap6lly5Za8MuvqlGjho4du3inqTvv7K3fFi5UlSqRxuG5Hp8f13furN8W/q5mzZpJkk6eOKHu3e/QbwsXqnbtOsbhWbJYLLrnnnv19+7dGjJkiM/ZmbTX/+133lbZ8HBFRFTW1zNmqlmzZnI6nTp27Kj633OPJkx4TwEBvovkc+Kmm27WM88+K7vdrvj4eO9ZFQAAACCnLJ70X+edCY/H4324XC45nU7ZHXlfOF3Q3nn7bY0ePUodOnTQ9z/8mG3oye34f1J8fLx69eqpxYsW6dVXX9OIp582DgEAAAD+dc6UZNntdtlsNlksFu8jt/J05gMF48CBA/pz0ybp0veYAAAAAMUZ4cMEy5Yu1ddfT/e5A9jRo0c09PHHFBcXp3r16qlJkyY+cwAAAIDihvBhgvMXzuuB++9XSMkSqla1qiIqVVT1atW0YsUKBQUF6bXXXi+wu2sBAAAAhRXhwwTXXnut7rvvPoWGhurYsaOKiYlRWFiYBgwYoG3bt+uWrl2NUwAAAIBip8gvOAcAAADwz2LBOQAAAIAihfABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYAqLx+PxGBvT83g83ofL5ZLT6ZTd4e/tAwAAAFA8WSwWSZIzJVl2u102m00Wi8X7yK08hQ9//wDZ/GzK/e4AAAAAFBUeSa5Ul5KTkwokfOTqsqu0EGKzETwAAACA4s4iyWazeXNAfuU4fKTtzOPxKA8hBwAAAEARZLH4ZoH8yHH4SJPfHQIAAAAoWgoqA+QofKTtzO12y+12G7sBAAAAFGPpc0B+gkiOwocMC88BAAAAXDkKKgvkOHwo3U4BAAAAXDkKKgdkGz6MO+GyKwAAAODKYswAxoyQU9mGjzQFdaoFAAAAQNFSUFkgR+EjvzsBAAAAUDzkJxvkKHykl5+dAQAAACh6CioD5Dp8AAAAAEBeED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFISPIiIxMUmjxr6jgY8+qxMnTxm7AQAAgEKv0IWPHbv26M77HtUjw1/Qmbiz3naPx6Np07/TbX0f1NwFf/jMuVLYbFYFBPjLZit0/2wAAABAtgrdu9jataqrTcumOnw0WitWb/C2Hzx0VL8vWaka1aqoY7uWPnOKutXrNumR4S9ozfrNxi6vwMAAvfzccH04bqyuKlPa2F0k5OR5AgAAoPgqdOHDz2ZTt5s6KSgwQH8sXaVz5y/I4/Fo3q+LFH8hQb2636yQkiWM04q0g4eP6dCRY3K73cauYuVKeZ4AAADI3P8DeuKS67vt0goAAAAASUVORK5CYII="> | | Kiwi-Nano | <img style="height: 250px;" 
src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAycAAAItCAYAAAAjRQV7AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAIhMSURBVHhe7d13eBTlwsbhZwMhhSSUUBJCi/QiRUCUDiJYAEURVBQUURQBFQuKoiAWELGi4tEPxYYgigIeDii9Q5DeS2ghoYSWkASS7H5/JBt3J7vJbhoD/O7ryqW8M9tnZ99n3mZ5+o0vbQIAL00a/YSxCAAAIF8shBMAefHwvZ2MRQAAAPlisdlshBMAAAAAl52PsQAAAAAALgfCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMIUrOpzExsaqWrVqslgsbv/69+9vvJnXxo8fn3V/c+bMMW4udHPmzMn2ukqWLKkNGzYYd3V6Tzp06KDExETjLgAAAIApXdHhxBPfffedqlWrptjYWOOmK0L//v3Vo0cPY7GSkpLUvHlzjR8/3rgJAADNmTPH7YWsxMREdejQoUAu4F0J+vfvn+0in/2vMH9H+/fvX+AXCjds2KChQ4cai03B+D4X5GuPjY1V//79C+z+vFGYr8uoMD7fy/ne5cVVH04k6fDhw3rggQfy/KGMGDFCNptNNptN3bt3N24uNHPmzNF3331nLHby5ptvuvzhAQAA/2rfvr0SEhKyfs9tNpuioqL05ptvFmpAKUiJiYl6/vnndf78eeOmy2rDhg0qWbKkJDm9v9WqVVPFihULpJ7y8ssv69ChQ8biQlUUr8tRYX2+l+O9y4+rJpy4OunMnj07a/vSpUu1ePFip9uY3cyZM7P+f9y4cU6vrV+/flJmC8onn3zicCsAAOCJZs2a6fXXX9e8efPyfAHzWmevUPfq1UtTp0512vbZZ5+pRYsWev7556+49/dqfV1XgqsmnLjSvXt3p4DiWNmXQyLObSyHqzEnjrft37+/U5PfHXfckW1/R/Z9XT2WK1WrVs0KI3bDhg1TYGCgJOnQoUNuvxxRUVFOrzGnpkhXY1uMV5PsXQEsFouqVaumf/75x2ncz5XchQ4ArnXG7iuOv2EbNmxQ1apVNXnyZKfflfHjxzv9Jrr6bXP8HTXerxlER0crISFBsbGxatKkiV577bVsr8X4Goy/jzK8fx06dNC5c+eyttnHhDq+dvtvqrF7navPITY2Vg0aNNDSpUuduqw7/i5fjvd38eLFWr9+vYYNG2bcpKCgII0ePVpz585VUFCQZKhH2P8cX/+cOXPUoUMH3X333bJYLGrbtq3atm2r7777TkuXLi2UFgtXvH1dyuUYsX/+kydPdqo32V+7u89XLupnnnzHxo8fn/VeF/V7l2+2K9ixY8dsVatWtUmytW/f3paQkGDcxe0+s2fPtkly+zd79uys+xg3bly28qioKFtgYGC22wUGBtr+97//ZT1mv379su7HlsPzccXxcatWrWo7duyYcZdsHO/f3Z+rx+3Xr1+2/Vztn5CQYGvfvn22fRz/AgMDbVFRUU73DwAoWrNnz3Z7Prafy+2/UfZ/O/7W2H8njb97jvvYfzuMZY7/HjdunNPzsN/PuHHjbEWlX79+Ln/7jNvsv6HG39x+/frl+hpcvW7H31D7fTvWL9x9Do7P1f5Ys2fPdru/Y10jp8+9MIwbN87te2vk6vka30v7cWesP+X0GRYGb16XzYNjxP75O+5j/I65en9cfZ7GY834b/tj2R+7qN+7/LqqW04kKTg4WJGRkZLDlZHExERNnDhRMnQHy6mVJTezZ8+WzWbThQsX1LVrV3Xo0EGStGTJEqeWhKioKB0+fFiS9PzzzzslbqN+/fqpatWqUua4mUqVKrlM4zmxdwdLSEhQ+/btJUnr16/X7t27s/ZxHNvSr1+/rK5j48aNkzK7xH322WdZ+zuqWrWqjh075rR/UlISTZ0AYAL2yVMcr6haLBYFBwdr6dKlWfvZrxL/9ttvCg8PlzJ7H/Tr108TJ050Op8PHjw4a59evXpJkiZNmuRUdurUKR07dkyxsbH6/PPP9frrr6tZs2aSQ1eqzz///LK3tNt//26//Xan32PH17hhwwbNnDlTP//8s9vXYN/H8X14+umns353PbV7924dOHBAEydOzHo+zZo104ULF1yOeU1ISFB0dLTq16+fVda9e3dduHAh67kWth07dhiL3LK/PsfWiDp16qhFixZO9xMYGOiyxaIoefO6PDlG7By/Cx07dlT79u1zrHPOnDlTvXr1cvo8jd+xJUuWOB174eHhOnTokEaMGOFwT1eOqz6cuBIUFKQlS5bIZrNpyZIlWSeASpUqZXWV8kb79u3VsWNHpzL7Cfvw4cOKiorKKrcfgFWrVlXz5s2zyl0JDw/Xb7/95vI5vfzyy7Lk0k3LsTtYUFCQbr/9dinzx+rYsWNSZvOqPahVrVo1K2DIcGI1frmUefJw/CFz3N8YgAAARS8wMFBRUVFOYxaNF6yUWRErV66cKlWq5HT7Xr16ZTufO1aEK1WqpCpVqmS7nV1UVJROnTqlzp07O5V37txZNpst67eoKCxdulTBwcFOIa1Hjx6aPXt2tkqc42u0P0fja+zcubNOnTqlqKgoHTt2TOXKlXP6XQ8KClK1atWcbpObv//+WxaLJdtjuRMeHq4OHTro5ZdfzrE+YBbNmjXT4cOH1axZs6yugMagLMnlsWhmnhwjdo7HliemTp2aNebF3t3PcRZX+3fM+NhXsmsynBjZ++Q1b95cSUlJxs25qlatWrYWkObNm2e1etgDiT3dSlKHDh2yKvU5sV8xMf6Q2C1dulTdunVzeUKKjIxUcHCwsdjJ7t27tX79esnFc3IMNMaQJRcnD3cBCABgfp78ZuSFq9ab5s2bKz4+3rhroXI1cY7Nw1k4jb93Ru6usntbEVUePoepU6dq3LhxTuHLOIalMHnzGh3HmzRv3ly9evVyW7+53Lx5XfLgGMkrx/EmM2fOVFRUlFNPHxXiY18uV304sTd5GjkO3nOV2vPLfjVDDl27HLt02VtWPOXY2mPLnP7Q3qJivKoFAIC37F2fjfJb8XHXelOUXY/yy96Fxig0NFSVKlVyW5F1F1py4u5zyInjkgfjxo3Td99953H37/yqX79+jvUQx/V27N0H7ceDcRYsM/HmdcmDYyQv7L1b7F3u3X1n3D32leqqDyeOgcDeMhAbG6t77rknq5XEfjXFscJfEOwzatlbHewtKK66gRkZZwMzsvdlFK0UAIB8ql+/vssKzsyZM72+ku/IXikz3u+cOXOumNkd3b0Gxy5YlSpVytZ9JzEx0eXaEo6BxXgB1d3n4I0RI0aoX79+eQpGedGxY0e1aNHC5bIG9sp1ixYt
VKdOHe3YsSPr/+0ce3CYiTevy5NjJC/sx4fxgrbjGBV3j30lu6rDyYYNG3T//fdn/dv+4ToGlnHjxjmNOylI9kFekjRx4sSsLl3GgXeuVKpUSeXKlZMyV7k3XgHZsGGD3nzzTaeyvHB8jsbB+4mJiZo3b57kZoyMsauX4/6BgYF5/jICAIqWvSJ2zz33OE1f+t133+U6eUtO7BfS7r///qwrzLGxsRoyZIjToHMza9asmXr16uX0Guy/wfbXYN9nyJAhWe/fZ5995tQrwz5Bj+OaKi+//HJWfUQOn4PjpDKxmVPQGusB7rbZB2cbK7SFJSgoSBMnTtTMmTOzXUx9+umntX79+qwB/vXr13dady4xcy2RpKSkHJdFuBy8eV2eHCN5YT9mHCelsH8v7RemXR179u5zxud9pbhqwomrgW6OY0jctVbYryw4fkEKiuMYjKVLl+rw4cMKDAzMNjDQlfDwcA0ePDjr3/YB8K5em6vg4CnjuJKXX345a5vjidXdl8vxy7B48eKs/Y0zSwAAzMvedbhDhw6qlDkz5P3336+oqCiPxmTkZMSIEXr99dezxp1UqlRJgwcPzjYI3cymTp3q9BqaN2+un3/+2ek1TJ061en9mzdvnu66666s7UFBQZo2bZqio6Oz6iv169d3WsfM/jlUq1Ytax/H9ysoKEjPP/+8vvvuO5UsWVLHjh3TmjVr9PnnnzvVD15//fV8f27eaNasmY4fP65Dhw451VUOHTqk48ePZ9UHunfvrnHjxqlHjx6yZHarv/3229WvX79cu7MNGzZM69evV3BwcJGt4+Lp65KHx0hujJ/v7t27sx0zQ4YM0ezZsxUYGJhVh506daoGDx6cdewFBwerWrVqWd3mLsd7ly/GuYWvJJ6s6WGcf93T2znOB53bOifGubgdGddD8Waeaft818bn5vhnnPva8bUZH8vV67DrxzonAAAAuMyumpYTd8aNG6dDhw45XfUPDw/XmjVrsmbTksOAPftUujkNgvKGvbnNzpMuXXb2KyjGWRns+vXr53ZwlLemTp3qcszN7Nmz3XZ7q1q1qvbs2eM0y0bVqlW1b9++AnlOAAAAuLZYbDabzViIgtW/f3999913CgwM1LJly67ointiYqK6deumpUuXqmrVqlqzZo3L7l4AAACAt676lpPLzXH1dcZhAAAAAO4RTgrJ+PHjZXFYxTMwMFDDhg0z7gYAAAAgE+GkCFwN3bkAAACAwsaYEwAAAACmQMsJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBYvNZrOdOHFKqWlpSktLU3p6utLT02W1WmWz2SQp678AAAAAUJAsFkvWfy02m8128tRpWa3pSk+3Ki0tTVarVVarVSKYAAAAAChkWQHFZrPZ4k+fzQwk9hYTZYUTEVAAAAAAFBJ7MJE9nJw5e142m03p6ekZQcTenUuEEgAAAACFz6LMbl3nzifKZrM5/YkWEwAAAABFJGvMyfmEC4QSAAAAAJeVxWaz2RISk5wCCeEEAAAAQFHLCieOCCcAAAAAiprFZrPZEi8kS4QSAAAAAJeRUzgBAAAAgMvFx1gAAAAAAJcD4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJiCxWaz2RIvJBvLAQC46qSmpmn/gYNasWaDNm3eppPxZ3Tm7DlZrdasfXx8fFSmdCmFBAepccN6atmiqRo1rCs/vxJO93U1stlsOhV/Rv9s2qot23drz94DOp+QmO09kqRSIcEKDAxQjchqatSwrpo0aqAqEeHy9S3utB8AeINw4oF3JnymRctWGYuLTMnAAL039hXVqV3DuAkmtWL1eo1+5yNjsVuPPtRLffv0NBZfMZKTk/XK6AnatmO3cZNXmjdtpDGvPudVJfBYbJyee3ms4k+fNW7ySm6fwY/TZ+mbH2Yai90qyvvLy/s/euSzanNzC2PxVetU/Bn9PHO2FixcoaTkJOPmXPn4+KhJo/p66rGHFFm9inHzFc1ms+ngoaP6dfb/tGpNlM4nJBp38Zivr69aNm+i++65Q/Xr1JLFYjHukitvj+eKFcrp4/dGq1xoGeMmAFcgunUBMI3oQ4cVf/q0sThHhw4f05kz54zFgCTJarVq7rxFeuTJ5/X73AV5CibKvJ9/Nm3ToGdG6uPPpyglJcW4yxUp+uARDXnhDQ0a9or
+99eSfAUTSUpNTdWK1ev1zItj9PTwUYo+eMS4CwDkiHACwDTOnDmn/dGHjcU52rp9l6w2m7EYUGpqmr75YaY++WKKUlIuGjfnidVq1Zx5C/X62x8q8ULego4ZXEhK1kef/58GPTNSu/fsL5Tv0J590Rr0zEh9/tX3unjxknEzALhEOAFgGlabTTt37TMWu5WSkqJdew8YiwHZbDb98ttcTZ85u1Aq3v9s2qbPv/5eqalpxk2mF33oqIYMf11z5y3KNo6koFmtVv02+396+/1JV3SYA1B0CCcATGXH7n1KTvZsHNzpM2cVcyzWWAxoz94Dmv7b3EIJJnZ/L1yuRUtXGotNbdfufXrx1bd1JOaYcVOhWrVmg94c9zEBBUCuCCcATOVIzDHFnYg3Fru0P/ow402QTVpauqb/NlcXkjwLuXlltdk0579/XzEV7l279+m1sRN19tx546YicSW3NgEoOoQTAKaScD5RRz28qrtz175CvTKOK9ORmFht3rbTWFwoDh6O0f4DB43FpnP6zDlN+PiryxZM7BYvXa2Va6KMxQCQhXACwFSsNpv+2bTNWJxNcnKyduz2fHwKrh2HjhzVuXMJxmKXfCwWdencTt9/9aH+mv2D/pr9g77/6kN16dxOPh5Mg5ty8aKiDx81FpuKzWbTDz//pkNHLv/zTE1N1ZTvf9HxE6eMmwBAIpwAMKNde/YrIfGCsdjJmbPnFBt33FhcaPr26am/5/zo8Z+7NUlQ+GJiPBuH5GOx6JG+vfTisCcUHlZBFotFFotF4WEV9OKwJ/TwA/cYb+LShVyO1ctt87adWrBohbH4sjkWG6fZ//1bNlo9AbhAOAFgOidOxetU/BljsRPGmyC/ylcop1tvaedyoUCLxaI2rW5UqVLBxk1XlNTUNM2aMz/P67L4+vqqU7tW+nD8KP058xv9PedH/TX7B8347nMNHzJQ4WEVjTfxyOJlqxR3/ISxGAAIJ4WlU7tW2a6k5vXvj+lfszo8rho+FouKFStmLHZy7lyC9u7dbyx28s+mbbmONwkqGWgsArKEBJWUv7+fsdgrPhaLrqtezVhsGrv37teGjbl3k3SlVo3qmvL5exr54tO6vn5d+fmVkDKDW9kypXRH146a+uX7emLAg/L19TXePEcnTsZr5dp/jMUAIIvNZrMlXijcGU2udO9M+EyLlq0yFueoU7tWGvni08ZiUzh3PkG/zPqvFi9frZMnTslqsymoZKBq17xOPe7srJbNm8rXt7jxZrlKvJCkTVu2a/W6jdq+Y48SEhN17rxzv29fX1+VKR2ikoGBqle3ppo0qq/GDesrtGxpp/0KSnJysrbt2KuofzZr+849On32nM4nJLpckM3+3Er4llC9OjXUqEE9NWlcX2EVy7u8spqTFavXa/Q7HxmL3Xr0oV7ZugFdvHhJGzdv1+Jlq7R9917Fx59VamqqlPlcy5cLVasbm+qOrh1VpXIlr59jQUpOTtYroydo247dxk3ZlAwMUPVqVbR95x7jJiddOrXVS889aSyWvHi8G5s11roNm43FLrn6DArTj9Nn6ZsfZhqL3crp+Xn6fjgaPfJZtbm5hbH4qvD7nws0afJUY3E2/v7+euv159Xk+vrGTZnjNGZp6k+/Gjc5CS1bWh+OG6VK4WHGTabw8edTNGfeQmNxrho1rKc3XxvuUcC32WxatHSV3v/kq6xzlCfq1rpO48a+ku0xvD2eK1Yop4/fG61yoWWyymw2m+KOn9TSFWu0YtV6xR4/6fR7VCokWDUiq+m2Lu3V6sYb5O/vn7WtoFy8eElbtu3S2vUbtXnbTp09d17nzp13uqji4+OjMqVLKcDfP9+/O96yWq06dOSYVq+JUtSmbYo7fkJnzp7P9hmWCgmWn18JRVarrAZ1a+uGJg1V47rqeaon5MZqtSo27oTWb9yiTZt3aH/0IaWkXMz2vtnZn1tYxQqqX7eWbmjcQA3q1c4K0gXNjPWcqxHhxANmDCfeVGwcn8vK1VF676PJOU6xGRxUUk89/rA6d2gtH5+cG9dSU9O0Nmqjfprxh/btP+jy5OGJ8uXKqvttt+iu7l1VMjDAuNkrNptNm7bu0FffTMvXc7ILD6uoRx7qpY5tb8r1/bDLTzi5kJSs6b/N1W9/zHMZoox8LBbVrVNLrzz/lMLDKhg3FwlvKhMlAwN0712366df/lBaWrpxc5ZaNarrvbdGKjiopHGTog8d1QuvvpXjoOdSpYJ11x236rtpvxk3uZRT5V9efudUxPfnzftvl1M4sdlsmjbjD33740yvvz++vr56Ydjj6tS+VaFXrtzZvHWHXh3zvlIu5v79cVcJX7E6SuM++CLX7lDtWt+okS8MUfHiObcGXg5nz53X8Ffe0uEjMcZNOQotW0YT331VlSuFGze5ZbPZ9OnkbzX7v38bN2Xj4+OjKhGV1KJZI/W5t5vKlC7ltN3b49kxnNhsNu3YvVefffmd9uyLNu7qUmBAoPr1vUf3dO/q8TneHavVqm079+iHn2dp05YdeV7kMiQ4SJ3at9L9vXo4ha6CcPzEKf0y608tWLhCScl5mwbb19dXjRvW08MP9lT9OrXy/V2/kJSs73/6Vf9dsDTPz8nO19dXHdq01CMP91bF8qHGzV4zYz3nape/byGuKEuWr9FbEyblGEwkKSHxglauiVJ6uvuTqs1mU9TGLRow+CWNfucj7dkXnecvrCSdPHVaU374RX36P62fZvyR53nwj5+M1/Mj39aLr76T7+dkFxt3XO++/5mGvTRGsXGF20f6n83b9OQzr+qn6b97FEyUObvVjl179PTwUVq/YYtxsylFVApTqZCc+/Ifiz2uY8fijMWSpKMxx5RwPtFY7CQiPExhFcoZi+GBpSvW6vvpv3v9/fH19dWI557ULR1a57uykh/XRVZTlcqeVay3bNvptPaGzWbTvAWLNe6Dz3MNJv7+/upx562mDCaSdOjwUZ3Iw6xY991zp1fBRJldve687RaFBAcZNymoZKBubnmDnh/yuKZ+OVH/mzVV//f5eD35WN9swSQ/UlJS9MkX3+i5EWM9DiaSlJScpMlf/6B33v8s1888J9EHj2jI869r+MtjM7qd5jGYSNL5hET9PneBHhwwTB9/PiVfz8vuQlKyPvr8//Tw48/p97kL8hUCUlNTFbVxi555cYyeHj7K6wBsZ7VaNXfeIvXp/7Rm/jEvX8/JLjU1VX8tXqHHBr+oX2b9mefPwaz1nGsB4eQaERd3Qt/88Eu25lpXihcvpts7d3DbZHvx4iV99uV3GvnGewU+W1JKykVN+X6GRrw+TvGncx4QbbRr9z49/dxr2lJI6xvs2r1Pz498y6sfPU/ZbNJ/5y/Wq2Pez/N7ej4hUe9O/Ew7du01bjKd8uXKKrJaVWOxkwtJydrpZtyJJ+NNakRWVaAHXVLgbMeuvfrki288Olc48vf304jhT6pD25uMm4pccFBJNb+hsbHYrb8XLtcvv83V8ROnNOqtifpw0v95dHGgS6c2atywnrHYNKIPH/Wo9chRaNnSanVjU2OxR6pViV
DTRg1UvWqEeve8U++NfUVzZnyt33/+SmNfe163d+2giEph+W6dcOXU6TN6adR4zZm3MM+V0SXL1+iTL6d6XWm0Wq2aMetPDR4+qsB/H6xWq+bMW6innh2l6EN5nwp65659enzICM2dtyjP7487e/ZFa/DwUZq3YLFXM7Clpqbpw8/+T598McWj75u3UlIu6qtvpumLr3/w+jM1az3nWlHwZwiYjk02zZj1p2LcXIU2atSgnpo2bmAsljL7W456a6J+/3NBrpXD/NiybaeGv/KW4jxsqYiLO6F3P/ii0BcYO3EyXm9PmOTx8/LUH3/+pY8+n+J1hdDofEKi/vPNNCUn59w6drn5lSih+vVqGYuz2blrX7Yfu6TkJO2PPuxU5krjRtnHESBncXEnNP7DyTqfkHOrlFHJwAC9MnywOrS5/MHErtttnTweB2K12TTlh1/U97FntGbdRo/ObY0a1tOAfn0uawtRbvbsOWAsylXtWtepQvnyxmKPFC9eTKNeHqavP3tPTwx4UDc0aaiAgMLvvnL6zDm9Ovo97diV8zg2TyxaslIr13q+SKS9gv31N9Pyff7OyZGYY3r+lbF5uvi0ZMUavTjqHZ04GW/cVGBSUi7qw0n/p2kz/sh2znbFZrPpl9/mav5fSz36vuWV1WbTH3MX6Jff5nr0vGTies61hHBSSBYtW6XO3fvm62/F6vXGu82T7Tv3aOGSlcZil3wsFt16S1uXg8lSU9P0+dffe7RAXkGIORanV0ZPyPWEevHiJX08+RuPw1d+xRyL04+5jJfw1pmz5wrsataOnXu0Ys0GY7Hp1KtdM9fuMHv3H8w22PD4idOKic35sy5VKljVqlQ2FiMH8afP6J2JX3j9PSoZGKCXnn1SrW9ubtx0WVWsUE59e/fwaCFFb7W6qZnLcSpmkpKSorg8dOmqVSMy1++l2aSmpmY7T+RVWlq6Zv72pxIv5N69qKgq2HbnExL12pvva/dez0Pnjl179cnn3xRKy4SR1WbTtz/O1PRf5xg3ZbP+ny156jqaF1abTTP/mOdRq5ZZ6znXGsLJNeDEyfhcx5nY1a4ZqZtvvMFYnHUS/nvhcuOmQnUk5pi+/Wlmjk2ye/dHa+v2/F8x88biZWu0a485Vye32mxatnJtgYanwlC9WmWFhpY1Fjs5deqMYmOdryp5svp3lYhKCquQ/4GQ14rEC0l6e8JnXl95DgkO0msvDTNdMLHr1L61br6pmbE4z3x8fPTQ/T016qVhpg4mkpRy8VKeWpKrVqlkLLrmHDwco/0HDhqLs8nr2Kz8OJ+QqC+n/ORReMprS2h+WG02zfjtzxxbeNLS0vW/v5YUakuT0fmERE3/dU6Ov4tmrudcawgnyOJjseiWTm1d/uju2L1X03+bW6QnYbvFS1dr45btxuIsq9Zs8Hiw4HXVq+j9d151Wkxs5g+T9frLz+i66lWMu7uVkpKi32b/L8cT3eV04OARnTlr7gUKQ4KDVCWXQbdJyUnavd/5KuHmLTuc/u1KZLXKRdKd5Gpgv1Lo7VitkOAgjXntObVo1si4yTR8fYvrqQF9FVHJs+5dOalQPlTj3hyhR/r2cjsez0zS0tJ10cvxJv5+fipTKsRYfM1JuXgx15nCYuNO6P++m16kFWy7bdt36a/FK4zFTlJT0/TNjzO9bgktCOcTEvX9z7/p4sVLxk2SpBMnT+Y6lbydr6+vHux9l6Z986n+mv2D/p7zo/6c+Y2++WKC7r6zi1frFK3bsCXHi4pmrudcawgnyBJRKVztWt1oLNbFi5f0w8+zPG59UeYVxo7tbtYXH72teb9N1d9zftS836bqm8nen1BSU1M1c9Z/XZ7oLl68pAMHjxiLXQotW0avv/KMmlxf32kxsdKlgtWu9Y367IO3dEuH1vL391OF8qGqXClcndq10u1dOuiZpwbo1ReH6JMJo/Xzt5M084fJGjVimGm7PyScv6BTp04bi03Fz6+Erm9Q21iczYaNW7NCYELiBe3a43qQvJ2PxaIbmjQ0FsOFvF4pDAkO0luvv6Dr69c1bjKV1NQ0bdu5R2lpeb8i6WOxqFO7Vvpq0njd0PjKOa4uXbro9esuVsxH/n6en5uvZgcOHnU7RsFms+mXWX96NXujj8WiJtfXd7o4tuCP7/XjlE804KH7XE6Z7o7VZtPsuX8p/vRZ46YsG7ds17KV64zFOapYoZyGDxmomT9MzloE+vdp//H64p0kbdy8XRs3u65sxxw74XE3vHt63KZHH7pP5cuVzRrf5edXQlUqV9KQJ/tr/NiXFRRUUmVKl1KF8qFq1LCeunRup/t79dCrLw7R6JHP6oevPtTP307Srz98oYb16xgfQjJ5PedaRDi5BjVv2kiTP35HC/74XvN+m6r/fDpOPe7orC63tHG5SJC33aZCgoP09usv6tUXh6hWjX8XavL1La4qERknlHfHvORyykl3duzep737s/cXtVrTPZ6NJjysgkLLZH99dr6+xfXK84M195cp+mnKJ/r2y/c18sWn9fzQx9X9jlvUsd3Nql+3lsqFllHpUsGFMhA2OKikXnzmCc2Z8XXGD8PPX+mRvr28Oskps8Uh/oz5ZwFpWL9OrpWhmGPHlXjhgiTpVPwZnTiVc9/c4JAgVY6ga4on8tItpXSpEL3zxouqXzf3CQ0uF6vVqgWLluvBAcM07oMvdDwPYy/sypcP1f339bji1iVISLhQJOMMzMTHYtFNNzbVlM/f04I/vteCP77X5x++pdo1I4275ir+9Bm3LfJxx09ozfqNxmK3fH199ezTj2nC2yOdLo75+PioYvlQPdjnbn06cYxXLXwxx2L1z2bX4yLy0m2qQ9ub9H+fjdcdXTuqdKl/p3kPCiqpdq1v1Kfvv6kundo63SYnaWnpWrBomcveBRcvXXRZ7kq9OjVy/K1tULe2fp/2H/3y/ef6acon+uDd1/TSM4M0sH8fdWx3s9rc3EJhYRVULrSMy7G0dmau51yLCCfXmC6d22nsqOdV87pq8vHxka9vcV1XvYqGPfWoHrjvLuPukqSFS1a6PUkb+VgsGjTgwVy7elxfv64GDXjQ48GqKSkpWhvl2Urf7uw7cEj/bHJ9JccMIiqF6dOJY9S1c/usLklBJQP10P099dzTj3ndSnPocN6nnSwqlcLDVKp0zt1ITpw4lfVa9u7dz3iTArJ5606vpwyuUD5U74x+SXXr1DRuMg37WhPvfTi5QLo2Hj9xSi+99o527XbfHQSXn4/FoocfuEdvvjpcVatEyMfHRz4+PqpdM1LvvTVS9evm3krr6OSpeF1Icv27t3bDZq8GMN/T4zbd3qVDjpXsypXC9dyQxzxeqd5qs2n12g0uK/lHYmK12YtumrVrRmrok4/k+Nh+fiU06LG+XgW9XXv26+SpvF8YkKQ//7fQ4/pHflwt9ZyrBeHkGhJatowevK+HV/2lPelG4yiiUrjH6wvc0OR6lfdikbztO/dkO3n4+/srtKxnq+empKRozLsf6YmhL+un6b9rx669OnsuwW3TfVHysVj04H093C581qrlD
apb27wVwrwqU7pUrt0FHPt/7/TgWGS8iWdmzZnv1UDZCuVDNXrkc15VToqSffHEoS++4dGsPN44e+68Ro6Z4HKR078Wr1Cf/k/r3fc/06Jlq3Ty1GlTnFOuNTVrVFePO291uYZKUMlA3dfzdo8riTlJS0vXho1bjcVu2Vd6zymY2NWpGama11UzFrvlajZDeXgRx1GnDq1zXRRXkkqFBKu1FxNMnDp1WgcPZV+cMbRMGQUGZB/b6sr6f7aqd7+hWd+vuLgTBd71yez1nGtR9m8xrlo33dhUER7O+28XG3dCscdPGovdui6yqsqW8WzFX08qpo7ijp/Q+YSM7j12FotF1zd03YfUFavNpgMHj2jKD79o2Iuj1euhJ3Vrj4d09/2P65FBL+idCZ9lBZeiXCukTJlSur6B+9cREBCgyGpX39S4xYsXU0MPrmju2ntAZ86ey/UHhPEmhefSpVQVL+75hY2iZLPZNP3XOfrws8JZzE2Zg3zHjPvIaaG5tLR0rV67QfGnz2rh0lV6Z8JneuDRoere+zENGT5KP03/XfujDxd4ZcoTZcqUVkkXk5tcrZpcXz/HCnbliEoKDvG8i407584n6KAXiyFWjghXeJhnldOAgADV96JV8ty5RJ1w0WVx01bPW00CAwLVoI7nXTQ96YprZ7XZtNfFrGfh4RVUrpxnFxWV2U3Z/v166PHndGevR9XlrofVp/8QDX/lLX38+RQtWrZKx0/G52lKfrPXc65FhJNrRPHixdTmphYeXb1xdOLkKSUmev5FWbpijW7t8VC2NVtc/d3Ws59Wr/3HeBduXbiQpDNnsg8AbNmssSqUz183nsQLSTp6LFaLlq3KCi7dew/UPQ8O0vuffKXtu/bk6aTnqYhK4Sqbw3gYSSoX6vnJ/EpSt06NXH/sDhw8oh079+b6A1KmTCnViMx55Xnkzdlz5zVz1p8uu5FcbktXrNXUn34r1O+oHBaam/L9L0pNTdOhIzHa5GL2uJSUi9q194Cm/PCLBg17RQOefkkni3iCCh8fH6/P9+npVo/H8JmJj8Wi6xvkPDlDcFDJXM8znjh1Kl7nXbRWuLNj11716P14tt8/d38zZv1pvAu3XI0tTE5O1jEvBuonJSdp6ItvZHse7v6eH/m2V8fI0aOxxiKVLhWi1jflb/pxq9Wq+NNntGXbTs2Zt1DvTPhMfQcM0209++upZ1/VnP8udNmq5MqVUM+51hBOrhGlQoIVUamCsThXMTFFPw2hOxeSknUyPvsPfHhYRfW4o3OBNNkbnU9I1P/+WqJnXhyj/oNe0MrVUYXSZaNc2TI59ve9mlWOqKQyuVyFio8/rYVLV+b6AxIeVlFlSud8X8i7xcvXuB2Ee7kcOHhEk76c6tXYmSoRlfT5h2+p/4P3en3esNpsmvbLHxoz7mP98ecCj7rG1akZWeTHpZ9fCa8G4yqzC2VCLt8xMwoI8Pf4SnZ+nT5zTsnJ5ul2YxxbmJR8UfGnzTMZyolT8S57IXS//RZVKYSJS6xWq/buP6iPv5ii+x4erImffpXrej9XQj3nWkM4KSSd2rXKmoovr39tbm5hvNs8K1umtIKD3Td5u5OW7t1UlJfLvXfdoc63eD6TSF7Exh3XmHc/0rgPvijwPqElS167YyRKhQSrei5d1tLS0rVqbe6r3tevU5PxJoUoNTVVP834w2Vl43JIS0vXzFl/5lr5cNSoQV19POEN1a4ZqYfu76knHusrX19f4265WrPuH/13/mJjcTY+FotubtnM6wkt8quEr69KheQ82YQrxsqut6IPHdWTz4zU5K9/0IGDR4pkYbnivsVVogBaRTyRbk33ana7omaz2Qq9BbEgVCgfqheffcLrAO0Nq9WqeQuWaODTIxS1Mft4MbsrpZ5zLSGcXCNKhYSoRB5+gM3G3Q+nr29xDR74sFp5MVgvL6w2mxYuWanX3/7QoxV6PXW1dtnyRPHixdSs6fXG4mxy607kY7GoXl3P+2sjb3bs3KP/ebkuSmE5dCTGqyld69etrTdHPZ9VIbJYLLq3x20aOqh/ngKKJ8qVK6uG9Tzv019Q/PxKKDysvLE4Vzmt7+GJvXv3a9+BQ5r5xzw9MfRl9egzUC+++o4WLVtVoOdMR/5+fl6tE3I1OXT4mLHIVHKa9ax+3Vp6fujjhT5N99lz5zX6nY+0ZMUa4yZTclfPuZYQTq4RIcFBOc7xfTUIKhmo0a88q5eee7LQu1D8s2mbZs3+X75+xPGvyKqV890fnPEmRcNqs+m3P+Z5tQBdYVmzboNH3aqUGYIfvK+HggyDxC0Wi+7o2lGvvTikUCpJLZo1VlhF77vUFoR6tWsYi3IVfeiIx331jdLS0rV8TZRTWWpqqjZu2a53Jnymex54Qn0HPqvJX/+g6ENHr4gr/ChcrW9urv9MGq/WNzd3OdNaQUlJuaj/mzrdFOct5K7wjgSYSl67FBQvZs7Zedzx8fFRl05t9dOUT/Txe2/otls7FFqz8R9//qVoL2ZtgXvVqlZWBS+mW3TFk0kFUDBi407ol1l/XtZwfvHiJa8WTasUHqa6ddxX1lvf3Fxvv/GiSpfyviuUOyHBQepxx61eD0wvKLVq1VAphwX1PHEsNk67duc8K547+w8c1JYcZoqy2mw6fvykZv4xT48PGaEefQZq1pz5xt1MrZhPMa/HKRUli8VSqJX8wlCxfKjGjHxO0775VM88NUC1alQvlNfg7rx1pdVzrgUWm81mS7xgjv7DZvXOhIz5tb3RqV0rjXzxaWNxgflx+ix988NMY7Fbjz7US3379DQW52rF6vUa/c5HxmK3Cvt151VycrJ27t6vfzZv16bN23T0WFyBdDF44L679Fi/3sZir983Tz6fovrM8yI5OVmvjJ6QtR5JTkoGBui9sa+ojsNVXZvNprfem6Sl+Wh2793zTj0x4EGnMm8+h9zer4J+/wvy/rx5/90pFRKspOQUjweWlwwM0DtjXlIDD6aCLgxnz53X8Ffe0uEj2ddRcKVh/Tp6d/SLuY5Jij50VG+++7GOxOS/u0yPOzpr6JOPXLZwktfjokWzRhr9ynNetbanpqbprQmfauVq55aTnPhYLBr18jC1bXWjU7m3z7tihXL6+L3ROXaPPRV/Rs+8NFrHXUy964q7+9y9Z79eGvWuLiR5Vm/y9LgrKN6+d67Ox2ZgtVoVG3dCW7buUtSmrdq1Z59Onjqd79a2CuVDNfGdVxUeVjGrzJvfCZm4nnM1KfhoiqtK+dCyXnV1iDtxUknJ+a/0F7SAgADd0KShBvbvo0kfjNXvP3+lP2d+o2++mKDnhzyu9m1uytNsL3v3RV+WNQyuNt6uV2PkyVSicK9Rg7r64qO31aRRPeMmty4kJeuHn2ddtuM/LS1dF72Y0vR8QqIuXso9eEVWq6z33xmpRg09fy9cCS1bRvfcddtlCybKPO+1aeX9xCob/tmqX3//b7YrzO7YbDb98ttcrV6T+6QVjsqVK+vVooNmEBpaVkFejG85cfKUEhKL7jcxICBAFcp5PrX+haRkxZ7IeYr2y8HHx0cR
lcJ0e9cOGjViqH78v481//fvNPOHyXpv7Cu6v1cP1akV6fVYsdNnzupozHGnsqulnnM1IZwgRxUrlldoaFljsVsHDh7RgegjxmJT8vMroSqVK2Wd/KZP/UyTP3rHqwWTTpyKV3IBz9x1rapXq4ZXPxCOypQppWpVC35aymvBDU0a6s1Rz6tC+VD1ube7V1Nab9y8XStWrzcWm5I33ZVCy5bRu6Nf0t13dslzF54zZ85qxm9/FvjMft5q3fIGr9eBstps+vanX/XNjzNzDZ9Wq1W//P5ffT/9d69nsbq+Qd3LNh4nr4KDSno1Be6pU6e1eZv7rm6FoX597yZgWLZiba4TjpiBxWJR6VLBWRcaP/vgLc36abL6P3ivxyElLS1dMXHOUwdfzfWcKxXhBDkqFRKc48rlRikpKZozb6FH00cuWb5G3XoN0H0PD9aDA4Zp+Ctv6b2Pv9QXX32vvxev0LKV63Tk6DGdij+T6/oWBcFisahmjWoaMXywx+NULl68eEWc1K8E3v5AOIqsVlWhZfN222vZDU0a6vWXn8kaJN6gbm01a9rQuJtbaWnp+mnGbJ0+c864qdAFBvoptKz7bjxGaWnp+umX2R535/TzK6GnHn9Id9zW0bjJI1abTf+dv1iPDx2pnbv2GTcXmbCKFdSxXStjca6sVqt+mv67Bgx+Ub//ucBp9W2r1arjJ+P1+58L1H/QC/rPlJ887g5o5+vrq1s6tL6sLUt54edXQs2bNTIWu2W12TTnvws9Ou527Nqrex4cpHv7PqkHBwzTkOGj9N7HX+rjz6dowd9LtXjZau2PPqxT8Wd09lyC25at6+vX9fg3TJI2bNyq3ftyD+6JF5I0/JW31O2+AXpwwDA9MugFvTPhM7338Zf6bfb/tHjZav2zaZtOxZ9R/OmzHtUD8svf318P3d9TXbxYSuCCoT5xLdVzrhSEE+TIYrHo5hY3eDWgfvHSVfpu2m9uv7g2m02Llq3S+5/8RykXL+rM2XM6cTJeW7bt1IK/l+nX2f/TuA++0JvjPtajT72o+x8ZorsfeEI/Tp9lvCtZrVadPnNOp+LP6J9N27R42WrN+e9CTfz0K43/4As9PuRlPThgmO6+/3H16T9ER49lX63WKDiopPz9PZs5qny5UJUM9PxKM9wrFRKsWjWqG4s9cl31Kl71j4fU6qZmTsFEmVNy9+ze1avWk0NHjurP/y10W1EqLAH+AQoP8+6q+45dezRy9IRcF6mzWq1at2GzBj79subOW2Tc7JXYuON6ZsQYTfnhl1xbIQqDxWJRt9s6ef1e2R0/cUqTJk9V3wHD1OWuh9W5e191ueth9R0wTJMmT1VsnHMXGU9dX7+OGuez69zl0rRRA68q/zt27dFHn/1fjq1o23bs1utvfaDzCYk6dz5BJ07Ga9feA1rw9zLNmbdQ7338H709YZIGDXtF9z8yRL0eelITPvrS5cWxypXCVPM6z8+lF5KSNW7iFzlO8HI+IVFj3vlIW7btVErKRZ04Ga+jx2K1aNkqLfh7mT7/6nu9PWGSXhr1ru5/ZIj69H9ajz09wuUYn+TkZJ2KP6MjR49p2cp1Wrxstb6eOl3vffylXh3znh4cMEx9+g9Rl7se1lff/pzrucVisXh1oaJaVed1tcxez7kWEU6Qq8bX11Xd2p6vH2FfQfmZl0ZrzbqNWQu2paamafvOPXp1zASNe/9zpaR43l+8WpXKur1LJ6ey2Ljjemjgs+rdb7Duf2SIXhr1rt6eMEkffzFF8xYs0V+LVyj60BGdOBmvxAtJij99Rl989X2OV7BsNpv+XrRcp055tkJrxQrlvKrIwT2LxaJmjRsYi3NVvHgxNW3s+dV+ZOjSqW22aXWVh9YTXaaZ6ywWi5rf0Mjrblc7du3Ro0++pP/7boZijsVltQakpqbpSMwxfffTr+rTf4hGjn6vQAbFy6EVYugLryv6YNF3BwkPq6BH+vbyqvJVmHx9fdWr5x1X7AWFalUidFOLpsbiHC1ZvkaPDx2pvxav0NlzGVM1W61WRR86qvc++lLDX3nLq8VES5cKUa+ed7r8TP38SuiO2zq63OZObNwJDX3hdX09dbrT9yL+9Fn9Ovt/6v/EcG3cst14M7d8LBb17NFVFQ2zMP7fdzPUvfdA3f/IED361It6c9zHenvCJP08c7YW/L1Ma6M268TJeMWfPiOr1ao//vxLK3MZyxQXd0ILl6w0FrtUMjBA5V200Ju1nnOtIpwgVwEBAXqw910e9+m027MvWq+NfV/dew9U5+59dfs9/fXMS2O0bsNmr/omFy9eTA/27pFtwHr5cuVU18sZRtZGbdZTz76mvxav0Okz57KuyFy8eEl79kXr1TET9O2PMz16fsWLF1PHtjdfcd0SzKxa1cpejzspFRKsiEp5uyqM7Hx9i+u+u+/wKnSfPXde03+d4/YqYmFp1uR6VakcYSzOVVJykqb98of6D3o+qzXg9nv669EnX9R3037TmbOF003twMEjeuHVt7V3/0HjpkLXrnVLderQ2lh8WXTu2Fo3XMEXFIoXL6ZePe/0etrp2LjjGv/BF+r10JNZLVCPDxmhBQuXeT0L1V133qrIas4tAI5ubnGDmnp5sScl5aJ+njnb6XvRp//T+uKr75XgZZejunVq6daObYzFatm8sVfnlpSUFL057mO999GX2h99OKv10Waz6fSZc5rz34V66rnXFHPMeRyJO3Vq1VD1atnHlZq1nnOtIpzAIzc0bqjOHS/PD1unDq3VrnVLY7GKFy+m227t4PXJxP4D0bvfYN3a4yF17t5Xd/Z6VIOfe82rE0rNyGpq6EU/VeSuUqUwVQr/d4pHTzDepODVrV1TN3rRr16Slq1c59WV1YJQtkwpPdi7h1dXiC8nX19fDX3ykTx3X8wPX9/iGjzw4XzPQpZf9evW1qABfa+Yz8yd66pXUa+7b/e65a4gNGpYTz175DwTnJ9fCT3S9z6vup8VlJDgID352IMuW2Zr1YjU9Q28m37carVqwcJlGjTsFd3Z61F17t5Xt/Z4SL37DdbHX0zxODj5WCy69Za2blvszFjPuVYRTuCR4sWLadCAvkX+w9aoYT0NHviwfH1dL5LUtFEDtWvtPEd+UfD19VWfXt1dnnyRd8FBJRXp4qpWTurXq+X2xwZ5U7x4Md3T4zavrnCmpqbq2x9+yfPq4nllphaBnPhYLHq4z91q3+byVUCCSgbqpWeeUESlMOOmIhFRKUwvPffEVXPevPeuO9TZi4HYBSGiUpheesaz97BOres07KlHvb6Alx++vr4a9tSjql/X9Yxhfn4l9ND9Pb1uIS8IjRrWU5ubmhmLs5i1nnMtIpzAY0X9w1atSuVcT8K+vsX1aN9eRfac5FDJaHOz9+sHIHdNrvf8h6F48WKq50U/YXiubu2aauflGhn79h/UomWrjcWFyt4icEOTou8mFB5WIVufenc639JW993TLcer3UUhLKyCJrw1sshbb8LDKmj0yOdUuVK4cdMVq6hbo0qXCtErw59SmBeTG7Rv01IP97m7SFp4fHx
">
Muennighoff/Qwen2.5-1.5B-hl-true-v3
Muennighoff
2025-05-31T23:29:57Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:simplescaling/openaimath", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-27T04:43:25Z
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: simplescaling/openaimath
library_name: transformers
model_name: Qwen2.5-1.5B-hl-true-v3
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---

# Model Card for Qwen2.5-1.5B-hl-true-v3

This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [simplescaling/openaimath](https://huggingface.co/datasets/simplescaling/openaimath) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Muennighoff/Qwen2.5-1.5B-hl-true-v3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muennighoff/halos/runs/8k0io02d)

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.17.0.dev0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year   = 2024,
    eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
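The card above gives inference code but not the GRPO training invocation itself. A minimal sketch of what such a run could look like with TRL's `GRPOTrainer` follows; the dataset column mapping and the reward function are illustrative assumptions, not the actual training setup.

```python
# Hypothetical sketch of a GRPO fine-tuning run with TRL; the reward
# function and hyperparameters below are illustrative, not the ones
# actually used to train Qwen2.5-1.5B-hl-true-v3.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("simplescaling/openaimath", split="train")
# GRPOTrainer expects a "prompt" column; the source column name "problem"
# is an assumption about the dataset schema.
dataset = dataset.map(lambda x: {"prompt": x["problem"]})

def reward_len(completions, **kwargs):
    # Placeholder reward: GRPO scores groups of sampled completions;
    # a real math-reasoning run would verify answers instead.
    return [-abs(len(c) - 200) for c in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-1.5B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```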
davgauch/MNLP_M3_mcqa_model_sciq_pref_480_WR
davgauch
2025-05-31T23:29:15Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:finetune:Qwen/Qwen3-0.6B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T18:22:59Z
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: MNLP_M3_mcqa_model_sciq_pref_480_WR
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# MNLP_M3_mcqa_model_sciq_pref_480_WR

This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1266

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 480
- total_train_batch_size: 480
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 5

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 1.0    | 38   | 1.1682          |
| 1.3213        | 2.0    | 76   | 1.1381          |
| 1.21          | 3.0    | 114  | 1.1294          |
| 1.1931        | 4.0    | 152  | 1.1268          |
| 1.1931        | 4.8876 | 185  | 1.1266          |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.2.0
- Tokenizers 0.21.0
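Since the card's usage sections are empty, here is a minimal loading sketch with `transformers`. The chat-style MCQA prompt and the generation settings are assumptions about how the fine-tune is meant to be queried, not a documented interface.

```python
# Minimal inference sketch for the fine-tuned checkpoint; the MCQA prompt
# format below is an illustrative assumption, not a documented one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davgauch/MNLP_M3_mcqa_model_sciq_pref_480_WR"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Which gas do plants absorb for photosynthesis? A) O2 B) CO2 C) N2 D) H2"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```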
gupta-tanish/mistral-7b-instruct-refa-iteration1
gupta-tanish
2025-05-31T23:24:24Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T23:20:07Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
bruhzair/prototype0.4x42
bruhzair
2025-05-31T23:24:23Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T23:04:52Z
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---

# prototype-0.4x42

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/cache/models--allura-org--Bigger-Body-70b/snapshots/2f30dbe536e295244831541666baa0a448025048 as a base.

### Models Merged

The following models were included in the merge:
* /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
* /workspace/cache/models--Sao10K--70B-L3.3-Cirrus-x1/snapshots/31d7ca33f3098d1eabe6f87a2c5b5bde85b20f35

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /workspace/cache/models--Sao10K--70B-L3.3-Cirrus-x1/snapshots/31d7ca33f3098d1eabe6f87a2c5b5bde85b20f35
    parameters:
      select_topk: 0.35
  - model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
    parameters:
      select_topk: 0.15
  - model: /workspace/cache/models--allura-org--Bigger-Body-70b/snapshots/2f30dbe536e295244831541666baa0a448025048
    parameters:
      select_topk: 0.65
base_model: /workspace/cache/models--allura-org--Bigger-Body-70b/snapshots/2f30dbe536e295244831541666baa0a448025048
merge_method: sce
tokenizer:
  source: base
chat_template: llama3
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
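The card records the merge config but not how to run it. Assuming the YAML above is saved as `config.yaml`, the merge could be reproduced with mergekit's `mergekit-yaml` entry point roughly as sketched below; the output path and the `--cuda` flag are illustrative choices, and the local snapshot paths in the config would need to exist on your machine.

```python
# Sketch: reproduce the merge by invoking mergekit's CLI. Assumes mergekit
# is installed (pip install mergekit) and config.yaml holds the YAML above.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./prototype-0.4x42", "--cuda"],
    check=True,
)
```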
Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q8_0-GGUF
Triangle104
2025-05-31T23:22:16Z
0
0
transformers
[ "transformers", "gguf", "reasoning", "thinking", "cognitivecomputations", "r1", "llama 3.1", "llama-3", "llama3", "llama-3.1", "cot", "deepseek", "Llama 3.1", "Hermes", "DeepHermes", "1,000,000 context", "fine tune", "merge", "llama-cpp", "gguf-my-repo", "base_model:DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B", "base_model:quantized:DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-31T23:20:28Z
---
library_name: transformers
tags:
- reasoning
- thinking
- cognitivecomputations
- r1
- llama 3.1
- llama-3
- llama3
- llama-3.1
- cot
- deepseek
- Llama 3.1
- Hermes
- DeepHermes
- 1,000,000 context
- fine tune
- merge
- llama-cpp
- gguf-my-repo
base_model: DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B
---

# Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q8_0-GGUF

This model was converted to GGUF format from [`DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B`](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) for more details on the model.

---

Context: 1,000,000 tokens.

Required: Llama 3 Instruct template.

The Deep Hermes 8B Preview model (reasoning), [ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ], converted to 1 million context using Nvidia's Ultra Long 1 million 8B Instruct model.

The goal of this model was to stabilize long generation and long context "needle in a haystack" issues. According to Nvidia there is both a bump in general performance, as well as perfect "recall" over the entire 1 million context.

[ https://huggingface.co/nvidia/Llama-3.1-8B-UltraLong-1M-Instruct ]

Additional changes:

Model appears to be de-censored / more de-censored. Output generation is improved. Creative output generation is vastly improved.

NOTE: Higher temps will result in deeper, richer "thoughts"... and frankly more interesting ones too.

The "thinking/reasoning" tech (for the model at this repo) is from the original Llama 3.1 "DeepHermes" model from NousResearch:

[ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q8_0-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q8_0-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q8_0-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q8_0.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q8_0-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q8_0.gguf -c 2048
```
QuantStack/Phantom-Wan-1.3B-GGUF
QuantStack
2025-05-31T23:19:33Z
0
0
gguf
[ "gguf", "image-to-video", "en", "base_model:bytedance-research/Phantom", "base_model:quantized:bytedance-research/Phantom", "license:apache-2.0", "region:us" ]
image-to-video
2025-05-31T21:48:40Z
---
base_model: bytedance-research/Phantom
library_name: gguf
quantized_by: wsbagnsv1
tags:
- image-to-video
language:
- en
license: apache-2.0
---

This is a direct GGUF conversion of [bytedance-research/Phantom](https://huggingface.co/bytedance-research/Phantom).

The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
| ------------ | ------------------| ------------------------------ | ---------------- |
| Main Model | Phantom-Wan-1.3B | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
| VAE | wan_2.1_vae | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors) |

[**Example workflow**](https://huggingface.co/QuantStack/Phantom_Wan_14B-GGUF/resolve/main/Phantom_example_workflow.json?download=true)

!!!IMPORTANT!!! You need to change the CausVid LoRA to the 1.3B version; the 14B version is not compatible!

### Notes

*As this is a quantized model, not a finetune, all the same restrictions/original license terms still apply.*
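For fetching the files programmatically into the folders named in the table above, a small sketch with `huggingface_hub` is shown below. The exact `.gguf` filename inside this repo is an assumption; list the repo files first and pick the quant you actually want.

```python
# Sketch: download one quant file and the companion VAE into the ComfyUI
# folders from the table above. The GGUF filename is a guess; check the
# printed file list for the real name of the quant you want.
from huggingface_hub import hf_hub_download, list_repo_files

print(list_repo_files("QuantStack/Phantom-Wan-1.3B-GGUF"))  # pick a quant

hf_hub_download(
    repo_id="QuantStack/Phantom-Wan-1.3B-GGUF",
    filename="Phantom-Wan-1.3B-Q8_0.gguf",  # hypothetical filename
    local_dir="ComfyUI/models/unet",
)
hf_hub_download(
    repo_id="Comfy-Org/Wan_2.1_ComfyUI_repackaged",
    filename="split_files/vae/wan_2.1_vae.safetensors",
    local_dir="ComfyUI/models/vae",
)
```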
Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_M-GGUF
Triangle104
2025-05-31T23:17:00Z
0
0
transformers
[ "transformers", "gguf", "reasoning", "thinking", "cognitivecomputations", "r1", "llama 3.1", "llama-3", "llama3", "llama-3.1", "cot", "deepseek", "Llama 3.1", "Hermes", "DeepHermes", "1,000,000 context", "fine tune", "merge", "llama-cpp", "gguf-my-repo", "base_model:DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B", "base_model:quantized:DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-31T23:12:42Z
---
library_name: transformers
tags:
- reasoning
- thinking
- cognitivecomputations
- r1
- llama 3.1
- llama-3
- llama3
- llama-3.1
- cot
- deepseek
- Llama 3.1
- Hermes
- DeepHermes
- 1,000,000 context
- fine tune
- merge
- llama-cpp
- gguf-my-repo
base_model: DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B
---

# Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_M-GGUF

This model was converted to GGUF format from [`DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B`](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) for more details on the model.

---

Context: 1,000,000 tokens.

Required: Llama 3 Instruct template.

The Deep Hermes 8B Preview model (reasoning), [ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ], converted to 1 million context using Nvidia's Ultra Long 1 million 8B Instruct model.

The goal of this model was to stabilize long generation and long context "needle in a haystack" issues. According to Nvidia there is both a bump in general performance, as well as perfect "recall" over the entire 1 million context.

[ https://huggingface.co/nvidia/Llama-3.1-8B-UltraLong-1M-Instruct ]

Additional changes:

Model appears to be de-censored / more de-censored. Output generation is improved. Creative output generation is vastly improved.

NOTE: Higher temps will result in deeper, richer "thoughts"... and frankly more interesting ones too.

The "thinking/reasoning" tech (for the model at this repo) is from the original Llama 3.1 "DeepHermes" model from NousResearch:

[ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_M-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_M-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_M-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_M-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_m.gguf -c 2048
```
luckycanucky/droogs-x30
luckycanucky
2025-05-31T23:15:56Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-31T22:47:56Z
---
base_model: unsloth/llama-3.2-3b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** luckycanucky
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
guydebruyn/InstructionFollowing_DPO_V2.0
guydebruyn
2025-05-31T23:14:41Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T23:11:09Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
ibuki95/model3
ibuki95
2025-05-31T23:13:10Z
0
0
null
[ "region:us" ]
null
2025-05-31T23:04:49Z
# Container Template for SoundsRight Subnet Miners

This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet).

The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.

This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.

To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt. Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).

Verify that the CDI specification was generated correctly with:

```
$ nvidia-ctk cdi list
```

You should see this in your output:

```
nvidia.com/gpu=all
nvidia.com/gpu=0
```

If you are running podman as root, run the following command to start the container:

```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```

Access logs with:

```
podman logs -f modelapi
```

If you are running the container rootless, there are a few more changes to make:

First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:

```
[nvidia-container-cli]
no-cgroups = true

[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```

You can also run the following command to achieve the same result:

```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```

Run the container with:

```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```

Access logs with:

```
podman logs -f modelapi
```

Running the container will spin up an API with the following endpoints:

1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files

By default the API will use host `0.0.0.0` and port `6500`. A minimal client sketch against this API follows the references below.

### References

1. **Welker, Simon; Richter, Julius; Gerkmann, Timo** *Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*. Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932. [DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)

2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo** *Speech Enhancement and Dereverberation with Diffusion-based Generative Models*. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364. [DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)

3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo** *EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*. Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
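The card documents the endpoint paths and the default host/port, but not the request formats, so the sketch below only exercises calls whose shape is implied by the docs. The request body for `/upload-audio/` (including the field name `audio`) is an assumption and would need to match the server's actual schema.

```python
# Minimal client sketch for the container API described above. Only the
# host/port and endpoint paths come from the card; request bodies and
# response fields are assumptions.
import requests

BASE = "http://0.0.0.0:6500"

status = requests.get(f"{BASE}/status/")  # documented status endpoint
print(status.status_code, status.text)

prep = requests.post(f"{BASE}/prepare/")  # download checkpoint, init model
print(prep.status_code, prep.text)

# Hypothetical upload; the multipart field name "audio" is a guess.
with open("noisy.wav", "rb") as f:
    up = requests.post(f"{BASE}/upload-audio/", files={"audio": f})
print(up.status_code, up.text)
```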
Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF
Triangle104
2025-05-31T23:10:16Z
0
0
transformers
[ "transformers", "gguf", "reasoning", "thinking", "cognitivecomputations", "r1", "llama 3.1", "llama-3", "llama3", "llama-3.1", "cot", "deepseek", "Llama 3.1", "Hermes", "DeepHermes", "1,000,000 context", "fine tune", "merge", "llama-cpp", "gguf-my-repo", "base_model:DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B", "base_model:quantized:DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-31T23:07:05Z
---
library_name: transformers
tags:
- reasoning
- thinking
- cognitivecomputations
- r1
- llama 3.1
- llama-3
- llama3
- llama-3.1
- cot
- deepseek
- Llama 3.1
- Hermes
- DeepHermes
- 1,000,000 context
- fine tune
- merge
- llama-cpp
- gguf-my-repo
base_model: DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B
---

# Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF

This model was converted to GGUF format from [`DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B`](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) for more details on the model.

---

Context: 1,000,000 tokens.

Required: Llama 3 Instruct template.

The Deep Hermes 8B Preview model (reasoning), [ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ], converted to 1 million context using Nvidia's Ultra Long 1 million 8B Instruct model.

The goal of this model was to stabilize long generation and long context "needle in a haystack" issues. According to Nvidia there is both a bump in general performance, as well as perfect "recall" over the entire 1 million context.

[ https://huggingface.co/nvidia/Llama-3.1-8B-UltraLong-1M-Instruct ]

Additional changes:

Model appears to be de-censored / more de-censored. Output generation is improved. Creative output generation is vastly improved.

NOTE: Higher temps will result in deeper, richer "thoughts"... and frankly more interesting ones too.

The "thinking/reasoning" tech (for the model at this repo) is from the original Llama 3.1 "DeepHermes" model from NousResearch:

[ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -c 2048
```
dkpanj/pretrained_diag_model
dkpanj
2025-05-31T23:10:06Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-31T19:19:31Z
--- base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** dkpanj - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
h3en1x/llm-router-poc
h3en1x
2025-05-31T23:08:24Z
2
0
null
[ "safetensors", "distilbert", "prompt-routing", "text-classification", "transformer", "llm-router", "license:mit", "region:us" ]
text-classification
2025-05-29T18:08:06Z
--- license: mit tags: - prompt-routing - text-classification - distilbert - transformer - llm-router --- # DistilBERT LLM Router (POC) This model is a proof-of-concept for learning to route prompts to the most suitable LLM (Large Language Model) based on prompt content. It is based on `distilbert-base-uncased` and fine-tuned as a sequence classification model to predict whether a prompt should be handled by a reference model (e.g., GPT-4o) or a cheaper local model (e.g., TinyLlama1b). ## Dataset & Labeling The model was trained on a dataset of 200 carefully curated prompts. Prompts are labeled using GPT-4o as the reference. Each candidate model's output is compared to the reference output and scored from 1 to 5. We treat a score of **5** as a match to the reference. Labels are then derived by choosing the **lowest-cost model** that achieves this score. ### Model Selection | Model | Inference Cost | Output Quality | |---------------|-----------------------------|----------------| | `GPT-4o` | High (cloud-only) | Excellent | | `TinyLlama1b` | Free (local GPU inference) | Good | The model is trained to output: - `label = 0` → Use TinyLlama - `label = 1` → Use GPT-4o ## Metrics ### Overall Metrics (Train / Test) | Model | Train Accuracy | Train Precision | Train Recall | Train F1 | Test Accuracy | Test Precision | Test Recall | Test F1 | |--------------------|----------------|------------------|--------------|----------|----------------|----------------|-------------|---------| | DummyClassifier | 0.6875 | 0.4727 | 0.6875 | 0.5602 | 0.6750 | 0.4556 | 0.6750 | 0.5440 | | LogisticRegression | 0.8688 | 0.8918 | 0.8688 | 0.8725 | 0.8000 | 0.8266 | 0.8000 | 0.8053 | | XGBoost | 0.9625 | 0.9624 | 0.9625 | 0.9623 | 0.9000 | 0.9006 | 0.9000 | 0.8976 | | DistilBERT | 0.8500 | 0.8541 | 0.8500 | 0.8515 | 0.8500 | 0.8593 | 0.8500 | 0.8525 | ### Per-Class Metrics (Test Set) | Model | Class | Precision | Recall | F1 Score | Support | |--------------------|-------------|-----------|----------|----------|---------| | DummyClassifier | TinyLlama | 0.0000 | 0.0000 | 0.0000 | 13 | | | GPT-4o | 0.6750 | 1.0000 | 0.8060 | 27 | | | **Macro Avg** | 0.3375 | 0.5000 | 0.4030 | 40 | | | **Weighted Avg** | 0.4556 | 0.6750 | 0.5440 | 40 | | LogisticRegression | TinyLlama | 0.6471 | 0.8462 | 0.7333 | 13 | | | GPT-4o | 0.9130 | 0.7778 | 0.8400 | 27 | | | **Macro Avg** | 0.7801 | 0.8120 | 0.7867 | 40 | | | **Weighted Avg** | 0.8266 | 0.8000 | 0.8053 | 40 | | XGBoost | TinyLlama | 0.9091 | 0.7692 | 0.8333 | 13 | | | GPT-4o | 0.8966 | 0.9630 | 0.9286 | 27 | | | **Macro Avg** | 0.9028 | 0.8661 | 0.8810 | 40 | | | **Weighted Avg** | 0.9006 | 0.9000 | 0.8976 | 40 | | DistilBERT | TinyLlama | 0.7333 | 0.8462 | 0.7857 | 13 | | | GPT-4o | 0.9200 | 0.8519 | 0.8846 | 27 | | | **Macro Avg** | 0.8267 | 0.8490 | 0.8352 | 40 | | | **Weighted Avg** | 0.8593 | 0.8500 | 0.8525 | 40 | ## Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("h3en1x/llm-router-poc") model = AutoModelForSequenceClassification.from_pretrained("h3en1x/llm-router-poc") prompt = "What is the capital of France?" inputs = tokenizer(prompt, return_tensors="pt") outputs = model(**inputs) label_id = outputs.logits.argmax(dim=-1).item() # Get label name (optional) label = model.config.id2label[label_id] print(label)
```
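As a rough illustration of the labeling rule described in this card, here is a hypothetical sketch; the model list, cost ordering, and score dictionary are illustrative stand-ins, not the card's actual training pipeline:

```python
# Hypothetical sketch of the labeling rule: choose the lowest-cost model
# whose output scores a perfect 5 against the GPT-4o reference output.
MODELS_BY_COST = ["TinyLlama1b", "GPT-4o"]  # cheapest first
LABELS = {"TinyLlama1b": 0, "GPT-4o": 1}

def derive_label(scores: dict) -> int:
    """scores maps a model name to its 1-5 similarity score vs. the reference."""
    for model in MODELS_BY_COST:
        if scores.get(model, 0) == 5:  # only a perfect 5 counts as a match
            return LABELS[model]
    return LABELS["GPT-4o"]  # fall back to the reference model

print(derive_label({"TinyLlama1b": 5, "GPT-4o": 5}))  # 0 -> cheapest match wins
print(derive_label({"TinyLlama1b": 3, "GPT-4o": 5}))  # 1 -> route to GPT-4o
```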
TofuTank/pulse_ugo3d
TofuTank
2025-05-31T23:08:17Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-31T23:05:22Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-KTO
AmberYifan
2025-05-31T23:08:09Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "kto", "conversational", "arxiv:2402.01306", "base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T22:33:57Z
--- base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF library_name: transformers model_name: Llama-3.1-8B-sft-SPIN-gpt4o-KTO tags: - generated_from_trainer - trl - kto licence: license --- # Model Card for Llama-3.1-8B-sft-SPIN-gpt4o-KTO This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-KTO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/2g2trdw3) This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306). ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.20.3 ## Citations Cite KTO as: ```bibtex @article{ethayarajh2024kto, title = {{KTO: Model Alignment as Prospect Theoretic Optimization}}, author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela}, year = 2024, eprint = {arXiv:2402.01306}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
menesnas/fine-tuned-gpt2-tweet-sentiment
menesnas
2025-05-31T22:56:57Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-classification", "en", "dataset:mteb/tweet_sentiment_extraction", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-05-31T22:35:04Z
--- library_name: transformers license: mit datasets: - mteb/tweet_sentiment_extraction language: - en metrics: - accuracy base_model: - openai-community/gpt2 pipeline_tag: text-classification --- # Model Card for fine-tuned-gpt2-tweet-sentiment This is a fine-tuned GPT-2 model for tweet sentiment classification. It categorizes tweets into positive, neutral, or negative sentiment based on their content. ### Model Description - **Model type:** GPT-2 (with sequence classification head) - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model:** gpt2 #### Metrics The model was evaluated using the following metrics: - Training Loss: Measures how well the model fits the training data. A lower value indicates better learning. - Validation Loss: Measures how well the model generalizes to unseen data. It is used to detect overfitting. - Accuracy: Percentage of correctly classified samples in the validation dataset. It is the primary performance metric for this sentiment classification task. ### Results The model was trained for 3 epochs. Below are the results per epoch: | Epoch | Training Loss | Validation Loss | Accuracy | | ----- | ------------- | --------------- | -------- | | 1 | 0.832400 | 0.871651 | 62.7% | | 2 | 0.512700 | 0.794255 | 69.3% | | 3 | 0.517500 | 0.819540 | 71.8% |
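Since this card includes no usage snippet, here is a minimal inference sketch; it assumes the checkpoint's config carries the three sentiment labels, and the example tweet is illustrative:

```python
from transformers import pipeline

# Sequence-classification pipeline over the fine-tuned GPT-2 checkpoint.
classifier = pipeline(
    "text-classification",
    model="menesnas/fine-tuned-gpt2-tweet-sentiment",
)

# The returned label/score pair reflects the predicted sentiment class.
print(classifier("Just got my exam results back and I couldn't be happier!"))
```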
DevQuasar/Writer.palmyra-small-GGUF
DevQuasar
2025-05-31T22:49:00Z
0
0
null
[ "text-generation", "base_model:Writer/palmyra-small", "base_model:finetune:Writer/palmyra-small", "region:us" ]
text-generation
2025-05-31T22:48:56Z
--- base_model: - Writer/palmyra-small pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [Writer/palmyra-small](https://huggingface.co/Writer/palmyra-small) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
jimnoneill/CarD-T
jimnoneill
2025-05-31T22:48:01Z
7
1
transformers
[ "transformers", "tensorboard", "safetensors", "electra", "token-classification", "biology", "chemistry", "medical", "cancer", "carcinogenesis", "biomedical", "ner", "oncology", "en", "dataset:jimnoneill/CarD-T-NER", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-08-09T20:56:49Z
--- license: apache-2.0 language: - en library_name: transformers pipeline_tag: token-classification tags: - biology - chemistry - medical - cancer - carcinogenesis - biomedical - ner - oncology datasets: - jimnoneill/CarD-T-NER metrics: - accuracy - precision - recall - f1 model-index: - name: CarD-T results: - task: type: token-classification name: Named Entity Recognition dataset: name: CarD-T-NER type: jimnoneill/CarD-T-NER metrics: - type: precision value: 0.894 - type: recall value: 0.857 - type: f1 value: 0.875 --- # CarD-T: Carcinogen Detection via Transformers ## Overview CarD-T (Carcinogen Detection via Transformers) is a novel text analytics approach that combines transformer-based machine learning with probabilistic statistical analysis to efficiently nominate carcinogens from scientific texts. This model is designed to address the challenges faced by current systems in managing the burgeoning biomedical literature related to carcinogen identification and classification. ## Model Details * **Architecture**: Based on Bio-ELECTRA, a 335 million parameter language model (sultan/BioM-ELECTRA-Large-SQuAD2) * **Training Data**: [CarD-T-NER dataset](https://huggingface.co/datasets/jimnoneill/CarD-T-NER) containing 19,975 annotated examples from PubMed abstracts (2000-2024) * Training set: 11,985 examples * Test set: 7,990 examples * **Task**: Named Entity Recognition (NER) for carcinogen identification using BIO tagging * **Performance**: * Precision: 0.894 * Recall: 0.857 * F1 Score: 0.875 ## Named Entity Labels The model recognizes 4 entity types using BIO (Beginning-Inside-Outside) tagging scheme, resulting in 9 total labels: | Label ID | Label | Description | |----------|-------|-------------| | 0 | O | Outside any entity | | 1 | B-carcinogen | Beginning of carcinogen entity | | 2 | I-carcinogen | Inside carcinogen entity | | 3 | B-negative | Beginning of negative/exculpatory evidence | | 4 | I-negative | Inside negative evidence | | 5 | B-cancertype | Beginning of cancer type/metadata | | 6 | I-cancertype | Inside cancer type/metadata | | 7 | B-antineoplastic | Beginning of anti-cancer agent | | 8 | I-antineoplastic | Inside anti-cancer agent | ### Entity Type Descriptions: * **carcinogen**: Substances or agents implicated in carcinogenesis * **negative**: Exculpating evidence for potential carcinogenic entities * **cancertype**: Metadata including organism (human/animal/cell), cancer type, and affected organs * **antineoplastic**: Chemotherapy drugs and cancer-protective agents ## Use Cases * Streamlining toxicogenomic literature reviews * Identifying potential carcinogens for further investigation * Augmenting existing carcinogen databases with emerging candidates * Extracting structured information from cancer research literature * Supporting evidence-based oncology research ## Limitations * Identifies potential candidates, not confirmed carcinogens * Analysis limited to abstract-level information * May be influenced by publication trends and research focus shifts * Requires validation by domain experts for clinical applications ## Installation ```bash pip install transformers torch datasets ``` ## Usage ### Basic Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification import torch # Load model and tokenizer model_name = "jimnoneill/CarD-T" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForTokenClassification.from_pretrained(model_name) # Define label mappings id2label = { 0: "O", 1: "B-carcinogen", 2: "I-carcinogen", 3: 
"B-negative", 4: "I-negative", 5: "B-cancertype", 6: "I-cancertype", 7: "B-antineoplastic", 8: "I-antineoplastic" } ``` ### Named Entity Recognition Pipeline ```python def predict_entities(text): # Tokenize input inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512) # Get predictions with torch.no_grad(): outputs = model(**inputs) predictions = outputs.logits.argmax(dim=2) # Convert tokens and predictions to entities tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0]) entities = [] current_entity = None current_tokens = [] for token, pred_id in zip(tokens, predictions[0]): pred_label = id2label[pred_id.item()] if pred_label == "O": if current_entity: entities.append({ "entity": current_entity, "text": tokenizer.convert_tokens_to_string(current_tokens) }) current_entity = None current_tokens = [] elif pred_label.startswith("B-"): if current_entity: entities.append({ "entity": current_entity, "text": tokenizer.convert_tokens_to_string(current_tokens) }) current_entity = pred_label[2:] current_tokens = [token] elif pred_label.startswith("I-") and current_entity: current_tokens.append(token) # Don't forget the last entity if current_entity: entities.append({ "entity": current_entity, "text": tokenizer.convert_tokens_to_string(current_tokens) }) return entities # Example usage text = "Benzene exposure has been linked to acute myeloid leukemia, while vitamin D shows antineoplastic properties." entities = predict_entities(text) for entity in entities: print(f"{entity['entity']}: {entity['text']}") ``` ### Using with Hugging Face Pipeline ```python from transformers import pipeline # Create NER pipeline ner_pipeline = pipeline( "token-classification", model=model_name, aggregation_strategy="simple" ) # Analyze text text = "Studies show asbestos causes mesothelioma in humans, but aspirin may have protective effects." results = ner_pipeline(text) # Display results for entity in results: print(f"{entity['entity_group']}: {entity['word']} (confidence: {entity['score']:.3f})") ``` ### Processing Scientific Abstracts ```python def analyze_abstract(abstract): """Analyze a scientific abstract for cancer-related entities.""" entities = predict_entities(abstract) # Organize by entity type results = { "carcinogens": [], "protective_agents": [], "cancer_types": [], "negative_findings": [] } for entity in entities: if entity['entity'] == "carcinogen": results["carcinogens"].append(entity['text']) elif entity['entity'] == "antineoplastic": results["protective_agents"].append(entity['text']) elif entity['entity'] == "cancertype": results["cancer_types"].append(entity['text']) elif entity['entity'] == "negative": results["negative_findings"].append(entity['text']) return results # Example with a scientific abstract abstract = """ Recent studies in male rats exposed to compound X showed increased incidence of hepatocellular carcinoma. However, concurrent administration of resveratrol demonstrated significant protective effects against liver tumor development. No carcinogenic activity was observed in female mice under similar conditions. 
""" analysis = analyze_abstract(abstract) print("Analysis Results:") for category, items in analysis.items(): if items: print(f"\n{category.replace('_', ' ').title()}:") for item in items: print(f" - {item}") ``` ## Training Configuration The model was fine-tuned using the following configuration: ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir="./card-t-model", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=5, weight_decay=0.01, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, metric_for_best_model="f1", push_to_hub=True, ) ``` If you use this model in your research, please cite: ```bibtex @article{oneill2024cardt, title={CarD-T: Interpreting Carcinomic Lexicon via Transformers}, author={O'Neill, Jamey and Reddy, G.A. and Dhillon, N. and Tripathi, O. and Alexandrov, L. and Katira, P.}, journal={MedRxiv}, year={2024}, doi={10.1101/2024.08.13.24311948} } ``` ## License This model is released under the Apache License 2.0, matching the license of the training dataset. ## Acknowledgments We thank the biomedical research community for making their findings publicly available through PubMed, enabling the creation of this model. Special thanks to the Bio-ELECTRA team for the base model architecture. ## Contact For questions, feedback, or collaborations: - **Author**: Jamey O'Neill - **Email**: [email protected] - **Hugging Face**: [@jimnoneill](https://huggingface.co/jimnoneill) - **Dataset**: [CarD-T-NER](https://huggingface.co/datasets/jimnoneill/CarD-T-NER) ## Disclaimer This model is intended for research purposes only. It should not be used as a sole source for medical decisions or clinical diagnoses. Always consult with qualified healthcare professionals and validate findings through appropriate experimental methods.
Jose11-2/t2
Jose11-2
2025-05-31T22:42:48Z
0
0
null
[ "region:us" ]
null
2025-05-31T22:41:21Z
flask transformers torch Pillow
Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF
Triangle104
2025-05-31T22:42:09Z
0
0
transformers
[ "transformers", "gguf", "nvidia", "reasoning", "math", "code", "reinforcement learning", "pytorch", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:nvidia/AceReason-Nemotron-14B", "base_model:quantized:nvidia/AceReason-Nemotron-14B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-31T22:38:33Z
--- library_name: transformers license: other license_name: nvidia-open-model-license license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/ pipeline_tag: text-generation language: - en tags: - nvidia - reasoning - math - code - reinforcement learning - pytorch - llama-cpp - gguf-my-repo base_model: nvidia/AceReason-Nemotron-14B --- # Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF This model was converted to GGUF format from [`nvidia/AceReason-Nemotron-14B`](https://huggingface.co/nvidia/AceReason-Nemotron-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/nvidia/AceReason-Nemotron-14B) for more details on the model. --- We're thrilled to introduce AceReason-Nemotron-14B, a math and code reasoning model trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distilled-Qwen-14B. It delivers impressive results, achieving 78.6% on AIME 2024 (+8.9%), 67.4% on AIME 2025 (+17.4%), 61.1% on LiveCodeBench v5 (+8%), 54.9% on LiveCodeBench v6 (+7%), and a 2024 rating on Codeforces (+543). We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first RL training on math-only prompts, then RL training on code-only prompts. Notably, we find that math-only RL not only significantly enhances the performance of strong distilled models on math benchmarks, but also on code reasoning tasks. In addition, extended code-only RL further improves code benchmark performance while causing minimal degradation in math results. We find that RL not only elicits the foundational reasoning capabilities acquired during pre-training and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF --hf-file acereason-nemotron-14b-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF --hf-file acereason-nemotron-14b-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF --hf-file acereason-nemotron-14b-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF --hf-file acereason-nemotron-14b-q5_k_s.gguf -c 2048 ```
cosmosistan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nasty_ox
cosmosistan
2025-05-31T22:40:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am sly nasty ox", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T13:13:44Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nasty_ox tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am sly nasty ox - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nasty_ox This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cosmosistan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nasty_ox", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
wuxs/Mistral_TopK_SAE_l16_FT_RewardModeling
wuxs
2025-05-31T22:40:15Z
0
0
null
[ "en", "dataset:Anthropic/hh-rlhf", "base_model:wuxs/Mistral_TopK_SAE_l16", "base_model:finetune:wuxs/Mistral_TopK_SAE_l16", "license:apache-2.0", "region:us" ]
null
2025-05-31T22:38:06Z
--- license: apache-2.0 datasets: - Anthropic/hh-rlhf language: - en base_model: - wuxs/Mistral_TopK_SAE_l16 ---
wuxs/Mistral_TopK_SAE_l16_FT_ToxicDetect
wuxs
2025-05-31T22:37:50Z
0
0
null
[ "en", "dataset:Anthropic/hh-rlhf", "base_model:wuxs/Mistral_TopK_SAE_l16", "base_model:finetune:wuxs/Mistral_TopK_SAE_l16", "license:apache-2.0", "region:us" ]
null
2025-05-31T22:32:42Z
--- license: apache-2.0 datasets: - Anthropic/hh-rlhf language: - en base_model: - wuxs/Mistral_TopK_SAE_l16 ---
dimasik2987/1dc012c6-dc3a-4a44-824b-4c61977f2574
dimasik2987
2025-05-31T22:37:40Z
0
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:numind/NuExtract-1.5", "base_model:adapter:numind/NuExtract-1.5", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-31T21:19:51Z
--- library_name: peft license: mit base_model: numind/NuExtract-v1.5 tags: - axolotl - generated_from_trainer model-index: - name: 1dc012c6-dc3a-4a44-824b-4c61977f2574 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: numind/NuExtract-v1.5 bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - dc28067aa0597a70_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_clipping: 0.85 group_by_length: false hub_model_id: dimasik2987/1dc012c6-dc3a-4a44-824b-4c61977f2574 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 12 mixed_precision: bf16 mlflow_experiment_name: /tmp/dc28067aa0597a70_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 56cc23c8-c1b5-4b3c-b6b5-41661701b16a wandb_project: s56-7 wandb_run: your_name wandb_runid: 56cc23c8-c1b5-4b3c-b6b5-41661701b16a warmup_steps: 50 weight_decay: 0.02 xformers_attention: true ``` </details><br> # 1dc012c6-dc3a-4a44-824b-4c61977f2574 This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8828 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.755 | 0.0000 | 1 | 1.1158 | | 2.1422 | 0.0117 | 250 | 0.8943 | | 1.866 | 0.0233 | 500 | 0.8828 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
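Since this card omits a usage snippet, here is a minimal loading sketch, assuming the standard PEFT adapter layout; the base model id follows the axolotl config above, and the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model named in the axolotl config, then attach this adapter.
base = AutoModelForCausalLM.from_pretrained("numind/NuExtract-v1.5", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("numind/NuExtract-v1.5", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "dimasik2987/1dc012c6-dc3a-4a44-824b-4c61977f2574")

inputs = tokenizer("Extract the dates from: the meeting is on 2024-05-01.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```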
HammadQ123/genai-compressed-predictor
HammadQ123
2025-05-31T22:36:48Z
0
0
null
[ "region:us" ]
null
2025-05-31T22:36:26Z
# Compressed GenAI RNA Binding Predictor ## Model Description This is a compressed version of the RNA-protein binding prediction model for faster loading and inference. ## Model Details - **Model Type**: Compressed PyTorch model for RNA binding prediction - **Input**: RNA sequences (A, U, G, C nucleotides) - **Output**: Binding score (RMSD prediction) - **Optimization**: Compressed for faster loading and reduced memory usage ## Usage ```python from huggingface_hub import hf_hub_download import torch # Download compressed model model_path = hf_hub_download( repo_id="HammadQ123/genai-compressed-predictor", filename="model_compressed.pt" ) # Load model model = torch.load(model_path, map_location='cpu') # Use for predictions... ``` ## Performance - Faster loading compared to original model - Reduced memory footprint - Maintained prediction accuracy ## Related Repositories - Original model: HammadQ123/genai-predictor ## License [Add your license here]
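To make the elided prediction step concrete, here is a hypothetical sketch; the card does not document the input encoding, so the one-hot scheme and example sequence below are assumptions for illustration only, and it further assumes the loaded object is a callable torch module:

```python
import torch

NUC = {"A": 0, "U": 1, "G": 2, "C": 3}

def encode(seq: str) -> torch.Tensor:
    # Assumed encoding: one-hot over the four nucleotides, plus a batch axis.
    one_hot = torch.zeros(len(seq), 4)
    for i, nt in enumerate(seq):
        one_hot[i, NUC[nt]] = 1.0
    return one_hot.unsqueeze(0)

model.eval()  # `model` comes from the loading snippet above
with torch.no_grad():
    score = model(encode("AUGGCUACGUAGC"))  # predicted binding score (RMSD)
print(score)
```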
Sajjad313/jina-embedding-v3
Sajjad313
2025-05-31T22:35:35Z
137
1
transformers
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "Embedding", "custom_code", "en", "fa", "ar", "base_model:jinaai/jina-embeddings-v3", "base_model:finetune:jinaai/jina-embeddings-v3", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-04-28T13:45:39Z
--- license: apache-2.0 language: - en - fa - ar inference: true base_model: - jinaai/jina-embeddings-v3 pipeline_tag: feature-extraction tags: - Embedding library_name: transformers --- # Jina-embedding-v3 ## Overview This repository uses the original [jinaai/jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3) embedding model, without modification or fine-tuning. It is a versatile sentence embedding model suitable for a wide range of natural language processing tasks; you can find the specific tasks in the *Tasks* section. *The only difference is that you can use HF inference from this page, which is unavailable on the official jinaai page.* --- ## Inference Access You can use this model for inference directly from this page using Hugging Face’s serverless inference API. This is not possible via the official Jina AI model page. Here’s an example of how to get sentence embeddings using the Hugging Face Inference API: the function below handles inference (it also applies mean pooling and normalization, and returns embeddings ready to store in your vector database). ```python from huggingface_hub import InferenceClient import numpy as np def get_HF_embeddings(text, api_key, model, mean_pool=True, l2_normalize=True): """ Fetches embeddings from HuggingFace serverless Inference API for a single string or a list of strings. Args: text (str or list): Input text or list of texts. api_key (str): HuggingFace API key. model (str): Model repo. mean_pool (bool): If True, mean-pool the output. l2_normalize (bool): If True, L2 normalize the output. Returns: np.ndarray: Embedding(s) as numpy array(s). """ client = InferenceClient(api_key=api_key) if isinstance(text, str): texts = [text] single_input = True else: texts = text single_input = False result = client.feature_extraction( text=texts, model=model ) if mean_pool: embeddings = [np.mean(r, axis=0) for r in result] if l2_normalize: embeddings = [ e / np.linalg.norm(e) if np.linalg.norm(e) > 0 else e for e in embeddings] else: embeddings = [r for r in result] if single_input: return embeddings[0] return np.array(embeddings) ``` --- ## Tasks - **Extended Sequence Length:** Supports up to 8192 tokens with RoPE positional encoding. - **Task-Specific Embeddings:** Choose the `task` parameter for different application needs: - `retrieval.query` – For query embeddings in asymmetric retrieval - `retrieval.passage` – For passage embeddings in asymmetric retrieval - `separation` – For clustering and re-ranking - `classification` – For classification tasks - `text-matching` – For symmetric similarity tasks (e.g., STS) - **Matryoshka Embeddings:** Flexible embedding sizes (32, 64, 128, 256, 512, 768, 1024 dimensions). --- ## How to Use You can use this model with either the `transformers` or the `sentence-transformers` library. For **full feature support** (using task-specific LoRA heads and flexible embedding sizes), it is recommended to use the `sentence-transformers` library.
### Using sentence-transformers (Recommended) ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer("Sajjad313/jina-embedding-v3", trust_remote_code=True) # Task-specific usage: Retrieval query embedding query_embedding = model.encode( ["What is the weather like in Berlin today?"], task="retrieval.query" ) ``` ### Using transformers ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3") model = AutoModel.from_pretrained("jinaai/jina-embeddings-v3") inputs = tokenizer( "What is the weather like in Berlin today?", return_tensors="pt", padding=True, truncation=True ) outputs = model(**inputs) embedding = outputs.last_hidden_state[:, 0] # CLS token embedding ``` > Note: Using the `transformers` library gives you basic access to the model’s output, but for full task-specific capabilities, use `sentence-transformers`. ---
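For the Matryoshka embeddings mentioned above, here is a short sketch; it assumes a recent sentence-transformers release that accepts a `truncate_dim` argument, and dimension 256 is just one of the listed sizes:

```python
from sentence_transformers import SentenceTransformer

# Request 256-dimensional Matryoshka embeddings instead of the full 1024.
model = SentenceTransformer(
    "Sajjad313/jina-embedding-v3",
    trust_remote_code=True,
    truncate_dim=256,
)

embedding = model.encode(["What is the weather like in Berlin today?"], task="text-matching")
print(embedding.shape)  # expected: (1, 256)
```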
BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqri01ak10ozo5t0yksk
BootesVoid
2025-05-31T22:31:10Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-31T22:31:08Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SEXY --- # Cmbcesd72001R10Ozzqcm5Ddu_Cmbcqmqri01Ak10Ozo5T0Yksk <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SEXY` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SEXY", "lora_weights": "https://huggingface.co/BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqri01ak10ozo5t0yksk/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqri01ak10ozo5t0yksk', weight_name='lora.safetensors') image = pipeline('SEXY').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqri01ak10ozo5t0yksk/discussions) to add images that show off what you’ve made with this LoRA.
rtl-llm/qwen2.5coder-7b-origen-vhdl-vhdl-verilog-gs16
rtl-llm
2025-05-31T22:30:23Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T22:27:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rtl-llm/qwen2.5coder-7b-origen-verilog-vhdl-chisel
rtl-llm
2025-05-31T22:26:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T12:46:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
N-A-Me/Alaska
N-A-Me
2025-05-31T22:23:47Z
0
0
null
[ "base_model:Liberata/illustrious-xl-v1.0", "base_model:finetune:Liberata/illustrious-xl-v1.0", "license:cc", "region:us" ]
null
2025-05-07T06:04:03Z
--- widget: - text: Alaska base_model: Liberata/illustrious-xl-v1.0 instance_prompt: Alaska license: cc --- # Alaska <Gallery /> ## Trigger words You should use `Alaska` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/N-A-Me/Alaska/tree/main) them in the Files & versions tab.
Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF
Triangle104
2025-05-31T22:18:48Z
0
0
transformers
[ "transformers", "gguf", "nvidia", "reasoning", "math", "code", "reinforcement learning", "pytorch", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:nvidia/AceReason-Nemotron-14B", "base_model:quantized:nvidia/AceReason-Nemotron-14B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-31T22:14:36Z
--- library_name: transformers license: other license_name: nvidia-open-model-license license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/ pipeline_tag: text-generation language: - en tags: - nvidia - reasoning - math - code - reinforcement learning - pytorch - llama-cpp - gguf-my-repo base_model: nvidia/AceReason-Nemotron-14B --- # Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF This model was converted to GGUF format from [`nvidia/AceReason-Nemotron-14B`](https://huggingface.co/nvidia/AceReason-Nemotron-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/nvidia/AceReason-Nemotron-14B) for more details on the model. --- We're thrilled to introduce AceReason-Nemotron-14B, a math and code reasoning model trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distilled-Qwen-14B. It delivers impressive results, achieving 78.6% on AIME 2024 (+8.9%), 67.4% on AIME 2025 (+17.4%), 61.1% on LiveCodeBench v5 (+8%), 54.9% on LiveCodeBench v6 (+7%), and a 2024 rating on Codeforces (+543). We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first RL training on math-only prompts, then RL training on code-only prompts. Notably, we find that math-only RL not only significantly enhances the performance of strong distilled models on math benchmarks, but also on code reasoning tasks. In addition, extended code-only RL further improves code benchmark performance while causing minimal degradation in math results. We find that RL not only elicits the foundational reasoning capabilities acquired during pre-training and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF --hf-file acereason-nemotron-14b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF --hf-file acereason-nemotron-14b-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF --hf-file acereason-nemotron-14b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF --hf-file acereason-nemotron-14b-q4_k_s.gguf -c 2048 ```
Desieditwala/Newdataset
Desieditwala
2025-05-31T22:13:10Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-31T21:45:26Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Godfrey --- # Newdataset <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Godfrey` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Godfrey", "lora_weights": "https://huggingface.co/Desieditwala/Newdataset/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Desieditwala/Newdataset', weight_name='lora.safetensors') image = pipeline('Godfrey').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Desieditwala/Newdataset/discussions) to add images that show off what you’ve made with this LoRA.
arnaultsta/MNLP_M2_rag_training_MCQA_whole_RAG_1
arnaultsta
2025-05-31T22:09:52Z
0
0
peft
[ "peft", "safetensors", "unsloth", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2025-05-31T15:20:36Z
--- library_name: peft license: apache-2.0 base_model: unsloth/qwen3-0.6b-base-unsloth-bnb-4bit tags: - unsloth - generated_from_trainer model-index: - name: MNLP_M2_rag_training_MCQA_whole_RAG_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MNLP_M2_rag_training_MCQA_whole_RAG_1 This model is a fine-tuned version of [unsloth/qwen3-0.6b-base-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-0.6b-base-unsloth-bnb-4bit) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.0
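No usage snippet is included in this card, so here is a minimal loading sketch under the assumption that this repo holds a standard PEFT adapter for the base model listed above; the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Attach the adapter in this repo to the 4-bit Qwen3 base it was trained from.
base = AutoModelForCausalLM.from_pretrained("unsloth/qwen3-0.6b-base-unsloth-bnb-4bit")
tokenizer = AutoTokenizer.from_pretrained("unsloth/qwen3-0.6b-base-unsloth-bnb-4bit")
model = PeftModel.from_pretrained(base, "arnaultsta/MNLP_M2_rag_training_MCQA_whole_RAG_1")

inputs = tokenizer("Q: What is 2 + 2?\nA:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```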
mlx-community/Qwen3-4B-4bit-DWQ
mlx-community
2025-05-31T21:53:18Z
370
1
mlx
[ "mlx", "safetensors", "qwen3", "text-generation", "conversational", "base_model:Qwen/Qwen3-4B", "base_model:quantized:Qwen/Qwen3-4B", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2025-05-09T02:53:59Z
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-4B
tags:
- mlx
---

# mlx-community/Qwen3-4B-4bit-DWQ

This model [mlx-community/Qwen3-4B-4bit-DWQ](https://huggingface.co/mlx-community/Qwen3-4B-4bit-DWQ) was converted to MLX format from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) using mlx-lm version **0.24.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-4B-4bit-DWQ")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
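For quick one-off runs, mlx-lm also ships a command-line entry point; a sketch (flags as in recent mlx-lm versions):

```bash
# Generate without writing any Python; the prompt is illustrative
python -m mlx_lm.generate --model mlx-community/Qwen3-4B-4bit-DWQ --prompt "hello"
```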
RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf
RichardErkhov
2025-05-31T21:51:51Z
326
0
transformers
[ "transformers", "gguf", "text-generation", "arxiv:2405.17743", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
2024-08-21T11:59:56Z
---
library_name: transformers
license: llama3
pipeline_tag: text-generation
---

# ORLM-LLaMA-3-8B: A Large Language Model for Optimization Modeling

This repository contains the quantized model weights for [ORLM-LLaMA-3-8B](https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B), a fine-tuned version of Meta's Llama 3 8B model, specifically trained for generating optimization models and corresponding Python code. It's presented in the paper [ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling](https://huggingface.co/papers/2405.17743). This model significantly improves the capability of automating optimization modeling and solving, achieving state-of-the-art performance on several benchmarks.

The official repo can be found here: https://github.com/Cardinal-Operations/ORLM

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov) | [Discord](https://discord.gg/pvy7H8DZMG) | [Request more models](https://github.com/RichardErkhov/quant_request)

ORLM-LLaMA-3-8B - GGUF
- Model creator: https://huggingface.co/CardinalOperations/
- Original model: https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [ORLM-LLaMA-3-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q2_K.gguf) | Q2_K | 2.96GB |
| [ORLM-LLaMA-3-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [ORLM-LLaMA-3-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [ORLM-LLaMA-3-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [ORLM-LLaMA-3-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [ORLM-LLaMA-3-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q3_K.gguf) | Q3_K | 3.74GB |
| [ORLM-LLaMA-3-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [ORLM-LLaMA-3-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [ORLM-LLaMA-3-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [ORLM-LLaMA-3-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q4_0.gguf) | Q4_0 | 4.34GB |
| [ORLM-LLaMA-3-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [ORLM-LLaMA-3-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [ORLM-LLaMA-3-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q4_K.gguf) | Q4_K | 4.58GB |
| [ORLM-LLaMA-3-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [ORLM-LLaMA-3-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q4_1.gguf) | Q4_1 | 4.78GB |
| [ORLM-LLaMA-3-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q5_0.gguf) | Q5_0 | 5.21GB |
| [ORLM-LLaMA-3-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [ORLM-LLaMA-3-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q5_K.gguf) | Q5_K | 5.34GB |
| [ORLM-LLaMA-3-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [ORLM-LLaMA-3-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q5_1.gguf) | Q5_1 | 5.65GB |
| [ORLM-LLaMA-3-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q6_K.gguf) | Q6_K | 6.14GB |
| [ORLM-LLaMA-3-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q8_0.gguf) | Q8_0 | 7.95GB |

Original model description:
---
license: llama3
---

See our paper at https://arxiv.org/abs/2405.17743.

GitHub repo: https://github.com/Cardinal-Operations/ORLM

## Model Details

ORLM-LLaMA-3-8B is fully fine-tuned on the OR-Instruct data and built on Meta's [LLaMA-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. More training details can be found at https://arxiv.org/abs/2405.17743.

## Model Usage

Prompting Template:

```text
Below is an operations research question. Build a mathematical model and corresponding python code using `coptpy` that appropriately addresses the question.

# Question:
{Question}

# Response:
```

Please replace the `{Question}` with any natural language OR question.

## Examples

```text
--------------------------OR question wrapped in the template--------------------------

Below is an operations research question. Build a mathematical model and corresponding python code using `coptpy` that appropriately addresses the question.

# Question:
A lab has 1000 units of medicinal ingredients to make two pills, a large pill and a small pill. A large pill requires 3 units of medicinal ingredients and 2 units of filler. A small pill requires 2 units of medicinal ingredients and 1 unit of filler. The lab has to make at least 100 large pills. However, since small pills are more popular at least 60% of the total number of pills must be small. How many of each should be made to minimize the total number of filler material needed?

# Response:

--------------------------top-1 completion output by LLaMA-3-8B-ORLM under greedy decoding--------------------------

## Mathematical Model:
To solve the lab's pill production problem, we need to construct a linear programming model to minimize the total number of filler material needed while satisfying the production requirements for the pills and the proportion constraint. Here is the model formulation:

### Decision Variables:
- \(x\): Number of large pills to be made.
- \(y\): Number of small pills to be made.

### Objective Function:
- Minimize the total number of filler material needed: \[Z = 2x + y\]

### Constraints:
1. Medicinal ingredient usage constraint: \[3x + 2y \leq 1000\]
2. Large pill production requirement: \[x \geq 100\]
3. Proportion constraint for small pills: \[y \geq 0.6(x + y)\]
4. Non-negativity constraint: \[x, y \geq 0\]

### Integer Constraints:
- \(x, y\) must be integers, as the number of pills cannot be fractional.

Using this model, we can apply linear programming techniques to find the optimal number of large and small pills to be made, minimizing the total number of filler material needed while satisfying the production requirements for the pills and the proportion constraint. This will provide the lab with an economically efficient pill production strategy.

## Python Code Solution Using `coptpy`:
Here is a Python script using the `coptpy` library to solve the problem:

\`\`\`python
import coptpy as cp
from coptpy import COPT

# Create a COPT environment
env = cp.Envr()

# Create the model
model = env.createModel("PillProduction")

# Add decision variables
x = model.addVar(vtype=COPT.INTEGER, name="x")  # Number of large pills
y = model.addVar(vtype=COPT.INTEGER, name="y")  # Number of small pills

# Objective function: Minimize the total number of filler material needed
model.setObjective(2 * x + y, sense=COPT.MINIMIZE)

# Constraints
model.addConstr(3 * x + 2 * y <= 1000, name="MaterialLimit")
model.addConstr(x >= 100, name="LargePillRequirement")
model.addConstr(y >= 0.6 * (x + y), name="SmallPillProportion")

# Solve the model
model.solve()

# Output results
if model.status == COPT.OPTIMAL:
    print("Minimum total number of filler material needed: {:.0f} units".format(model.objval))
    print("Number of large pills to be made: {:.0f}".format(x.x))
    print("Number of small pills to be made: {:.0f}".format(y.x))
else:
    print("No optimal solution found.")
\`\`\`

In this script, we first create a `COPT` environment and model. Then, we add two integer decision variables `x` and `y`, representing the number of large and small pills to be made, respectively.

Next, we set the objective function to minimize the total number of filler material needed, which is `2 * x + y`. Then, we add three constraints:
1. Medicinal ingredient usage constraint: `3 * x + 2 * y <= 1000`
2. Large pill production requirement: `x >= 100`
3. Proportion constraint for small pills: `y >= 0.6 * (x + y)`

Finally, we solve the model and output the results of the optimal solution, including the minimum total number of filler material needed and the number of large and small pills to be made.

This script provides a complete example of using the `coptpy` library to solve the lab's pill production problem, while satisfying all the constraints mentioned in the problem.
```

## Performances

Below is the comparison of performance on the NL4OPT, MAMO, and IndustryOR benchmarks. Values marked with a <sup>*</sup> are directly copied from original papers, with blanks where data were not reported. The highest results are highlighted in bold.

| **Method** | **NL4OPT** | **MAMO EasyLP** | **MAMO ComplexLP** | **IndustryOR** | **Micro Avg** | **Macro Avg** |
|------------|------------|-----------------|--------------------|----------------|---------------|---------------|
| *Methods based on PLMs* | | | | | | |
| `tag-BART` | 47.9%<sup>*</sup> | - | - | - | - | - |
| *Methods based on GPT-3.5* | | | | | | |
| `Standard` | 42.4%<sup>*</sup> | - | - | - | - | - |
| `Reflexion` | 50.7%<sup>*</sup> | - | - | - | - | - |
| `Chain-of-Experts` | 58.9%<sup>*</sup> | - | - | - | - | - |
| *Methods based on GPT-4* | | | | | | |
| `Standard` | 47.3%<sup>*</sup> | 66.5%<sup>*</sup> | 14.6%<sup>*</sup> | 28.0% | 50.2% | 39.1% |
| `Reflexion` | 53.0%<sup>*</sup> | - | - | - | - | - |
| `Chain-of-Experts` | 64.2%<sup>*</sup> | - | - | - | - | - |
| `OptiMUS` | 78.8%<sup>*</sup> | - | - | - | - | - |
| *ORLMs based on open-source LLMs* | | | | | | |
| `ORLM-Mistral-7B` | 84.4% | 81.4% | 32.0% | 27.0% | 68.8% | 56.2% |
| `ORLM-Deepseek-Math-7B-Base` | **86.5%** | 82.2% | **37.9%** | 33.0% | 71.2% | 59.9% |
| `ORLM-LLaMA-3-8B` | 85.7% | **82.3%** | 37.4% | **38.0%** | **71.4%** | **60.8%** |

## Citation

```bibtex
@article{tang2024orlm,
  title={ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling},
  author={Tang, Zhengyang and Huang, Chenyu and Zheng, Xin and Hu, Shixi and Wang, Zizhuo and Ge, Dongdong and Wang, Benyou},
  journal={arXiv preprint arXiv:2405.17743},
  year={2024}
}
```

```bibtex
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```

## License

The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/).
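To run these GGUF files locally, a minimal sketch with `llama-cpp-python` (the file name, context size, and question are illustrative; any llama.cpp-compatible runtime works):

```python
from llama_cpp import Llama

# Load one of the quantized files from this repo (Q4_K_M is a common speed/quality trade-off)
llm = Llama(model_path="ORLM-LLaMA-3-8B.Q4_K_M.gguf", n_ctx=4096)

question = "A factory makes two products with limited machine hours..."  # any natural language OR question
prompt = (
    "Below is an operations research question. Build a mathematical model and "
    "corresponding python code using `coptpy` that appropriately addresses the question.\n"
    f"\n# Question:\n{question}\n\n# Response:\n"
)

out = llm(prompt, max_tokens=1024, temperature=0.0)  # temperature 0 mimics the greedy decoding above
print(out["choices"][0]["text"])
```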
AmberYifan/Llama-3.1-8B-sft-all-pool-ORPO
AmberYifan
2025-05-31T21:43:51Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "orpo", "conversational", "arxiv:2403.07691", "base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T21:25:03Z
---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-all-pool-ORPO
tags:
- generated_from_trainer
- trl
- orpo
licence: license
---

# Model Card for Llama-3.1-8B-sft-all-pool-ORPO

This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-all-pool-ORPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/gp4fic5c)

This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).

### Framework versions

- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3

## Citations

Cite ORPO as:

```bibtex
@article{hong2024orpo,
  title={{ORPO: Monolithic Preference Optimization without Reference Model}},
  author={Jiwoo Hong and Noah Lee and James Thorne},
  year={2024},
  eprint={arXiv:2403.07691}
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
  title={{TRL: Transformer Reinforcement Learning}},
  author={Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
  year={2020},
  journal={GitHub repository},
  publisher={GitHub},
  howpublished={\url{https://github.com/huggingface/trl}}
}
```
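For readers who want to reproduce this style of training, a heavily hedged sketch with TRL's `ORPOTrainer`; the dataset shown is a public placeholder, not the (unspecified) data used for this model, and the hyperparameters are illustrative:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Any preference dataset with prompt/chosen/rejected-style columns works here
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = ORPOConfig(output_dir="orpo-out", beta=0.1)  # beta weights the odds-ratio penalty
trainer = ORPOTrainer(model=model, args=args, processing_class=tokenizer, train_dataset=dataset)
trainer.train()
```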
wuxs/Mistral_TopK_SAE_l16
wuxs
2025-05-31T21:35:24Z
0
0
null
[ "en", "license:apache-2.0", "region:us" ]
null
2025-05-31T21:31:09Z
---
license: apache-2.0
language:
- en
---
jgmagarino/tourist-comments-classifier
jgmagarino
2025-05-31T21:34:55Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-31T21:34:02Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
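Given the repo tags (XLM-RoBERTa, `text-classification`), a hedged sketch for trying the model; the label set is undocumented in this card, so treat the returned labels as opaque until verified:

```python
from transformers import pipeline

# Inspect the output rather than assuming what the labels mean
clf = pipeline("text-classification", model="jgmagarino/tourist-comments-classifier")
print(clf("The beach was beautiful and the staff were very friendly."))  # example comment
```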
ekurtulus/cyberbullying_classifier
ekurtulus
2025-05-31T21:34:08Z
0
0
null
[ "tensorboard", "safetensors", "distilbert", "region:us" ]
null
2025-05-31T21:29:08Z
Example use:

```python
from transformers import pipeline

text = "This was a masterpiece. Not completely faithful to the books, but enthralling from beginning to end. Might be my favorite of the three."

classifier = pipeline("sentiment-analysis", model="ekurtulus/cyberbullying_classifier")
classifier(text)
# label=0 not bullying, label=1 bullying
```
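The pipeline also accepts a list of texts for batch scoring; a small self-contained sketch (the example sentences are illustrative):

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="ekurtulus/cyberbullying_classifier")
texts = [
    "Great game everyone, see you next week!",
    "Nobody wants you here, just quit already.",
]
print(classifier(texts))  # one {"label": ..., "score": ...} dict per input
```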
mradermacher/Qwen2-Audio-7B-Instruct-GGUF
mradermacher
2025-05-31T21:26:23Z
0
0
transformers
[ "transformers", "gguf", "chat", "audio", "audio-text-to-text", "en", "base_model:Qwen/Qwen2-Audio-7B-Instruct", "base_model:quantized:Qwen/Qwen2-Audio-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
audio-text-to-text
2025-05-31T16:43:30Z
---
base_model: Qwen/Qwen2-Audio-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chat
- audio
- audio-text-to-text
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct

<!-- provided-files -->

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.Q2_K.gguf) | Q2_K | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.Q6_K.gguf) | Q6_K | 6.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.f16.gguf) | f16 | 15.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
Abdualkader/MultiViewDiff
Abdualkader
2025-05-31T21:22:52Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:2312.02201", "license:apache-2.0", "region:us" ]
null
2025-05-31T13:26:09Z
---
license: apache-2.0
---

# ImageDream-diffusers Model Card

This is a port of https://huggingface.co/Peng-Wang/ImageDream into diffusers, using the ported weights from https://huggingface.co/ashawkey/imagedream-ipmv-diffuser.

In ashawkey's work, the UNet was not ported to diffusers. This work fully ports the model to diffusers, including the UNet, and separates the IP-Adapter-Plus from the UNet.

## Diffusers

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import make_image_grid
from PIL import Image

pipe = DiffusionPipeline.from_pretrained(
    "kiigii/imagedream-ipmv-diffusers",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
pipe.load_ip_adapter()
pipe.to("cuda")

prompt = ""  # no need to input a prompt
image = Image.open(...)  # path to your input image

mv_images = pipe(
    prompt=prompt,
    ip_adapter_image=image,
    guidance_scale=5,
    num_inference_steps=30,
    elevation=0,
    num_images_per_prompt=1,
).images
mv_grid = make_image_grid(mv_images[:4], 2, 2)
mv_grid.save("mv_image.png")
```

## Citation

```
@article{wang2023imagedream,
  title={ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation},
  author={Wang, Peng and Shi, Yichun},
  journal={arXiv preprint arXiv:2312.02201},
  year={2023}
}
```

## Misuse, Malicious Use, and Out-of-Scope Use

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
guydebruyn/InstructionFollowing_SFT_V2.5
guydebruyn
2025-05-31T21:20:57Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T21:20:28Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
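Given the repo tags (Qwen2, `text-generation`, `conversational`), a hedged sketch for a first interaction; the model's chat behavior and quality are undocumented in this card:

```python
from transformers import pipeline

# Chat-format input; recent transformers pipelines accept a list of role/content messages
chat = pipeline("text-generation", model="guydebruyn/InstructionFollowing_SFT_V2.5")
messages = [{"role": "user", "content": "Give me three tips for writing clear instructions."}]
print(chat(messages, max_new_tokens=128)[0]["generated_text"])
```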