Dataset schema:

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 138 |
| author | string (length) | 2 | 42 |
| last_modified | date | 2020-02-15 11:33:14 | 2025-04-03 12:28:27 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (411 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | date | 2022-03-02 23:29:04 | 2025-04-03 12:28:07 |
| card | string (length) | 11 | 1.01M |
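Rows following this schema can be handled as a plain dataframe. A minimal sketch with two illustrative rows taken from this dump (only a subset of columns is shown; the dataframe construction is an assumption, not part of the dataset's tooling):

```python
import pandas as pd

# Two illustrative rows following the schema above (values from this dump;
# only a subset of columns is shown).
rows = [
    {"modelId": "damgomz/ft_8_18e6_x8", "author": "damgomz",
     "downloads": 6, "likes": 0, "library_name": "transformers",
     "pipeline_tag": "text-classification"},
    {"modelId": "ying-zh/Reinforce-Pixelcopter-PLE-v0", "author": "ying-zh",
     "downloads": 0, "likes": 0, "library_name": None,
     "pipeline_tag": "reinforcement-learning"},
]
df = pd.DataFrame(rows)

# Count models per pipeline tag.
print(df["pipeline_tag"].value_counts().to_dict())
```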
damgomz/ft_8_18e6_x8
damgomz
"2024-07-13T07:17:44Z"
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-20T16:01:00Z"
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform.
---

## Environmental Impact (CODE CARBON DEFAULT)

| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 77352.77055954933 |
| Emissions (CO2eq in kg) | 0.0468073450853753 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9131904317042898 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.080574989101539 |
| Consumed energy (kWh) | 0.9937654208058287 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |

## Environmental Impact (for one core)

| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.14890408332713248 |
| Emissions (CO2eq in kg) | 0.03029650180249015 |

## Note

12 July 2024

## My Config

| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/fp_bs32_lr1e4_x8 |
| model_name | ft_8_18e6_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.8e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |

## Training and Testing steps

| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.697937 | 0.331425 |
| 1 | 0.263267 | 0.220389 | 0.904429 |
| 2 | 0.163043 | 0.210519 | 0.942222 |
| 3 | 0.101605 | 0.267899 | 0.921057 |
| 4 | 0.056810 | 0.332097 | 0.902358 |
| 5 | 0.036866 | 0.327206 | 0.931372 |
| 6 | 0.028792 | 0.383207 | 0.917303 |
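A quick arithmetic check on the CODE CARBON table above: with no GPU, the consumed energy is the sum of the CPU and RAM energy, and dividing emissions by consumed energy gives the implied grid carbon intensity (the variable names below are illustrative):

```python
# Values copied from the Environmental Impact table above.
cpu_kwh = 0.9131904317042898
ram_kwh = 0.080574989101539
consumed_kwh = 0.9937654208058287
emissions_kg = 0.0468073450853753

# With no GPU, consumed energy = CPU energy + RAM energy.
assert abs((cpu_kwh + ram_kwh) - consumed_kwh) < 1e-9

# Implied carbon intensity in gCO2eq/kWh (~47, plausible for the Swiss grid).
intensity = emissions_kg / consumed_kwh * 1000
print(round(intensity, 1))  # 47.1
```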
LarryAIDraw/KasumiMiwa003
LarryAIDraw
"2023-10-08T09:53:22Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-10-08T09:49:46Z"
---
license: creativeml-openrail-m
---

https://civitai.com/models/157587/kasumi-miwa-jujutsu-kaisen-lora
versae/stt_nn-NO_conformer_transducer_large
versae
"2022-11-07T17:57:43Z"
4
0
nemo
[ "nemo", "region:us" ]
null
"2022-11-07T17:51:45Z"
Colab → https://colab.research.google.com/drive/1ggqsd5tu6cKf22EiKckbUNTJOwMMqKAh?usp=sharing
farooqkhan2840503/gemma-Code-Instruct-Finetune-test
farooqkhan2840503
"2024-03-01T03:54:35Z"
115
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-01T03:50:46Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:** [More Information Needed]

**APA:** [More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
trl-lib/OpenHermes-2-Mistral-7B-sigmoid-beta-0.9-steps-200
trl-lib
"2023-12-20T14:55:17Z"
0
0
peft
[ "peft", "safetensors", "en", "arxiv:1910.09700", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B", "license:apache-2.0", "region:us" ]
null
"2023-12-20T14:54:52Z"
---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-sigmoid-beta-0.9-steps-200
  results: []
license: apache-2.0
language:
- en
---

# Model Card for Model ID

## Model Details

### Model Description

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:** [More Information Needed]

**APA:** [More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.7.1
Bharatdeep-H/stella_finetuned_en_dataset_to_mine_negatives_from
Bharatdeep-H
"2025-03-01T09:12:40Z"
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "new", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:164619", "loss:TripletLoss", "custom_code", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:NovaSearch/stella_en_400M_v5", "base_model:finetune:NovaSearch/stella_en_400M_v5", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2025-03-01T09:09:51Z"
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:164619
- loss:TripletLoss
base_model: NovaSearch/stella_en_400M_v5
widget:
- source_sentence: 'Casado''s house bookshelf is so authentic Like your college degrees. TURNER amazon.es Now Contil Hello Choose your address All Best Sellers Amazon Basics Deals Latest News ELECTRONICS Mobile phones and telephony Photography and video camera Electronics Photography and camcorders Accessories Photo studio [USER] UNDE Audio and H TV, video and Home Cinema photographic backgrounds Funds library for suckers that they want to pretend in a video call Visit the Store Price: €19.99 Y Of 586 FREE returns'
  sentences:
  - Quotes show Democrats supported riots “when BLM was BURNING down cities and killing people in the streets!”
  - This is how C5N lies with fake news fakenews cases of COVID Plaza de Mayo Argerich
  - Pablo Casado has a fake library to appear on video calls
- source_sentence: 'Kendall Jenner. #BlackLivesMatter BLACK LIVES MATTER 79 GRID WILD'
  sentences:
  - Photo shows basketball legend Kobe Bryant’s body
  - Kendall Jenner posted a photoshopped picture holding a "Black Lives Matter" sign
  - Video shows the arrest of US military officer by Russian forces in 2022.
- source_sentence: 4 severe level one, fatigue Headache loss of smell -Cough Fever - hoarseness Chest pain -Fatigue
  sentences:
  - THE INGREDIENTS OF THE VACCINES REVEALED
  - The CROWN VIRUS from Wuhan. can be cured* with a bowl of freshly boiled garlic water
  - There are 6 "types" of COVID-19
- source_sentence: 'HUEN Airport Entebbe International Airport (IATA: EBB, ICAO: HUEN) is the principal international airport of Uganda. ... It is the only international airport of Uganda. Built: 1972-1973 (main terminal building) Location: Entebbe, Uganda Hub for: Eagle Air; Uganda Airlines'
  sentences:
  - Uganda’s new police spokesman shoots catapult at journalist
  - Uganda’s Entebbe Airport changes name to HUEN Airport
  - Tweets from the Israeli prime minister’s official Twitter account show the country was responsible for the Beirut explosion
- source_sentence: 'day I was acquitted 12/12/12 i hocus45th GP SERVICES USA CDC CENTERS FOR DISH CONTROL AND P EXCLUSIVE: Per the CDC There Are Nearly Twice As Many Vaccine Related Deaths SO FAR in 2021 (1,755) Than All the Vaccine Deaths this Past Decade (994) For information about vaccines. visit who.int.'
  sentences:
  - New Zealand PM links booster dose to six months of freedom
  - Side effects of the first published vaccine According to Pfizer documents, 1,200 deaths.
  - “Thousands of COVID Vaccine Injuries and 13 U.S. Deaths Reported in December Alone”; “In December, 3,916 COVID vaccine-related adverse events, including 13 deaths, were reported to VAERS”
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on NovaSearch/stella_en_400M_v5
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy
      value: 0.9713522050783623
      name: Cosine Accuracy
---

# SentenceTransformer based on NovaSearch/stella_en_400M_v5

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5) on the csv dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5) <!-- at revision 32b4baf84d02a1b1beb2df8952e875232e8ebe1d -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
  - csv

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 1024, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Bharatdeep-H/stella_finetuned_en_dataset_to_mine_negatives_from")

# Run inference
sentences = [
    'day I was acquitted 12/12/12 i hocus45th GP SERVICES USA CDC CENTERS FOR DISH CONTROL AND P EXCLUSIVE: Per the CDC There Are Nearly Twice As Many Vaccine Related Deaths SO FAR in 2021 (1,755) Than All the Vaccine Deaths this Past Decade (994) For information about vaccines. visit who.int.',
    '“Thousands of COVID Vaccine Injuries and 13 U.S. Deaths Reported in December Alone”; “In December, 3,916 COVID vaccine-related adverse events, including 13 deaths, were reported to VAERS”',
    'Side effects of the first published vaccine According to Pfizer documents, 1,200 deaths.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```

## Evaluation

### Metrics

#### Triplet

* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)

| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9714** |

## Training Details

### Training Dataset

#### csv

* Dataset: csv
* Size: 164,619 training samples
* Columns: <code>query</code>, <code>answer</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | answer | negative |
  |:--------|:------|:-------|:---------|
  | type    | string | string | string |
  | details | <ul><li>min: 5 tokens</li><li>mean: 83.66 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 19.17 tokens</li><li>max: 141 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 18.69 tokens</li><li>max: 120 tokens</li></ul> |
* Samples:
  | query | answer | negative |
  |:------|:-------|:---------|
  | <code>YEAST EXTRACK C DDees.co NO AFTER 40 YEARS OF RESEARCH, THERE IS A VACCINE AGAINST AIDS AFTER 76 YEARS OF RESEARCH, NO THERE IS A VACCINE AGAINST INFLUENZA AFTER 100 YEARS OF RESEARCH, NO THERE IS A VACCINE AGAINST CANCER BUT AFTER ONLY 6 MONTHS THERE IS A VACCINE AGAINST A "VIRUS" THAT APPEARED "FROM SUDDEN" AND THOSE WHO</code> | <code>After years of research there is no vaccine against AIDS, cancer and influenza, but in 6 months there is a vaccine against covid-19</code> | <code>Chronology of decline in vaccine effectiveness shows that immunizers are not effective</code> |
  | <code>Donald J. Trump [USER] Trump We have reports of NIGERIA supporting Iran ACTIVELY and PASSIVELY....just got of the phone with secretary Pompeo....the USA MUST and WILL respond to terrorist nations IMMEDIATELY! 12:11 AM - 08 Jan 2020 244,391 Retweets 319,284 Likes 167K [244K Follow 319K</code> | <code>US President Donald Trump threatens to attack Nigeria</code> | <code>Vice President of the United States Mike Pence unfollowed President Donald Trump on Twitter and put a picture of Joe Biden and Kamala Harris on the profile cover</code> |
  | <code>4 k 40 years worth of research... *no vaccine for HIV cancer MA At least 100 years of research...no vaccine for Ongoing research... no vaccine for the common cold Less than a year for a Covid vaccine? I think I'll pass on that shot!! PIC COLLAGE</code> | <code>Vaccines for HIV, cold and cancer should deter you from getting the Covid-19 vaccine</code> | <code>Anticovid vaccines cause deterioration of the immune system and AIDS</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
  ```json
  {
      "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
      "triplet_margin": 5
  }
  ```

### Evaluation Dataset

#### csv

* Dataset: csv
* Size: 164,619 evaluation samples
* Columns: <code>query</code>, <code>answer</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | answer | negative |
  |:--------|:------|:-------|:---------|
  | type    | string | string | string |
  | details | <ul><li>min: 5 tokens</li><li>mean: 73.2 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 19.27 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 19.3 tokens</li><li>max: 179 tokens</li></ul> |
* Samples:
  | query | answer | negative |
  |:------|:-------|:---------|
  | <code>--- Nora Al Riyadh Tik Tok Replying to [USER]</code> | <code>A restaurant in Riyadh in one of the malls, the opening of the meal is free, of course, Corona in the farthest corner is surprising</code> | <code>Video of Rakhi Sawant wearing a hijab to support protestors in Karnataka</code> |
  | <code>SQUID FOR BRAZIL BELTER SALT Corumbau pasil Milk</code> | <code>Lula expelled from the city of Itanagrà in Bahia in May 2021. The Army had to provide security.</code> | <code>All workers in Gardenia Philippines bread factory COVID-19 positive in July 2020</code> |
  | <code>I just ran out of words William Barr, Attorney General of the America literally most important person of all the American court system just publicly denounced that there has been electoral fraud 2rad10 TM [USER].6h US Attorney General William Barr denounces Vote-by-mail fraud. OM BLITZER [USER] THE WITH THE WITH SITUATION WOLFOOTION WOLF S BLITZE ROOM OUTZER TH HE WITH DERNIE CNN EXCLUSIVE Jimmy Carter & James Baker WOLF ONE-ON-ONE WITH ATTORNEY GENERAL WILLIAM BARR CAN WELL DEEST</code> | <code>The US attorney general denounces that there has been electoral fraud</code> | <code>Hillary Clinton appeared before the US justice on June 2, 2020</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
  ```json
  {
      "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
      "triplet_margin": 5
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 2
- `learning_rate`: 3e-05
- `max_steps`: 4000
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.2
- `bf16`: True
- `batch_sampler`: no_duplicates

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3.0
- `max_steps`: 4000
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs

| Epoch | Step | Training Loss | Validation Loss | cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:---------------:|
| 0.0016 | 100 | 1.6958 | - | - |
| 0.0032 | 200 | 1.3647 | - | - |
| 0.0049 | 300 | 1.1698 | - | - |
| 0.0065 | 400 | 0.8551 | - | - |
| 0.0081 | 500 | 0.8275 | - | - |
| 0.0097 | 600 | 0.8878 | - | - |
| 0.0113 | 700 | 0.9717 | - | - |
| 0.0130 | 800 | 1.0219 | - | - |
| 0.0146 | 900 | 0.9074 | - | - |
| 0.0162 | 1000 | 0.903 | 0.8201 | 0.9452 |
| 0.0178 | 1100 | 0.9236 | - | - |
| 0.0194 | 1200 | 0.7935 | - | - |
| 0.0211 | 1300 | 1.0483 | - | - |
| 0.0227 | 1400 | 1.0878 | - | - |
| 0.0243 | 1500 | 0.9258 | - | - |
| 0.0259 | 1600 | 1.011 | - | - |
| 0.0275 | 1700 | 0.7785 | - | - |
| 0.0292 | 1800 | 0.7643 | - | - |
| 0.0308 | 1900 | 0.9918 | - | - |
| 0.0324 | 2000 | 0.7941 | 0.7678 | 0.9387 |
| 0.0340 | 2100 | 1.106 | - | - |
| 0.0356 | 2200 | 0.7571 | - | - |
| 0.0373 | 2300 | 0.6687 | - | - |
| 0.0389 | 2400 | 0.6914 | - | - |
| 0.0405 | 2500 | 0.5925 | - | - |
| 0.0421 | 2600 | 0.8085 | - | - |
| 0.0437 | 2700 | 0.5775 | - | - |
| 0.0454 | 2800 | 0.5051 | - | - |
| 0.0470 | 2900 | 0.6894 | - | - |
| 0.0486 | 3000 | 0.4202 | 0.4875 | 0.9667 |
| 0.0502 | 3100 | 0.4704 | - | - |
| 0.0518 | 3200 | 0.4511 | - | - |
| 0.0535 | 3300 | 0.3991 | - | - |
| 0.0551 | 3400 | 0.4166 | - | - |
| 0.0567 | 3500 | 0.3402 | - | - |
| 0.0583 | 3600 | 0.6621 | - | - |
| 0.0599 | 3700 | 0.5999 | - | - |
| 0.0616 | 3800 | 0.443 | - | - |
| 0.0632 | 3900 | 0.6503 | - | - |
| 0.0648 | 4000 | 0.42 | 0.4156 | 0.9714 |

### Framework Versions

- Python: 3.10.16
- Sentence Transformers: 3.3.1
- Transformers: 4.49.0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### TripletLoss

```bibtex
@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
ying-zh/Reinforce-Pixelcopter-PLE-v0
ying-zh
"2023-05-24T18:27:51Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-05-23T14:37:35Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 32.10 +/- 22.58 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
google/metricx-23-qe-xxl-v2p0
google
"2025-01-07T21:10:24Z"
945
6
transformers
[ "transformers", "pytorch", "mt5", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-02-07T16:34:57Z"
--- license: apache-2.0 --- # MetricX-23 *This is not an officially supported Google product.* **GitHub repository: [https://github.com/google-research/metricx](https://github.com/google-research/metricx)** This repository contains the MetricX-23 models, a family of models for automatic evaluation of translations that were proposed in the WMT'23 Metrics Shared Task submission [MetricX-23: The Google Submission to the WMT 2023 Metrics Shared Task](https://aclanthology.org/2023.wmt-1.63/). The models were trained in [T5X](https://github.com/google-research/t5x) and then converted for use in PyTorch. ## Available Models There are 6 models available on HuggingFace that vary in the number of parameters and whether or not the model is reference-based or reference-free (also known as quality estimation, or QE): * [MetricX-23-XXL](https://huggingface.co/google/metricx-23-xxl-v2p0) * [MetricX-23-XL](https://huggingface.co/google/metricx-23-xl-v2p0) * [MetricX-23-Large](https://huggingface.co/google/metricx-23-large-v2p0) * [MetricX-23-QE-XXL](https://huggingface.co/google/metricx-23-qe-xxl-v2p0) * [MetricX-23-QE-XL](https://huggingface.co/google/metricx-23-qe-xl-v2p0) * [MetricX-23-QE-Large](https://huggingface.co/google/metricx-23-qe-large-v2p0) We recommend using the XXL model versions for the best agreement with human judgments of translation quality, the Large versions for best speed, and the XL for an intermediate use case. ## Changes to the WMT'23 Submission These models available here are most similar to the primary submission to the WMT'23 Metrics Shared Task. They are initialized with [mT5](https://aclanthology.org/2021.naacl-main.41/) then fine-tuned on a combination of direct assessment and MQM data. However, we made some changes that make these models different from the WMT'23 submissions. First, the models are trained to regress the actual MQM score rather than a normalized score between 0 and 1. 
**That means the output from the MetricX-23 models is a score in the range [0, 25] where lower is better (i.e., it predicts an error score).** Second, these models were trained with a larger variety of synthetic data that makes them more robust to translation edge cases like over- and undertranslation, described in more detail in the following section. ### Synthetic Data In order for our MetricX models to learn to identify certain types of bad translations that are not sufficiently (or at all) represented in the regular training data, we created synthetic examples and mixed them in during training. The synthetic training data was generated from the DA datasets ranging from WMT15 to WMT21 (~ 43 language pairs). In most cases, the synthetic examples have the candidate translation manipulated so as to turn it into a bad translation with a specific issue commonly unrecognized by learned metrics. The table below provides an overview of the various failure modes that we considered, including brief descriptions of how we prepared the synthetic data to address them. | Failure mode | Synthetic example description | | ----------- | ----------- | | Undertranslation | Candidate translation with an arbitrary sentence removed (if multi-sentence); alternatively, candidate with a certain proportion of words removed from the end. | | Overtranslation | Candidate translation duplicated (with space in between). | | Fluent but unrelated translation | Arbitrary reference of a similar length from the dataset. | | Gibberish | Text of a similar length as the reference, generated by sampling words from the reference translation vocabulary (built from all references in the data). | | Missing punctuation | Reference translation with the end punctuation removed (11 punctuation symbols considered). | | Latin instead of Chinese/Japanese or Hindi/Bengali punctuation | Candidate translation with the language-specific punctuation symbol at the end replaced with the Latin equivalent (e.g., "." 
instead of "。" or "।"); alternatively, the punctuation symbol is replaced with the Latin equivalent in the reference, keeping the correct one in the candidate. | | Reference-matching translation | Reference translation copied as the candidate translation (unlike the rest of the synthetic data, these examples are meant to train the metric to predict a perfect score for candidates matching the reference). | Examples from the first 4 categories were assigned a label corresponding to the worst score on the given rating scale (e.g., 25 when mixed with MQM training data), whereas the reference-matching translation examples are assigned the best score (e.g., 0 when used with MQM data). The missing/incorrect punctuation examples were labeled with a score slightly worse than perfect. Note that some of the synthetic datasets are only meaningful in the reference-based scenario, and we thus excluded them when training a QE variant of MetricX. These are the Latin-vs-special punctuation and the reference-matching translation examples. Most of the synthetic training sets were created using stratified sampling across target languages, taking 500 examples per target language. One exception is the missing punctuation set, which used a stratified sample across different punctuation symbols instead. When training MetricX, a small proportion of the synthetic examples was mixed with the regular training examples. During the first-stage fine-tuning on DA data, each synthetic training set constituted between 0.1% and 1% of all training examples, whereas in the second-stage fine-tuning on MQM data we used an even smaller proportion, around 0.05%. As for evaluating the effect of the synthetic training data on the model's performance, the DEMETR challenge set - which we originally used to evaluate the models submitted to the WMT23 Metrics Shared Task - was not adequate anymore. 
We therefore created a new DEMETR-style test set based on the WMT22 DA data, with examples constructed analogously to the synthetic training examples, as described above. This test set helped us determine the right proportions of synthetic data for fine-tuning in order to make MetricX robust for the failure modes in consideration, without sacrificing the system- and segment-level correlations with human ratings. ## Usage The code for using MetricX models can be found at [https://github.com/google-research/metricx](https://github.com/google-research/metricx). The repository contains example prediction scripts, described below. The `metricx23/predict.py` script contains an example for how to run inference on the models. ### Reference-Based Example usage for a reference-based model: ```bash python -m metricx23.predict \ --tokenizer google/mt5-xl \ --model_name_or_path google/metricx-23-xl-v2p0 \ --max_input_length 1024 \ --batch_size 1 \ --input_file input.jsonl \ --output_file output.jsonl ``` `input.jsonl` is expected to have 1 serialized JSON object per line with `"reference"` and `"hypothesis"` fields. The output jsonl will be parallel to `input.jsonl` but additionally contain a `"prediction"` field with the predicted score. Note that the model was trained with a maximum input length of 1024 tokens, so significantly increasing that value may lead to unpredictable behavior. ### Reference-Free Example usage for a reference-free model: ```bash python -m metricx23.predict \ --tokenizer google/mt5-xl \ --model_name_or_path google/metricx-23-qe-xl-v2p0 \ --max_input_length 1024 \ --batch_size 1 \ --input_file input.jsonl \ --output_file output.jsonl \ --qe ``` `input.jsonl` is expected to have 1 serialized JSON object per line with `"source"` and `"hypothesis"` fields. The output jsonl will be parallel to `input.jsonl` but additionally contain a `"prediction"` field with the predicted score. 
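The `input.jsonl` format described above can be produced with a few lines of Python. This is a minimal sketch for the reference-free (QE) case; the example sentences and the file name are illustrative, not part of the MetricX repository:

```python
import json

# Illustrative records for the reference-free (QE) input format:
# one JSON object per line with "source" and "hypothesis" fields.
records = [
    {"source": "Das ist ein Test.", "hypothesis": "This is a test."},
    {"source": "Guten Morgen.", "hypothesis": "Good morning."},
]

# Write one serialized JSON object per line (JSON Lines).
with open("input.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Read the file back to confirm the one-object-per-line layout.
with open("input.jsonl", encoding="utf-8") as f:
    parsed = [json.loads(line) for line in f]
```

For the reference-based models, each record would carry a `"reference"` field in place of `"source"`.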
## Meta-Evaluation The `metricx23/evaluate.py` script contains code to calculate various correlations between the MetricX-23 scores and MQM ratings of translation quality using the [MT Metrics Eval](https://github.com/google-research/mt-metrics-eval) library. Example usage: ```bash python -m metricx23.evaluate \ --dataset wmt22 \ --lp en-de \ --input_file input.jsonl \ --output_file output.json ``` `input.jsonl` is expected to have one JSON object serialized per line. Each JSON object is expected to contain 4 fields: * `"system_id"`: The name of the system that generated the translation. * `"segment_id"`: The 0-based index of the corresponding segment in the MT Metrics Eval data. * `"label"`: The ground-truth translation quality score (with higher is better). * `"prediction"`: The model predicted translation quality score (with lower is better; the script negates the scores so higher is better). The script will calculate the 4 agreement/correlations that were used in the WMT'23 Shared Task. 
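As a sketch of the meta-evaluation input, a single record with the four fields described above might be serialized like this. All field values here are made up for illustration:

```python
import json

# Hypothetical meta-evaluation record; values are illustrative only.
record = {
    "system_id": "online-A",  # name of the system that produced the translation
    "segment_id": 0,          # 0-based index into the MT Metrics Eval data
    "label": 95.0,            # ground-truth quality score (higher is better)
    "prediction": 1.2,        # MetricX error score (lower is better)
}

# One such line per segment goes into the evaluation script's input file.
line = json.dumps(record)
round_trip = json.loads(line)
```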
Below are the results for the MetricX-23 models on the WMT'22 Metrics Shared Task data: English-German: | Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc | | ----------- | ----------- | ----------- | ----------- | ----------- | | MetricX-23-XXL | 0.795 | 0.835 | 0.546 | 0.619 | | MetricX-23-XL | 0.756 | 0.813 | 0.540 | 0.605 | | MetricX-23-Large | 0.769 | 0.759 | 0.507 | 0.595 | | MetricX-23-QE-XXL | 0.769 | 0.830 | 0.490 | 0.606 | | MetricX-23-QE-XL | 0.718 | 0.684 | 0.421 | 0.594 | | MetricX-23-QE-Large | 0.744 | 0.671 | 0.387 | 0.579 | English-Russian: | Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc | | ----------- | ----------- | ----------- | ----------- | ----------- | | MetricX-23-XXL | 0.905 | 0.943 | 0.477 | 0.609 | | MetricX-23-XL | 0.876 | 0.906 | 0.498 | 0.589 | | MetricX-23-Large | 0.876 | 0.841 | 0.474 | 0.569 | | MetricX-23-QE-XXL | 0.895 | 0.940 | 0.470 | 0.602 | | MetricX-23-QE-XL | 0.848 | 0.861 | 0.415 | 0.570 | | MetricX-23-QE-Large | 0.819 | 0.778 | 0.411 | 0.551 | Chinese-English: | Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc | | ----------- | ----------- | ----------- | ----------- | ----------- | | MetricX-23-XXL | 0.868 | 0.919 | 0.605 | 0.551 | | MetricX-23-XL | 0.868 | 0.924 | 0.584 | 0.543 | | MetricX-23-Large | 0.857 | 0.919 | 0.555 | 0.539 | | MetricX-23-QE-XXL | 0.857 | 0.928 | 0.573 | 0.544 | | MetricX-23-QE-XL | 0.802 | 0.879 | 0.546 | 0.529 | | MetricX-23-QE-Large | 0.758 | 0.904 | 0.522 | 0.529 | The `metricx23/evaluate_wmt23.py` script re-calculates the average correlation score that was used to rank submissions from the [WMT'23 Shared Task](https://www2.statmt.org/wmt23/pdf/2023.wmt-1.51.pdf). 
Example usage: ```bash python -m metricx23.evaluate_wmt23 \ --en_de predictions_ende.jsonl \ --he_en predictions_heen.jsonl \ --zh_en predictions_zhen.jsonl \ --output_file output.json ``` Each of the 3 input files is expected to be in the same format as described above. Each file should correspond to running inference on each of the language pairs from the WMT'23 dataset. The results for each of the models are the following: | Model | Average Correlation | | ----------- | ----------- | | MetricX-23-XXL | 0.812 | | MetricX-23-XL | 0.813 | | MetricX-23-Large | 0.794 | | MetricX-23-QE-XXL | 0.797 | | MetricX-23-QE-XL | 0.767 | | MetricX-23-QE-Large | 0.762 | ## Citation If you use MetricX-23 in your research, please cite the following publication: ```bibtex @inproceedings{juraska-etal-2023-metricx, title = {{MetricX-23: The Google Submission to the WMT 2023 Metrics Shared Task}}, author = "Juraska, Juraj and Finkelstein, Mara and Deutsch, Daniel and Siddhant, Aditya and Mirzazadeh, Mehdi and Freitag, Markus", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.63", doi = "10.18653/v1/2023.wmt-1.63", pages = "756--767", } ```
robiulawaldev/758c31a0-f3c8-433d-9a8f-82c05f8afe75
robiulawaldev
"2025-03-01T05:01:32Z"
0
0
peft
[ "peft", "generated_from_trainer", "base_model:unsloth/codegemma-7b", "base_model:adapter:unsloth/codegemma-7b", "region:us" ]
null
"2025-03-01T05:01:15Z"
--- library_name: peft tags: - generated_from_trainer base_model: unsloth/codegemma-7b model-index: - name: robiulawaldev/758c31a0-f3c8-433d-9a8f-82c05f8afe75 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robiulawaldev/758c31a0-f3c8-433d-9a8f-82c05f8afe75 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0172 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
LucasMagnana/Pictalk_distil
LucasMagnana
"2024-04-19T18:35:09Z"
14
0
transformers
[ "transformers", "safetensors", "distilbert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-01-25T11:45:06Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aidonuts/ancient-disco-31-ep1
aidonuts
"2024-02-28T02:44:45Z"
92
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-28T02:42:16Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GrennKren/Arcee-Blitz-4bit
GrennKren
"2025-02-21T04:51:20Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2025-02-21T04:47:58Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shidowake/240402-Swal-MS-7b-CVec-co0.5-mist-inst-v0.1-co0.5-Hermes-2-Pro-co0.5-openchat_3.5
shidowake
"2024-04-02T14:10:23Z"
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-02T14:04:16Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jiayueyuan/filter-class
jiayueyuan
"2023-11-10T07:32:00Z"
0
1
null
[ "biology", "zh", "license:apache-2.0", "region:us" ]
null
"2023-11-10T07:30:19Z"
--- license: apache-2.0 language: - zh tags: - biology ---
TTNVXX/BokehOrNot
TTNVXX
"2024-03-05T11:52:00Z"
7
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain", "dataset:BokehOrNot/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-03-05T11:51:28Z"
--- tags: - autotrain - image-classification widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace datasets: - BokehOrNot/autotrain-data --- # Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics
- loss: 0.3941328525543213
- f1_macro: 0.8130457113507962
- f1_micro: 0.8355263157894737
- f1_weighted: 0.8288865461033169
- precision_macro: 0.8533012943450432
- precision_micro: 0.8355263157894737
- precision_weighted: 0.8434833671575431
- recall_macro: 0.8000841750841751
- recall_micro: 0.8355263157894737
- recall_weighted: 0.8355263157894737
- accuracy: 0.8355263157894737
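A side note on the metrics above: in single-label multi-class evaluation, micro-averaged precision, recall, and F1 all collapse to plain accuracy, which is why `f1_micro`, `precision_micro`, `recall_micro`, and `accuracy` all report the same 0.8355… value. A small self-contained sketch with toy labels (not the model's real predictions) illustrating the identity:

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def micro_f1(y_true, y_pred):
    # Micro-averaging pools TP/FP/FN across classes. In single-label
    # classification every error counts once as a false positive (for the
    # predicted class) and once as a false negative (for the true class),
    # so micro precision == micro recall == micro F1 == accuracy.
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = sum(t != p for t, p in zip(y_true, y_pred))
    fn = fp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = ["bokeh", "bokeh", "not_bokeh", "not_bokeh", "bokeh", "not_bokeh"]
y_pred = ["bokeh", "not_bokeh", "not_bokeh", "not_bokeh", "bokeh", "bokeh"]
print(accuracy(y_true, y_pred))
print(micro_f1(y_true, y_pred))  # same value as accuracy
```

The macro and weighted variants differ because they average per-class F1 scores (unweighted, or weighted by class support).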
Sunbird/translate-nllb-1.3b-salt
Sunbird
"2024-11-06T23:01:34Z"
5,450
0
transformers
[ "transformers", "tensorboard", "safetensors", "m2m_100", "text2text-generation", "dataset:Sunbird/salt", "base_model:facebook/nllb-200-1.3B", "base_model:finetune:facebook/nllb-200-1.3B", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-04-25T16:43:55Z"
--- base_model: facebook/nllb-200-1.3B model-index: - name: translate-nllb-1.3b-salt results: [] datasets: - Sunbird/salt --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Model details This machine translation model can convert single sentences from and to any combination of the following languages: | ISO 639-3 | Language name | | --- | --- | | eng | English | | ach | Acholi | | lgg | Lugbara | | lug | Luganda | | nyn | Runyankole | | teo | Ateso | It was trained on the [SALT](http://huggingface.co/datasets/Sunbird/salt) dataset and a variety of additional external data resources, including back-translated news articles, FLORES-200, MT560 and LAFAND-MT. The base model was [facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B), with tokens adapted to add support for languages not originally included. # Usage example ```python import torch import transformers tokenizer = transformers.NllbTokenizer.from_pretrained( 'Sunbird/translate-nllb-1.3b-salt') model = transformers.M2M100ForConditionalGeneration.from_pretrained( 'Sunbird/translate-nllb-1.3b-salt') text = 'Where is the hospital?' source_language = 'eng' target_language = 'lug' language_tokens = { 'eng': 256047, 'ach': 256111, 'lgg': 256008, 'lug': 256110, 'nyn': 256002, 'teo': 256006, } device = torch.device("cuda" if torch.cuda.is_available() else "cpu") inputs = tokenizer(text, return_tensors="pt").to(device) inputs['input_ids'][0][0] = language_tokens[source_language] translated_tokens = model.to(device).generate( **inputs, forced_bos_token_id=language_tokens[target_language], max_length=100, num_beams=5, ) result = tokenizer.batch_decode( translated_tokens, skip_special_tokens=True)[0] # Eddwaliro liri ludda wa? 
``` # Evaluation metrics Results on salt-dev: | Source language | Target language | BLEU | | --- | --- | --- | | ach | eng | 28.371 | | lgg | eng | 30.45 | | lug | eng | 41.978 | | nyn | eng | 32.296 | | teo | eng | 30.422 | | eng | ach | 20.972 | | eng | lgg | 22.362 | | eng | lug | 30.359 | | eng | nyn | 15.305 | | eng | teo | 21.391 |
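The scores above are standard corpus-level BLEU (the card does not state which toolkit was used; sacrebleu is the usual choice). As a reference for what the metric measures, here is a simplified, self-contained sentence-BLEU sketch — clipped n-gram precision with add-one smoothing and a brevity penalty — not a substitute for a real evaluation toolkit:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, hypothesis, max_n=4):
    # Geometric mean of clipped n-gram precisions times a brevity penalty.
    ref, hyp = reference.split(), hypothesis.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        overlap = sum((ngrams(hyp, n) & ngrams(ref, n)).values())  # clipped matches
        total = sum(ngrams(hyp, n).values())
        # Add-one smoothing so a single empty n-gram order does not zero the score.
        log_precisions.append(math.log((overlap + 1) / (total + 1)))
    brevity_penalty = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return brevity_penalty * math.exp(sum(log_precisions) / max_n)

print(sentence_bleu("eddwaliro liri ludda wa", "eddwaliro liri ludda wa"))  # 1.0
print(sentence_bleu("eddwaliro liri ludda wa", "eddwaliro liri wa"))
```

Real BLEU implementations additionally handle multiple references and toolkit-specific tokenization, which is why reported numbers are only comparable when the toolkit and its settings match.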
jvadlamudi2/convnext-tiny-224-jvadlamudi2
jvadlamudi2
"2023-07-24T18:05:38Z"
193
0
transformers
[ "transformers", "pytorch", "tensorboard", "convnext", "image-classification", "generated_from_trainer", "base_model:facebook/convnext-tiny-224", "base_model:finetune:facebook/convnext-tiny-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-07-24T17:51:37Z"
--- license: apache-2.0 base_model: facebook/convnext-tiny-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: convnext-tiny-224-jvadlamudi2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-224-jvadlamudi2 This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5780 - Accuracy: 0.7946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 7 | 0.5882 | 0.8036 | | 0.6213 | 2.0 | 14 | 0.5821 | 0.7857 | | 0.6123 | 3.0 | 21 | 0.5780 | 0.7946 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.0 - Tokenizers 0.13.3
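The hyperparameters above pair a per-device batch of 32 with 4 gradient-accumulation steps for an effective batch of 128. Since the training loss is a mean over examples, averaging the mean gradients of equal-sized micro-batches reproduces the full-batch gradient exactly; a self-contained sketch with a toy one-parameter squared-error model (not the ConvNeXt itself):

```python
def grad_one(w, x, y):
    # d/dw of the per-example squared error (w * x - y) ** 2
    return 2 * x * (w * x - y)

def full_batch_grad(w, xs, ys):
    return sum(grad_one(w, x, y) for x, y in zip(xs, ys)) / len(xs)

def accumulated_grad(w, xs, ys, micro_batch):
    # Gradient accumulation: average the mean gradient of each micro-batch.
    # With equal-sized micro-batches this equals the full-batch mean gradient.
    micro_grads = []
    for i in range(0, len(xs), micro_batch):
        cx, cy = xs[i:i + micro_batch], ys[i:i + micro_batch]
        micro_grads.append(sum(grad_one(w, x, y) for x, y in zip(cx, cy)) / len(cx))
    return sum(micro_grads) / len(micro_grads)

xs = [0.5, -1.0, 2.0, 0.0, 1.5, -0.5, 3.0, 1.0]
ys = [1.0, 0.0, 2.0, 0.5, 1.0, -1.0, 4.0, 2.0]
w = 0.3
print(full_batch_grad(w, xs, ys))
print(accumulated_grad(w, xs, ys, micro_batch=2))  # same value up to float rounding
```

This is why accumulation lets a small GPU emulate a large batch: only the optimizer step is deferred, at the cost of proportionally fewer parameter updates per epoch.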
r1char9/rubert-tiny2-ru-go-emotions
r1char9
"2024-06-14T06:58:31Z"
110
2
transformers
[ "transformers", "pytorch", "bert", "text-classification", "sentiment-analysis", "multi-label-classification", "sentiment analysis", "rubert", "sentiment", "tiny", "russian", "multilabel", "classification", "emotion-classification", "emotion-recognition", "emotion", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-02-13T16:58:39Z"
--- license: mit language: - ru pipeline_tag: text-classification tags: - sentiment-analysis - multi-label-classification - sentiment analysis - rubert - sentiment - bert - tiny - russian - multilabel - classification - emotion-classification - emotion-recognition - emotion --- The [RuBERT-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) model was fine-tuned for __emotion classification__ of __Russian__ text. It performs __multi-label classification__ with the following categories: ```yaml 0: admiration 1: amusement 2: anger 3: annoyance 4: approval 5: caring 6: confusion 7: curiosity 8: desire 9: disappointment 10: disapproval 11: disgust 12: embarrassment 13: excitement 14: fear 15: gratitude 16: grief 17: joy 18: love 19: nervousness 20: optimism 21: pride 22: realization 23: relief 24: remorse 25: sadness 26: surprise 27: neutral ``` The category names in Russian: ```yaml admiration: восхищение amusement: веселье anger: злость annoyance: раздражение approval: одобрение caring: забота confusion: непонимание curiosity: любопытство desire: желание disappointment: разочарование disapproval: неодобрение disgust: отвращение embarrassment: смущение excitement: возбуждение fear: страх gratitude: признательность grief: горе joy: радость love: любовь nervousness: нервозность optimism: оптимизм pride: гордость realization: осознание relief: облегчение remorse: раскаяние sadness: грусть surprise: удивление neutral: нейтральность ``` ## Usage ```python from transformers import pipeline model = pipeline(model="r1char9/rubert-tiny2-ru-go-emotions") model("Привет, ты мне нравишься!") # [{'label': 'love', 'score': 0.5955629944801331}] ```
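Since the head is multi-label, scores come from an independent sigmoid per label rather than a softmax, so several emotions can exceed a decision threshold at once. A self-contained sketch of that post-processing with toy logits and a hypothetical 0.5 threshold (not the model's real outputs — use the pipeline above for actual inference):

```python
import math

LABELS = ["admiration", "amusement", "anger", "annoyance", "approval",
          "caring", "confusion", "curiosity", "desire", "disappointment",
          "disapproval", "disgust", "embarrassment", "excitement", "fear",
          "gratitude", "grief", "joy", "love", "nervousness", "optimism",
          "pride", "realization", "relief", "remorse", "sadness",
          "surprise", "neutral"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode(logits, threshold=0.5):
    # Independent sigmoid per label; keep every label above the threshold.
    scores = [sigmoid(z) for z in logits]
    return [(LABELS[i], round(s, 3)) for i, s in enumerate(scores) if s >= threshold]

# Toy logits: high on "joy" (index 17) and "love" (index 18), low elsewhere.
logits = [-4.0] * 28
logits[17] = 0.5
logits[18] = 2.0
print(decode(logits))  # [('joy', 0.622), ('love', 0.881)]
```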
Triangle104/ADELIE-DPO-Q6_K-GGUF
Triangle104
"2024-11-24T22:34:32Z"
7
1
null
[ "gguf", "text-generation-inference", "Information Extraction", "IE", "Named Entity Recogniton", "Event Extraction", "Relation Extraction", "LLaMA", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:ACE05", "dataset:conll2003", "dataset:conll2012_ontonotesv5", "dataset:rams", "dataset:tacred", "dataset:fewrel", "dataset:maven", "base_model:THU-KEG/ADELIE-DPO", "base_model:quantized:THU-KEG/ADELIE-DPO", "license:llama2", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-24T22:33:38Z"
--- license: llama2 datasets: - ACE05 - conll2003 - conll2012_ontonotesv5 - rams - tacred - fewrel - maven language: - en metrics: - f1 pipeline_tag: text-generation tags: - text-generation-inference - Information Extraction - IE - Named Entity Recogniton - Event Extraction - Relation Extraction - LLaMA - llama-cpp - gguf-my-repo base_model: THU-KEG/ADELIE-DPO --- # Triangle104/ADELIE-DPO-Q6_K-GGUF This model was converted to GGUF format from [`THU-KEG/ADELIE-DPO`](https://huggingface.co/THU-KEG/ADELIE-DPO) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/THU-KEG/ADELIE-DPO) for more details on the model. --- Model details: - We introduce ADELIE (Aligning large language moDELs on Information Extraction), an aligned LLM that effectively solves various IE tasks, including closed IE, open IE, and on-demand IE. We first collect and construct a high-quality alignment corpus IEInstruct for IE. Then we train ADELIE-SFT using instruction tuning on IEInstruct. We further train ADELIE-SFT with a direct preference optimization (DPO) objective, resulting in ADELIE-DPO. Extensive experiments on various held-out IE datasets demonstrate that our models (ADELIE-SFT and ADELIE-DPO) achieve state-of-the-art (SoTA) performance among open-source models. We further explore the general capabilities of ADELIE, and experimental results reveal that their general capabilities do not exhibit a noticeable decline. 📖 Paper: ADELIE: Aligning Large Language Models on Information Extraction 🐧 Github: THU/ADELIE Model Description - Developed by: Yunjia Qi, Hao Peng, Xiaozhi Wang, Bin Xu, Lei Hou, Juanzi Li Model type: Text Generation Language(s) (NLP): English License: LLaMA2 License for the base model. 
Finetuned from model [optional]: LLaMA2-7B --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/ADELIE-DPO-Q6_K-GGUF --hf-file adelie-dpo-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/ADELIE-DPO-Q6_K-GGUF --hf-file adelie-dpo-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/ADELIE-DPO-Q6_K-GGUF --hf-file adelie-dpo-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/ADELIE-DPO-Q6_K-GGUF --hf-file adelie-dpo-q6_k.gguf -c 2048 ```
DevQuasar/llama3.2_3b_chat_brainstorm-v3.2.3
DevQuasar
"2025-02-01T23:04:38Z"
5
0
null
[ "safetensors", "llama", "license:llama3.2", "region:us" ]
null
"2024-11-09T14:54:21Z"
--- license: llama3.2 --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) 'Make knowledge free for everyone' <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
TechxGenus/CursorCore-QW2.5-1.5B-SR
TechxGenus
"2024-10-10T06:43:22Z"
130
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "code", "conversational", "arxiv:2410.07002", "base_model:Qwen/Qwen2.5-Coder-1.5B", "base_model:finetune:Qwen/Qwen2.5-Coder-1.5B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-08T04:06:35Z"
--- tags: - code base_model: - Qwen/Qwen2.5-Coder-1.5B library_name: transformers pipeline_tag: text-generation license: apache-2.0 --- # CursorCore: Assist Programming through Aligning Anything <p align="center"> <a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> | <a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> | <a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> | <a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> | <a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> | <a href="https://discord.gg/Z5Tev8fV">[Discord]</a> </p> <hr> - [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything) - [Introduction](#introduction) - [Models](#models) - [Usage](#usage) - [1) Normal chat](#1-normal-chat) - [2) Assistant-Conversation](#2-assistant-conversation) - [3) Web Demo](#3-web-demo) - [Future Work](#future-work) - [Citation](#citation) - [Contribution](#contribution) <hr> ## Introduction CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more. <p align="center"> <img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png"> </p> ![CursorWeb](https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/CursorWeb.gif) ## Models Our models have been open-sourced on Hugging Face. You can access our models here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). 
We also provide pre-quantized weights for GPTQ and AWQ here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3) ## Usage Here are some examples of how to use our model: ### 1) Normal chat Script: ````python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B") model = AutoModelForCausalLM.from_pretrained( "TechxGenus/CursorCore-Yi-9B", torch_dtype=torch.bfloat16, device_map="auto" ) messages = [ {"role": "user", "content": "Hi!"}, ] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512) print(tokenizer.decode(outputs[0])) ```` Output: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>user Hi!<|im_end|> <|im_start|>assistant Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|> ```` ### 2) Assistant-Conversation In our work, we introduce a new framework for AI-assisted programming tasks. It is designed to align anything during the programming process, and is used to implement features like Tab and Inline Chat. 
Script 1: ````python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from eval.utils import prepare_input_for_wf tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B") model = AutoModelForCausalLM.from_pretrained( "TechxGenus/CursorCore-Yi-9B", torch_dtype=torch.bfloat16, device_map="auto" ) sample = { "history": [ { "type": "code", "lang": "python", "code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)""" } ], "current": { "type": "code", "lang": "python", "code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)""" }, "user": "" } prompt = tokenizer.apply_chat_template( prepare_input_for_wf(sample), tokenize=False, chat_template="assistant-conversation", add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False) print(tokenizer.decode(outputs[0])) ```` Output 1: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>history ```python def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|im_end|> <|im_start|>current ```python def quick_sort(array): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + 
quick_sort(right) ```<|im_end|> <|im_start|>assistant <|next_start|>```python def quick_sort(array): if len(array) <= 1: return array pivot = array[len(array) // 2] left = [x for x in array if x < pivot] middle = [x for x in array if x == pivot] right = [x for x in array if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|next_end|> The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors. To implement this, we will: 1. Update the parameter name in the function definition from `arr` to `array`. 2. Ensure that all references to `arr` within the function are updated to `array`. This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|> ```` Script 2: ````python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from eval.utils import prepare_input_for_wf tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B") model = AutoModelForCausalLM.from_pretrained( "TechxGenus/CursorCore-Yi-9B", torch_dtype=torch.bfloat16, device_map="auto" ) sample = { "history": [], "current": { "type": "code", "lang": "python", "code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)""" }, "user": "Add Docstring." 
} prompt = tokenizer.apply_chat_template( prepare_input_for_wf(sample), tokenize=False, chat_template="assistant-conversation", add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False) print(tokenizer.decode(outputs[0])) ```` Output 2: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>current ```python def quick_sort(array): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|im_end|> <|im_start|>user Add Docstring.<|im_end|> <|im_start|>assistant <|next_start|>```python def quick_sort(array): """ This is an implementation of the quick sort algorithm. """ if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|next_end|><|im_end|> ```` For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows: Script for LC: ````python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from eval.utils import prepare_input_for_lc tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC") model = AutoModelForCausalLM.from_pretrained( "TechxGenus/CursorCore-Yi-1.5B-LC", torch_dtype=torch.bfloat16, device_map="auto" ) sample = { "history": [], "current": { "type": "code", "lang": "python", "code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)""" }, "user": "Add Docstring." 
} prompt = tokenizer.apply_chat_template( prepare_input_for_lc(sample), tokenize=False, chat_template="assistant-conversation", add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False) print(tokenizer.decode(outputs[0])) ```` Output for LC: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>current ```python 1 def quick_sort(array): 2 if len(arr) <= 1: 3 return arr 4 pivot = arr[len(arr) // 2] 5 left = [x for x in arr if x < pivot] 6 middle = [x for x in arr if x == pivot] 7 right = [x for x in arr if x > pivot] 8 return quick_sort(left) + middle + quick_sort(right) ```<|im_end|> <|im_start|>user Add Docstring.<|im_end|> <|im_start|>assistant <|next_start|>1,1 ``` '''This function will sort an array using quick sort algorithm''' ```<|next_end|> To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future. The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand. Here's the plan: 1. Add a docstring at the beginning of the `quick_sort` function. 2. Ensure the docstring is clear and concise, describing the purpose of the function. 
This modification will improve the code's documentation without altering its functionality.<|im_end|> ```` Script for SR: ````python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from eval.utils import prepare_input_for_sr tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR") model = AutoModelForCausalLM.from_pretrained( "TechxGenus/CursorCore-Yi-1.5B-SR", torch_dtype=torch.bfloat16, device_map="auto" ) sample = { "history": [], "current": { "type": "code", "lang": "python", "code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)""" }, "user": "Add Docstring." } prompt = tokenizer.apply_chat_template( prepare_input_for_sr(sample), tokenize=False, chat_template="assistant-conversation", add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False) print(tokenizer.decode(outputs[0])) ```` Output for SR: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>current ```python def quick_sort(array): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|im_end|> <|im_start|>user Add Docstring.<|im_end|> <|im_start|>assistant <|next_start|>```python def quick_sort(array): <|search_and_replace|> def quick_sort(array): """ This function implements quick sort algorithm """ ```<|next_end|><|im_end|> ```` ### 3) Web Demo We create a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details. 
## Future Work CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example: - Repository-level editing support - Better and faster editing formats - Better user interface and presentation - ... ## Citation ```bibtex @article{jiang2024cursorcore, title = {CursorCore: Assist Programming through Aligning Anything}, author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang}, year = {2024}, journal = {arXiv preprint arXiv: 2410.07002} } ``` ## Contribution Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
VPTQ-community/Meta-Llama-3.3-70B-Instruct-v8-k65536-0-woft
VPTQ-community
"2025-02-25T17:19:44Z"
26
0
null
[ "safetensors", "llama", "VPTQ", "Quantized", "Quantization", "arxiv:2409.17066", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "vptq", "region:us" ]
null
"2024-12-15T15:39:57Z"
--- license: llama3.3 base_model: - meta-llama/Llama-3.3-70B-Instruct base_model_relation: quantized tags: - VPTQ - Quantized - Quantization --- **Disclaimer**: The model is reproduced based on the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* [github](https://github.com/microsoft/vptq) and [arXiv](https://arxiv.org/abs/2409.17066) The model itself is sourced from a community release. It is intended only for experimental purposes. Users are responsible for any consequences arising from the use of this model.
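For readers unfamiliar with the method, VPTQ quantizes groups of weights by replacing each weight vector with an index into a shared codebook. The toy nearest-centroid sketch below shows only that storage idea; the actual VPTQ algorithm additionally uses second-order (Hessian-aware) optimization and residual codebooks, so do not treat this as the real procedure:

```python
def quantize(vectors, codebook):
    # Map each weight vector to the index of its nearest codebook entry
    # (squared Euclidean distance); storage drops from d floats to one index.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist2(v, codebook[i]))
            for v in vectors]

def dequantize(indices, codebook):
    # Reconstruct approximate weights by codebook lookup.
    return [codebook[i] for i in indices]

codebook = [(0.0, 0.0), (1.0, 1.0), (-1.0, 1.0)]   # toy 2-D centroids
weights = [(0.1, -0.2), (0.9, 1.2), (-0.8, 0.7)]   # toy weight vectors
idx = quantize(weights, codebook)
print(idx)                        # [0, 1, 2]
print(dequantize(idx, codebook))
```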
Nhat1904/test_trainer_XLNET_3ep_5e-5
Nhat1904
"2022-12-06T03:10:16Z"
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-12-06T01:30:37Z"
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: test_trainer_XLNET_3ep_5e-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer_XLNET_3ep_5e-5 This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5405 - Accuracy: 0.8773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7984 | 1.0 | 1125 | 0.6647 | 0.7923 | | 0.5126 | 2.0 | 2250 | 0.4625 | 0.862 | | 0.409 | 3.0 | 3375 | 0.5405 | 0.8773 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
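With `lr_scheduler_type: linear` and no warmup, the learning rate decays from 5e-05 to zero across the 3,375 optimizer steps shown in the table (1,125 steps × 3 epochs). A self-contained sketch of that schedule (warmup is included for generality, though this run used none):

```python
def linear_lr(step, total_steps=3375, base_lr=5e-5, warmup_steps=0):
    # Linear warmup (unused here: warmup_steps=0) followed by linear decay to 0.
    if step < warmup_steps:
        return base_lr * step / max(warmup_steps, 1)
    remaining = max(total_steps - step, 0)
    return base_lr * remaining / max(total_steps - warmup_steps, 1)

print(linear_lr(0))     # base_lr at the start
print(linear_lr(1125))  # after epoch 1: two thirds of base_lr
print(linear_lr(3375))  # 0.0 at the end
```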
SanteriVtj/ppo-SnowballTarget
SanteriVtj
"2025-02-28T18:22:37Z"
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
"2025-02-28T18:22:34Z"
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: SanteriVtj/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
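As background on what the **ppo** trainer optimizes: PPO maximizes a clipped surrogate objective that bounds how far each update can move the policy away from the one that collected the data. A self-contained sketch of the per-sample objective with toy numbers (not values from this run; the real trainer also adds value-function and entropy terms):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    # ratio = pi_new(a|s) / pi_old(a|s); clipping keeps updates conservative.
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    return min(ratio * advantage, clipped * advantage)

# A large policy ratio is clipped when the advantage is positive...
print(ppo_clip_objective(1.5, advantage=2.0))   # 2.4 == 1.2 * 2.0
# ...but an unfavourable move is penalized in full.
print(ppo_clip_objective(1.5, advantage=-2.0))  # -3.0 == 1.5 * -2.0
```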
EMahdi/whisper-large-v3-turbo-ar-finetune
EMahdi
"2024-12-04T12:50:33Z"
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ar", "dataset:EMahdi/WhisperFinetune", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-12-04T10:48:45Z"
--- library_name: transformers language: - ar license: mit base_model: openai/whisper-large-v3-turbo tags: - generated_from_trainer datasets: - EMahdi/WhisperFinetune metrics: - wer model-index: - name: Whisper Large V3 Turbo Finetune Ar - EMahdi results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: EMahdi/WhisperFinetune Sudanese Corpus type: EMahdi/WhisperFinetune args: 'config: sudanese_corpus, split: test' metrics: - name: Wer type: wer value: 42.80180761781795 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large V3 Turbo Finetune Ar - EMahdi This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the EMahdi/WhisperFinetune Sudanese Corpus dataset. It achieves the following results on the evaluation set: - Loss: 0.8721 - Wer: 42.8018 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 1.2464 | 1.0 | 89 | 0.9025 | 71.2072 | | 0.7343 | 2.0 | 178 | 0.7835 | 55.7779 | | 0.5441 | 3.0 | 267 | 0.7463 | 56.3105 | | 0.4076 | 4.0 | 356 | 0.7532 | 47.5468 | | 0.325 | 5.0 | 445 | 0.7811 | 51.4526 | | 0.2635 | 6.0 | 534 | 0.8050 | 62.1369 | | 0.1866 | 7.0 | 623 | 0.8226 | 45.7715 | | 0.1171 | 8.0 | 712 | 
0.8406 | 45.4810 | | 0.0679 | 9.0 | 801 | 0.8664 | 43.5119 | | 0.0399 | 10.0 | 890 | 0.8721 | 42.8018 | ### Framework versions - Transformers 4.45.0 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
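The Wer column above is the word error rate in percent. For reference, a minimal sketch of how WER is computed — word-level edit distance divided by reference length; this is illustrative only, not the `evaluate`/`jiwer` implementation used in practice:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion over 6 words, ≈ 0.167
```

Under this definition, the reported WER of 42.80 means the error count was roughly 43% of the reference word count.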
in-diretta-sapna-shah-video-leak/sapna.shah.viral.video.official.tutorial
in-diretta-sapna-shah-video-leak
"2025-03-29T18:42:13Z"
0
0
null
[ "region:us" ]
null
"2025-03-29T18:41:32Z"
DarijaM/XLM-R-Large-Tweet-base
DarijaM
"2025-01-10T22:47:32Z"
10
0
null
[ "safetensors", "xlm-roberta", "license:mit", "region:us" ]
null
"2025-01-10T16:45:31Z"
--- license: mit --- # **XLM-R-Large-Tweet-Base** **XLM-R-Large-Tweet-Base** is a version of the [XLM-RoBERTa large-sized model](https://huggingface.co/FacebookAI/xlm-roberta-large) with additional pretraining tailored specifically to the social media domain. The model was further pretrained on 37,200 COVID-19 vaccination-related tweets in the Serbian language (approximately 1.3 million tokens), leveraging the unique linguistic features and informal writing styles prevalent on social media platforms. Its fine-tuned version for the **five-class sentiment analysis task** is available as [XLM-R-Large-Tweet](https://huggingface.co/DarijaM/XLM-R-Large-Tweet).
John6666/epicrealism-xl-v9unflux-sdxl
John6666
"2024-12-23T06:36:38Z"
5,578
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "photo", "photography", "photorealism", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-10-11T13:48:37Z"
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - photo - photography - photorealism --- The original model is [here](https://civitai.com/models/277058?modelVersionId=931522). This model was created by [epinikion](https://civitai.com/user/epinikion).
Primeness/primeh6v5c4
Primeness
"2025-02-05T09:13:24Z"
22
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-05T07:01:07Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IlyaGusev/mt0_xxl_ru_turbo_alpaca_lora
IlyaGusev
"2023-03-31T18:41:13Z"
0
1
null
[ "text2text-generation", "ru", "dataset:IlyaGusev/ru_turbo_alpaca", "region:us" ]
text2text-generation
"2023-03-28T21:38:27Z"
--- datasets: - IlyaGusev/ru_turbo_alpaca language: - ru pipeline_tag: text2text-generation inference: false ---
oregapam/ioniclora1
oregapam
"2025-03-26T22:24:08Z"
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-03-26T19:10:41Z"
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: ioniclora1 license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # ioniclora1 A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `ioniclora1` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
GeneralAwareness/VintagePhotos
GeneralAwareness
"2022-12-29T03:26:13Z"
0
6
null
[ "stable-diffusion", "v2", "text-to-image", "image-to-image", "Embedding", "en", "license:cc-by-nc-sa-4.0", "region:us" ]
text-to-image
"2022-12-29T03:22:22Z"
--- license: cc-by-nc-sa-4.0 language: - en thumbnail: "https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00122-2365281862-color%20photo%20emma%20stone%20in%20the%20style%20of%20Vint.png" tags: - stable-diffusion - v2 - text-to-image - image-to-image - Embedding --- Textual Inversion embedding by General Awareness for SD 2.x, trained on 768x768 images from various sources. Install by downloading the .pt embedding and putting it in the \embeddings folder. The two embeddings are a one-two punch: Vint-3000 leans toward an 1880s style of photography (some seeds will differ), while Vint suits the 1940s onward, though both can be used for anything you can dream of. Use the keyword `vint` or `vint-3000`, depending on the embedding and the effect you are trying to achieve. color photo morgan freeman in the style of Vint-3000 ![Single Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00120-2365281862-color%20photo%20morgan%20freeman%20in%20the%20style%20of%20Vint-3000.png) color photo morgan freeman in the style of Vint ![Single_Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00121-2365281862-color%20photo%20morgan%20freeman%20in%20the%20style%20of%20Vint.png) color photo emma stone in the style of Vint ![Single_Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00122-2365281862-color%20photo%20emma%20stone%20in%20the%20style%20of%20Vint.png) color photo emma stone in the style of Vint-3000 ![Single_Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00124-345136640-color%20photo%20emma%20stone%20in%20the%20style%20of%20Vint-3000.png) color photo post apocalyptic city in the style of Vint-3000 ![Single_Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00127-345136640-color%20photo%20post%20apocalyptic%20city%20in%20the%20style%20of%20Vint-3000.png) color photo post apocalyptic city in the style of Vint
![Single_Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00128-345136640-color%20photo%20post%20apocalyptic%20city%20in%20the%20style%20of%20Vint.png)
husnu/electra-small-turkish-uncased-discriminator
husnu
"2022-01-16T19:01:47Z"
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
"2022-03-02T23:29:05Z"
--- tags: - generated_from_trainer datasets: - squad model-index: - name: ft_electra-small-turkish-uncased-discriminator_lr-2e-1_epochs-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> This model is a fine-tuned version of [loodos/electra-small-turkish-uncased-discriminator](https://huggingface.co/loodos/electra-small-turkish-uncased-discriminator) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 5.9506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.2 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.951 | 1.0 | 5818 | 5.9506 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
imdatta0/llama_2_13b_Magiccoder_evol_10k_reverse
imdatta0
"2024-06-10T17:34:07Z"
0
0
peft
[ "peft", "safetensors", "unsloth", "generated_from_trainer", "base_model:meta-llama/Llama-2-13b-hf", "base_model:adapter:meta-llama/Llama-2-13b-hf", "license:llama2", "region:us" ]
null
"2024-06-10T13:59:29Z"
--- license: llama2 library_name: peft tags: - unsloth - generated_from_trainer base_model: meta-llama/Llama-2-13b-hf model-index: - name: llama_2_13b_Magiccoder_evol_10k_reverse results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama_2_13b_Magiccoder_evol_10k_reverse This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0887 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 0.02 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.173 | 0.0262 | 4 | 1.1853 | | 1.1716 | 0.0523 | 8 | 1.1587 | | 1.105 | 0.0785 | 12 | 1.1410 | | 1.0534 | 0.1047 | 16 | 1.1289 | | 1.0911 | 0.1308 | 20 | 1.1239 | | 1.0565 | 0.1570 | 24 | 1.1172 | | 1.0589 | 0.1832 | 28 | 1.1140 | | 1.1027 | 0.2093 | 32 | 1.1106 | | 1.0379 | 0.2355 | 36 | 1.1096 | | 1.1134 | 0.2617 | 40 | 1.1087 | | 1.0969 | 0.2878 | 44 | 1.1049 | | 1.1361 | 0.3140 | 48 | 1.1056 | | 1.1121 | 0.3401 | 52 | 1.1023 | | 1.0828 | 0.3663 | 56 | 1.1047 | | 1.1246 | 0.3925 | 60 | 1.1027 | | 1.1285 | 0.4186 | 64 | 1.0990 | | 1.0788 | 0.4448 | 68 | 1.0998 | | 1.0917 | 0.4710 | 72 | 1.0950 | | 1.0395 | 0.4971 | 76 | 1.0977 | | 1.1267 | 0.5233 | 80 | 1.0954 | | 1.1414 | 0.5495 | 84 
| 1.0955 | | 1.0821 | 0.5756 | 88 | 1.0930 | | 1.0277 | 0.6018 | 92 | 1.0908 | | 1.0303 | 0.6280 | 96 | 1.0917 | | 1.0947 | 0.6541 | 100 | 1.0905 | | 1.0824 | 0.6803 | 104 | 1.0903 | | 1.0726 | 0.7065 | 108 | 1.0912 | | 1.1064 | 0.7326 | 112 | 1.0907 | | 1.0467 | 0.7588 | 116 | 1.0892 | | 1.0725 | 0.7850 | 120 | 1.0885 | | 1.09 | 0.8111 | 124 | 1.0893 | | 1.0506 | 0.8373 | 128 | 1.0900 | | 0.9951 | 0.8635 | 132 | 1.0902 | | 1.1032 | 0.8896 | 136 | 1.0895 | | 1.0116 | 0.9158 | 140 | 1.0891 | | 1.0683 | 0.9419 | 144 | 1.0889 | | 1.0902 | 0.9681 | 148 | 1.0888 | | 1.0721 | 0.9943 | 152 | 1.0887 | ### Framework versions - PEFT 0.7.1 - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
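Two details of the hyperparameters above are easy to misread: the effective batch size is the per-device batch size times the gradient-accumulation steps (8 × 8 = 64, as listed), and the cosine scheduler decays the learning rate from its peak toward zero over the run. A minimal sketch (illustrative; the Trainer's exact implementation, including warmup handling, may differ):

```python
import math

train_batch_size = 8
gradient_accumulation_steps = 8
# Gradients are accumulated over 8 micro-batches before each optimizer step.
effective_batch_size = train_batch_size * gradient_accumulation_steps

def cosine_lr(step: int, total_steps: int, peak_lr: float = 1e-4) -> float:
    """Cosine decay from peak_lr at step 0 to ~0 at total_steps (warmup omitted)."""
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

print(effective_batch_size)       # 64
print(cosine_lr(0, 152))          # 0.0001 at the start
print(cosine_lr(152, 152))        # ~0.0 at the last step shown in the table (152)
```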
furyvngannoulive/fury-vs-ngannou-live
furyvngannoulive
"2023-10-27T14:54:19Z"
0
0
diffusers
[ "diffusers", "music", "text-to-image", "ab", "dataset:open-web-math/open-web-math", "license:mit", "region:us" ]
text-to-image
"2023-10-27T14:40:49Z"
--- license: mit datasets: - open-web-math/open-web-math language: - ab metrics: - bleurt library_name: diffusers pipeline_tag: text-to-image tags: - music --- <a rel="noopener nofollow" href="https://sportsanywhere.org/boxing/">https://sportsanywhere.org/boxing/</a>
alchemist69/82cca85d-2838-4a52-9d1c-6f678a2f0890
alchemist69
"2025-03-29T20:46:59Z"
0
0
null
[ "region:us" ]
null
"2025-03-29T20:40:28Z"
souvik0306/test_quant_merge_facebook_opt
souvik0306
"2024-05-20T00:51:06Z"
84
1
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
"2024-05-20T00:50:54Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nyanxyz/mistral-sat
nyanxyz
"2023-12-06T13:22:28Z"
9
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-12-06T13:18:37Z"
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
kra0538/gemma3-e5
kra0538
"2025-03-20T11:54:21Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-20T11:54:16Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pristinawang/tableQA-GRPO-Meta-Llama-3-8B-Instruct-20250323010721-step5
pristinawang
"2025-03-23T05:11:19Z"
0
0
transformers
[ "transformers", "safetensors", "trl", "grpo", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-23T05:11:18Z"
--- library_name: transformers tags: - trl - grpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Niggendar/eonXL_v10
Niggendar
"2024-05-19T21:19:26Z"
112
1
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-05-19T21:07:56Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1
Cognitive-Lab
"2024-04-20T10:17:28Z"
145
14
transformers
[ "transformers", "safetensors", "llama", "text-generation", "hindi", "bilingual", "conversational", "hi", "en", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-20T06:32:49Z"
--- library_name: transformers tags: - hindi - bilingual license: llama2 language: - hi - en --- # LLama3-Gaja-Hindi-8B-v0.1 ## Overview LLama3-Gaja-Hindi-8B-v0.1 is an extension of the Ambari series, a bilingual English/Hindi model developed and released by [Cognitivelab.in](https://www.cognitivelab.in/). This model is specialized for natural language understanding tasks, particularly in the context of instructional pairs. It is built upon the [Llama3 8b](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model, utilizing a fine-tuning process with a curated dataset of translated instructional pairs. <img src="https://cdn-uploads.huggingface.co/production/uploads/6442d975ad54813badc1ddf7/G0u9L6RQJFinST0chQmfL.jpeg" width="500px"> ## Generate ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers import GenerationConfig, TextStreamer , TextIteratorStreamer model = AutoModelForCausalLM.from_pretrained("Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1", torch_dtype=torch.bfloat16).to("cuda") tokenizer = AutoTokenizer.from_pretrained("Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1", trust_remote_code=True) # Existing messages list messages = [ {"role": "system", "content": " You are Gaja, an AI assistant created by Cognitivelab and trained on top of Llama 3 Large language model (LLM), proficient in English and Hindi. 
You can respond in both languages based on the user's request."}, {"role": "user", "content": "Who are you"} ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, # tokenize=False, return_tensors="pt" ).to("cuda") outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"), do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ## Multi-turn Chat To use the LLama3-Gaja-Hindi-8B-v0.1 model in a multi-turn chat, you can follow the example code below: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers import GenerationConfig, TextStreamer, TextIteratorStreamer model = AutoModelForCausalLM.from_pretrained("Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1", torch_dtype=torch.bfloat16).to("cuda") tokenizer = AutoTokenizer.from_pretrained("Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1", trust_remote_code=True) # Existing messages list messages = [ {"role": "system", "content": " You are Gaja, an AI assistant created by Cognitivelab and trained on top of Llama 3 Large language model (LLM), proficient in English and Hindi. 
You can respond in both languages based on the user's request."}, ] # Function to add user input and generate response def process_user_input(user_input): global messages # Add user's input to messages list messages.append({"role": "user", "content": user_input}) # Prepare the prompt for generation prompt_formatted_message = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=False ) # Configure generation parameters generation_config = GenerationConfig( repetition_penalty=1.2, max_new_tokens=8000, temperature=0.2, top_p=0.95, top_k=40, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"), pad_token_id=tokenizer.pad_token_id, do_sample=True, use_cache=True, return_dict_in_generate=True, output_attentions=False, output_hidden_states=False, output_scores=False, ) streamer = TextStreamer(tokenizer) batch = tokenizer(str(prompt_formatted_message.strip()), return_tensors="pt") print("\033[32mResponse: \033[0m") # Print a colored "Response:" label # Generate response generated = model.generate( inputs=batch["input_ids"].to("cuda"), generation_config=generation_config, streamer=streamer, ) # Extract and format assistant's response # print(tokenizer.decode(generated["sequences"].cpu().tolist()[0])) assistant_response = tokenizer.decode(generated["sequences"].cpu().tolist()[0]) # Find the last occurrence of the assistant header and the end-of-turn token assistant_start_index = assistant_response.rfind("<|start_header_id|>assistant<|end_header_id|>") empty_string_index = assistant_response.rfind("<|eot_id|>") # Extract the text between the last assistant header and "<|eot_id|>" if assistant_start_index != -1 and empty_string_index != -1: final_response = assistant_response[assistant_start_index + len("<|start_header_id|>assistant<|end_header_id|>") : empty_string_index] else: # final_response = assistant_response # If indices not found, use the whole response raise ValueError("Failed to generate multi-turn prompt format") # Append the extracted response 
to the messages list messages.append({"role": "assistant", "content": final_response}) # messages.append({"role": "assistant", "content": assistant_response}) # Print assistant's response # print(f"Assistant: {assistant_response}") # Main interaction loop while True: print("=================================================================================") user_input = input("Input: ") # Prompt user for input # Check if user_input is empty if not user_input.strip(): # .strip() removes any leading or trailing whitespace break # Break out of the loop if input is empty # Print response placeholder process_user_input(user_input) # Process user's input and generate response ``` ## Prompt format system prompt = `You are Gaja, an AI assistant created by Cognitivelab and trained on top of Llama 3 Large language model (LLM), proficient in English and Hindi. You can respond in both languages based on the user's request.` ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Benchmarks coming soon ## Bilingual Instruct Fine-tuning The model underwent a pivotal stage of supervised fine-tuning with low-rank adaptation, focusing on bilingual instruct fine-tuning. This approach involved training the model to respond adeptly in either English or Hindi based on the language specified in the user prompt or instruction. ## References - [Ambari-7B-Instruct Model](https://huggingface.co/Cognitive-Lab/Ambari-7B-Instruct-v0.1)
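For reference, the chat template shown in the prompt format section above can also be rendered without a tokenizer. A minimal illustrative sketch (the helper name `build_gaja_prompt` is ours, not part of the model repo; in practice, prefer `tokenizer.apply_chat_template`):

```python
# Illustrative sketch: assemble the Llama 3 chat prompt by hand.
# The helper name `build_gaja_prompt` is illustrative, not part of the repo.

def build_gaja_prompt(system_prompt, turns):
    """Render a list of {"role", "content"} turns into the Llama 3 prompt string."""
    prompt = "<|begin_of_text|>"
    prompt += f"<|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|>"
    for turn in turns:
        prompt += f"<|start_header_id|>{turn['role']}<|end_header_id|>\n\n{turn['content']}<|eot_id|>"
    # Trailing assistant header so the model continues with its own answer.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(build_gaja_prompt("You are Gaja.", [{"role": "user", "content": "Who are you?"}]))
```

The output should correspond to what `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` produces for this model.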
mradermacher/Viper-Coder-v1.7-Vsm6-GGUF
mradermacher
"2025-03-21T21:20:06Z"
642
2
transformers
[ "transformers", "gguf", "coder", "text-generation-inference", "viper", "StreamlinedMemory", "Qwen", "chemistry", "code", "en", "base_model:prithivMLmods/Viper-Coder-v1.7-Vsm6", "base_model:quantized:prithivMLmods/Viper-Coder-v1.7-Vsm6", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-07T14:06:03Z"
--- base_model: prithivMLmods/Viper-Coder-v1.7-Vsm6 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - coder - text-generation-inference - viper - StreamlinedMemory - Qwen - chemistry - code --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/prithivMLmods/Viper-Coder-v1.7-Vsm6 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-GGUF/resolve/main/Viper-Coder-v1.7-Vsm6.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-GGUF/resolve/main/Viper-Coder-v1.7-Vsm6.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-GGUF/resolve/main/Viper-Coder-v1.7-Vsm6.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-GGUF/resolve/main/Viper-Coder-v1.7-Vsm6.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-GGUF/resolve/main/Viper-Coder-v1.7-Vsm6.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-GGUF/resolve/main/Viper-Coder-v1.7-Vsm6.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-GGUF/resolve/main/Viper-Coder-v1.7-Vsm6.Q4_K_M.gguf) | 
Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-GGUF/resolve/main/Viper-Coder-v1.7-Vsm6.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-GGUF/resolve/main/Viper-Coder-v1.7-Vsm6.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-GGUF/resolve/main/Viper-Coder-v1.7-Vsm6.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.7-Vsm6-GGUF/resolve/main/Viper-Coder-v1.7-Vsm6.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
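As the Usage section above notes, some larger quants are distributed as multi-part files that must be concatenated, in order, before loading. A minimal byte-wise sketch (the file names are illustrative, not actual repo files; see TheBloke's READMEs linked above for the canonical commands):

```python
# Minimal sketch: reassemble a split GGUF by byte-wise concatenation.
# File names below are illustrative, not actual files from this repo.
from pathlib import Path

def concat_gguf_parts(parts, out_path):
    """Concatenate split GGUF parts, in the given order, into one file."""
    with open(out_path, "wb") as out:
        for part in parts:
            out.write(Path(part).read_bytes())

# Example (hypothetical file names):
# concat_gguf_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
```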
ameerazam08/DiffSynth-Studio
ameerazam08
"2024-02-02T20:00:54Z"
0
8
null
[ "arxiv:2401.16224", "region:us" ]
null
"2024-02-02T19:55:39Z"
# DiffSynth Studio ## Introduction DiffSynth is a new Diffusion engine. We have restructured architectures including Text Encoder, UNet, VAE, among others, maintaining compatibility with models from the open-source community while enhancing computational performance. This version is currently in its initial stage, supporting SD and SDXL architectures. In the future, we plan to develop more interesting features based on this new codebase. ## Installation Create Python environment: ``` conda env create -f environment.yml ``` We find that sometimes `conda` cannot install `cupy` correctly, please install it manually. See [this document](https://docs.cupy.dev/en/stable/install.html) for more details. Enter the Python environment: ``` conda activate DiffSynthStudio ``` ## Usage (in WebUI) ``` python -m streamlit run Diffsynth_Studio.py ``` https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/93085557-73f3-4eee-a205-9829591ef954 ## Usage (in Python code) ### Example 1: Stable Diffusion We can generate images with very high resolution. Please see `examples/sd_text_to_image.py` for more details. |512*512|1024*1024|2048*2048|4096*4096| |-|-|-|-| |![512](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/55f679e9-7445-4605-9315-302e93d11370)|![1024](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/6fc84611-8da6-4a1f-8fee-9a34eba3b4a5)|![2048](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/9087a73c-9164-4c58-b2a0-effc694143fb)|![4096](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/edee9e71-fc39-4d1c-9ca9-fa52002c67ac)| ### Example 2: Stable Diffusion XL Generate images with Stable Diffusion XL. Please see `examples/sdxl_text_to_image.py` for more details. 
|1024*1024|2048*2048| |-|-| |![1024](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/67687748-e738-438c-aee5-96096f09ac90)|![2048](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/584186bc-9855-4140-878e-99541f9a757f)| ### Example 3: Stable Diffusion XL Turbo Generate images with Stable Diffusion XL Turbo. You can see `examples/sdxl_turbo.py` for more details, but we highly recommend using it in the WebUI. |"black car"|"red car"| |-|-| |![black_car](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/7fbfd803-68d4-44f3-8713-8c925fec47d0)|![black_car_to_red_car](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/aaf886e4-c33c-4fd8-98e2-29eef117ba00)| ### Example 4: Toon Shading (Diffutoon) This example is implemented based on [Diffutoon](https://arxiv.org/abs/2401.16224). This approach is well suited to rendering high-resolution videos with rapid motion. You can easily modify the parameters in the config dict. See `examples/diffutoon_toon_shading.py`. https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/b54c05c5-d747-4709-be5e-b39af82404dd ### Example 5: Toon Shading with Editing Signals (Diffutoon) Coming soon. https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/20528af5-5100-474a-8cdc-440b9efdd86c ### Example 6: Toon Shading (in native Python code) This example is provided for developers. If you don't want to use the config to manage parameters, you can see `examples/sd_toon_shading.py` to learn how to use it in native Python code. https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/607c199b-6140-410b-a111-3e4ffb01142c ### Example 7: Text to Video Given a prompt, DiffSynth Studio can generate a video using a Stable Diffusion model and an AnimateDiff model. We can break the limit on the number of frames! See `examples/sd_text_to_video.py`. 
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/8f556355-4079-4445-9b48-e9da77699437 ### Example 8: Video Stylization We provide an example for video stylization. In this pipeline, the rendered video is completely different from the original video, thus we need a powerful deflickering algorithm. We use FastBlend to implement the deflickering module. Please see `examples/sd_video_rerender.py` for more details. https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/59fb2f7b-8de0-4481-b79f-0c3a7361a1ea ### Example 9: Prompt Processing If you are not a native English speaker, we provide a translation service for you. Our prompter can translate other languages into English and refine them using "BeautifulPrompt" models. Please see `examples/sd_prompt_refining.py` for more details. Prompt: "一个漂亮的女孩" ("a beautiful girl"). The [translation model](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) will translate it to English. |seed=0|seed=1|seed=2|seed=3| |-|-|-|-| |![0_](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/ebb25ca8-7ce1-4d9e-8081-59a867c70c4d)|![1_](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/a7e79853-3c1a-471a-9c58-c209ec4b76dd)|![2_](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/a292b959-a121-481f-b79c-61cc3346f810)|![3_](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/1c19b54e-5a6f-4d48-960b-a7b2b149bb4c)| Prompt: "一个漂亮的女孩". The [translation model](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) will translate it to English. Then the [refining model](https://huggingface.co/alibaba-pai/pai-bloom-1b1-text2prompt-sd) will refine the translated prompt for better visual quality. 
|seed=0|seed=1|seed=2|seed=3| |-|-|-|-| |![0](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/778b1bd9-44e0-46ac-a99c-712b3fc9aaa4)|![1](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/c03479b8-2082-4c6e-8e1c-3582b98686f6)|![2](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/edb33d21-3288-4a55-96ca-a4bfe1b50b00)|![3](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/7848cfc1-cad5-4848-8373-41d24e98e584)|
LSX-UniWue/LLaMmlein_120M
LSX-UniWue
"2024-11-19T16:48:19Z"
684
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "de", "dataset:togethercomputer/RedPajama-Data-V2", "arxiv:2411.11171", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-01T09:31:38Z"
--- datasets: - togethercomputer/RedPajama-Data-V2 language: - de pipeline_tag: text-generation library_name: transformers license: other --- # LLäMmlein 120M This is a German Tinyllama 120M language model trained from scratch using the [Tinyllama](https://github.com/jzhang38/TinyLlama) codebase on the German portion of [RedPajama V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2). Find more details on our [page](https://www.informatik.uni-wuerzburg.de/datascience/projects/nlp/llammlein/) and our [preprint](https://arxiv.org/abs/2411.11171)! ### Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("LSX-UniWue/LLaMmlein_120M") tokenizer = AutoTokenizer.from_pretrained("LSX-UniWue/LLaMmlein_120M") ``` ### Performance We evaluated our model on the [SuperGLEBer](https://lsx-uniwue.github.io/SuperGLEBer-site/) benchmark.
XelotX/DeepSeek-V3-Original
XelotX
"2024-12-26T12:45:22Z"
7
0
null
[ "safetensors", "deepseek_v3", "custom_code", "fp8", "region:us" ]
null
"2024-12-26T12:45:21Z"
<!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div 
align="center" style="line-height: 1;"> <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-CODE" style="margin: 2px;"> <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-MODEL" style="margin: 2px;"> <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf"><b>Paper Link</b>👁️</a> </p> ## 1. Introduction We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2. Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance. We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities. Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models. Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training. In addition, its training process is remarkably stable. Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks. <p align="center"> <img width="80%" src="figures/benchmark.png"> </p> ## 2. 
Model Summary --- **Architecture: Innovative Load Balancing Strategy and Training Objective** - On top of the efficient architecture of DeepSeek-V2, we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing. - We investigate a Multi-Token Prediction (MTP) objective and prove it beneficial to model performance. It can also be used for speculative decoding for inference acceleration. --- **Pre-Training: Towards Ultimate Training Efficiency** - We design an FP8 mixed precision training framework and, for the first time, validate the feasibility and effectiveness of FP8 training on an extremely large-scale model. - Through co-design of algorithms, frameworks, and hardware, we overcome the communication bottleneck in cross-node MoE training, nearly achieving full computation-communication overlap. This significantly enhances our training efficiency and reduces the training costs, enabling us to further scale up the model size without additional overhead. - At an economical cost of only 2.664M H800 GPU hours, we complete the pre-training of DeepSeek-V3 on 14.8T tokens, producing the currently strongest open-source base model. The subsequent training stages after pre-training require only 0.1M GPU hours. --- **Post-Training: Knowledge Distillation from DeepSeek-R1** - We introduce an innovative methodology to distill reasoning capabilities from the long-Chain-of-Thought (CoT) model, specifically from one of the DeepSeek R1 series models, into standard LLMs, particularly DeepSeek-V3. Our pipeline elegantly incorporates the verification and reflection patterns of R1 into DeepSeek-V3 and notably improves its reasoning performance. Meanwhile, we also maintain control over the output style and length of DeepSeek-V3. --- ## 3. 
Model Downloads <div align="center"> | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** | | :------------: | :------------: | :------------: | :------------: | :------------: | | DeepSeek-V3-Base | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base) | | DeepSeek-V3 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3) | </div> **NOTE: The total size of DeepSeek-V3 models on HuggingFace is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.** To ensure optimal performance and flexibility, we have partnered with open-source communities and hardware vendors to provide multiple ways to run the model locally. For step-by-step guidance, check out Section 6: [How to Run Locally](#6-how-to-run-locally). For developers looking to dive deeper, we recommend exploring [README_WEIGHTS.md](./README_WEIGHTS.md) for details on the Main Model weights and the Multi-Token Prediction (MTP) Modules. Please note that MTP support is currently under active development within the community, and we welcome your contributions and feedback. ## 4. Evaluation Results ### Base Model #### Standard Benchmarks <div align="center"> | | Benchmark (Metric) | # Shots | DeepSeek-V2 | Qwen2.5 72B | LLaMA3.1 405B | DeepSeek-V3 | |---|-------------------|----------|--------|-------------|---------------|---------| | | Architecture | - | MoE | Dense | Dense | MoE | | | # Activated Params | - | 21B | 72B | 405B | 37B | | | # Total Params | - | 236B | 72B | 405B | 671B | | English | Pile-test (BPB) | - | 0.606 | 0.638 | **0.542** | 0.548 | | | BBH (EM) | 3-shot | 78.8 | 79.8 | 82.9 | **87.5** | | | MMLU (Acc.) | 5-shot | 78.4 | 85.0 | 84.4 | **87.1** | | | MMLU-Redux (Acc.) | 5-shot | 75.6 | 83.2 | 81.3 | **86.2** | | | MMLU-Pro (Acc.) 
| 5-shot | 51.4 | 58.3 | 52.8 | **64.4** | | | DROP (F1) | 3-shot | 80.4 | 80.6 | 86.0 | **89.0** | | | ARC-Easy (Acc.) | 25-shot | 97.6 | 98.4 | 98.4 | **98.9** | | | ARC-Challenge (Acc.) | 25-shot | 92.2 | 94.5 | **95.3** | **95.3** | | | HellaSwag (Acc.) | 10-shot | 87.1 | 84.8 | **89.2** | 88.9 | | | PIQA (Acc.) | 0-shot | 83.9 | 82.6 | **85.9** | 84.7 | | | WinoGrande (Acc.) | 5-shot | **86.3** | 82.3 | 85.2 | 84.9 | | | RACE-Middle (Acc.) | 5-shot | 73.1 | 68.1 | **74.2** | 67.1 | | | RACE-High (Acc.) | 5-shot | 52.6 | 50.3 | **56.8** | 51.3 | | | TriviaQA (EM) | 5-shot | 80.0 | 71.9 | **82.7** | **82.9** | | | NaturalQuestions (EM) | 5-shot | 38.6 | 33.2 | **41.5** | 40.0 | | | AGIEval (Acc.) | 0-shot | 57.5 | 75.8 | 60.6 | **79.6** | | Code | HumanEval (Pass@1) | 0-shot | 43.3 | 53.0 | 54.9 | **65.2** | | | MBPP (Pass@1) | 3-shot | 65.0 | 72.6 | 68.4 | **75.4** | | | LiveCodeBench-Base (Pass@1) | 3-shot | 11.6 | 12.9 | 15.5 | **19.4** | | | CRUXEval-I (Acc.) | 2-shot | 52.5 | 59.1 | 58.5 | **67.3** | | | CRUXEval-O (Acc.) | 2-shot | 49.8 | 59.9 | 59.9 | **69.8** | | Math | GSM8K (EM) | 8-shot | 81.6 | 88.3 | 83.5 | **89.3** | | | MATH (EM) | 4-shot | 43.4 | 54.4 | 49.0 | **61.6** | | | MGSM (EM) | 8-shot | 63.6 | 76.2 | 69.9 | **79.8** | | | CMath (EM) | 3-shot | 78.7 | 84.5 | 77.3 | **90.7** | | Chinese | CLUEWSC (EM) | 5-shot | 82.0 | 82.5 | **83.0** | 82.7 | | | C-Eval (Acc.) | 5-shot | 81.4 | 89.2 | 72.5 | **90.1** | | | CMMLU (Acc.) | 5-shot | 84.0 | **89.5** | 73.7 | 88.8 | | | CMRC (EM) | 1-shot | **77.4** | 75.8 | 76.0 | 76.3 | | | C3 (Acc.) | 0-shot | 77.4 | 76.7 | **79.7** | 78.6 | | | CCPM (Acc.) | 0-shot | **93.0** | 88.5 | 78.6 | 92.0 | | Multilingual | MMMLU-non-English (Acc.) | 5-shot | 64.0 | 74.8 | 73.8 | **79.4** | </div> Note: Best results are shown in bold. Scores with a gap not exceeding 0.3 are considered to be at the same level. DeepSeek-V3 achieves the best performance on most benchmarks, especially on math and code tasks. 
For more evaluation details, please check our paper. #### Context Window <p align="center"> <img width="80%" src="figures/niah.png"> </p> Evaluation results on the ``Needle In A Haystack`` (NIAH) tests. DeepSeek-V3 performs well across all context window lengths up to **128K**. ### Chat Model #### Standard Benchmarks (Models larger than 67B) <div align="center"> | | **Benchmark (Metric)** | **DeepSeek V2-0506** | **DeepSeek V2.5-0905** | **Qwen2.5 72B-Inst.** | **Llama3.1 405B-Inst.** | **Claude-3.5-Sonnet-1022** | **GPT-4o 0513** | **DeepSeek V3** | |---|---------------------|---------------------|----------------------|---------------------|----------------------|---------------------------|----------------|----------------| | | Architecture | MoE | MoE | Dense | Dense | - | - | MoE | | | # Activated Params | 21B | 21B | 72B | 405B | - | - | 37B | | | # Total Params | 236B | 236B | 72B | 405B | - | - | 671B | | English | MMLU (EM) | 78.2 | 80.6 | 85.3 | **88.6** | **88.3** | 87.2 | **88.5** | | | MMLU-Redux (EM) | 77.9 | 80.3 | 85.6 | 86.2 | **88.9** | 88.0 | **89.1** | | | MMLU-Pro (EM) | 58.5 | 66.2 | 71.6 | 73.3 | **78.0** | 72.6 | 75.9 | | | DROP (3-shot F1) | 83.0 | 87.8 | 76.7 | 88.7 | 88.3 | 83.7 | **91.6** | | | IF-Eval (Prompt Strict) | 57.7 | 80.6 | 84.1 | 86.0 | **86.5** | 84.3 | 86.1 | | | GPQA-Diamond (Pass@1) | 35.3 | 41.3 | 49.0 | 51.1 | **65.0** | 49.9 | 59.1 | | | SimpleQA (Correct) | 9.0 | 10.2 | 9.1 | 17.1 | 28.4 | **38.2** | 24.9 | | | FRAMES (Acc.) | 66.9 | 65.4 | 69.8 | 70.0 | 72.5 | **80.5** | 73.3 | | | LongBench v2 (Acc.) 
| 31.6 | 35.4 | 39.4 | 36.1 | 41.0 | 48.1 | **48.7** | | Code | HumanEval-Mul (Pass@1) | 69.3 | 77.4 | 77.3 | 77.2 | 81.7 | 80.5 | **82.6** | | | LiveCodeBench (Pass@1-COT) | 18.8 | 29.2 | 31.1 | 28.4 | 36.3 | 33.4 | **40.5** | | | LiveCodeBench (Pass@1) | 20.3 | 28.4 | 28.7 | 30.1 | 32.8 | 34.2 | **37.6** | | | Codeforces (Percentile) | 17.5 | 35.6 | 24.8 | 25.3 | 20.3 | 23.6 | **51.6** | | | SWE Verified (Resolved) | - | 22.6 | 23.8 | 24.5 | **50.8** | 38.8 | 42.0 | | | Aider-Edit (Acc.) | 60.3 | 71.6 | 65.4 | 63.9 | **84.2** | 72.9 | 79.7 | | | Aider-Polyglot (Acc.) | - | 18.2 | 7.6 | 5.8 | 45.3 | 16.0 | **49.6** | | Math | AIME 2024 (Pass@1) | 4.6 | 16.7 | 23.3 | 23.3 | 16.0 | 9.3 | **39.2** | | | MATH-500 (EM) | 56.3 | 74.7 | 80.0 | 73.8 | 78.3 | 74.6 | **90.2** | | | CNMO 2024 (Pass@1) | 2.8 | 10.8 | 15.9 | 6.8 | 13.1 | 10.8 | **43.2** | | Chinese | CLUEWSC (EM) | 89.9 | 90.4 | **91.4** | 84.7 | 85.4 | 87.9 | 90.9 | | | C-Eval (EM) | 78.6 | 79.5 | 86.1 | 61.5 | 76.7 | 76.0 | **86.5** | | | C-SimpleQA (Correct) | 48.5 | 54.1 | 48.4 | 50.4 | 51.3 | 59.3 | **64.8** | Note: All models are evaluated in a configuration that limits the output length to 8K. Benchmarks containing fewer than 1000 samples are tested multiple times using varying temperature settings to derive robust final results. DeepSeek-V3 stands as the best-performing open-source model, and also exhibits competitive performance against frontier closed-source models. </div> #### Open Ended Generation Evaluation <div align="center"> | Model | Arena-Hard | AlpacaEval 2.0 | |-------|------------|----------------| | DeepSeek-V2.5-0905 | 76.2 | 50.5 | | Qwen2.5-72B-Instruct | 81.2 | 49.1 | | LLaMA-3.1 405B | 69.3 | 40.5 | | GPT-4o-0513 | 80.4 | 51.1 | | Claude-Sonnet-3.5-1022 | 85.2 | 52.0 | | DeepSeek-V3 | **85.5** | **70.0** | Note: English open-ended conversation evaluations. For AlpacaEval 2.0, we use the length-controlled win rate as the metric. </div> ## 5. 
Chat Website & API Platform You can chat with DeepSeek-V3 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in) We also provide an OpenAI-compatible API at the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/) ## 6. How to Run Locally DeepSeek-V3 can be deployed locally using the following hardware and open-source community software: 1. **DeepSeek-Infer Demo**: We provide a simple and lightweight demo for FP8 and BF16 inference. 2. **SGLang**: Fully supports the DeepSeek-V3 model in both BF16 and FP8 inference modes. 3. **LMDeploy**: Enables efficient FP8 and BF16 inference for local and cloud deployment. 4. **TensorRT-LLM**: Currently supports BF16 inference and INT4/8 quantization, with FP8 support coming soon. 5. **AMD GPU**: Enables running the DeepSeek-V3 model on AMD GPUs via SGLang in both BF16 and FP8 modes. 6. **Huawei Ascend NPU**: Supports running DeepSeek-V3 on Huawei Ascend devices. Since FP8 training is natively adopted in our framework, we only provide FP8 weights. If you require BF16 weights for experimentation, you can use the provided conversion script to perform the transformation. Here is an example of converting FP8 weights to BF16: ```shell cd inference python fp8_cast_bf16.py --input-fp8-hf-path /path/to/fp8_weights --output-bf16-hf-path /path/to/bf16_weights ``` **NOTE: Hugging Face's Transformers library does not directly support this model yet.** ### 6.1 Inference with DeepSeek-Infer Demo (example only) #### Model Weights & Demo Code Preparation First, clone our DeepSeek-V3 GitHub repository: ```shell git clone https://github.com/deepseek-ai/DeepSeek-V3.git ``` Navigate to the `inference` folder and install dependencies listed in `requirements.txt`. ```shell cd DeepSeek-V3/inference pip install -r requirements.txt ``` Download the model weights from HuggingFace, and put them into the `/path/to/DeepSeek-V3` folder. 
#### Model Weights Conversion Convert HuggingFace model weights to a specific format: ```shell python convert.py --hf-ckpt-path /path/to/DeepSeek-V3 --save-path /path/to/DeepSeek-V3-Demo --n-experts 256 --model-parallel 16 ``` #### Run Then you can chat with DeepSeek-V3: ```shell torchrun --nnodes 2 --nproc-per-node 8 generate.py --node-rank $RANK --master-addr $ADDR --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --interactive --temperature 0.7 --max-new-tokens 200 ``` Or batch inference on a given file: ```shell torchrun --nnodes 2 --nproc-per-node 8 generate.py --node-rank $RANK --master-addr $ADDR --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --input-file $FILE ``` ### 6.2 Inference with SGLang (recommended) [SGLang](https://github.com/sgl-project/sglang) currently supports MLA optimizations, FP8 (W8A8), FP8 KV Cache, and Torch Compile, delivering state-of-the-art latency and throughput performance among open-source frameworks. Notably, [SGLang v0.4.1](https://github.com/sgl-project/sglang/releases/tag/v0.4.1) fully supports running DeepSeek-V3 on both **NVIDIA and AMD GPUs**, making it a highly versatile and robust solution. Here are the launch instructions from the SGLang team: https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3 ### 6.3 Inference with LMDeploy (recommended) [LMDeploy](https://github.com/InternLM/lmdeploy), a flexible and high-performance inference and serving framework tailored for large language models, now supports DeepSeek-V3. It offers both offline pipeline processing and online deployment capabilities, seamlessly integrating with PyTorch-based workflows. 
For comprehensive step-by-step instructions on running DeepSeek-V3 with LMDeploy, please refer to: https://github.com/InternLM/lmdeploy/issues/2960 ### 6.4 Inference with TRT-LLM (recommended) [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRT-LLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: https://github.com/NVIDIA/TensorRT-LLM/tree/deepseek/examples/deepseek_v3. ### 6.5 Recommended Inference Functionality with AMD GPUs In collaboration with the AMD team, we have achieved Day-One support for AMD GPUs using SGLang, with full compatibility for both FP8 and BF16 precision. For detailed guidance, please refer to the [SGLang instructions](#62-inference-with-sglang-recommended). ### 6.6 Recommended Inference Functionality with Huawei Ascend NPUs The [MindIE](https://www.hiascend.com/en/software/mindie) framework from the Huawei Ascend community has successfully adapted the BF16 version of DeepSeek-V3. For step-by-step guidance on Ascend NPUs, please follow the [instructions here](https://modelers.cn/models/MindIE/deepseekv3). ## 7. License This code repository is licensed under [the MIT License](LICENSE-CODE). The use of DeepSeek-V3 Base/Chat models is subject to [the Model License](LICENSE-MODEL). DeepSeek-V3 series (including Base and Chat) supports commercial use. ## 8. Citation ``` ``` ## 9. Contact If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
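The platform's API follows the OpenAI chat-completion convention, so any standard HTTP client can call it. The sketch below only builds the request body; the base URL, endpoint path, and model name are assumptions that should be checked against the DeepSeek Platform documentation before use.

```python
import json

# Assumed base URL and endpoint path (OpenAI convention); verify on platform.deepseek.com.
API_BASE = "https://api.deepseek.com"

def build_chat_payload(messages, model="deepseek-chat", temperature=0.7, max_tokens=200):
    """Build an OpenAI-style chat-completion request body as a plain dict."""
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = build_chat_payload([{"role": "user", "content": "Hello, DeepSeek-V3!"}])
body = json.dumps(payload)
# POST `body` to f"{API_BASE}/chat/completions" with an "Authorization: Bearer <key>" header.
```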
getad72493/wife
getad72493
"2024-12-17T03:32:51Z"
47
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2024-12-17T03:23:06Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- [24]Realistic photograph, 4k, high quality, (best quality:1.1), realistic, photorealistic, close-up, upper body, 1girl, (ultra HD quality details), long hair, hair over one eye, (solo, secretary, sexy), thick thighs, wide hips, perfect large round butt, long legs, parted lips, standing, indoors, Against a soft, true gradient white background, perfect sagging large breasts, dynamic angle, dynamic pose, Back naked, from behind, turning head, Detailed and clear face, output: url: images/ComfyUI_00031_.png - text: >- raw photo, instagram photo, artistic mood, 1girl, chinese pretty, innocent face, messy hair, W-sit, panty, off-shoulder, tired, exhausted, on floor, messy room, mouth wide open, sticky white cum in mouth and dripping on to chest output: url: images/32597237.jpeg - text: >- ((grainy amateur Photo)) of a casual porn, (chinese Female having sex with a Muscular guy, pov, ((woman is getting fucked by a man)), (((nude, nudity, naked))), hetero, penis, sex, vaginal, lying down, nude, night time, , hair in ponytail, woman having an orgasm, skin texture style, photo output: url: images/32590934.jpeg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # wife <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/getad72493/wife/tree/main) them in the Files & versions tab.
adammandic87/bbbf1eed-c358-4e0b-9e6e-6885032d94fe
adammandic87
"2025-01-23T03:42:32Z"
6
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", "base_model:adapter:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", "license:apache-2.0", "region:us" ]
null
"2025-01-23T03:16:48Z"
--- library_name: peft license: apache-2.0 base_model: OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5 tags: - axolotl - generated_from_trainer model-index: - name: bbbf1eed-c358-4e0b-9e6e-6885032d94fe results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 43a25c8426787eaa_train_data.json ds_type: json format: custom path: /workspace/input_data/43a25c8426787eaa_train_data.json type: field_input: mag_field_of_study field_instruction: section_title field_output: original_text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: adammandic87/bbbf1eed-c358-4e0b-9e6e-6885032d94fe hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/43a25c8426787eaa_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 
pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1916ac98-98c6-431a-bcd4-6099de947a49 wandb_project: Birthday-SN56-13-Gradients-On-Demand wandb_run: your_name wandb_runid: 1916ac98-98c6-431a-bcd4-6099de947a49 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # bbbf1eed-c358-4e0b-9e6e-6885032d94fe This model is a fine-tuned version of [OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5](https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8765 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 11.1609 | 0.0000 | 1 | 3.1391 | | 12.9458 | 0.0001 | 3 | 3.1294 | | 12.4847 | 0.0002 | 6 | 3.0432 | | 12.1346 | 0.0004 | 9 | 2.8765 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
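The config above trains a LoRA adapter with `lora_r: 8` and `lora_alpha: 16`. As a reminder of what those two numbers control, here is a minimal NumPy sketch of the standard LoRA update W' = W + (alpha/r)·B·A; the layer sizes are illustrative, not taken from the Pythia-12B model.

```python
import numpy as np

r, alpha = 8, 16       # lora_r and lora_alpha from the config above
d_in, d_out = 32, 32   # illustrative layer size (real attention layers are much larger)

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

# Effective weight at inference time: only A and B (a tiny fraction of W) were trained.
W_eff = W + (alpha / r) * (B @ A)
```

Because B is zero-initialized, W_eff equals W before any training step, so fine-tuning starts exactly at the base model.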
nzm97/math_question_grade_detection_v12-16-24_v1
nzm97
"2024-12-16T11:17:41Z"
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:allenai/scibert_scivocab_uncased", "base_model:finetune:allenai/scibert_scivocab_uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-16T09:45:22Z"
--- library_name: transformers base_model: allenai/scibert_scivocab_uncased tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: math_question_grade_detection_v12-16-24_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # math_question_grade_detection_v12-16-24_v1 This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6301 - Accuracy: 0.8194 - Precision: 0.8228 - Recall: 0.8194 - F1: 0.8200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 6000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 0.0683 | 50 | 2.1123 | 0.1676 | 0.1211 | 0.1676 | 0.1026 | | No log | 0.1366 | 100 | 2.0118 | 0.2613 | 0.2102 | 0.2613 | 0.1941 | | No log | 0.2049 | 150 | 1.8750 | 0.3075 | 0.3556 | 0.3075 | 0.2833 | | No log | 0.2732 | 200 | 1.7074 | 0.3689 | 0.4076 | 0.3689 | 0.3224 | | No log | 0.3415 | 250 | 1.5071 | 0.4612 | 0.4925 | 0.4612 | 0.4492 | | No log | 0.4098 | 300 | 1.4983 | 0.4120 | 0.5160 | 0.4120 | 0.3779 | | No log | 0.4781 | 350 | 1.2997 | 0.5196 | 0.5526 | 0.5196 | 
0.5059 | | No log | 0.5464 | 400 | 1.1756 | 0.5849 | 0.6063 | 0.5849 | 0.5731 | | No log | 0.6148 | 450 | 1.1104 | 0.6088 | 0.6260 | 0.6088 | 0.5997 | | 1.654 | 0.6831 | 500 | 1.0897 | 0.6103 | 0.6149 | 0.6103 | 0.6053 | | 1.654 | 0.7514 | 550 | 1.0162 | 0.6126 | 0.6221 | 0.6126 | 0.5963 | | 1.654 | 0.8197 | 600 | 1.0077 | 0.6095 | 0.6405 | 0.6095 | 0.5904 | | 1.654 | 0.8880 | 650 | 0.9427 | 0.6403 | 0.6608 | 0.6403 | 0.6277 | | 1.654 | 0.9563 | 700 | 0.9067 | 0.6464 | 0.6576 | 0.6464 | 0.6352 | | 1.654 | 1.0246 | 750 | 0.8812 | 0.6618 | 0.6745 | 0.6618 | 0.6443 | | 1.654 | 1.0929 | 800 | 0.8706 | 0.6764 | 0.6824 | 0.6764 | 0.6729 | | 1.654 | 1.1612 | 850 | 0.8650 | 0.6626 | 0.6800 | 0.6626 | 0.6584 | | 1.654 | 1.2295 | 900 | 0.8226 | 0.6879 | 0.7069 | 0.6879 | 0.6792 | | 1.654 | 1.2978 | 950 | 0.8039 | 0.7041 | 0.7102 | 0.7041 | 0.6999 | | 0.9362 | 1.3661 | 1000 | 0.7681 | 0.7110 | 0.7194 | 0.7110 | 0.7057 | | 0.9362 | 1.4344 | 1050 | 0.7844 | 0.6941 | 0.7128 | 0.6941 | 0.6916 | | 0.9362 | 1.5027 | 1100 | 0.7334 | 0.7241 | 0.7274 | 0.7241 | 0.7219 | | 0.9362 | 1.5710 | 1150 | 0.7071 | 0.7348 | 0.7371 | 0.7348 | 0.7313 | | 0.9362 | 1.6393 | 1200 | 0.6984 | 0.7487 | 0.7544 | 0.7487 | 0.7486 | | 0.9362 | 1.7077 | 1250 | 0.7166 | 0.7310 | 0.7375 | 0.7310 | 0.7317 | | 0.9362 | 1.7760 | 1300 | 0.7009 | 0.7425 | 0.7476 | 0.7425 | 0.7386 | | 0.9362 | 1.8443 | 1350 | 0.6653 | 0.7533 | 0.7584 | 0.7533 | 0.7521 | | 0.9362 | 1.9126 | 1400 | 0.6670 | 0.7533 | 0.7666 | 0.7533 | 0.7539 | | 0.9362 | 1.9809 | 1450 | 0.6622 | 0.7410 | 0.7482 | 0.7410 | 0.7414 | | 0.7205 | 2.0492 | 1500 | 0.6442 | 0.7479 | 0.7521 | 0.7479 | 0.7420 | | 0.7205 | 2.1175 | 1550 | 0.6465 | 0.7563 | 0.7637 | 0.7563 | 0.7567 | | 0.7205 | 2.1858 | 1600 | 0.6719 | 0.7456 | 0.7684 | 0.7456 | 0.7437 | | 0.7205 | 2.2541 | 1650 | 0.6189 | 0.7694 | 0.7831 | 0.7694 | 0.7721 | | 0.7205 | 2.3224 | 1700 | 0.6196 | 0.7663 | 0.7726 | 0.7663 | 0.7647 | | 0.7205 | 2.3907 | 1750 | 0.6442 | 0.7610 | 0.7612 | 0.7610 | 
0.7592 | | 0.7205 | 2.4590 | 1800 | 0.6156 | 0.7733 | 0.7765 | 0.7733 | 0.7736 | | 0.7205 | 2.5273 | 1850 | 0.6003 | 0.7756 | 0.7813 | 0.7756 | 0.7766 | | 0.7205 | 2.5956 | 1900 | 0.5974 | 0.7748 | 0.7781 | 0.7748 | 0.7756 | | 0.7205 | 2.6639 | 1950 | 0.6170 | 0.7633 | 0.7697 | 0.7633 | 0.7609 | | 0.5272 | 2.7322 | 2000 | 0.5920 | 0.7748 | 0.7774 | 0.7748 | 0.7751 | | 0.5272 | 2.8005 | 2050 | 0.6260 | 0.7594 | 0.7754 | 0.7594 | 0.7602 | | 0.5272 | 2.8689 | 2100 | 0.5824 | 0.7932 | 0.8011 | 0.7932 | 0.7929 | | 0.5272 | 2.9372 | 2150 | 0.5796 | 0.7879 | 0.7888 | 0.7879 | 0.7861 | | 0.5272 | 3.0055 | 2200 | 0.5765 | 0.7932 | 0.7959 | 0.7932 | 0.7923 | | 0.5272 | 3.0738 | 2250 | 0.5710 | 0.7940 | 0.8033 | 0.7940 | 0.7956 | | 0.5272 | 3.1421 | 2300 | 0.5902 | 0.7825 | 0.7881 | 0.7825 | 0.7822 | | 0.5272 | 3.2104 | 2350 | 0.5540 | 0.7978 | 0.8007 | 0.7978 | 0.7982 | | 0.5272 | 3.2787 | 2400 | 0.5843 | 0.7863 | 0.7963 | 0.7863 | 0.7869 | | 0.5272 | 3.3470 | 2450 | 0.5719 | 0.8002 | 0.8071 | 0.8002 | 0.8004 | | 0.4067 | 3.4153 | 2500 | 0.5610 | 0.8048 | 0.8115 | 0.8048 | 0.8063 | | 0.4067 | 3.4836 | 2550 | 0.5584 | 0.8009 | 0.8068 | 0.8009 | 0.8023 | | 0.4067 | 3.5519 | 2600 | 0.5661 | 0.7971 | 0.8023 | 0.7971 | 0.7983 | | 0.4067 | 3.6202 | 2650 | 0.5789 | 0.7978 | 0.7996 | 0.7978 | 0.7970 | | 0.4067 | 3.6885 | 2700 | 0.6037 | 0.7848 | 0.7934 | 0.7848 | 0.7856 | | 0.4067 | 3.7568 | 2750 | 0.5666 | 0.8009 | 0.8084 | 0.8009 | 0.8024 | | 0.4067 | 3.8251 | 2800 | 0.5925 | 0.7925 | 0.8055 | 0.7925 | 0.7932 | | 0.4067 | 3.8934 | 2850 | 0.5872 | 0.8055 | 0.8124 | 0.8055 | 0.8073 | | 0.4067 | 3.9617 | 2900 | 0.5637 | 0.8040 | 0.8056 | 0.8040 | 0.8033 | | 0.4067 | 4.0301 | 2950 | 0.5385 | 0.8101 | 0.8129 | 0.8101 | 0.8100 | | 0.3331 | 4.0984 | 3000 | 0.5727 | 0.7955 | 0.8020 | 0.7955 | 0.7972 | | 0.3331 | 4.1667 | 3050 | 0.5755 | 0.7963 | 0.8021 | 0.7963 | 0.7962 | | 0.3331 | 4.2350 | 3100 | 0.5668 | 0.8048 | 0.8097 | 0.8048 | 0.8058 | | 0.3331 | 4.3033 | 3150 | 0.5994 | 0.7986 | 
0.8083 | 0.7986 | 0.7999 | | 0.3331 | 4.3716 | 3200 | 0.5886 | 0.7986 | 0.8054 | 0.7986 | 0.7996 | | 0.3331 | 4.4399 | 3250 | 0.5933 | 0.7986 | 0.8091 | 0.7986 | 0.8006 | | 0.3331 | 4.5082 | 3300 | 0.6012 | 0.8002 | 0.8086 | 0.8002 | 0.8017 | | 0.3331 | 4.5765 | 3350 | 0.5947 | 0.8040 | 0.8073 | 0.8040 | 0.8031 | | 0.3331 | 4.6448 | 3400 | 0.5596 | 0.8125 | 0.8132 | 0.8125 | 0.8121 | | 0.3331 | 4.7131 | 3450 | 0.5737 | 0.8048 | 0.8082 | 0.8048 | 0.8054 | | 0.2431 | 4.7814 | 3500 | 0.5822 | 0.8101 | 0.8155 | 0.8101 | 0.8110 | | 0.2431 | 4.8497 | 3550 | 0.5520 | 0.8155 | 0.8177 | 0.8155 | 0.8157 | | 0.2431 | 4.9180 | 3600 | 0.5730 | 0.8125 | 0.8157 | 0.8125 | 0.8127 | | 0.2431 | 4.9863 | 3650 | 0.5790 | 0.8055 | 0.8147 | 0.8055 | 0.8069 | | 0.2431 | 5.0546 | 3700 | 0.5803 | 0.8109 | 0.8139 | 0.8109 | 0.8116 | | 0.2431 | 5.1230 | 3750 | 0.5903 | 0.8132 | 0.8152 | 0.8132 | 0.8130 | | 0.2431 | 5.1913 | 3800 | 0.5632 | 0.8240 | 0.8261 | 0.8240 | 0.8245 | | 0.2431 | 5.2596 | 3850 | 0.6303 | 0.8017 | 0.8077 | 0.8017 | 0.8031 | | 0.2431 | 5.3279 | 3900 | 0.5857 | 0.8148 | 0.8198 | 0.8148 | 0.8158 | | 0.2431 | 5.3962 | 3950 | 0.5705 | 0.8171 | 0.8195 | 0.8171 | 0.8176 | | 0.1805 | 5.4645 | 4000 | 0.5788 | 0.8201 | 0.8204 | 0.8201 | 0.8200 | | 0.1805 | 5.5328 | 4050 | 0.5936 | 0.8101 | 0.8149 | 0.8101 | 0.8104 | | 0.1805 | 5.6011 | 4100 | 0.5875 | 0.8163 | 0.8195 | 0.8163 | 0.8166 | | 0.1805 | 5.6694 | 4150 | 0.6021 | 0.8171 | 0.8224 | 0.8171 | 0.8182 | | 0.1805 | 5.7377 | 4200 | 0.5693 | 0.8186 | 0.8216 | 0.8186 | 0.8192 | | 0.1805 | 5.8060 | 4250 | 0.5950 | 0.8155 | 0.8177 | 0.8155 | 0.8157 | | 0.1805 | 5.8743 | 4300 | 0.6180 | 0.8086 | 0.8143 | 0.8086 | 0.8091 | | 0.1805 | 5.9426 | 4350 | 0.5957 | 0.8155 | 0.8197 | 0.8155 | 0.8162 | | 0.1805 | 6.0109 | 4400 | 0.6080 | 0.8140 | 0.8179 | 0.8140 | 0.8142 | | 0.1805 | 6.0792 | 4450 | 0.5948 | 0.8178 | 0.8197 | 0.8178 | 0.8183 | | 0.1547 | 6.1475 | 4500 | 0.5838 | 0.8217 | 0.8228 | 0.8217 | 0.8219 | | 0.1547 | 6.2158 | 4550 | 
0.6166 | 0.8148 | 0.8178 | 0.8148 | 0.8148 |
| 0.1547 | 6.2842 | 4600 | 0.6036 | 0.8224 | 0.8264 | 0.8224 | 0.8230 |
| 0.1547 | 6.3525 | 4650 | 0.6064 | 0.8232 | 0.8265 | 0.8232 | 0.8229 |
| 0.1547 | 6.4208 | 4700 | 0.6158 | 0.8171 | 0.8206 | 0.8171 | 0.8177 |
| 0.1547 | 6.4891 | 4750 | 0.6404 | 0.8140 | 0.8185 | 0.8140 | 0.8142 |
| 0.1547 | 6.5574 | 4800 | 0.6165 | 0.8171 | 0.8211 | 0.8171 | 0.8179 |
| 0.1547 | 6.6257 | 4850 | 0.6126 | 0.8186 | 0.8237 | 0.8186 | 0.8193 |
| 0.1547 | 6.6940 | 4900 | 0.5903 | 0.8240 | 0.8251 | 0.8240 | 0.8242 |
| 0.1547 | 6.7623 | 4950 | 0.6012 | 0.8155 | 0.8203 | 0.8155 | 0.8165 |
| 0.1099 | 6.8306 | 5000 | 0.6131 | 0.8186 | 0.8208 | 0.8186 | 0.8191 |
| 0.1099 | 6.8989 | 5050 | 0.5935 | 0.8248 | 0.8262 | 0.8248 | 0.8252 |
| 0.1099 | 6.9672 | 5100 | 0.6264 | 0.8186 | 0.8216 | 0.8186 | 0.8189 |
| 0.1099 | 7.0355 | 5150 | 0.6274 | 0.8186 | 0.8225 | 0.8186 | 0.8192 |
| 0.1099 | 7.1038 | 5200 | 0.6375 | 0.8217 | 0.8233 | 0.8217 | 0.8218 |
| 0.1099 | 7.1721 | 5250 | 0.6362 | 0.8148 | 0.8185 | 0.8148 | 0.8154 |
| 0.1099 | 7.2404 | 5300 | 0.6180 | 0.8194 | 0.8220 | 0.8194 | 0.8199 |
| 0.1099 | 7.3087 | 5350 | 0.6279 | 0.8201 | 0.8252 | 0.8201 | 0.8211 |
| 0.1099 | 7.3770 | 5400 | 0.6052 | 0.8217 | 0.8234 | 0.8217 | 0.8219 |
| 0.1099 | 7.4454 | 5450 | 0.6075 | 0.8217 | 0.8228 | 0.8217 | 0.8219 |
| 0.0859 | 7.5137 | 5500 | 0.6354 | 0.8178 | 0.8220 | 0.8178 | 0.8183 |
| 0.0859 | 7.5820 | 5550 | 0.6367 | 0.8163 | 0.8205 | 0.8163 | 0.8170 |
| 0.0859 | 7.6503 | 5600 | 0.6088 | 0.8240 | 0.8254 | 0.8240 | 0.8242 |
| 0.0859 | 7.7186 | 5650 | 0.6100 | 0.8240 | 0.8269 | 0.8240 | 0.8245 |
| 0.0859 | 7.7869 | 5700 | 0.6208 | 0.8232 | 0.8258 | 0.8232 | 0.8239 |
| 0.0859 | 7.8552 | 5750 | 0.6302 | 0.8278 | 0.8301 | 0.8278 | 0.8283 |
| 0.0859 | 7.9235 | 5800 | 0.6295 | 0.8240 | 0.8268 | 0.8240 | 0.8246 |
| 0.0859 | 7.9918 | 5850 | 0.6438 | 0.8240 | 0.8284 | 0.8240 | 0.8247 |
| 0.0859 | 8.0601 | 5900 | 0.6334 | 0.8217 | 0.8257 | 0.8217 | 0.8224 |
| 0.0859 | 8.1284 | 5950 | 0.6313 | 0.8201 | 0.8237 | 0.8201 | 0.8208 |
| 0.0733 | 8.1967 | 6000 | 0.6301 | 0.8194 | 0.8228 | 0.8194 | 0.8200 |

### Framework versions - Transformers 4.46.3 - Pytorch 2.4.0 - Datasets 3.1.0 - Tokenizers 0.20.3
davidschulte/ESM_ckandemir__bitcoin_tweets_sentiment_kaggle_default
davidschulte
"2025-03-26T15:20:34Z"
22
0
null
[ "safetensors", "embedding_space_map", "BaseLM:bert-base-multilingual-uncased", "dataset:ckandemir/bitcoin_tweets_sentiment_kaggle", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "license:apache-2.0", "region:us" ]
null
"2024-12-08T14:38:37Z"
--- base_model: bert-base-multilingual-uncased datasets: - ckandemir/bitcoin_tweets_sentiment_kaggle license: apache-2.0 tags: - embedding_space_map - BaseLM:bert-base-multilingual-uncased --- # ESM ckandemir/bitcoin_tweets_sentiment_kaggle <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> ESM - **Developed by:** David Schulte - **Model type:** ESM - **Base Model:** bert-base-multilingual-uncased - **Intermediate Task:** ckandemir/bitcoin_tweets_sentiment_kaggle - **ESM architecture:** linear - **ESM embedding dimension:** 768 - **Language(s) (NLP):** [More Information Needed] - **License:** Apache-2.0 license - **ESM version:** 0.1.0 ## Training Details ### Intermediate Task - **Task ID:** ckandemir/bitcoin_tweets_sentiment_kaggle - **Subset [optional]:** default - **Text Column:** text - **Label Column:** Sentiment - **Dataset Split:** train - **Sample size [optional]:** 10000 - **Sample seed [optional]:** 42 ### Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Language Model Training Hyperparameters [optional] - **Epochs:** 3 - **Batch size:** 32 - **Learning rate:** 2e-05 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### ESM Training Hyperparameters [optional] - **Epochs:** 10 - **Batch size:** 32 - **Learning rate:** 0.001 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### Additional training details [optional] ## Model evaluation ### Evaluation of fine-tuned language model [optional] ### Evaluation of ESM [optional] MSE: ### Additional evaluation details [optional] ## What are Embedding Space Maps used for? Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. 
There are two reasons to use ESM-LogME: ### You don't have enough training data for your problem If you don't have enough training data for your problem, just use ESM-LogME to find more. You can supplement model training by including publicly available datasets in the training process. 1. Fine-tune a language model on a suitable intermediate dataset. 2. Fine-tune the resulting model on your target dataset. This workflow is called intermediate task transfer learning and it can significantly improve the target performance. But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task. ### You want to find similar datasets to your target dataset ESM-LogME can also be used like a search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity. ## How can I use ESM-LogME / ESMs? [![PyPI version](https://img.shields.io/pypi/v/hf-dataset-selector.svg)](https://pypi.org/project/hf-dataset-selector) We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps. **hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub. 
```python
from hfselect import Dataset, compute_task_ranking

# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
    name="stanfordnlp/imdb",
    split="train",
    text_col="text",
    label_col="label",
    is_regression=False,
    num_examples=1000,
    seed=42
)

# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
    dataset=dataset,
    model_name="bert-base-multilingual-uncased"
)

# Display top 5 recommendations
print(task_ranking[:5])
```

```python
1. davanstrien/test_imdb_embedd2  Score: -0.618529
2. davanstrien/test_imdb_embedd   Score: -0.618644
3. davanstrien/test1              Score: -0.619334
4. stanfordnlp/imdb               Score: -0.619454
5. stanfordnlp/sst                Score: -0.62995
```

| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |

For more information on how to use ESMs please have a look at the 
[official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs. ## How do Embedding Space Maps work? <!-- This section describes the evaluation protocols and provides the results. --> Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text. ESMs can be used for intermediate task selection with the ESM-LogME workflow. ## How can I use Embedding Space Maps for Intermediate Task Selection? ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> If you are using Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/). **BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
    title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
    author = "Schulte, David and Hamborg, Felix and Akbik, Alan",
    editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.529/",
    doi = "10.18653/v1/2024.emnlp-main.529",
    pages = "9431--9442",
    abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? 
Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)." } ``` **APA:** ``` Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442). ``` ## Additional Information
bitsanlp/simcse_finetuned_500k
bitsanlp
"2022-12-05T02:13:12Z"
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-12-05T01:46:52Z"
--- tags: - generated_from_trainer model-index: - name: simcse_finetuned_500k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # simcse_finetuned_500k This model is a fine-tuned version of [bitsanlp/simcse_retrain_edos_500k](https://huggingface.co/bitsanlp/simcse_retrain_edos_500k) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 28 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
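The card above lists `lr_scheduler_type: linear` with a peak learning rate of 2e-05. As a rough illustration of what that scheduler does (the total step count and zero-warmup default below are illustrative assumptions, not values from this card), a linear schedule can be sketched in a few lines:

```python
def linear_lr(step, total_steps, base_lr=2e-05, warmup_steps=0):
    """Linear schedule: ramp up over `warmup_steps`, then decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))

# With no warmup, the rate starts at the peak and falls linearly to zero:
# 2e-05 at step 0, ~1e-05 halfway through, 0.0 at the final step.
schedule = [linear_lr(s, total_steps=1000) for s in range(0, 1001, 250)]
print(schedule)
```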
Abirate/gemma-1.1-7b-it-finetuned-on-kaggle-writeups-Q4_K_M-GGUF
Abirate
"2024-04-13T14:46:28Z"
6
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-04-13T14:45:57Z"
--- library_name: transformers tags: - llama-cpp - gguf-my-repo --- # Abirate/gemma-1.1-7b-it-finetuned-on-kaggle-writeups-Q4_K_M-GGUF This model was converted to GGUF format from [`Abirate/gemma-1.1-7b-it-finetuned-on-kaggle-writeups`](https://huggingface.co/Abirate/gemma-1.1-7b-it-finetuned-on-kaggle-writeups) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Abirate/gemma-1.1-7b-it-finetuned-on-kaggle-writeups) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI. CLI:
```bash
llama-cli --hf-repo Abirate/gemma-1.1-7b-it-finetuned-on-kaggle-writeups-Q4_K_M-GGUF --model gemma-1.1-7b-it-finetuned-on-kaggle-writeups.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Abirate/gemma-1.1-7b-it-finetuned-on-kaggle-writeups-Q4_K_M-GGUF --model gemma-1.1-7b-it-finetuned-on-kaggle-writeups.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-1.1-7b-it-finetuned-on-kaggle-writeups.Q4_K_M.gguf -n 128
```
samoline/f5cc865b-a7c9-4005-9366-09994782f648
samoline
"2025-03-22T12:42:47Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:sethuiyer/Medichat-Llama3-8B", "base_model:adapter:sethuiyer/Medichat-Llama3-8B", "license:other", "region:us" ]
null
"2025-03-22T12:27:06Z"
--- library_name: peft license: other base_model: sethuiyer/Medichat-Llama3-8B tags: - axolotl - generated_from_trainer model-index: - name: f5cc865b-a7c9-4005-9366-09994782f648 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - c9601e2820367a8f_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/c9601e2820367a8f_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/f5cc865b-a7c9-4005-9366-09994782f648
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/c9601e2820367a8f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 6ff84017-f5f6-493b-8fd8-1985d6c9a0ff
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 6ff84017-f5f6-493b-8fd8-1985d6c9a0ff
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br> # f5cc865b-a7c9-4005-9366-09994782f648 This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9754 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 2 ### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.277 | 0.0000 | 1 | 1.9761 |
| 1.0946 | 0.0000 | 2 | 1.9754 |

### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
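The config above uses a LoRA adapter with `lora_r: 4` and `lora_alpha: 4`, so each targeted weight matrix is effectively replaced by `W + (alpha / r) · B · A`, where `A` and `B` are the low-rank adapter factors. A small pure-Python sketch of that composition (the tiny matrix sizes below are illustrative, not the model's actual dimensions):

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def lora_forward(w, a, b, alpha, r):
    """Effective weight under a LoRA adapter: W + (alpha / r) * (B @ A)."""
    scale = alpha / r
    delta = matmul(b, a)  # (out x r) @ (r x in) -> (out x in)
    return [[wij + scale * dij for wij, dij in zip(wr, dr)] for wr, dr in zip(w, delta)]

# Tiny example: 2x2 identity weight, rank-1 adapter, alpha/r = 1.0
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]       # r x in
B = [[0.5], [0.25]]    # out x r
print(lora_forward(W, A, B, alpha=1, r=1))  # [[1.5, 1.0], [0.25, 1.5]]
```

With `lora_r` equal to `lora_alpha`, the scaling factor is 1.0, so the adapter's update is applied at full strength.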
mrpojam/Llama3.2-1B-De2Fr-Translation
mrpojam
"2025-03-13T15:07:40Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-13T14:44:09Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
scjones/distilbert-base-uncased-finetuned-emotion
scjones
"2022-06-21T00:16:41Z"
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-06-20T23:43:04Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9315 - name: F1 type: f1 value: 0.9317528216385311 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1630 - Accuracy: 0.9315 - F1: 0.9318 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2115 | 1.0 | 250 | 0.1696 | 0.93 | 0.9295 | | 0.1376 | 2.0 | 500 | 0.1630 | 0.9315 | 0.9318 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
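The step counts in the table above follow from the batch size: 250 optimizer steps per epoch at a train batch size of 64 implies roughly 16,000 training examples, which matches the emotion dataset's train split. A quick sketch of that arithmetic:

```python
import math

def steps_per_epoch(num_examples, batch_size):
    """Optimizer steps in one epoch (the last batch may be partial)."""
    return math.ceil(num_examples / batch_size)

# 16,000 training examples at batch size 64 -> 250 steps per epoch,
# so epoch 2 ends at step 500, as the results table shows.
print(steps_per_epoch(16_000, 64))  # 250
```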
ctranslate2-4you/Mistral-Nemo-Instruct-2407-ct2-int8
ctranslate2-4you
"2024-10-22T16:55:11Z"
10
0
null
[ "safetensors", "base_model:mistralai/Mistral-Nemo-Instruct-2407", "base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407", "region:us" ]
null
"2024-10-22T13:56:05Z"
--- base_model: - mistralai/Mistral-Nemo-Instruct-2407 --- Ctranslate2 conversion of the model located at [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) Conversion script with graphical user interface can be downloaded [HERE](https://github.com/BBC-Esq/Ctranslate2-Converter) ## Tested with Ctranslate 4.4.0 and Torch 2.2.2 - NOTE: Ctranslate2 will soon release version 4.5.0, which will require greater than Torch 2.2.2. ## Example Usage:
```python
import os
import sys
import ctranslate2
import gc
import torch
from transformers import AutoTokenizer

system_message = "You are a helpful person who answers questions."
user_message = "Hello, how are you today? I'd like you to write me a funny poem that is a parody of Milton's Paradise Lost if you are familiar with that famous epic poem?"

model_dir = r"D:\Scripts\bench_chat\models\mistralai--Mistral-Nemo-Instruct-2407-ct2-int8"

def build_prompt_mistral_nemo():
    prompt = f"""<s>[INST]{system_message}

{user_message}[/INST]"""
    return prompt

def main():
    model_name = os.path.basename(model_dir)
    print(f"\033[32mLoading the model: {model_name}...\033[0m")

    intra_threads = max(os.cpu_count() - 4, 4)

    generator = ctranslate2.Generator(
        model_dir,
        device="cuda",
        compute_type="int8",
        intra_threads=intra_threads
    )
    tokenizer = AutoTokenizer.from_pretrained(model_dir, add_prefix_space=None)

    prompt = build_prompt_mistral_nemo()
    tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

    results_batch = generator.generate_batch(
        [tokens],
        include_prompt_in_result=False,
        max_batch_size=4096,
        batch_type="tokens",
        beam_size=1,
        num_hypotheses=1,
        max_length=512,
        sampling_temperature=0.0,
    )

    output = tokenizer.decode(results_batch[0].sequences_ids[0])
    print("\nGenerated response:")
    print(output)

    del generator
    del tokenizer
    torch.cuda.empty_cache()
    gc.collect()

if __name__ == "__main__":
    main()
```
nksaisrinivas/llama3_finetuned_lora
nksaisrinivas
"2025-03-03T06:50:31Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
"2025-03-03T06:50:27Z"
--- base_model: meta-llama/Llama-3.2-1B-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
fahuamancaja/whisper-small-es
fahuamancaja
"2024-03-08T12:52:19Z"
62
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "es", "dataset:mozilla-foundation/common_voice_13_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-03-06T12:27:53Z"
--- language: - es license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: Whisper Small Es - Spanish Sampler results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 13 type: mozilla-foundation/common_voice_13_0 config: es split: test args: es metrics: - name: Wer type: wer value: 11.316615023383822 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Es - Spanish Sampler This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 0.2701 - Wer Ortho: 16.7756 - Wer: 11.3166 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.241 | 0.03 | 500 | 0.2701 | 16.7756 | 11.3166 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
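Both reported metrics above are word error rates: the word-level edit distance between hypothesis and reference, divided by the reference length ("Wer Ortho" before text normalization, "Wer" after). A minimal sketch of the computation (in practice libraries such as `jiwer` or `evaluate` are used instead):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("hola como estas", "hola como esta"))  # one substitution in three words
```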
stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
"2023-10-26T10:56:05Z"
3
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-64k-td-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-64k-td-cased", "license:mit", "region:us" ]
token-classification
"2023-10-23T19:29:35Z"
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-64k-td-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT 64k as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[4, 8]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average | |-------------------|--------------|-----------------|--------------|--------------|--------------|-----------------| | `bs4-e10-lr3e-05` | [0.8586][1] | [0.8586][2] | [0.8688][3] | [0.8539][4] | [0.8529][5] | 0.8586 ± 0.0063 | | `bs8-e10-lr5e-05` | [0.8539][6] | [**0.8653**][7] | [0.8518][8] | [0.8536][9] | [0.8374][10] | 0.8524 ± 0.0099 | | `bs8-e10-lr3e-05` | [0.8486][11] | [0.8486][12] | [0.8522][13] | [0.8512][14] | [0.8414][15] | 0.8484 ± 0.0042 | | `bs4-e10-lr5e-05` | [0.8529][16] | [0.8425][17] | [0.8501][18] | [0.8412][19] | [0.8501][20] | 0.8474 ± 0.0052 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
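The averages in the results table can be reproduced directly from the per-seed scores; a short check for the best configuration, assuming the ± figure is the sample standard deviation:

```python
from statistics import mean, stdev

# Dev-set micro F1 for the five seeds of bs4-e10-lr3e-05 (from the table above)
scores = [0.8586, 0.8586, 0.8688, 0.8539, 0.8529]
print(f"{mean(scores):.4f} ± {stdev(scores):.4f}")  # 0.8586 ± 0.0063
```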
HPLT/hplt_bert_base_2_0_slv-Latn
HPLT
"2025-03-19T12:52:19Z"
21
0
null
[ "pytorch", "BERT", "HPLT", "encoder", "custom_code", "sl", "dataset:HPLT/HPLT2.0_cleaned", "arxiv:2503.10267", "license:apache-2.0", "region:us" ]
null
"2025-02-22T22:29:21Z"
--- language: - sl inference: false tags: - BERT - HPLT - encoder license: apache-2.0 datasets: - HPLT/HPLT2.0_cleaned --- # HPLT v2.0 BERT for Slovenian <img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%> This is one of the encoder-only monolingual language models trained as a second release by the [HPLT project](https://hplt-project.org/). It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/). We present monolingual LTG-BERT models for more than 50 languages out of the 191 total in the [HPLT v2.0 dataset](https://hplt-project.org/datasets/v2.0). All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup: - hidden size: 768 - attention heads: 12 - layers: 12 - vocabulary size: 32768 Every model uses its own tokenizer trained on language-specific HPLT data. [The training code](https://github.com/hplt-project/HPLT-WP4). [The training statistics of all runs](https://api.wandb.ai/links/ltg/kduj7mjn) ## Example usage (tested with `transformers==4.46.1` and `tokenizers==0.20.1`) This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python import torch from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_2_0_slv-Latn") model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_slv-Latn", trust_remote_code=True) mask_id = tokenizer.convert_tokens_to_ids("[MASK]") input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt") output_p = model(**input_text) output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids) # should output: '[CLS] It's a beautiful place.[SEP]' print(tokenizer.decode(output_text[0].tolist(), clean_up_tokenization_spaces=True)) ``` The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`. ## Intermediate checkpoints We are releasing 10 intermediate checkpoints for each model, one every 3125 training steps, in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
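Given that spacing, the available revision names can be enumerated directly (a sketch; the implied final step count of 31250 is an assumption derived from 10 checkpoints spaced 3125 steps apart):

```python
# 10 intermediate checkpoints, one every 3125 training steps
branches = [f"step{3125 * i}" for i in range(1, 11)]
print(branches[0], branches[-1])  # step3125 step31250
```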
You can load a specific model revision with `transformers` using the argument `revision`: ```python model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_slv-Latn", revision="step21875", trust_remote_code=True) ``` You can access all the revisions for the models with the following code: ```python from huggingface_hub import list_repo_refs out = list_repo_refs("HPLT/hplt_bert_base_2_0_slv-Latn") print([b.name for b in out.branches]) ``` ## Cite us ```bibtex @inproceedings{samuel-etal-2023-trained, title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus", author = "Samuel, David and Kutuzov, Andrey and {\O}vrelid, Lilja and Velldal, Erik", editor = "Vlachos, Andreas and Augenstein, Isabelle", booktitle = "Findings of the Association for Computational Linguistics: EACL 2023", month = may, year = "2023", address = "Dubrovnik, Croatia", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-eacl.146", doi = "10.18653/v1/2023.findings-eacl.146", pages = "1954--1974" } ``` ```bibtex @misc{burchell2025expandedmassivemultilingualdataset, title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies}, author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu}, year={2025}, eprint={2503.10267}, archivePrefix={arXiv}, 
primaryClass={cs.CL}, url={https://arxiv.org/abs/2503.10267}, } ```
Arkajyoti/Arkajyoti-Mistral-7B-v0.1-nli-random-standardized-many-random-names-easy
Arkajyoti
"2024-07-29T21:02:55Z"
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-29T19:29:02Z"
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zelk12/MT2-MMMA-gemma-2-9B
zelk12
"2024-10-14T16:10:48Z"
17
1
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:zelk12/MT2-MA-gemma-2-RPMHv0.1Rv0.3-9B", "base_model:merge:zelk12/MT2-MA-gemma-2-RPMHv0.1Rv0.3-9B", "base_model:zelk12/MT2-MM-gemma-2-Rv0.4RAt0.25v0.1-9B", "base_model:merge:zelk12/MT2-MM-gemma-2-Rv0.4RAt0.25v0.1-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-14T16:04:30Z"
--- base_model: - zelk12/MT2-MM-gemma-2-Rv0.4RAt0.25v0.1-9B - zelk12/MT2-MA-gemma-2-RPMHv0.1Rv0.3-9B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [zelk12/MT2-MM-gemma-2-Rv0.4RAt0.25v0.1-9B](https://huggingface.co/zelk12/MT2-MM-gemma-2-Rv0.4RAt0.25v0.1-9B) * [zelk12/MT2-MA-gemma-2-RPMHv0.1Rv0.3-9B](https://huggingface.co/zelk12/MT2-MA-gemma-2-RPMHv0.1Rv0.3-9B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: zelk12/MT2-MM-gemma-2-Rv0.4RAt0.25v0.1-9B - model: zelk12/MT2-MA-gemma-2-RPMHv0.1Rv0.3-9B merge_method: slerp base_model: zelk12/MT2-MM-gemma-2-Rv0.4RAt0.25v0.1-9B dtype: bfloat16 parameters: t: 0.5 ```
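With `merge_method: slerp` and `t: 0.5`, each pair of corresponding weight tensors is interpolated along the arc between them rather than along a straight line. A per-vector sketch of the math (mergekit's real implementation works tensor-wise and handles dtypes and degenerate cases; the plain-list version here is purely illustrative):

```python
import math

def slerp(t, a, b):
    """Spherical linear interpolation between two weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    # clamp to guard against floating-point drift outside [-1, 1]
    omega = math.acos(max(-1.0, min(1.0, dot / (norm_a * norm_b))))
    if omega < 1e-8:  # (nearly) parallel vectors: plain lerp is fine
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    c_a = math.sin((1 - t) * omega) / math.sin(omega)
    c_b = math.sin(t * omega) / math.sin(omega)
    return [c_a * x + c_b * y for x, y in zip(a, b)]

# t = 0.5 lands halfway along the arc between the two (unit) weight vectors
print([round(v, 4) for v in slerp(0.5, [1.0, 0.0], [0.0, 1.0])])  # [0.7071, 0.7071]
```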
MahmoudRox/Paligemma_VQAMED2019
MahmoudRox
"2024-06-08T17:09:00Z"
19
5
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:google/paligemma-3b-pt-224", "base_model:adapter:google/paligemma-3b-pt-224", "license:gemma", "region:us" ]
null
"2024-06-01T16:55:29Z"
--- license: gemma library_name: peft tags: - generated_from_trainer base_model: google/paligemma-3b-pt-224 model-index: - name: paligemma_VQAMed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paligemma_VQAMed2019 This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the [VQAMed 2019](https://zenodo.org/records/10499039) dataset. Fine-tuning code is [here](https://colab.research.google.com/github/mahmoudBidry/Finetune-Google-Paligemma-3B-VQA/blob/main/Fine_tune_PaliGemma_on_VQAMed2019_dataset.ipynb). ## How to use To use the model, follow the [colab notebook](https://colab.research.google.com/drive/1SfrNNHE32k9kBWdR6U0DQr4LI_AVIAb1?usp=sharing). Below is a quick example. To ensure you have the latest version of Transformers, install it using the following command: ```bash !pip install -qU git+https://github.com/huggingface/transformers.git ``` ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration import torch from PIL import Image import requests processor = AutoProcessor.from_pretrained("google/paligemma-3b-pt-224") model = PaliGemmaForConditionalGeneration.from_pretrained("MahmoudRox/Paligemma_VQAMED2019") prompt = "Which part of the body is in the picture?" 
#your question image_file = "https://prod-images-static.radiopaedia.org/images/9289883/1c20962e46c92ee83a3f551adb24fa_big_gallery.jpg" #your image raw_image = Image.open(requests.get(image_file, stream=True).raw) def generate_response(prompt, image): inputs = processor(images=image, text=prompt, return_tensors="pt") # Use the attention mask exactly as returned by the processor (1 marks real tokens); # inverting it would mask out the actual input attention_mask = inputs['attention_mask'] # Generate a response outputs = model.generate( input_ids=inputs['input_ids'], attention_mask=attention_mask, pixel_values=inputs['pixel_values'], max_new_tokens=1, no_repeat_ngram_size=2 ) # Decode the response and strip the echoed prompt prefix decoded_response = processor.decode(outputs[0], skip_special_tokens=True)[len(prompt):] return decoded_response print(generate_response(prompt, raw_image)) #spine ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 2 ### Framework versions - PEFT 0.11.1 - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill
opensearch-project
"2025-02-24T05:01:33Z"
1,959,514
5
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "fill-mask", "learned sparse", "opensearch", "retrieval", "passage-retrieval", "document-expansion", "bag-of-words", "en", "arxiv:2411.04403", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-07-17T07:51:35Z"
--- language: en license: apache-2.0 tags: - learned sparse - opensearch - transformers - retrieval - passage-retrieval - document-expansion - bag-of-words --- # opensearch-neural-sparse-encoding-doc-v2-distill ## Select the model The model should be selected considering search relevance, model inference and retrieval efficiency (FLOPS). We benchmark the models' **zero-shot performance** on a subset of the BEIR benchmark: TrecCovid, NFCorpus, NQ, HotpotQA, FiQA, ArguAna, Touche, DBPedia, SCIDOCS, FEVER, Climate FEVER, SciFact and Quora. Overall, the v2 series of models have better search relevance, efficiency and inference speed than the v1 series. The specific advantages and disadvantages may vary across different datasets. | Model | Inference-free for Retrieval | Model Parameters | AVG NDCG@10 | AVG FLOPS | |-------|------------------------------|------------------|-------------|-----------| | [opensearch-neural-sparse-encoding-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v1) | | 133M | 0.524 | 11.4 | | [opensearch-neural-sparse-encoding-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v2-distill) | | 67M | 0.528 | 8.3 | | [opensearch-neural-sparse-encoding-doc-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v1) | ✔️ | 133M | 0.490 | 2.3 | | [opensearch-neural-sparse-encoding-doc-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill) | ✔️ | 67M | 0.504 | 1.8 | | [opensearch-neural-sparse-encoding-doc-v2-mini](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-mini) | ✔️ | 23M | 0.497 | 1.7 | ## Overview - **Paper**: [Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers](https://arxiv.org/abs/2411.04403) - **Fine-tuning sample**: [opensearch-sparse-model-tuning-sample](https://github.com/zhichao-aws/opensearch-sparse-model-tuning-sample) This is a learned sparse retrieval
model. It encodes documents into 30522-dimensional **sparse vectors**. For queries, it just uses a tokenizer and a weight look-up table to generate sparse vectors. A non-zero dimension index corresponds to a token in the vocabulary, and the weight indicates the importance of that token. The similarity score is the inner product of the query/document sparse vectors. The training data includes MS MARCO, eli5_question_answer, squad_pairs, WikiAnswers, yahoo_answers_title_question, gooaq_pairs, stackexchange_duplicate_questions_body_body, wikihow, S2ORC_title_abstract, stackexchange_duplicate_questions_title-body_title-body, yahoo_answers_question_answer, searchQA_top5_snippets, stackexchange_duplicate_questions_title_title and yahoo_answers_title_answer. The OpenSearch neural sparse feature supports learned sparse retrieval with a Lucene inverted index. Link: https://opensearch.org/docs/latest/query-dsl/specialized/neural-sparse/. Indexing and search can be performed with the OpenSearch high-level API. ## Usage (HuggingFace) This model is supposed to run inside an OpenSearch cluster, but you can also use it outside the cluster with the Hugging Face models API.
```python import json import itertools import torch from transformers import AutoModelForMaskedLM, AutoTokenizer # get sparse vector from dense vectors with shape batch_size * seq_len * vocab_size def get_sparse_vector(feature, output): values, _ = torch.max(output*feature["attention_mask"].unsqueeze(-1), dim=1) values = torch.log(1 + torch.relu(values)) values[:,special_token_ids] = 0 return values # transform the sparse vector to a dict of (token, weight) def transform_sparse_vector_to_dict(sparse_vector): sample_indices,token_indices=torch.nonzero(sparse_vector,as_tuple=True) non_zero_values = sparse_vector[(sample_indices,token_indices)].tolist() number_of_tokens_for_each_sample = torch.bincount(sample_indices).cpu().tolist() tokens = [transform_sparse_vector_to_dict.id_to_token[_id] for _id in token_indices.tolist()] output = [] end_idxs = list(itertools.accumulate([0]+number_of_tokens_for_each_sample)) for i in range(len(end_idxs)-1): token_strings = tokens[end_idxs[i]:end_idxs[i+1]] weights = non_zero_values[end_idxs[i]:end_idxs[i+1]] output.append(dict(zip(token_strings, weights))) return output # download the idf file from model hub. 
idf is used to give weights for query tokens def get_tokenizer_idf(tokenizer): from huggingface_hub import hf_hub_download local_cached_path = hf_hub_download(repo_id="opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill", filename="idf.json") with open(local_cached_path) as f: idf = json.load(f) idf_vector = [0]*tokenizer.vocab_size for token,weight in idf.items(): _id = tokenizer._convert_token_to_id_with_added_voc(token) idf_vector[_id]=weight return torch.tensor(idf_vector) # load the model model = AutoModelForMaskedLM.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill") tokenizer = AutoTokenizer.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill") idf = get_tokenizer_idf(tokenizer) # set the special tokens and id_to_token transform for post-process special_token_ids = [tokenizer.vocab[token] for token in tokenizer.special_tokens_map.values()] get_sparse_vector.special_token_ids = special_token_ids id_to_token = ["" for i in range(tokenizer.vocab_size)] for token, _id in tokenizer.vocab.items(): id_to_token[_id] = token transform_sparse_vector_to_dict.id_to_token = id_to_token query = "What's the weather in ny now?" document = "Currently New York is rainy." 
# encode the query feature_query = tokenizer([query], padding=True, truncation=True, return_tensors='pt', return_token_type_ids=False) input_ids = feature_query["input_ids"] batch_size = input_ids.shape[0] query_vector = torch.zeros(batch_size, tokenizer.vocab_size) query_vector[torch.arange(batch_size).unsqueeze(-1), input_ids] = 1 query_sparse_vector = query_vector*idf # encode the document feature_document = tokenizer([document], padding=True, truncation=True, return_tensors='pt', return_token_type_ids=False) output = model(**feature_document)[0] document_sparse_vector = get_sparse_vector(feature_document, output) # get similarity score sim_score = torch.matmul(query_sparse_vector[0],document_sparse_vector[0]) print(sim_score) # tensor(17.5307, grad_fn=<DotBackward0>) query_token_weight = transform_sparse_vector_to_dict(query_sparse_vector)[0] document_query_token_weight = transform_sparse_vector_to_dict(document_sparse_vector)[0] for token in sorted(query_token_weight, key=lambda x:query_token_weight[x], reverse=True): if token in document_query_token_weight: print("score in query: %.4f, score in document: %.4f, token: %s"%(query_token_weight[token],document_query_token_weight[token],token)) # result: # score in query: 5.7729, score in document: 1.4109, token: ny # score in query: 4.5684, score in document: 1.4673, token: weather # score in query: 3.5895, score in document: 0.7473, token: now ``` The above code sample shows an example of neural sparse search. Although there is no token overlap between the original query and the document, this model still produces a good match.
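The (token, weight) dictionaries produced by `transform_sparse_vector_to_dict` can also be scored directly: the inner product over the tokens shared by both dictionaries equals the sparse-vector dot product. A small sketch using the weights printed above (the extra `rainy` document token and its weight are hypothetical, added only to show that non-shared tokens are ignored):

```python
def sparse_dot(query_weights: dict, doc_weights: dict) -> float:
    # only tokens present in both sparse representations contribute
    return sum(w * doc_weights[t] for t, w in query_weights.items() if t in doc_weights)

q = {"ny": 5.7729, "weather": 4.5684, "now": 3.5895}
d = {"ny": 1.4109, "weather": 1.4673, "now": 0.7473, "rainy": 1.2}
print(round(sparse_dot(q, d), 2))  # 17.53, matching the full-pipeline score above up to rounding
```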
## Detailed Search Relevance <div style="overflow-x: auto;"> | Model | Average | Trec Covid | NFCorpus | NQ | HotpotQA | FiQA | ArguAna | Touche | DBPedia | SCIDOCS | FEVER | Climate FEVER | SciFact | Quora | |-------|---------|------------|----------|----|----------|------|---------|--------|---------|---------|-------|---------------|---------|-------| | [opensearch-neural-sparse-encoding-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v1) | 0.524 | 0.771 | 0.360 | 0.553 | 0.697 | 0.376 | 0.508 | 0.278 | 0.447 | 0.164 | 0.821 | 0.263 | 0.723 | 0.856 | | [opensearch-neural-sparse-encoding-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v2-distill) | 0.528 | 0.775 | 0.347 | 0.561 | 0.685 | 0.374 | 0.551 | 0.278 | 0.435 | 0.173 | 0.849 | 0.249 | 0.722 | 0.863 | | [opensearch-neural-sparse-encoding-doc-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v1) | 0.490 | 0.707 | 0.352 | 0.521 | 0.677 | 0.344 | 0.461 | 0.294 | 0.412 | 0.154 | 0.743 | 0.202 | 0.716 | 0.788 | | [opensearch-neural-sparse-encoding-doc-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill) | 0.504 | 0.690 | 0.343 | 0.528 | 0.675 | 0.357 | 0.496 | 0.287 | 0.418 | 0.166 | 0.818 | 0.224 | 0.715 | 0.841 | | [opensearch-neural-sparse-encoding-doc-v2-mini](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-mini) | 0.497 | 0.709 | 0.336 | 0.510 | 0.666 | 0.338 | 0.480 | 0.285 | 0.407 | 0.164 | 0.812 | 0.216 | 0.699 | 0.837 | </div> ## License This project is licensed under the [Apache v2.0 License](https://github.com/opensearch-project/neural-search/blob/main/LICENSE). ## Copyright Copyright OpenSearch Contributors. See [NOTICE](https://github.com/opensearch-project/neural-search/blob/main/NOTICE) for details.
aravindhank/tiny-bart-sst2-distilled
aravindhank
"2024-04-28T17:29:34Z"
120
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:aravindhank/valuenet-bart-base", "base_model:finetune:aravindhank/valuenet-bart-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-04-26T08:18:18Z"
--- base_model: aravindhank/valuenet-bart-base tags: - generated_from_trainer model-index: - name: tiny-bart-sst2-distilled results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-bart-sst2-distilled This model is a fine-tuned version of [aravindhank/valuenet-bart-base](https://huggingface.co/aravindhank/valuenet-bart-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
fabiancpl/nlbse25_pharo
fabiancpl
"2024-12-13T02:21:28Z"
25
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:NLBSE/nlbse25_pharo", "base_model:finetune:NLBSE/nlbse25_pharo", "region:us" ]
text-classification
"2024-12-13T02:21:25Z"
--- tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: [] metrics: - accuracy pipeline_tag: text-classification library_name: setfit inference: true base_model: NLBSE/nlbse25_pharo --- # SetFit with NLBSE/nlbse25_pharo This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [NLBSE/nlbse25_pharo](https://huggingface.co/NLBSE/nlbse25_pharo) as the Sentence Transformer embedding model. A RandomForestClassifier instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [NLBSE/nlbse25_pharo](https://huggingface.co/NLBSE/nlbse25_pharo) - **Classification head:** a RandomForestClassifier instance - **Maximum Sequence Length:** 128 tokens <!-- - **Number of Classes:** Unknown --> <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("fabiancpl/nlbse25_pharo") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.12.4 - SetFit: 1.1.0 - Sentence Transformers: 3.3.0 - Transformers: 4.42.2 - PyTorch: 2.5.1+cu124 - Datasets: 3.1.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
jddqd/Reinforce-1
jddqd
"2025-02-12T22:04:59Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2025-02-12T22:04:09Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 422.80 +/- 141.85 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
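The Reinforce agent above is trained with Monte-Carlo policy gradients, whose core step is computing the discounted return for every timestep of an episode. As a minimal illustrative sketch (not the course's exact code), the return computation can be written as:

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = sum_k gamma^k * r_{t+k} for each timestep, scanning backwards."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g  # G_t = r_t + gamma * G_{t+1}
        returns.append(g)
    returns.reverse()
    return returns

# Example: three steps of reward 1 with gamma = 0.5
print(discounted_returns([1, 1, 1], gamma=0.5))  # [1.75, 1.5, 1.0]
```

In REINFORCE these returns weight the log-probabilities of the actions taken; the backward scan makes the computation O(n) instead of O(n²).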
nhoxinh/8c2eeaea-bf4d-4063-ad01-741d4bd84e45
nhoxinh
"2025-01-13T07:04:52Z"
10
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-1.7B", "base_model:adapter:unsloth/SmolLM-1.7B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-13T06:52:56Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-1.7B tags: - axolotl - generated_from_trainer model-index: - name: 8c2eeaea-bf4d-4063-ad01-741d4bd84e45 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM-1.7B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ff4f37673cc248ee_train_data.json ds_type: json format: custom path: /workspace/input_data/ff4f37673cc248ee_train_data.json type: field_input: content field_instruction: question field_output: correct_line format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhoxinh/8c2eeaea-bf4d-4063-ad01-741d4bd84e45 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/ff4f37673cc248ee_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: 
null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: aba0d157-7509-4493-873b-9910eab62a7a wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: aba0d157-7509-4493-873b-9910eab62a7a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 8c2eeaea-bf4d-4063-ad01-741d4bd84e45 This model is a fine-tuned version of [unsloth/SmolLM-1.7B](https://huggingface.co/unsloth/SmolLM-1.7B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4461 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5999 | 0.4010 | 200 | 1.4461 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
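The hyperparameter summary above reports a total train batch size of 8, which follows from the config's `micro_batch_size: 2` and `gradient_accumulation_steps: 4`. A small sketch of that relationship (the `n_gpus` factor is an assumption for multi-GPU runs, not used in this config):

```python
def effective_batch_size(micro_batch_size, gradient_accumulation_steps, n_gpus=1):
    """Effective (total) train batch size, as reported by axolotl / HF Trainer."""
    return micro_batch_size * gradient_accumulation_steps * n_gpus

print(effective_batch_size(2, 4))  # 8, matching total_train_batch_size above
```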
LHRuig/onoffmenssx
LHRuig
"2025-03-25T19:00:30Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-03-25T19:00:18Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: suit output: url: images/suit.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: man --- # onoffmensx <Gallery /> ## Model description onoffmensx lora ## Trigger words You should use `man` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/LHRuig/onoffmenssx/tree/main) them in the Files & versions tab.
Triangle104/Skyfall-36B-v2-Q3_K_M-GGUF
Triangle104
"2025-02-18T09:56:24Z"
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:TheDrummer/Skyfall-36B-v2", "base_model:quantized:TheDrummer/Skyfall-36B-v2", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-18T09:45:50Z"
--- license: other base_model: TheDrummer/Skyfall-36B-v2 tags: - llama-cpp - gguf-my-repo --- # Triangle104/Skyfall-36B-v2-Q3_K_M-GGUF This model was converted to GGUF format from [`TheDrummer/Skyfall-36B-v2`](https://huggingface.co/TheDrummer/Skyfall-36B-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/TheDrummer/Skyfall-36B-v2) for more details on the model. --- Skyfall v2 is an upscaled version of Mistral Small 2501 with continued training for creativity and RP. Supported Chat Templates - Mistral v7 Tekken (highly recommended) Metharme (not recommended) Alpaca (may be interesting, especially for cyoa / story) Description - Creativity, good writing style, good instruct, chain of thought capability, mathematics understanding, and solid tool use performance... This model is peak! This will be my new daily model over all the 70Bs I have used. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Skyfall-36B-v2-Q3_K_M-GGUF --hf-file skyfall-36b-v2-q3_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Skyfall-36B-v2-Q3_K_M-GGUF --hf-file skyfall-36b-v2-q3_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. 
``` ./llama-cli --hf-repo Triangle104/Skyfall-36B-v2-Q3_K_M-GGUF --hf-file skyfall-36b-v2-q3_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Skyfall-36B-v2-Q3_K_M-GGUF --hf-file skyfall-36b-v2-q3_k_m.gguf -c 2048 ```
nathanialhunt/af131d52-bf77-4b0d-bf95-af07bd344220
nathanialhunt
"2025-02-05T00:42:48Z"
9
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Phi-3-mini-4k-instruct", "base_model:adapter:unsloth/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
"2025-02-05T00:37:24Z"
--- library_name: peft license: mit base_model: unsloth/Phi-3-mini-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: af131d52-bf77-4b0d-bf95-af07bd344220 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # af131d52-bf77-4b0d-bf95-af07bd344220 This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
yoavush/FILM_flux
yoavush
"2024-08-22T19:09:31Z"
8
0
diffusers
[ "diffusers", "flux", "lora", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2024-08-22T18:20:02Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image instance_prompt: FILM --- # Film_Flux Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `FILM` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('yoavush/FILM_flux', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
mradermacher/winter-garden-7b-alpha-GGUF
mradermacher
"2024-05-06T06:12:31Z"
43
1
transformers
[ "transformers", "gguf", "merge", "conversational", "multi-task", "en", "base_model:maldv/winter-garden-7b-alpha", "base_model:quantized:maldv/winter-garden-7b-alpha", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-03-15T15:43:09Z"
--- base_model: maldv/winter-garden-7b-alpha language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - merge - conversational - multi-task --- ## About static quants of https://huggingface.co/maldv/winter-garden-7b-alpha <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | 
[GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/winter-garden-7b-alpha-GGUF/resolve/main/winter-garden-7b-alpha.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
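A rough way to compare the quants in the table above is bits per weight, estimated from file size. The sketch below assumes a parameter count of about 7.24B (typical for Mistral-7B-class models; the exact count for this merge may differ slightly):

```python
def bits_per_weight(file_size_gb, n_params=7.24e9):
    """Rough bits-per-weight estimate from a GGUF file size (GB, decimal)."""
    return file_size_gb * 8e9 / n_params

for name, gb in [("Q2_K", 3.0), ("Q4_K_M", 4.6), ("Q8_0", 7.9)]:
    print(f"{name}: ~{bits_per_weight(gb):.1f} bits/weight")
```

This makes the size/quality trade-off concrete: Q2_K lands near 3.3 bits/weight while Q8_0 is close to 8.7 (the overhead beyond 8 comes from scales and non-quantized tensors).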
kalytm/nous-7
kalytm
"2024-05-20T06:56:48Z"
170
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-10T00:04:27Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Zeze24/dqn-SpaceInvadersNoFrameskip-v4
Zeze24
"2024-01-21T11:53:59Z"
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-01-21T11:53:22Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 432.00 +/- 124.26 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Zeze24 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Zeze24 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Zeze24 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), 
('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
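The hyperparameters above (`exploration_fraction: 0.1`, `exploration_final_eps: 0.01`, `n_timesteps: 1000000`) describe DQN's epsilon-greedy exploration schedule: epsilon is annealed linearly over the first 10% of training, then held. A sketch of that schedule (the start value of 1.0 is SB3's default and an assumption here):

```python
def epsilon(step, total_steps=1_000_000, fraction=0.1, eps_start=1.0, eps_final=0.01):
    """Linearly anneal epsilon over the first `fraction` of training, then hold."""
    progress = min(step / (fraction * total_steps), 1.0)
    return eps_start + progress * (eps_final - eps_start)

print(epsilon(0))        # -> 1.0 (fully random at the start)
print(epsilon(50_000))   # -> 0.505 (halfway through the annealing window)
print(epsilon(200_000))  # -> ~0.01 (held for the rest of training)
```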
Edgar404/Reinforce-001
Edgar404
"2024-04-30T11:14:59Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-04-30T11:14:40Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-001 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Realgon/left_padding70model
Realgon
"2023-11-27T07:15:40Z"
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-11-07T17:44:24Z"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: left_padding70model results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.93092 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # left_padding70model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Accuracy: 0.9309 - Loss: 0.7142 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:-----:|:--------:|:---------------:| | 0.0473 | 1.0 | 1563 | 0.9279 | 0.4618 | | 0.0096 | 2.0 | 3126 | 0.929 | 0.5406 | | 0.0328 | 3.0 | 4689 | 0.92 | 0.5954 | | 0.0192 | 4.0 | 6252 | 0.9288 | 0.5570 | | 0.0171 | 5.0 | 7815 | 0.9294 | 0.5905 | | 0.006 | 6.0 | 9378 | 0.9301 | 0.6330 | | 0.0084 | 7.0 | 10941 | 0.9270 | 0.6311 | | 0.0003 | 8.0 | 12504 | 0.9288 | 0.6783 | | 0.0048 | 9.0 | 14067 | 0.9315 | 0.6987 | | 0.0001 | 10.0 | 15630 | 0.9309 | 0.7142 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0+cu117 - Datasets 2.14.6 - Tokenizers 0.14.1
MetaIX/GPT4-X-Alpaca-30B-4bit
MetaIX
"2023-05-27T13:33:42Z"
1,504
162
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-04-14T17:23:57Z"
<p><strong><font size="5">Information</font></strong></p> GPT4-X-Alpaca 30B 4-bit working with GPTQ versions used in Oobabooga's Text Generation Webui and KoboldAI. <p>This was made using <a href="https://huggingface.co/chansung/gpt4-alpaca-lora-30b">Chansung's GPT4-Alpaca Lora</a></p> <p><strong><font size="5">Update 05.26.2023</font></strong></p> <p>Updated the ggml quantizations to be compatible with the latest version of llama.cpp (again).</p> <p><strong>What's included</strong></p> <P>GPTQ: 2 quantized versions. One quantized using the --true-sequential and --act-order optimizations, and the other quantized using the --true-sequential --groupsize 128 optimizations.</P> <P>GGML: 3 quantized versions. One quantized using q4_1, another one was quantized using q5_0, and the last one was quantized using q5_1.</P> <p><strong>GPU/GPTQ Usage</strong></p> <p>To use with your GPU using GPTQ, pick one of the .safetensors along with all of the .jsons and .model files.</p> <p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md">here</a> and <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/LLaMA-model.md">here</a></p> <p>KoboldAI: If you require further instruction, see <a href="https://github.com/0cc4m/KoboldAI">here</a></p> <p><strong>CPU/GGML Usage</strong></p> <p>To use your CPU via GGML (llama.cpp), you only need the single .bin ggml file.</p> <p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md">here</a></p> <p>KoboldAI: If you require further instruction, see <a href="https://github.com/LostRuins/koboldcpp">here</a></p> <p><strong>Training Parameters</strong></p> <ul><li>num_epochs=10</li><li>cutoff_len=512</li><li>group_by_length</li><li>lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'</li><li>lora_r=16</li><li>micro_batch_size=8</li></ul> <p><strong><font 
size="5">Benchmarks</font></strong></p> <p><strong><font size="4">--true-sequential --act-order</font></strong></p> <strong>Wikitext2</strong>: 4.481280326843262 <strong>Ptb-New</strong>: 8.539161682128906 <strong>C4-New</strong>: 6.451964855194092 <strong>Note</strong>: This version does not use <i>--groupsize 128</i>, therefore evaluations are minimally higher. However, this version allows fitting the whole model at full context using only 24GB VRAM. <p><strong><font size="4">--true-sequential --groupsize 128</font></strong></p> <strong>Wikitext2</strong>: 4.285132884979248 <strong>Ptb-New</strong>: 8.34856128692627 <strong>C4-New</strong>: 6.292652130126953 <strong>Note</strong>: This version uses <i>--groupsize 128</i>, resulting in better evaluations. However, it consumes more VRAM.
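The Wikitext2/Ptb-New/C4-New numbers above are perplexities, which can be converted to cross-entropy in bits per token (log2 of perplexity) for easier comparison between quantization variants. A small sketch using the card's own benchmark values:

```python
import math

def bits_per_token(ppl):
    """Convert perplexity to cross-entropy in bits/token: log2(ppl)."""
    return math.log2(ppl)

# Wikitext2 benchmarks from this card
print(bits_per_token(4.481280326843262))  # act-order variant, ~2.164 bits/token
print(bits_per_token(4.285132884979248))  # groupsize-128 variant, ~2.099 bits/token
```

On this scale the groupsize-128 variant's advantage is about 0.065 bits per token, which is the cost paid for the lower-VRAM act-order-only quantization.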
lesso03/96189575-87dd-4039-9b9c-2b857e3aecce
lesso03
"2025-01-10T12:24:35Z"
13
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "custom_code", "base_model:NovaSearch/stella_en_1.5B_v5", "base_model:adapter:NovaSearch/stella_en_1.5B_v5", "license:mit", "region:us" ]
null
"2025-01-10T12:01:21Z"
--- library_name: peft license: mit base_model: dunzhang/stella_en_1.5B_v5 tags: - axolotl - generated_from_trainer model-index: - name: 96189575-87dd-4039-9b9c-2b857e3aecce results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: dunzhang/stella_en_1.5B_v5 bf16: true chat_template: llama3 datasets: - data_files: - c16dc9cb46034ec9_train_data.json ds_type: json format: custom path: /workspace/input_data/c16dc9cb46034ec9_train_data.json type: field_instruction: prompt field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: false hub_model_id: lesso03/96189575-87dd-4039-9b9c-2b857e3aecce hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 1.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 70GiB max_steps: 30 micro_batch_size: 4 mlflow_experiment_name: /tmp/c16dc9cb46034ec9_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 20 
save_strategy: steps sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 9baba420-c84e-4ea6-8fe4-a4ce0fd08525 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 9baba420-c84e-4ea6-8fe4-a4ce0fd08525 warmup_steps: 5 weight_decay: 0.01 xformers_attention: false ``` </details><br> # 96189575-87dd-4039-9b9c-2b857e3aecce This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0001 | 1 | nan | | 0.0 | 0.0004 | 4 | nan | | 0.0 | 0.0008 | 8 | nan | | 0.0 | 0.0012 | 12 | nan | | 0.0 | 0.0017 | 16 | nan | | 0.0 | 0.0021 | 20 | nan | | 0.0 | 0.0025 | 24 | nan | | 0.0 | 0.0029 | 28 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mah92/SalamTTS
mah92
"2025-03-21T10:36:26Z"
0
1
null
[ "fa", "en", "dataset:mah92/Khadijah-FA_EN-Public-Phone-Audio-Dataset", "dataset:mah92/Musa-FA_EN-Public-Phone-Audio-Dataset", "base_model:mah92/Khadijah-FA_EN-Matcha-TTS-Model", "base_model:finetune:mah92/Khadijah-FA_EN-Matcha-TTS-Model", "license:cc0-1.0", "region:us" ]
null
"2025-03-21T07:45:29Z"
--- license: cc0-1.0 datasets: - mah92/Khadijah-FA_EN-Public-Phone-Audio-Dataset - mah92/Musa-FA_EN-Public-Phone-Audio-Dataset language: - fa - en base_model: - mah92/Khadijah-FA_EN-Matcha-TTS-Model - mah92/Musa-FA_EN-Matcha-TTS-Model --- # Besm ALLAH # SalamTTS-v9 This repository contains the following APK files: - **SalamTTS-v9-Khadijah.apk**: [Download](https://huggingface.co/mah92/SalamTTS/blob/main/SalamTTS-v9-Khadijah.apk) - **SalamTTS-v9-Musa.apk**: [Download](https://huggingface.co/mah92/SalamTTS/blob/main/SalamTTS-v9-Musa.apk)
davidschulte/ESM_nala-cub__americas_nli_shp
davidschulte
"2025-03-26T13:28:40Z"
16
0
null
[ "safetensors", "embedding_space_map", "BaseLM:bert-base-multilingual-uncased", "dataset:nala-cub/americas_nli", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "license:apache-2.0", "region:us" ]
null
"2024-11-10T13:50:07Z"
--- base_model: bert-base-multilingual-uncased datasets: - nala-cub/americas_nli license: apache-2.0 tags: - embedding_space_map - BaseLM:bert-base-multilingual-uncased --- # ESM nala-cub/americas_nli <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> ESM - **Developed by:** David Schulte - **Model type:** ESM - **Base Model:** bert-base-multilingual-uncased - **Intermediate Task:** nala-cub/americas_nli - **ESM architecture:** linear - **ESM embedding dimension:** 768 - **Language(s) (NLP):** [More Information Needed] - **License:** Apache-2.0 license - **ESM version:** 0.1.0 ## Training Details ### Intermediate Task - **Task ID:** nala-cub/americas_nli - **Subset [optional]:** shp - **Text Column:** ['premise', 'hypothesis'] - **Label Column:** label - **Dataset Split:** test - **Sample size [optional]:** 750 - **Sample seed [optional]:** ### Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Language Model Training Hyperparameters [optional] - **Epochs:** 3 - **Batch size:** 32 - **Learning rate:** 2e-05 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### ESM Training Hyperparameters [optional] - **Epochs:** 10 - **Batch size:** 32 - **Learning rate:** 0.001 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### Additional training details [optional] ## Model evaluation ### Evaluation of fine-tuned language model [optional] ### Evaluation of ESM [optional] MSE: ### Additional evaluation details [optional] ## What are Embedding Space Maps used for? Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. 
There are two reasons to use ESM-LogME: ### You don't have enough training data for your problem If you don't have enough training data for your problem, just use ESM-LogME to find more. You can supplement model training by including publicly available datasets in the training process. 1. Fine-tune a language model on a suitable intermediate dataset. 2. Fine-tune the resulting model on your target dataset. This workflow is called intermediate task transfer learning and it can significantly improve the target performance. But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task. ### You want to find similar datasets to your target dataset ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity. ## How can I use ESM-LogME / ESMs? [![PyPI version](https://img.shields.io/pypi/v/hf-dataset-selector.svg)](https://pypi.org/project/hf-dataset-selector) We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps. **hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub. 
```python from hfselect import Dataset, compute_task_ranking # Load target dataset from the Hugging Face Hub dataset = Dataset.from_hugging_face( name="stanfordnlp/imdb", split="train", text_col="text", label_col="label", is_regression=False, num_examples=1000, seed=42 ) # Fetch ESMs and rank tasks task_ranking = compute_task_ranking( dataset=dataset, model_name="bert-base-multilingual-uncased" ) # Display top 5 recommendations print(task_ranking[:5]) ``` ```python 1. davanstrien/test_imdb_embedd2 Score: -0.618529 2. davanstrien/test_imdb_embedd Score: -0.618644 3. davanstrien/test1 Score: -0.619334 4. stanfordnlp/imdb Score: -0.619454 5. stanfordnlp/sst Score: -0.62995 ``` | Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score | |-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:| | 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 | | 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 | | 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 | | 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 | | 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 | | 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 | | 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 | | 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 | | 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 | | 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 | For more information on how to use ESMs please have a look at the 
[official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs. ## How do Embedding Space Maps work? <!-- This section describes the evaluation protocols and provides the results. --> Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text. ESMs can be used for intermediate task selection with the ESM-LogME workflow. ## How can I use Embedding Space Maps for Intermediate Task Selection? ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> If you are using Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/). **BibTeX:** ``` @inproceedings{schulte-etal-2024-less, title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning", author = "Schulte, David and Hamborg, Felix and Akbik, Alan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.529/", doi = "10.18653/v1/2024.emnlp-main.529", pages = "9431--9442", abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? 
Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)." } ``` **APA:** ``` Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442). ``` ## Additional Information
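The "linear" ESM architecture named in the model details above can be pictured as a single learned linear layer applied to a base-model embedding. The following is a minimal illustrative sketch only, with random placeholder weights and a tiny embedding dimension; a real ESM for this card has dimension 768 and its weights are trained to regress fine-tuned embeddings from base embeddings (use the `hfselect` package for actual task selection).

```python
import random

# Illustrative sketch of a linear Embedding Space Map (ESM).
# W and b are random placeholders; a trained ESM learns them so that
# apply_esm(base_embedding) approximates the fine-tuned model's embedding.

EMBED_DIM = 4  # kept tiny for illustration; the actual ESM uses 768

rng = random.Random(0)
W = [[rng.gauss(0, 0.01) for _ in range(EMBED_DIM)] for _ in range(EMBED_DIM)]
b = [0.0] * EMBED_DIM

def apply_esm(base_embedding):
    """Map a base-model embedding to an approximate fine-tuned embedding."""
    return [
        sum(base_embedding[i] * W[i][j] for i in range(EMBED_DIM)) + b[j]
        for j in range(EMBED_DIM)
    ]

base = [rng.gauss(0, 1) for _ in range(EMBED_DIM)]  # stand-in base embedding
mapped = apply_esm(base)
print(len(mapped))  # 4
```

Because the map is linear, it is cheap to apply to thousands of candidate tasks, which is what makes the ESM-LogME ranking workflow fast.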
DarkSM/AdamRaguseaRVC
DarkSM
"2023-10-06T15:34:22Z"
0
0
null
[ "en", "region:us" ]
null
"2023-10-06T15:33:16Z"
--- language: - en --- Do not credit me for the model, but do not steal it either :b
kazeric/whisper-small-dv-streaming
kazeric
"2025-03-10T18:56:34Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dv", "dataset:mozilla-foundation/common_voice_13_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-03-10T13:15:31Z"
--- library_name: transformers language: - dv license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: Whisper_Small_Dhivehi_Streaming results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 13 type: mozilla-foundation/common_voice_13_0 config: dv split: test args: dv metrics: - name: Wer type: wer value: 14.62600410334875 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper_Small_Dhivehi_Streaming This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 0.2024 - Wer Ortho: 68.1454 - Wer: 14.6260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.1315 | 2.328 | 500 | 0.2024 | 68.1454 | 14.6260 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
featherless-ai-quants/gordicaleksa-YugoGPT-GGUF
featherless-ai-quants
"2024-11-04T21:05:48Z"
22
0
null
[ "gguf", "text-generation", "base_model:gordicaleksa/YugoGPT", "base_model:quantized:gordicaleksa/YugoGPT", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-04T20:20:37Z"
--- base_model: gordicaleksa/YugoGPT pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # gordicaleksa/YugoGPT GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [gordicaleksa-YugoGPT-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/gordicaleksa/YugoGPT-GGUF/blob/main/gordicaleksa-YugoGPT-IQ4_XS.gguf) | 3761.66 MB | | Q2_K | [gordicaleksa-YugoGPT-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/gordicaleksa/YugoGPT-GGUF/blob/main/gordicaleksa-YugoGPT-Q2_K.gguf) | 2593.27 MB | | Q3_K_L | [gordicaleksa-YugoGPT-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/gordicaleksa/YugoGPT-GGUF/blob/main/gordicaleksa-YugoGPT-Q3_K_L.gguf) | 3644.97 MB | | Q3_K_M | [gordicaleksa-YugoGPT-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/gordicaleksa/YugoGPT-GGUF/blob/main/gordicaleksa-YugoGPT-Q3_K_M.gguf) | 3355.97 MB | | Q3_K_S | [gordicaleksa-YugoGPT-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/gordicaleksa/YugoGPT-GGUF/blob/main/gordicaleksa-YugoGPT-Q3_K_S.gguf) | 3017.97 MB | | Q4_K_M | [gordicaleksa-YugoGPT-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/gordicaleksa/YugoGPT-GGUF/blob/main/gordicaleksa-YugoGPT-Q4_K_M.gguf) | 4166.07 MB | | Q4_K_S | [gordicaleksa-YugoGPT-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/gordicaleksa/YugoGPT-GGUF/blob/main/gordicaleksa-YugoGPT-Q4_K_S.gguf) | 3948.57 MB | | Q5_K_M | [gordicaleksa-YugoGPT-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/gordicaleksa/YugoGPT-GGUF/blob/main/gordicaleksa-YugoGPT-Q5_K_M.gguf) | 4893.69 MB | | Q5_K_S | 
[gordicaleksa-YugoGPT-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/gordicaleksa/YugoGPT-GGUF/blob/main/gordicaleksa-YugoGPT-Q5_K_S.gguf) | 4766.19 MB | | Q6_K | [gordicaleksa-YugoGPT-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/gordicaleksa/YugoGPT-GGUF/blob/main/gordicaleksa-YugoGPT-Q6_K.gguf) | 5666.79 MB | | Q8_0 | [gordicaleksa-YugoGPT-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/gordicaleksa/YugoGPT-GGUF/blob/main/gordicaleksa-YugoGPT-Q8_0.gguf) | 7339.34 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
cantillation/Teamim-small_WeightDecay-0.05_Augmented_New-Data_nusach-yerushalmi_date-24-07-2024
cantillation
"2024-07-25T04:24:27Z"
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "he", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-07-24T11:05:58Z"
--- language: - he license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer metrics: - wer model-index: - name: he-cantillation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # he-cantillation This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2759 - Wer: 12.0860 - Avg Precision Exact: 0.9045 - Avg Recall Exact: 0.9054 - Avg F1 Exact: 0.9045 - Avg Precision Letter Shift: 0.9223 - Avg Recall Letter Shift: 0.9233 - Avg F1 Letter Shift: 0.9224 - Avg Precision Word Level: 0.9250 - Avg Recall Word Level: 0.9259 - Avg F1 Word Level: 0.9250 - Avg Precision Word Shift: 0.9777 - Avg Recall Word Shift: 0.9785 - Avg F1 Word Shift: 0.9777 - Precision Median Exact: 1.0 - Recall Median Exact: 1.0 - F1 Median Exact: 1.0 - Precision Max Exact: 1.0 - Recall Max Exact: 1.0 - F1 Max Exact: 1.0 - Precision Min Exact: 0.0 - Recall Min Exact: 0.0 - F1 Min Exact: 0.0 - Precision Min Letter Shift: 0.0 - Recall Min Letter Shift: 0.0 - F1 Min Letter Shift: 0.0 - Precision Min Word Level: 0.0 - Recall Min Word Level: 0.0 - F1 Min Word Level: 0.0 - Precision Min Word Shift: 0.0 - Recall Min Word Shift: 0.0 - F1 Min Word Shift: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 200000 - mixed_precision_training: Native AMP ### 
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Avg Precision Exact | Avg Recall Exact | Avg F1 Exact | Avg Precision Letter Shift | Avg Recall Letter Shift | Avg F1 Letter Shift | Avg Precision Word Level | Avg Recall Word Level | Avg F1 Word Level | Avg Precision Word Shift | Avg Recall Word Shift | Avg F1 Word Shift | Precision Median Exact | Recall Median Exact | F1 Median Exact | Precision Max Exact | Recall Max Exact | F1 Max Exact | Precision Min Exact | Recall Min Exact | F1 Min Exact | Precision Min Letter Shift | Recall Min Letter Shift | F1 Min Letter Shift | Precision Min Word Level | Recall Min Word Level | F1 Min Word Level | Precision Min Word Shift | Recall Min Word Shift | F1 Min Word Shift | |:-------------:|:-------:|:------:|:---------------:|:--------:|:-------------------:|:----------------:|:------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:------------------------:|:---------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:------------------------:|:---------------------:|:-----------------:| | No log | 0.0004 | 1 | 6.8177 | 106.5214 | 0.0004 | 0.0012 | 0.0006 | 0.0038 | 0.0036 | 0.0033 | 0.0030 | 0.0121 | 0.0043 | 0.0322 | 0.0342 | 0.0300 | 0.0 | 0.0 | 0.0 | 0.0909 | 0.3333 | 0.1429 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0059 | 3.7023 | 10000 | 0.1748 | 15.8840 | 0.8772 | 0.8813 | 0.8786 | 0.9013 | 0.9056 | 0.9028 | 0.9063 | 0.9103 | 0.9077 | 0.9648 | 0.9693 | 0.9663 | 0.9286 | 0.9375 | 0.9474 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 
0.0 | 0.0 | 0.0 | 0.0 | | 0.0045 | 7.4047 | 20000 | 0.2047 | 15.1038 | 0.8686 | 0.8670 | 0.8673 | 0.8906 | 0.8892 | 0.8894 | 0.8952 | 0.8935 | 0.8938 | 0.9722 | 0.9711 | 0.9710 | 0.9375 | 0.9333 | 0.9524 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.001 | 11.1070 | 30000 | 0.2024 | 13.8083 | 0.8862 | 0.8876 | 0.8863 | 0.9076 | 0.9094 | 0.9080 | 0.9109 | 0.9127 | 0.9113 | 0.9743 | 0.9767 | 0.9749 | 1.0 | 1.0 | 0.9600 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.001 | 14.8093 | 40000 | 0.2188 | 13.8083 | 0.8924 | 0.8918 | 0.8916 | 0.9125 | 0.9118 | 0.9116 | 0.9166 | 0.9156 | 0.9155 | 0.9733 | 0.9730 | 0.9726 | 1.0 | 1.0 | 0.9630 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0005 | 18.5117 | 50000 | 0.2256 | 13.6464 | 0.8921 | 0.8937 | 0.8924 | 0.9131 | 0.9148 | 0.9135 | 0.9161 | 0.9176 | 0.9164 | 0.9760 | 0.9774 | 0.9762 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0012 | 22.2140 | 60000 | 0.2194 | 12.8515 | 0.8896 | 0.8917 | 0.8902 | 0.9089 | 0.9110 | 0.9095 | 0.9116 | 0.9138 | 0.9122 | 0.9748 | 0.9780 | 0.9759 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0006 | 25.9163 | 70000 | 0.2265 | 13.0870 | 0.8981 | 0.9013 | 0.8992 | 0.9191 | 0.9224 | 0.9203 | 0.9219 | 0.9249 | 0.9229 | 0.9756 | 0.9776 | 0.9761 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0001 | 29.6187 | 80000 | 0.2249 | 13.0870 | 0.8938 | 0.8961 | 0.8945 | 0.9139 | 0.9163 | 0.9146 | 0.9169 | 0.9191 | 0.9175 | 0.9749 | 0.9764 | 0.9752 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0001 | 33.3210 | 90000 | 0.2379 | 13.2342 | 0.8960 | 0.8987 | 0.8969 | 0.9160 | 0.9189 | 0.9169 | 0.9197 | 
0.9224 | 0.9206 | 0.9759 | 0.9780 | 0.9764 | 1.0 | 1.0 | 0.9697 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0 | 37.0233 | 100000 | 0.2302 | 13.1312 | 0.8910 | 0.8958 | 0.8930 | 0.9121 | 0.9171 | 0.9142 | 0.9149 | 0.9195 | 0.9167 | 0.9742 | 0.9786 | 0.9759 | 1.0 | 1.0 | 0.9677 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0004 | 40.7257 | 110000 | 0.2294 | 12.9987 | 0.9032 | 0.9028 | 0.9025 | 0.9220 | 0.9216 | 0.9213 | 0.9255 | 0.9249 | 0.9247 | 0.9762 | 0.9773 | 0.9763 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0001 | 44.4280 | 120000 | 0.2322 | 12.6601 | 0.9038 | 0.9045 | 0.9037 | 0.9234 | 0.9242 | 0.9233 | 0.9262 | 0.9270 | 0.9262 | 0.9766 | 0.9784 | 0.9770 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0 | 48.1303 | 130000 | 0.2362 | 12.5129 | 0.9054 | 0.9058 | 0.9051 | 0.9241 | 0.9247 | 0.9239 | 0.9277 | 0.9284 | 0.9276 | 0.9763 | 0.9777 | 0.9766 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0 | 51.8327 | 140000 | 0.2430 | 13.1753 | 0.8973 | 0.8993 | 0.8978 | 0.9184 | 0.9205 | 0.9189 | 0.9216 | 0.9237 | 0.9221 | 0.9766 | 0.9783 | 0.9770 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0005 | 55.5350 | 150000 | 0.2325 | 12.7926 | 0.9032 | 0.9032 | 0.9028 | 0.9226 | 0.9228 | 0.9223 | 0.9251 | 0.9252 | 0.9247 | 0.9781 | 0.9785 | 0.9779 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0 | 59.2373 | 160000 | 0.2428 | 12.2332 | 0.9090 | 0.9104 | 0.9093 | 0.9275 | 0.9289 | 0.9278 | 0.9301 | 0.9315 | 0.9304 | 0.9773 | 0.9791 | 0.9778 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 
| | 0.0 | 62.9397 | 170000 | 0.2499 | 12.1301 | 0.9067 | 0.9081 | 0.9070 | 0.9246 | 0.9261 | 0.9249 | 0.9273 | 0.9286 | 0.9275 | 0.9775 | 0.9794 | 0.9780 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0003 | 66.6420 | 180000 | 0.2572 | 12.2185 | 0.9050 | 0.9049 | 0.9045 | 0.9238 | 0.9238 | 0.9234 | 0.9265 | 0.9263 | 0.9260 | 0.9785 | 0.9784 | 0.9780 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0 | 70.3443 | 190000 | 0.2704 | 12.1449 | 0.9058 | 0.9068 | 0.9059 | 0.9237 | 0.9247 | 0.9238 | 0.9263 | 0.9273 | 0.9264 | 0.9775 | 0.9787 | 0.9777 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0 | 74.0466 | 200000 | 0.2759 | 12.0860 | 0.9045 | 0.9054 | 0.9045 | 0.9223 | 0.9233 | 0.9224 | 0.9250 | 0.9259 | 0.9250 | 0.9777 | 0.9785 | 0.9777 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.2.1 - Datasets 2.20.0 - Tokenizers 0.19.1
hgnoi/rrgXZg1mZ2Pdeu9e
hgnoi
"2024-05-25T11:24:00Z"
77
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-25T11:21:42Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Alphatao/1884ae85-c593-4798-9674-0b9af03c13dd
Alphatao
"2025-03-13T14:36:46Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-2-7b-chat", "base_model:adapter:unsloth/llama-2-7b-chat", "license:apache-2.0", "region:us" ]
null
"2025-03-13T10:47:10Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/llama-2-7b-chat tags: - axolotl - generated_from_trainer model-index: - name: 1884ae85-c593-4798-9674-0b9af03c13dd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/llama-2-7b-chat bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 92c03c5ab2158f88_train_data.json ds_type: json format: custom path: /workspace/input_data/92c03c5ab2158f88_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null device_map: ? 
'' : 0,1,2,3,4,5,6,7 early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null flash_attention: true gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: false hub_model_id: Alphatao/1884ae85-c593-4798-9674-0b9af03c13dd hub_repo: null hub_strategy: null hub_token: null learning_rate: 0.0002 load_best_model_at_end: true load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj - o_proj - down_proj - up_proj lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 840 micro_batch_size: 4 mlflow_experiment_name: /tmp/92c03c5ab2158f88_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 sequence_len: 2048 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.04 wandb_entity: null wandb_mode: online wandb_name: c144bd9a-5f78-4e37-b021-d91f2c0c0d5f wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c144bd9a-5f78-4e37-b021-d91f2c0c0d5f warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1884ae85-c593-4798-9674-0b9af03c13dd This model is a fine-tuned version of [unsloth/llama-2-7b-chat](https://huggingface.co/unsloth/llama-2-7b-chat) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1916 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 840 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8756 | 0.0011 | 1 | 1.7959 | | 0.2531 | 0.1059 | 100 | 0.2592 | | 0.2352 | 0.2118 | 200 | 0.2345 | | 0.1999 | 0.3178 | 300 | 0.2183 | | 0.1546 | 0.4237 | 400 | 0.2117 | | 0.291 | 0.5296 | 500 | 0.2037 | | 0.1577 | 0.6355 | 600 | 0.1983 | | 0.2267 | 0.7414 | 700 | 0.1929 | | 0.1747 | 0.8473 | 800 | 0.1916 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
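The run above uses a cosine scheduler with a 10-step warmup, a peak learning rate of 2e-4, and 840 training steps. A minimal sketch of that schedule (linear warmup then cosine decay to zero; the exact Transformers implementation may differ in small details):

```python
import math

def lr_at(step, base_lr=2e-4, warmup_steps=10, total_steps=840):
    """Linear warmup followed by cosine decay to zero (illustrative sketch)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

print(lr_at(10))   # peak: 0.0002
print(lr_at(840))  # end of training: ~0.0
```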
mradermacher/Vera-V1.3-GGUF
mradermacher
"2025-03-30T01:20:40Z"
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Dorian2B/Vera-V1.3", "base_model:quantized:Dorian2B/Vera-V1.3", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-30T00:59:20Z"
--- base_model: Dorian2B/Vera-V1.3 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Dorian2B/Vera-V1.3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.Q2_K.gguf) | Q2_K | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.Q3_K_S.gguf) | Q3_K_S | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.Q3_K_M.gguf) | Q3_K_M | 1.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.Q3_K_L.gguf) | Q3_K_L | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.IQ4_XS.gguf) | IQ4_XS | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.Q5_K_S.gguf) | Q5_K_S | 2.0 | | | 
[GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.Q5_K_M.gguf) | Q5_K_M | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.Q6_K.gguf) | Q6_K | 2.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.Q8_0.gguf) | Q8_0 | 2.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Vera-V1.3-GGUF/resolve/main/Vera-V1.3.f16.gguf) | f16 | 5.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
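A rough way to read the size column: assuming the f16 file stores 16 bits per weight, its 5.3 GB implies roughly 2.65B parameters, from which each quant's approximate bits-per-weight follows. This is back-of-the-envelope only — real GGUF files mix tensor types and carry metadata, so actual bpw differs somewhat:

```python
F16_SIZE_GB = 5.3                        # from the table above
n_params = F16_SIZE_GB * 1e9 * 8 / 16    # ~2.65e9 weights (assumption)

def approx_bpw(size_gb):
    """Approximate bits per weight implied by a quant's file size."""
    return size_gb * 1e9 * 8 / n_params

for name, size in [("Q2_K", 1.3), ("Q4_K_M", 1.8), ("Q8_0", 2.9)]:
    print(f"{name}: ~{approx_bpw(size):.1f} bpw")
```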
Nexspear/1d5b3126-a524-4eda-bf0e-be21cea15183
Nexspear
"2025-01-24T22:50:39Z"
8
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/codegemma-7b-it", "base_model:adapter:unsloth/codegemma-7b-it", "license:apache-2.0", "region:us" ]
null
"2025-01-24T22:25:40Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/codegemma-7b-it tags: - axolotl - generated_from_trainer model-index: - name: 1d5b3126-a524-4eda-bf0e-be21cea15183 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/codegemma-7b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 4f327ff36134b9ea_train_data.json ds_type: json format: custom path: /workspace/input_data/4f327ff36134b9ea_train_data.json type: field_input: '' field_instruction: problem field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: Nexspear/1d5b3126-a524-4eda-bf0e-be21cea15183 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/4f327ff36134b9ea_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: 
null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: b285face-4656-4e7d-8064-270d1ff4db96 wandb_project: Gradients-On-Four wandb_run: your_name wandb_runid: b285face-4656-4e7d-8064-270d1ff4db96 warmup_steps: 10 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 1d5b3126-a524-4eda-bf0e-be21cea15183 This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4324 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0043 | 1 | 0.5219 | | 0.4964 | 0.0388 | 9 | 0.4895 | | 0.446 | 0.0777 | 18 | 0.4565 | | 0.3904 | 0.1165 | 27 | 0.4438 | | 0.4331 | 0.1553 | 36 | 0.4393 | | 0.4545 | 0.1942 | 45 | 0.4371 | | 0.378 | 0.2330 | 54 | 0.4372 | | 0.4208 | 0.2718 | 63 | 0.4334 | | 0.3789 | 0.3107 | 72 | 0.4352 | | 0.4618 | 0.3495 | 81 | 0.4325 | | 0.4479 | 0.3883 | 90 | 0.4320 | | 0.4111 | 0.4272 | 99 | 0.4324 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
abbassix/pn6_800
abbassix
"2024-01-04T12:34:32Z"
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-01-04T12:33:59Z"
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: pn6_800 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pn6_800 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6932 - Accuracy: 0.505 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 100 | 0.7003 | 0.495 | | No log | 2.0 | 200 | 0.6936 | 0.495 | | No log | 3.0 | 300 | 0.6932 | 0.505 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
IoanaLivia/whisper-small-finetuned-400-standard-A-epochs-10
IoanaLivia
"2025-03-17T12:31:11Z"
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:horoscope_standard_a_400_19_20_5_03", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-03-16T19:53:28Z"
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - horoscope_standard_a_400_19_20_5_03 metrics: - wer model-index: - name: whisper-small-finetuned-400-standard-A-epochs-10 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: horoscope_standard_a_400_19_20_5_03 type: horoscope_standard_a_400_19_20_5_03 config: default split: validation args: default metrics: - name: Wer type: wer value: 27.414809121188828 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-finetuned-400-standard-A-epochs-10 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the horoscope_standard_a_400_19_20_5_03 dataset. It achieves the following results on the evaluation set: - Loss: 0.5064 - Wer: 27.4148 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 0 | 0 | 0.6890 | 41.0325 | | 0.1722 | 1.0 | 50 | 0.4912 | 30.1563 | | 0.0365 | 2.0 | 100 | 0.4825 | 28.5678 | | 0.0134 | 3.0 | 150 | 0.4958 | 27.7479 | | 0.0065 | 4.0 | 200 | 0.5046 | 28.3500 | | 0.0042 | 5.0 | 250 | 0.5026 | 27.6326 | | 0.0027 | 6.0 | 300 | 0.5018 | 27.5045 | | 
0.0019 | 7.0 | 350 | 0.5035 | 27.4789 | | 0.0017 | 8.0 | 400 | 0.5048 | 27.3123 | | 0.0015 | 9.0 | 450 | 0.5060 | 27.4276 | | 0.0015 | 10.0 | 500 | 0.5064 | 27.4148 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.5.1+cu121 - Datasets 3.4.0 - Tokenizers 0.21.0
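The Wer column above is a word error rate in percent. A minimal sketch of WER as word-level edit distance (substitutions + insertions + deletions over reference length; the run itself most likely used the 🤗 `evaluate`/`jiwer` implementation, so treat this as illustrative):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent via Levenshtein distance over words."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i reference and first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(r)][len(h)] / len(r)

print(wer("a b c d e", "a b c d x e"))  # one insertion over 5 words -> 20.0
```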
askenaz/results-7655726778571638724
askenaz
"2024-02-20T21:25:13Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-02-20T21:25:05Z"
--- library_name: peft tags: - generated_from_trainer base_model: meta-llama/Llama-2-7b-chat-hf model-index: - name: results-7655726778571638724 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results-7655726778571638724 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1
juliowaissman/q-FrozenLake-v1-4x4-noSlippery
juliowaissman
"2024-01-30T05:22:26Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-01-30T05:09:28Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="juliowaissman/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
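Once the pickled model is loaded, acting is just a greedy argmax over the Q-table row for the current state. A sketch with a hypothetical 16×4 table (FrozenLake 4x4 has 16 states and 4 actions; `qtable` here is random stand-in data, not the trained table stored in this repo):

```python
import numpy as np

rng = np.random.default_rng(0)
qtable = rng.random((16, 4))  # stand-in for the trained Q-table

def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for this state."""
    return int(np.argmax(qtable[state]))

action = greedy_action(qtable, state=0)
print(action)  # an int in 0..3, fed to env.step(action)
```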
Kquant03/FrankenDPO-4x7B-GGUF
Kquant03
"2024-01-18T11:03:27Z"
10
2
null
[ "gguf", "merge", "en", "arxiv:2101.03961", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-01-16T02:21:03Z"
--- license: apache-2.0 language: - en tags: - merge --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/7JsqBt8QRiZmcMh-ameqH.jpeg) # It's alive!!!! Half the size and better on GSM8k and Winogrande than Mixtral Instruct 8x 7B! Also rank 6 on Ayumi's ERP Bench! A frankenMoE using only DPO models. To be used with Chat-instruct mode enabled. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/wGRcusncUd-mCdksvYckY.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/rx1GfLMEIP3T-r3bxqW9r.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/3F_Sm5He9AlsfyRcvcZqk.png) ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [Q2_K Tiny](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 7.87 GB| 9.87 GB | smallest, significant quality loss - not recommended for most purposes | | [Q3_K_M](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 10.28 GB| 12.28 GB | very small, high quality loss | | [Q4_0](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 13.3 GB| 15.3 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Q4_K_M](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | 13.32 GB| 15.32 GB | medium, balanced quality - recommended | | [Q5_0](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 16.24 GB| 18.24 GB | legacy; large, balanced quality | | [Q5_K_M](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | ~16.24 GB| ~18.24 GB | large, balanced quality - recommended | | [Q6 
XL](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 19.35 GB| 21.35 GB | very large, extremely minor degradation | | [Q8 XXL](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 25.1 GB| 27.1 GB | very large, extremely minor degradation - not recommended | - [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) - router - [udkai/Turdus](https://huggingface.co/udkai/Turdus) - expert #1 - [distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp) - expert #2 - [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) - expert #3 - [Neuronovo/neuronovo-9B-v0.3](https://huggingface.co/Neuronovo/neuronovo-9B-v0.3) - expert #4 # "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)" ### (from the MistralAI papers...click the quoted question above to navigate to it directly.) The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps. Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining. So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements: Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs! 
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/up_I0R2TQGjqTShZp_1Sz.png) Switch Layer MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961) So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts. Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges: Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting. Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. 
At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon). If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter. ## "Wait...but you called this a frankenMoE?" The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously.
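The top-2 gating and load-balancing auxiliary loss described above can be sketched in a few lines (illustrative only — a real router is a learned linear layer inside each transformer block, and Mixtral's exact loss differs in normalization details):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def top2_route(router_logits):
    """router_logits: (tokens, experts). Returns the top-2 expert ids per
    token and a Switch-style load-balancing auxiliary loss (minimum 1.0)."""
    probs = softmax(router_logits)             # gate probabilities per token
    top2 = np.argsort(probs, axis=-1)[:, -2:]  # 2 experts chosen per token
    n_tokens, n_experts = probs.shape
    # f_i: fraction of tokens whose top-1 choice is expert i
    top1 = probs.argmax(axis=-1)
    f = np.bincount(top1, minlength=n_experts) / n_tokens
    # P_i: mean router probability assigned to expert i
    P = probs.mean(axis=0)
    aux_loss = n_experts * float(np.dot(f, P))
    return top2, aux_loss

logits = np.random.default_rng(0).normal(size=(8, 4))  # 8 tokens, 4 experts
experts, loss = top2_route(logits)
print(experts.shape, round(loss, 3))
```

With perfectly balanced routing the loss is 1.0; it grows as the router collapses onto a few favored experts, which is exactly the failure mode the auxiliary term penalizes.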
elemtopos/dqn-SpaceInvadersNoFrameskip-v4
elemtopos
"2023-09-21T08:46:40Z"
6
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-09-20T15:49:36Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 270.50 +/- 83.53 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga elemtopos -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga elemtopos -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga elemtopos ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), 
('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 200000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
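With `exploration_fraction` 0.1 and `exploration_final_eps` 0.01 over 200k timesteps, epsilon decays linearly from 1.0 to 0.01 during the first 20k steps and then stays flat. A simplified sketch of SB3's linear exploration schedule (SB3 itself expresses this as a function of remaining training progress):

```python
def epsilon(step, n_timesteps=200_000, exploration_fraction=0.1,
            initial_eps=1.0, final_eps=0.01):
    """Linearly annealed epsilon for epsilon-greedy action selection."""
    frac = min(1.0, step / (exploration_fraction * n_timesteps))
    return initial_eps + frac * (final_eps - initial_eps)

print(epsilon(0))        # 1.0
print(epsilon(10_000))   # 0.505 (halfway through the decay window)
print(epsilon(50_000))   # 0.01 (decay finished at step 20k)
```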
usabuts/codegen-350M-mono-python-18k-alpaca
usabuts
"2024-05-31T05:01:44Z"
106
0
transformers
[ "transformers", "safetensors", "codegen", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-31T05:01:10Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OwOpeepeepoopoo/gemmerica_r3_2
OwOpeepeepoopoo
"2024-03-03T18:37:56Z"
3
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-03T18:35:46Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EryriLabs/Llama-3.2-SARA-3b
EryriLabs
"2024-11-17T11:26:05Z"
7
0
null
[ "safetensors", "llama", "en", "base_model:unsloth/Llama-3.2-3B-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-3B-bnb-4bit", "license:llama3.2", "region:us" ]
null
"2024-11-14T20:08:30Z"
---
license: llama3.2
language:
- en
base_model:
- unsloth/Llama-3.2-3B-bnb-4bit
---

# Llama-3.2-SARA-3b

<figure>
<img src="SARA.png" alt="SARA" width="300">
</figure>

This model is a fine-tuned version of `unsloth/Llama-3.2-3B-bnb-4bit`, developed to act as SARA, the Security Awareness and Resilience Assistant. SARA is optimized to be a lightweight, offline-friendly AI assistant capable of running on low-spec laptops, designed to provide practical cybersecurity advice in a conversational style.

## Model Details

### Model Description

This model is fine-tuned for conversational question answering focused on basic cybersecurity topics. It was trained as part of an ongoing [blog series](https://www.eryrilabs.co.uk/post/building-sara-a-lightweight-cybersecurity-assistant-for-everyday-laptops) to deliver short, actionable responses for users who want quick guidance on digital safety without needing advanced technical knowledge.

- **Developed by:** EryriLabs
- **Funded by:** Personal project
- **Model type:** Fine-tuned conversational LLM for cybersecurity question answering
- **Language(s) (NLP):** English (en)
- **License:** llama3.2
- **Finetuned from model:** unsloth/Llama-3.2-3B-bnb-4bit

### Model Sources

- **Repository:** [https://huggingface.co/EryriLabs/Llama-3.2-SARA-3b](https://huggingface.co/EryriLabs/Llama-3.2-SARA-3b)

## Uses

This model is intended for providing cybersecurity information and guidance to general users in an accessible, offline-friendly way.

### Direct Use

This model can be used as an offline assistant for basic cybersecurity questions, answering common queries in a conversational format. It is ideal for use cases where an internet connection is not available or where low-spec hardware constraints apply.

### Out-of-Scope Use

This model should not be used for professional or critical cybersecurity advice, as it is designed for general guidance and may lack the specificity required for advanced technical issues. It is also not suitable for providing nuanced advice in areas outside basic cybersecurity practices.

## Bias, Risks, and Limitations

While SARA is optimized for basic cybersecurity education, it is limited in depth and may be unable to answer highly technical questions. It may also struggle with complex, nuanced queries due to its lightweight design and quantized 4-bit structure.

### Recommendations

Users should treat SARA as an educational tool rather than a replacement for professional cybersecurity advice. Further fine-tuning could improve the model's handling of diverse inputs and conversational depth, making it more robust for varied user needs.

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("EryriLabs/Llama-3.2-SARA-3b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("EryriLabs/Llama-3.2-SARA-3b")

# Sample question
input_text = "What makes a strong password?"

# Tokenize and generate response
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(inputs["input_ids"], max_length=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Training Details

### Training Data

The model was fine-tuned on a custom Q&A-style dataset centered on cybersecurity fundamentals, such as creating strong passwords and using two-step verification.

### Training Procedure

Fine-tuning was conducted on a system with an Intel i9-12900K CPU, an NVIDIA GeForce RTX 4090 GPU, and 32 GB of RAM. Unsloth's 4-bit quantization (bnb-4bit) was applied to keep the model compact and efficient for low-spec laptop deployment.

#### Training Hyperparameters

- **Training regime:** Mixed precision with 4-bit quantization (bnb-4bit)

#### Speeds, Sizes, Times

Training took approximately 10 minutes. Additional fine-tuning is recommended for improved performance, especially for handling varied text inputs and enhancing conversational depth.

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Testing was conducted on a dataset of common cybersecurity questions to evaluate the model's responsiveness and accuracy for general use cases.

#### Factors

The model was evaluated on its ability to provide clear, direct answers to basic cybersecurity questions.

#### Metrics

The main evaluation metric was response accuracy for typical cybersecurity queries.

### Results

The model performs adequately for its intended purpose, with room for improvement in response handling and input variability.

#### Summary

SARA functions well for basic cybersecurity guidance but requires additional fine-tuning to better handle diverse inputs and enhance conversational flow.

## Environmental Impact

Carbon emissions for this project can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).

- **Hardware Type:** Intel i9-12900K CPU, NVIDIA GeForce RTX 4090 GPU
- **Hours used:** ~10 minutes of fine-tuning
- **Carbon Emitted:** 0.01

### Compute Infrastructure

Fine-tuning was performed on a high-spec machine, with final deployment optimized for low-spec hardware.

#### Hardware

Intel i9-12900K CPU, NVIDIA GeForce RTX 4090 GPU, 32 GB RAM

#### Software

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

## Contact

For questions or issues, please contact `EryriLabs`.
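As a rough illustration of how the Machine Learning Impact calculator arrives at an estimate of this magnitude, the sketch below multiplies power draw by runtime and grid carbon intensity. The GPU power draw (~450 W) and grid intensity (0.4 kg CO2eq/kWh) are assumed round numbers, not measurements from this run:

```python
# Back-of-the-envelope carbon estimate for the ~10-minute fine-tuning run.
# Assumed values (not measured): RTX 4090 board power ~450 W,
# grid carbon intensity ~0.4 kg CO2eq per kWh.
gpu_power_kw = 0.45   # assumed GPU draw in kilowatts
hours = 10 / 60       # ~10 minutes of fine-tuning
intensity = 0.4       # assumed kg CO2eq per kWh

energy_kwh = gpu_power_kw * hours
emissions_kg = energy_kwh * intensity
print(f"{energy_kwh:.3f} kWh -> {emissions_kg:.3f} kg CO2eq")
```

With these assumptions the run comes out around 0.03 kg CO2eq, the same order of magnitude as the figure reported above; the exact value depends on actual GPU utilization and the local grid mix.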
vocabtrimmer/mt5-small-jaquad-qa-trimmed-ja-10000
vocabtrimmer
"2023-04-28T15:09:11Z"
105
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-03-15T15:47:57Z"
# Vocabulary Trimmed [lmqg/mt5-small-jaquad-qa](https://huggingface.co/lmqg/mt5-small-jaquad-qa): `vocabtrimmer/mt5-small-jaquad-qa-trimmed-ja-10000`

This model is a trimmed version of [lmqg/mt5-small-jaquad-qa](https://huggingface.co/lmqg/mt5-small-jaquad-qa) produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to compress model size. The following table summarizes the trimming process.

|                            | lmqg/mt5-small-jaquad-qa | vocabtrimmer/mt5-small-jaquad-qa-trimmed-ja-10000 |
|:---------------------------|:-------------------------|:--------------------------------------------------|
| parameter_size_full        | 300,165,504              | 54,304,128                                        |
| parameter_size_embedding   | 256,103,424              | 10,242,048                                        |
| vocab_size                 | 250,101                  | 10,002                                            |
| compression_rate_full      | 100.0                    | 18.09                                             |
| compression_rate_embedding | 100.0                    | 4.0                                               |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| ja       | vocabtrimmer/mc4_validation | text           | ja           | validation    |             10000 |             2 |
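As a sanity check, the compression rates in the first table can be reproduced with simple arithmetic. The sketch below assumes (inferred from the numbers themselves, not stated in the card) that each vocabulary entry costs d_model × 2 = 1,024 parameters, i.e. a hidden size of 512 with separate input-embedding and LM-head matrices:

```python
# Reproduce the compression rates reported in the summary table.
# Assumption (inferred, not stated): each vocabulary entry costs
# d_model * 2 = 1024 parameters (input embedding + untied LM head, d_model = 512).
PARAMS_PER_TOKEN = 512 * 2

full_before, full_after = 300_165_504, 54_304_128
vocab_before, vocab_after = 250_101, 10_002

emb_before = vocab_before * PARAMS_PER_TOKEN  # matches 256,103,424 in the table
emb_after = vocab_after * PARAMS_PER_TOKEN    # matches 10,242,048 in the table

rate_full = round(full_after / full_before * 100, 2)  # 18.09
rate_emb = round(emb_after / emb_before * 100, 2)     # 4.0
print(emb_before, emb_after, rate_full, rate_emb)
```

The embedding matrices account for nearly all of the savings: trimming the vocabulary from 250,101 to 10,002 entries removes about 246M of the 300M parameters.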
robot-test/old-clip-tokenizer
robot-test
"2022-02-07T21:44:19Z"
0
0
null
[ "region:us" ]
null
"2022-03-02T23:29:05Z"
Old version of the CLIP fast tokenizer. See [this issue](https://github.com/huggingface/transformers/issues/12648) in the transformers repository for details.