modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
manishtanwar/reuters-gpt2-text-gen
manishtanwar
2024-02-05T13:41:16Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T11:38:06Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: reuters-gpt2-text-gen results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reuters-gpt2-text-gen This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
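The card above lists training hyperparameters but no inference snippet. A minimal usage sketch (not part of the original card), using the standard transformers text-generation pipeline; the prompt is purely illustrative:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint; the prompt below is only an example.
generator = pipeline("text-generation", model="manishtanwar/reuters-gpt2-text-gen")
print(generator("The central bank said on Tuesday", max_new_tokens=50)[0]["generated_text"])
```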
yamamiya/ai
yamamiya
2024-02-05T13:39:17Z
0
0
null
[ "arxiv:1910.09700", "license:creativeml-openrail-m", "region:us" ]
null
2024-02-05T13:38:24Z
--- license: creativeml-openrail-m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ProfessorBob/title-par-segmentation
ProfessorBob
2024-02-05T13:37:29Z
18
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "fr", "en", "endpoints_compatible", "region:us" ]
null
2024-01-19T13:16:37Z
--- language: - fr - en --- # Title-Paragraph Segmentation Model - ver 1.0 <!-- Provide a quick summary of what the model is/does. --> A formal content segmentation model that belongs to the *first order segmentation* (FOS) model family. It performs the `title-paragraph separation` task. Architecture: - E5-base Cross Encoder Dataset: - Custom contrastive title-paragraph dataset based on `wikitext` Performance: - 89% accuracy on the test set Broader context: 1) The aim of FOS is to separate the content types featured in raw unstructured strings, such as: * text * code * tables * lists * math formulas * images 2) This will enable further processing such as *second order segmentation* (SOS), which aims at generating semantic frontiers, i.e. segmenting: * plain text into knowledge units * code into functional blocks * math formula blocks into equations * objects/concepts within an image * videos into timestamped chapters ## Model Details ### Direct Use ###### Setup and Utilities ```python from transformers import XLMRobertaPreTrainedModel, XLMRobertaModel, AutoTokenizer from nltk.tokenize import line_tokenize import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader from datasets import Dataset #Utility Functions def get_default_device(): if torch.cuda.is_available(): return torch.device('cuda') elif torch.backends.mps.is_available(): return torch.device('mps') else: return torch.device('cpu') def to_device(data, device): if isinstance(data, (list,tuple)): return [to_device(x, device) for x in data] elif isinstance(data, dict): return {'input_ids':to_device(data['input_ids'],device),'attention_mask':to_device(data['attention_mask'],device)} return data.to(device) class DeviceDataLoader(): def __init__(self, dl, device): self.dl = dl self.device = device def __iter__(self): for b in self.dl: yield to_device(b, self.device) def __len__(self): return len(self.dl) class IsoBN(nn.Module): def __init__(self, hidden_size): """Init method""" super().__init__() self.register_parameter(name='cov', param=torch.nn.Parameter(torch.zeros(hidden_size, hidden_size))) self.register_parameter(name='std', param=torch.nn.Parameter(torch.zeros(hidden_size))) self.cov.requires_grad = False self.std.requires_grad = False def forward(self, input, momentum: float = 0.05, eps: float = 1e-3, beta: float = 0.5): """Forward method""" if self.training: x = input.detach() n = x.size(0) mean = x.mean(dim=0) y = x - mean.unsqueeze(0) std = (y ** 2).mean(0) ** 0.5 cov = (y.t() @ y) / n self.cov.data += momentum * (cov.data - self.cov.data) self.std.data += momentum * (std.data - self.std.data) corr = torch.clamp(self.cov / torch.ger(self.std, self.std), -1, 1) gamma = (corr ** 2).mean(1) denorm = (gamma * self.std) scale = 1 / (denorm + eps) ** beta E = torch.diag(self.cov).sum() new_E = (torch.diag(self.cov) * (scale ** 2)).sum() m = (E / (new_E + eps)) ** 0.5 scale *= m return input * scale.unsqueeze(0).detach() class e5_base_CTSEG(XLMRobertaPreTrainedModel): def __init__(self, config): super().__init__(config) self.e5 = XLMRobertaModel(config).from_pretrained('intfloat/multilingual-e5-base') self.dropout = nn.Dropout(0.5) self.linear_1 = nn.Linear(768,256) self.linear_2 = nn.Linear(256,128) self.linear_3 = nn.Linear(128,2) self.relu = nn.ReLU() self.isobn = IsoBN(768) def forward(self, sent): sent['input_ids'] = sent['input_ids'].reshape(sent['input_ids'].shape[0],-1) sent['attention_mask'] = sent['attention_mask'].reshape(sent['attention_mask'].shape[0],-1) hs=
self.e5(input_ids=sent['input_ids'], attention_mask=sent['attention_mask']) cls_hs = hs.last_hidden_state[:, 0] cls_hs = self.isobn(cls_hs) out = self.linear_1(cls_hs) out = self.relu(out) out = self.dropout(out) out = self.linear_2(out) out = self.relu(out) out = self.dropout(out) out = self.linear_3(out) return out def training_step(self, sent, labels): out = self.forward(sent) loss = F.cross_entropy(out, labels) return loss def validation_step(self, sent, labels): out = self.forward(sent) loss = F.cross_entropy(out, labels) acc = accuracy(out, labels) return {'val_acc':acc,'val_loss':loss.detach()} def validation_epoch_end(self, metrics): batch_losses = [x['val_loss'] for x in metrics] batch_accs = [x['val_acc'] for x in metrics] epoch_loss = torch.stack(batch_losses).mean().item() epoch_acc = torch.stack(batch_accs).mean().item() return {'val_loss':epoch_loss, 'val_acc':epoch_acc} def epoch_end(self, epoch, result): print("Epoch [{}], train_loss: {:.4f}, val_loss: {:.4f}, val_acc: {:.4f}".format( epoch, result['train_loss'], result['val_loss'], result['val_acc'])) def evaluate(self, val_loader): self.eval() metrics = [self.validation_step(sent,labels.type(torch.LongTensor).to(device, non_blocking=True)) for sent,labels in val_loader] return self.validation_epoch_end(metrics) def accuracy(out, labels): return (out.argmax(dim=1) == labels).sum()/labels.numel() ``` ```python tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-base') model = e5_base_CTSEG.from_pretrained('ProfessorBob/title-par-segmentation') device = get_default_device() to_device(model,device) ``` ```python def infer_block( chunks, batch_size: int = 8, return_probability: bool = False, tokenizer = tokenizer ): """ Bulk Infer function""" tok_text_bulk = tokenizer( ['query: ' + sent[0] +'[SEP]'+ sent[1] for sent in chunks], padding='max_length', truncation=True, return_tensors='pt' ) sentences = Dataset.from_dict({ 'input_ids': tok_text_bulk['input_ids'], 'attention_mask': tok_text_bulk['attention_mask'] }) sentences.set_format( 'torch', columns=['input_ids','attention_mask'] ) sentences = DataLoader( sentences, batch_size=batch_size, pin_memory=True ) sentences = DeviceDataLoader(sentences, device) preds = list() model.eval() with torch.no_grad(): for i, batch in enumerate(sentences): out = model(batch) if return_probability: preds.extend((out.softmax(dim=1).cpu()[:, 1]).tolist()) else: preds.extend(out.argmax(dim=1).cpu().tolist()) if device == torch.device('cuda'): torch.cuda.empty_cache() assert len(preds) == len(chunks) return preds, out def segmentation_pipeline(text): block = line_tokenize(text) chunks = [ (u, v) for u, v in zip(block[:-1], block[1:]) ] preds, out = infer_block(chunks,return_probability=False) cut_idx = [i+1 for i, value in enumerate(preds) if value == 1] cut_idx = [0]+cut_idx+[len(block)] seg = [block[cut_idx[i]:cut_idx[i+1]] for i in range(len(cut_idx)-1)] return seg ``` ###### Usage example ```python mixed_string = """ Ancient Foundations (3000 BCE - 600 CE) In the dawn of human civilization, mathematics emerged as an essential tool for commerce, construction, and astronomy. Explore the mathematical innovations of ancient cultures such as the Babylonians, Egyptians, and Greeks, laying the groundwork for numerical systems, geometry, and the Pythagorean theorem. 
The Golden Age of Islamic Mathematics (700 CE - 1300 CE) Delve into the intellectual flourishing during the Islamic Golden Age, where scholars like Al-Khwarizmi and Omar Khayyam made groundbreaking contributions to algebra, trigonometry, and the development of algorithms. Discover how these advancements paved the way for the Renaissance in Europe. """ ``` Generated Title-Paragraph Segmentation ```console Block 1 ----- Ancient Foundations (3000 BCE - 600 CE) Block 2 ----- In the dawn of human civilization, mathematics emerged as an essential tool for commerce, construction, and astronomy. Explore the mathematical innovations of ancient cultures such as the Babylonians, Egyptians, and Greeks, laying the groundwork for numerical systems, geometry, and the Pythagorean theorem. Block 3 ----- The Golden Age of Islamic Mathematics (700 CE - 1300 CE) Block 4 ----- Delve into the intellectual flourishing during the Islamic Golden Age, where scholars like Al-Khwarizmi and Omar Khayyam made groundbreaking contributions to algebra, trigonometry, and the development of algorithms. Discover how these advancements paved the way for the Renaissance in Europe. ``` <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Sagicc/w2v-bert-2.0-sr
Sagicc
2024-02-05T13:33:22Z
136
2
transformers
[ "transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_16_1", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-05T10:54:16Z
--- license: mit base_model: facebook/w2v-bert-2.0 tags: - generated_from_trainer datasets: - common_voice_16_1 metrics: - wer model-index: - name: w2v-bert-2.0-sr results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_16_1 type: common_voice_16_1 config: sr split: test args: sr metrics: - name: Wer type: wer value: 0.05344857999647204 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-sr This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_16_1 dataset. It achieves the following results on the evaluation set: - Loss: 0.1469 - Wer: 0.0534 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.1994 | 1.89 | 300 | 0.1350 | 0.1078 | | 0.2331 | 3.77 | 600 | 0.2306 | 0.1341 | | 0.1879 | 5.66 | 900 | 0.1354 | 0.0766 | | 0.1579 | 7.54 | 1200 | 0.1646 | 0.0958 | | 0.1293 | 9.43 | 1500 | 0.1207 | 0.0713 | | 0.1182 | 11.31 | 1800 | 0.1376 | 0.0737 | | 0.1061 | 13.2 | 2100 | 0.1244 | 0.0580 | | 0.1011 | 15.08 | 2400 | 0.1390 | 0.0602 | | 0.0933 | 16.97 | 2700 | 0.1313 | 0.0524 | | 0.0948 | 18.85 | 3000 | 0.1469 | 0.0534 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
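The card reports WER on Common Voice 16.1 (Serbian) but includes no inference example. A minimal sketch (not part of the original card), assuming a local audio file path:

```python
from transformers import pipeline

# "audio.wav" is a placeholder for a local Serbian speech recording.
asr = pipeline("automatic-speech-recognition", model="Sagicc/w2v-bert-2.0-sr")
print(asr("audio.wav")["text"])
```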
houdini001/nep-spell-bert2bert
houdini001
2024-02-05T13:30:13Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "generated_from_trainer", "base_model:houdini001/nep-spell-bert2bert", "base_model:finetune:houdini001/nep-spell-bert2bert", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-05T02:33:15Z
--- base_model: houdini001/nep-spell-bert2bert tags: - generated_from_trainer model-index: - name: nep-spell-bert2bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nep-spell-bert2bert This model is a fine-tuned version of [houdini001/nep-spell-bert2bert](https://huggingface.co/houdini001/nep-spell-bert2bert) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
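The card does not document the expected input format for this bert2bert spelling model. A minimal loading sketch (not part of the original card); the Nepali input string is a placeholder, and generation assumes the defaults saved with the checkpoint:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "houdini001/nep-spell-bert2bert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; the card does not specify how sentences should be fed to the model.
inputs = tokenizer("नेपाली वाक्य", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```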
huyentls1114/swin-tiny-patch4-window7-224-finetuned-swin-tiny
huyentls1114
2024-02-05T13:26:07Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-05T12:27:26Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-swin-tiny results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-swin-tiny This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5222 - Accuracy: 0.5559 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.5958 | 0.96 | 20 | 3.5209 | 0.0937 | | 3.2466 | 1.98 | 41 | 2.9994 | 0.2387 | | 2.4246 | 2.99 | 62 | 2.0341 | 0.4169 | | 1.8599 | 4.0 | 83 | 1.6747 | 0.4955 | | 1.531 | 4.96 | 103 | 1.5218 | 0.4773 | | 1.3292 | 5.98 | 124 | 1.3834 | 0.5317 | | 1.2063 | 6.99 | 145 | 1.3381 | 0.5468 | | 1.0806 | 8.0 | 166 | 1.2748 | 0.5710 | | 0.9638 | 8.96 | 186 | 1.3062 | 0.5559 | | 0.8441 | 9.98 | 207 | 1.3322 | 0.5498 | | 0.7868 | 10.99 | 228 | 1.2873 | 0.5710 | | 0.7485 | 12.0 | 249 | 1.2012 | 0.5619 | | 0.6522 | 12.96 | 269 | 1.2264 | 0.5861 | | 0.6362 | 13.98 | 290 | 1.2796 | 0.5589 | | 0.6214 | 14.99 | 311 | 1.3406 | 0.5529 | | 0.5793 | 16.0 | 332 | 1.2479 | 0.5740 | | 0.5187 | 16.96 | 352 | 1.3203 | 0.5891 | | 0.4965 | 17.98 | 373 | 1.3429 | 0.5619 | | 0.4809 | 18.99 | 394 | 1.3453 | 0.5831 | | 0.4243 | 20.0 | 415 | 1.3759 | 0.5498 | | 0.4447 | 20.96 | 435 | 1.4275 | 0.5196 | | 0.3839 | 21.98 | 456 | 1.4660 | 0.5589 | | 0.414 | 22.99 | 477 | 1.4465 | 0.5408 | | 0.3741 | 24.0 | 498 | 1.3944 | 0.5650 | | 0.3802 | 24.96 | 518 | 1.4272 | 0.5650 | | 0.3733 | 25.98 | 539 | 1.3341 | 0.5589 | | 0.3558 | 26.99 | 560 | 1.3864 | 0.5589 | | 0.3448 | 28.0 | 581 | 1.4027 | 0.5589 | | 0.3373 | 28.96 | 601 | 1.4452 | 0.5589 | | 0.311 | 29.98 | 622 | 1.4021 | 0.5740 | | 0.3218 | 30.99 | 643 | 1.4015 | 0.5680 | | 0.3082 | 32.0 | 664 | 1.4159 | 0.5619 | | 0.3173 | 32.96 | 684 | 1.4290 | 0.5498 | | 0.2551 | 33.98 | 705 | 1.4268 | 0.5619 | | 0.2739 | 34.99 | 726 | 1.4546 | 0.5559 | | 0.2533 | 36.0 | 747 | 1.4398 | 0.5498 | | 0.2578 | 36.96 | 767 | 1.4487 | 0.5438 | | 0.2472 | 37.98 | 788 | 1.4438 | 0.5559 | | 0.281 | 38.99 | 809 | 1.4916 | 0.5529 | | 0.2757 | 40.0 | 830 | 1.4758 | 0.5619 | | 0.2679 | 40.96 | 850 | 1.5104 | 0.5559 | | 0.2548 | 41.98 | 871 | 1.5024 | 0.5529 | | 0.2357 | 42.99 | 892 | 1.5286 | 0.5468 | | 0.2357 | 44.0 | 913 | 1.5150 | 0.5529 | | 0.2287 | 44.96 | 933 | 1.5234 | 0.5589 | | 0.2329 | 45.98 | 954 | 1.5334 | 0.5650 | | 0.2131 | 46.99 | 975 | 1.5296 | 0.5619 | | 0.2269 | 48.0 | 996 | 1.5221 | 0.5559 | | 0.2161 | 48.19 | 1000 | 1.5222 | 0.5559 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.16.1 - 
Tokenizers 0.15.1
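No inference example is given in the card above. A minimal sketch (not part of the original card), assuming a local image file; the label set is not documented:

```python
from transformers import pipeline

# "example.jpg" is a placeholder image path; the card does not list the class labels.
classifier = pipeline(
    "image-classification",
    model="huyentls1114/swin-tiny-patch4-window7-224-finetuned-swin-tiny",
)
print(classifier("example.jpg"))
```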
AlKiir/llama-2-13b-alkiir-hf3
AlKiir
2024-02-05T13:18:15Z
0
0
peft
[ "peft", "region:us" ]
null
2024-02-05T13:18:05Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
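The card records the bitsandbytes quantization config but not how to load the adapter. A minimal sketch (not part of the original card): the base checkpoint below is an assumption inferred from the repository name, and the `BitsAndBytesConfig` mirrors the values listed above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Assumed base model (inferred from the repo name; not stated in the card).
base_id = "meta-llama/Llama-2-13b-hf"

# Mirrors the 4-bit NF4 config recorded in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "AlKiir/llama-2-13b-alkiir-hf3")
```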
e22vvb/ALL_mt5-base_10_spider_15_wikiSQL_new
e22vvb
2024-02-05T12:52:53Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-05T09:54:55Z
--- tags: - generated_from_trainer model-index: - name: ALL_mt5-base_10_spider_15_wikiSQL_new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ALL_mt5-base_10_spider_15_wikiSQL_new This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2575 - Rouge2 Precision: 0.6182 - Rouge2 Recall: 0.4218 - Rouge2 Fmeasure: 0.4725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 19 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:| | 0.2255 | 1.0 | 1021 | 0.2284 | 0.5416 | 0.3565 | 0.4021 | | 0.1417 | 2.0 | 2042 | 0.2184 | 0.5668 | 0.3778 | 0.4244 | | 0.1087 | 3.0 | 3063 | 0.2238 | 0.5823 | 0.3944 | 0.4421 | | 0.0884 | 4.0 | 4084 | 0.2273 | 0.6072 | 0.4136 | 0.4634 | | 0.0769 | 5.0 | 5105 | 0.2393 | 0.5998 | 0.4047 | 0.4542 | | 0.0666 | 6.0 | 6126 | 0.2399 | 0.6073 | 0.4128 | 0.4625 | | 0.0592 | 7.0 | 7147 | 0.2474 | 0.6081 | 0.4128 | 0.4626 | | 0.0551 | 8.0 | 8168 | 0.2530 | 0.6145 | 0.4181 | 0.4685 | | 0.0517 | 9.0 | 9189 | 0.2527 | 0.6168 | 0.4203 | 0.4708 | | 0.0507 | 10.0 | 10210 | 0.2575 | 0.6182 | 0.4218 | 0.4725 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.14.7.dev0 - Tokenizers 0.13.3
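The card does not show how questions were serialized for this text-to-SQL model. A minimal sketch (not part of the original card); the input is purely illustrative and may not match the training prompt format:

```python
from transformers import pipeline

# The training prompt format (question plus schema) is not documented in the card.
text2sql = pipeline("text2text-generation", model="e22vvb/ALL_mt5-base_10_spider_15_wikiSQL_new")
print(text2sql("How many singers are there?", max_new_tokens=64)[0]["generated_text"])
```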
OmniFederal/Omni-8x7B-gating-merged
OmniFederal
2024-02-05T12:52:34Z
6
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-05T11:44:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
llmixer/BigWeave-v14-90b
llmixer
2024-02-05T12:48:22Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T10:18:18Z
--- base_model: [] tags: - mergekit - merge --- # model This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * G:\Sao10K_WinterGoddess-1.4x-70B-L2 * F:\Xwin-70b ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: F:\Xwin-70b layer_range: [0,12] - sources: - model: G:\Sao10K_WinterGoddess-1.4x-70B-L2 layer_range: [9,14] - sources: - model: F:\Xwin-70b layer_range: [12,62] - sources: - model: G:\Sao10K_WinterGoddess-1.4x-70B-L2 layer_range: [54,71] - sources: - model: F:\Xwin-70b layer_range: [62,80] merge_method: passthrough dtype: float16 ```
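Loading the merged checkpoint is standard transformers usage; a minimal sketch (not part of the original card), noting that a roughly 90B-parameter model in float16 needs on the order of 180 GB of combined GPU/CPU memory:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llmixer/BigWeave-v14-90b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the weights across whatever GPUs/CPU RAM are available.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
```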
wahaha1987/DecisionTransformer_1920steps_halfcheetah_expert_v2
wahaha1987
2024-02-05T12:48:00Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "decision_transformer", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-02-05T09:07:12Z
--- tags: - generated_from_trainer model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 120 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
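The card gives only training hyperparameters. A minimal forward-pass sketch (not part of the original card), using zero-filled dummy inputs and an arbitrary target return; real usage would feed normalized HalfCheetah observations:

```python
import torch
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained(
    "wahaha1987/DecisionTransformer_1920steps_halfcheetah_expert_v2"
)
model.eval()

state_dim, act_dim = model.config.state_dim, model.config.act_dim
# Dummy single-step context; the target return of 1000 is an arbitrary placeholder.
states = torch.zeros((1, 1, state_dim))
actions = torch.zeros((1, 1, act_dim))
rewards = torch.zeros((1, 1))
returns_to_go = torch.full((1, 1, 1), 1000.0)
timesteps = torch.zeros((1, 1), dtype=torch.long)
attention_mask = torch.ones((1, 1))

with torch.no_grad():
    out = model(states=states, actions=actions, rewards=rewards,
                returns_to_go=returns_to_go, timesteps=timesteps,
                attention_mask=attention_mask, return_dict=True)
print(out.action_preds.shape)  # (1, 1, act_dim)
```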
ahessamb/sentence-transformers-all-MiniLM-L6-v2-10epoch-100perp-cosine
ahessamb
2024-02-05T12:42:23Z
9
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-05T12:42:13Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # ahessamb/sentence-transformers-all-MiniLM-L6-v2-10epoch-100perp-cosine This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ahessamb/sentence-transformers-all-MiniLM-L6-v2-10epoch-100perp-cosine') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ahessamb/sentence-transformers-all-MiniLM-L6-v2-10epoch-100perp-cosine) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1363 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.COSINE', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1363, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Commandante/german-party-sentiment-bert-241-synonyms-5e-5
Commandante
2024-02-05T12:41:32Z
6
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:mdraw/german-news-sentiment-bert", "base_model:finetune:mdraw/german-news-sentiment-bert", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-05T11:44:21Z
--- base_model: mdraw/german-news-sentiment-bert tags: - generated_from_trainer model-index: - name: german-party-sentiment-bert-241-synonyms-5e-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # german-party-sentiment-bert-241-synonyms-5e-5 This model is a fine-tuned version of [mdraw/german-news-sentiment-bert](https://huggingface.co/mdraw/german-news-sentiment-bert) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9680 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 20 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 120 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3705 | 1.0 | 28 | 0.9724 | | 0.9826 | 2.0 | 56 | 0.9680 | | 0.9826 | 3.0 | 84 | 0.9769 | | 0.8121 | 4.0 | 112 | 1.0368 | | 0.8121 | 5.0 | 140 | 1.1361 | | 0.5266 | 6.0 | 168 | 1.4722 | | 0.2635 | 7.0 | 196 | 1.3610 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Tokenizers 0.15.1
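No inference example or label description is included in the card. A minimal sketch (not part of the original card); the German sentence is illustrative:

```python
from transformers import pipeline

# The card does not document the label names returned by the classifier.
sentiment = pipeline("text-classification",
                     model="Commandante/german-party-sentiment-bert-241-synonyms-5e-5")
print(sentiment("Die Partei hat heute einen wichtigen Erfolg erzielt."))
```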
mariasierro/flair-ner-echr-fr-rev
mariasierro
2024-02-05T12:41:17Z
1
0
flair
[ "flair", "pytorch", "legal", "fr", "license:mit", "region:us" ]
null
2023-12-31T10:58:45Z
--- license: mit language: - fr library_name: flair tags: - legal --- This is a flair sequence tagger trained with a corpus of 32 case reports from the European Court of Human Rights (ECHR) in French (using pre-trained embeddings from the flair/ner-french model). This corpus was built and annotated for anonymization as part of the work presented in the Master's thesis "Anonymization of case reports from the ECHR in Spanish and French: exploration of two alternative annotation approaches". The annotation was carried out by projecting the annotations of the parallel texts of the English corpus built by Pilán et al. (2022), followed by a review of the projected annotations performed by human reviewers. It predicts 8 tags: DATETIME, CODE, PER, DEM, MISC, ORG, LOC, QUANTITY. The corpus and the code used for training this sequence tagger are available on GitHub: https://github.com/mariasierro/automatic-anonymization-ECHR-French-Spanish. References Pilán, I., Lison, P., Ovrelid, L., Papadopoulou, A., Sánchez, D. & Batet, M. (2022). The Text Anonymization Benchmark (TAB): A Dedicated Corpus and Evaluation Framework for Text Anonymization. In Computational Linguistics, 48(4), pp. 1053–1101. Cambridge, MA: MIT Press. doi: 10.1162/coli_a_00458.
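The card names the eight tags but shows no usage. A minimal sketch (not part of the original card) of standard Flair inference; the French sentence is illustrative, and the same pattern applies to the other ECHR taggers listed below:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("mariasierro/flair-ner-echr-fr-rev")
sentence = Sentence("Le requérant est né en 1950 et réside à Paris.")
tagger.predict(sentence)
# Printing the sentence shows the predicted spans and their tags
# (DATETIME, CODE, PER, DEM, MISC, ORG, LOC, QUANTITY).
print(sentence)
```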
mariasierro/flair-ner-echr-es-projected
mariasierro
2024-02-05T12:40:09Z
3
0
flair
[ "flair", "pytorch", "legal", "es", "license:mit", "region:us" ]
null
2023-12-29T16:25:38Z
--- license: mit language: - es library_name: flair tags: - legal --- This is a flair sequence tagger trained with a corpus of 127 case reports from the European Court of Human Rights (ECHR) in Spanish (using pre-trained embeddings from the flair/ner-multi model). This corpus was built and annotated for anonymization as part of the work presented in the Master's thesis "Anonymization of case reports from the ECHR in Spanish and French: exploration of two alternative annotation approaches". The annotation was carried out by projecting the annotations of the test set of the English corpus built by Pilán et al. (2022). It predicts 8 tags: DATETIME, CODE, PER, DEM, MISC, ORG, LOC, QUANTITY. The corpus and the code used for training this sequence tagger are available on GitHub: https://github.com/mariasierro/automatic-anonymization-ECHR-French-Spanish. References Pilán, I., Lison, P., Ovrelid, L., Papadopoulou, A., Sánchez, D. & Batet, M. (2022). The Text Anonymization Benchmark (TAB): A Dedicated Corpus and Evaluation Framework for Text Anonymization. In Computational Linguistics, 48(4), pp. 1053–1101. Cambridge, MA: MIT Press. doi: 10.1162/coli_a_00458.
ereldav/eyal_golan
ereldav
2024-02-05T12:39:13Z
0
0
null
[ "he", "arxiv:1910.09700", "region:us" ]
null
2024-02-05T12:35:55Z
--- language: - he --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mariasierro/flair-ner-echr-fr-projected
mariasierro
2024-02-05T12:38:43Z
7
0
flair
[ "flair", "pytorch", "legal", "fr", "license:mit", "region:us" ]
null
2023-12-29T17:32:36Z
--- license: mit language: - fr library_name: flair tags: - legal --- This is a flair sequence tagger trained with a corpus of 127 case reports from the European Court of Human Rights (ECHR) in French (using pre-trained embeddings from the flair/ner-french model). This corpus was built and annotated for anonymization as part of the work presented in the Master's thesis "Anonymization of case reports from the ECHR in Spanish and French: exploration of two alternative annotation approaches". The annotation was carried out by projecting the annotations of the test set of the English corpus built by Pilán et al. (2022). It predicts 8 tags: DATETIME, CODE, PER, DEM, MISC, ORG, LOC, QUANTITY. The corpus and the code used for fine-tuning this model are available on GitHub: https://github.com/mariasierro/automatic-anonymization-ECHR-French-Spanish. References Pilán, I., Lison, P., Ovrelid, L., Papadopoulou, A., Sánchez, D. & Batet, M. (2022). The Text Anonymization Benchmark (TAB): A Dedicated Corpus and Evaluation Framework for Text Anonymization. In Computational Linguistics, 48(4), pp. 1053–1101. Cambridge, MA: MIT Press. doi: 10.1162/coli_a_00458.
chathuranga-jayanath/codet5-small-v16
chathuranga-jayanath
2024-02-05T12:36:09Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Salesforce/codet5-small", "base_model:finetune:Salesforce/codet5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-05T10:56:45Z
--- license: apache-2.0 base_model: Salesforce/codet5-small tags: - generated_from_trainer model-index: - name: codet5-small-v16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codet5-small-v16 This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7656 - Bleu Score: 0.0057 - Gen Len: 13.156 ## Model description Trained, - on: chathuranga-jayanath/selfapr-manipulation-bug-context-10000 ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu Score | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:----------:|:-------:| | No log | 1.0 | 267 | 0.8482 | 0.0057 | 13.042 | | 1.0733 | 2.0 | 534 | 0.7801 | 0.0057 | 13.151 | | 1.0733 | 3.0 | 801 | 0.7656 | 0.0057 | 13.156 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
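The card notes the SelfAPR-style training data but not the exact input encoding. A minimal loading sketch (not part of the original card); the buggy Java snippet is illustrative and may not match the expected context format:

```python
from transformers import pipeline

# The expected bug-context encoding is not documented in the card; this input is a plain snippet.
fixer = pipeline("text2text-generation", model="chathuranga-jayanath/codet5-small-v16")
print(fixer("public int add(int a, int b) { return a - b; }", max_new_tokens=64)[0]["generated_text"])
```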
Kishore098/falcon7binstruct_mentalhealthmodel
Kishore098
2024-02-05T12:27:38Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:vilsonrodrigues/falcon-7b-instruct-sharded", "base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded", "license:apache-2.0", "region:us" ]
null
2024-02-05T07:54:43Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: vilsonrodrigues/falcon-7b-instruct-sharded model-index: - name: falcon7binstruct_mentalhealthmodel results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon7binstruct_mentalhealthmodel This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 180 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
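A minimal loading sketch (not part of the original card) that attaches the adapter to the base checkpoint named in the card; memory and dtype choices are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "vilsonrodrigues/falcon-7b-instruct-sharded"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
# Attach the fine-tuned adapter on top of the base model.
model = PeftModel.from_pretrained(base, "Kishore098/falcon7binstruct_mentalhealthmodel")
```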
Yashhhhmishra/pytorch_lora_weights.safetensors
Yashhhhmishra
2024-02-05T12:27:13Z
121
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-02-05T12:27:04Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/image (10).png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null --- # pytorch_lora_weights.safetensors <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Yashhhhmishra/pytorch_lora_weights.safetensors/tree/main) them in the Files & versions tab.
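The card links the weights but shows no code. A minimal sketch (not part of the original card) that loads the SDXL base model listed in the card and attaches the LoRA; the prompt is illustrative and any trigger words are undocumented:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Yashhhhmishra/pytorch_lora_weights.safetensors")

# The trigger prompt (if any) is not documented; this prompt is only an example.
image = pipe("a portrait photo", num_inference_steps=30).images[0]
image.save("out.png")
```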
xaviviro/wav2vec2-common_voice-ca-demo
xaviviro
2024-02-05T12:26:23Z
9
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "ca", "dataset:mozilla-foundation/common_voice_16_1", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-03T07:57:04Z
--- datasets: - mozilla-foundation/common_voice_16_1 language: - ca ---
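The card contains only the metadata block. A minimal inference sketch (not part of the original card), assuming a local Catalan audio file:

```python
from transformers import pipeline

# "audio.wav" is a placeholder path for a Catalan speech recording.
asr = pipeline("automatic-speech-recognition", model="xaviviro/wav2vec2-common_voice-ca-demo")
print(asr("audio.wav")["text"])
```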
oeg/RoBERTa-CelebA-Sp
oeg
2024-02-05T12:24:31Z
0
0
null
[ "Spanish", "CelebA", "Roberta-base-bne", "celebFaces Attributes", "text-to-image", "es", "dataset:oeg/CelebA_RoBERTa_Sp", "doi:10.57967/hf/0464", "license:cc-by-nc-4.0", "region:us" ]
text-to-image
2023-03-18T01:37:01Z
--- license: cc-by-nc-4.0 datasets: - oeg/CelebA_RoBERTa_Sp language: - es tags: - Spanish - CelebA - Roberta-base-bne - celebFaces Attributes pipeline_tag: text-to-image --- # RoBERTa base BNE trained with data from the descriptive text corpus of the CelebA dataset ## Overview - **Language**: Spanish - **Data**: [CelebA_RoBERTa_Sp](https://huggingface.co/datasets/oeg/CelebA_RoBERTa_Sp). - **Architecture**: roberta-base - **Paper**: [Information Processing and Management](https://doi.org/10.1016/j.ipm.2024.103667) ## Description In order to improve the performance of the [RoBERTa-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) encoder, this model has been trained on the generated corpus ([in this repository](https://huggingface.co/oeg/RoBERTa-CelebA-Sp/)) using a Siamese network with a cosine-similarity loss. The following steps were followed: - Use the [sentence-transformers](https://www.sbert.net/) and _torch_ libraries to implement the encoder. - Divide the training corpus into two parts: 249,000 sentences for training and 1,000 sentences for validation. - Load the training/validation data. Two lists are generated; each entry consists of a pair of descriptive sentences and their similarity value. - Use [RoBERTa-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) as the baseline model for training. - Train with a Siamese network in which, for a pair of sentences _A_ and _B_ from the training corpus, the similarity of their embedding vectors _u_ and _v_ is computed with the cosine-similarity metric (_CosineSimilarityLoss()_) and compared with the real similarity value from the training corpus. Model performance during training was measured with Spearman's correlation coefficient between the real and the computed similarity vectors. The total training time with the _sentence-transformers_ library in Python was 42 days, using all available GPUs of the server dedicated exclusively to this task. Spearman's correlation on 1,000 test sentences was compared between the base model and our trained model. As the following table shows, our model obtains better results (correlation closer to 1). | Models | Spearman's correlation | | :---: | :---: | | RoBERTa-base-bne | 0.827176427 | | RoBERTa-celebA-Sp | 0.999913276 | ## How to use Downloading the model results in a directory called **roberta-large-bne-celebAEs-UNI** that contains its main files. To use the model, run the following Python code: ```python from sentence_transformers import SentenceTransformer model_sbert = SentenceTransformer('roberta-large-bne-celebAEs-UNI') captions = ['La mujer tiene pomulos altos. Su cabello es de color negro. Tiene las cejas arqueadas y la boca ligeramente abierta. La joven y atractiva mujer sonriente tiene mucho maquillaje. Lleva aretes, collar y lapiz labial.'] vector = model_sbert.encode(captions) print(vector) ``` ## Results As a result, the encoder generates a numeric vector of dimension 1024.
```python >>$ print(vector) >>$ [[0.2,0.5,0.45,........0.9]] >>$ len(vector[0]) >>$ 1024 ``` ## More information For more detailed information about the implementation, visit the [following link](https://github.com/eduar03yauri/DCGAN-text2face-forSpanish/blob/main/Data/encoder-models/RoBERTa_model_trained.md). ## Licensing information This model is available under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/deed.es) license. ## Citation information **Citing**: If you use the RoBERTa+CelebA model in your work, please cite the paper published in **[Information Processing and Management](https://doi.org/10.1016/j.ipm.2024.103667)**: ```bib @article{YAURILOZANO2024103667, title = {Generative Adversarial Networks for text-to-face synthesis & generation: A quantitative–qualitative analysis of Natural Language Processing encoders for Spanish}, journal = {Information Processing & Management}, volume = {61}, number = {3}, pages = {103667}, year = {2024}, issn = {0306-4573}, doi = {https://doi.org/10.1016/j.ipm.2024.103667}, url = {https://www.sciencedirect.com/science/article/pii/S030645732400027X}, author = {Eduardo Yauri-Lozano and Manuel Castillo-Cara and Luis Orozco-Barbosa and Raúl García-Castro} } ``` ## Authors - [Eduardo Yauri Lozano](https://github.com/eduar03yauri) - [Manuel Castillo-Cara](https://github.com/manwestc) - [Raúl García-Castro](https://github.com/rgcmme) [*Universidad Nacional de Ingeniería*](https://www.uni.edu.pe/), [*Ontology Engineering Group*](https://oeg.fi.upm.es/), [*Universidad Politécnica de Madrid.*](https://www.upm.es/internacional) ## Contributors See the full list of contributors and more resources [here](https://github.com/eduar03yauri/DCGAN-text2face-forSpanish). <kbd><img src="https://www.uni.edu.pe/images/logos/logo_uni_2016.png" alt="Universidad Nacional de Ingeniería" width="100"></kbd> <kbd><img src="https://raw.githubusercontent.com/oeg-upm/TINTO/main/assets/logo-oeg.png" alt="Ontology Engineering Group" width="100"></kbd> <kbd><img src="https://raw.githubusercontent.com/oeg-upm/TINTO/main/assets/logo-upm.png" alt="Universidad Politécnica de Madrid" width="100"></kbd>
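A minimal sketch of the Siamese training setup described in this card, written with the _sentence-transformers_ API. It is an illustration rather than the authors' exact script: the pairs file name and layout, the batch size, the epoch count and the warmup steps are assumptions; only the base model and the cosine-similarity loss follow the card.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical pairs file: each row holds two Spanish captions and a similarity score in [0, 1].
train_samples = []
with open("celeba_caption_pairs.tsv", encoding="utf-8") as f:
    for line in f:
        sent_a, sent_b, score = line.rstrip("\n").split("\t")
        train_samples.append(InputExample(texts=[sent_a, sent_b], label=float(score)))

# Baseline encoder named in the card; sentence-transformers adds a mean-pooling head on top.
model = SentenceTransformer("PlanTL-GOB-ES/roberta-large-bne")

train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)  # compares cosine(u, v) with the gold similarity

# Placeholder hyperparameters, not the values used for the released checkpoint.
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
model.save("roberta-large-bne-celebAEs-UNI")
```

Spearman's correlation on the held-out pairs, as reported in the table above, can be tracked during training with `sentence_transformers.evaluation.EmbeddingSimilarityEvaluator`.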
oeg/Sent2vec_CelebA_Sp
oeg
2024-02-05T12:22:07Z
0
0
null
[ "CelebA", "Spanish", "celebFaces Attributes", "es", "dataset:oeg/CelebA_Sent2Vect_Sp", "doi:10.57967/hf/0465", "license:cc-by-nc-4.0", "region:us" ]
null
2023-03-18T02:13:01Z
--- license: cc-by-nc-4.0 datasets: - oeg/CelebA_Sent2Vect_Sp language: - es tags: - CelebA - Spanish - celebFaces Attributes --- # Sent2vec trained with data from the descriptive text corpus of the CelebA dataset ## Overview - **Language**: Spanish - **Data**: [CelebA_Sent2vec_Sp](https://huggingface.co/datasets/oeg/CelebA_Sent2Vect_Sp). - **Architecture**: Sent2vec - **Paper**: [Information Processing and Management](https://doi.org/10.1016/j.ipm.2024.103667) ## Description Sent2vec can be used directly for English text: it is enough to download the library and encode the text, since most published models were trained with English as the original language. However, since this work deals with Spanish text, it was necessary to train the model from scratch in this new language. The training was carried out using the generated corpus ([in this repository](https://huggingface.co/datasets/oeg/CelebA_Sent2Vect_Sp)) with the following process: - A corpus composed of descriptive sentences about the characteristics of each face in the CelebA dataset was generated in Spanish. A total of 192,209 sentences are available for training. - A pre-processing step removed accents; _stopwords_ and connectors were retained as part of the sentence structure during training. - The _Sent2vec_ and _FastText_ libraries were installed and the parameters configured. The parameters were fixed empirically after several tests: 4,800 feature-vector dimensions, 5,000 epochs, 200 threads, 2 n-grams and a learning rate of 0.05. The total training time was 7 hours with all CPUs working at maximum performance. Training produces a _.bin_ file which can be downloaded from this repository. ## How to use Download the model; the resulting **sent2vec_celebAEs-UNI.bin** file is loaded with the _sent2vec_ library in Python as follows: ```python import sent2vec model_path = "sent2vec_celebAEs-UNI.bin" s2vmodel = sent2vec.Sent2vecModel() s2vmodel.load_model(model_path) caption = """El hombre luce una sombra a las 5 en punto. Su cabello es de color negro. Tiene una nariz grande con cejas tupidas. El hombre se ve atractivo""" vector = s2vmodel.embed_sentence(caption) print(vector) ``` ## Results As a result, the encoder will generate a numeric vector whose dimension is 4800.
```python >>$ print(vector) >>$ [[0.1,0.87,0.51,........0.7]] >>$ len(vector[0]) >>$ 4800 ``` To see detailed information on the use of the trained model, visit the [following link](https://github.com/eduar03yauri/DCGAN-text2face-forSpanish/blob/main/Data/encoder-models/Sent2vec_model_trained.md). ## Licensing information This model is available under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/deed.es) license. ## Citation information **Citing**: If you use the Sent2vec+CelebA model in your work, please cite the paper published in **[Information Processing and Management](https://doi.org/10.1016/j.ipm.2024.103667)**: ```bib @article{YAURILOZANO2024103667, title = {Generative Adversarial Networks for text-to-face synthesis & generation: A quantitative–qualitative analysis of Natural Language Processing encoders for Spanish}, journal = {Information Processing & Management}, volume = {61}, number = {3}, pages = {103667}, year = {2024}, issn = {0306-4573}, doi = {https://doi.org/10.1016/j.ipm.2024.103667}, url = {https://www.sciencedirect.com/science/article/pii/S030645732400027X}, author = {Eduardo Yauri-Lozano and Manuel Castillo-Cara and Luis Orozco-Barbosa and Raúl García-Castro} } ``` ## Authors - [Eduardo Yauri Lozano](https://github.com/eduar03yauri) - [Manuel Castillo-Cara](https://github.com/manwestc) - [Raúl García-Castro](https://github.com/rgcmme) [*Universidad Nacional de Ingeniería*](https://www.uni.edu.pe/), [*Ontology Engineering Group*](https://oeg.fi.upm.es/), [*Universidad Politécnica de Madrid.*](https://www.upm.es/internacional) ## Contributors See the full list of contributors [here](https://github.com/eduar03yauri/DCGAN-text2face-forSpanish). <kbd><img src="https://www.uni.edu.pe/images/logos/logo_uni_2016.png" alt="Universidad Nacional de Ingeniería" width="100"></kbd> <kbd><img src="https://raw.githubusercontent.com/oeg-upm/TINTO/main/assets/logo-oeg.png" alt="Ontology Engineering Group" width="100"></kbd> <kbd><img src="https://raw.githubusercontent.com/oeg-upm/TINTO/main/assets/logo-upm.png" alt="Universidad Politécnica de Madrid" width="100"></kbd>
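Building on the loading snippet in this card, the sketch below shows one way to compare two face descriptions by taking the cosine similarity of their 4800-dimensional embeddings. It uses only the `sent2vec` calls shown above plus NumPy; the captions are made-up examples.

```python
import numpy as np
import sent2vec

s2vmodel = sent2vec.Sent2vecModel()
s2vmodel.load_model("sent2vec_celebAEs-UNI.bin")

caption_a = "El hombre luce una sombra a las 5 en punto. Su cabello es de color negro."
caption_b = "La mujer tiene pomulos altos y cejas arqueadas."

# embed_sentence returns a (1, 4800) array, so take the first row of each result.
vec_a = s2vmodel.embed_sentence(caption_a)[0]
vec_b = s2vmodel.embed_sentence(caption_b)[0]

cosine = float(np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b)))
print(f"cosine similarity: {cosine:.4f}")
```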
kenilshah35/whisper-med-dictation
kenilshah35
2024-02-05T12:19:42Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-medium.en", "base_model:adapter:openai/whisper-medium.en", "region:us" ]
null
2024-02-05T11:08:06Z
--- library_name: peft base_model: openai/whisper-medium.en --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
Nexesenex/chargoddard_llama-2-34b-uncode-iMat.GGUF
Nexesenex
2024-02-05T12:19:29Z
10
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-02-04T23:27:35Z
GGUF Quants with iMatrix for the following model : https://huggingface.co/chargoddard/llama-2-34b-uncode That model is based on the following dataset : https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama-2-34b-uncode It's basically an attempt to un-nerf CodeLlama 34b. Here are some benchs made with LlamaCPP : - llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf,-,Hellaswag,72.5,,400,2024-02-02 01:40:00,,34b,CodeLlama,1000000,,,GGUF,chargoddard,Nexesenex, - llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf,-,Hellaswag,72.6,,1000,2024-02-02 01:40:00,,34b,CodeLlama,1000000,,,GGUF,chargoddard,Nexesenex, - llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf,-,Hellaswag_Bin,70,,400,2024-02-02 01:40:00,,34b,CodeLlama,1000000,,,GGUF,chargoddard,Nexesenex, - llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf,-,Hellaswag_Bin,72.5,,1000,2024-02-02 01:40:00,,34b,CodeLlama,1000000,,,GGUF,chargoddard,Nexesenex, - llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf,-,Arc-Challenge,38.12709030,,299,2024-02-02 05:40:00,,34b,CodeLlama,1000000,,,GGUF,chargoddard,Nexesenex, - llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf,-,Arc-Easy,64.73684211,,570,2024-02-02 05:40:00,,34b,CodeLlama,1000000,,,GGUF,chargoddard,Nexesenex, - llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf,-,MMLU,35.14376997,,313,2024-02-02 05:40:00,,34b,CodeLlama,1000000,,,GGUF,chargoddard,Nexesenex, - llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf,-,Thruthful-QA,24.96940024,,817,2024-02-02 05:40:00,,34b,CodeLlama,1000000,,,GGUF,chargoddard,Nexesenex, - llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf,-,Winogrande,68.5872,,1267,2024-02-02 05:40:00,,34b,CodeLlama,1000000,,,GGUF,chargoddard,Nexesenex, - llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf,-,wikitext,5.7595,512,512,2024-02-02 01:40:00,,34b,CodeLlama,1000000,,,GGUF,chargoddard,Nexesenex, - llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf,-,wikitext,5.0113,4096,4096,2024-02-02 01:40:00,,34b,CodeLlama,1000000,,,GGUF,chargoddard,Nexesenex,
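For readers who prefer Python over the llama.cpp CLI, a minimal loading sketch with the `llama-cpp-python` bindings follows. The file name is one of the quants benchmarked above; the context size and sampling settings are arbitrary examples, not recommendations.

```python
from llama_cpp import Llama

# Path to a quant downloaded from this repository.
llm = Llama(
    model_path="llama-2-34b-uncode-b2060-iMat-c32_ch3250-Q5_K_S.gguf",
    n_ctx=4096,  # context window for this session; the base model supports far longer contexts
)

out = llm(
    "Write a Python function that reverses a string.\n",
    max_tokens=128,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```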
Nexesenex/brucethemoose_Yi-34B-200K-DARE-merge-v7-iMat.GGUF
Nexesenex
2024-02-05T12:18:55Z
29
1
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-01-23T02:41:53Z
GGUF Quants with iMatrix for https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v7 iMatrix made with 2500 batches of 32 tokens made on wiki.train.raw Benchs made with LlamaCPP : - Yi-34B-200K-DARE-merge-v7-AR-b1952-iMat-c32_ch2500-Q4_K_M.gguf,-,Hellaswag,85.25,,400,2024-01-26 00:00:00,,34b,Yi,2000000,,,GGUF,Brucethemoose,Nexesenex, - Yi-34B-200K-DARE-merge-v7-AR-b1952-iMat-c32_ch2500-Q4_K_M.gguf,-,Hellaswag_Bin,80,,400,2024-01-26 00:00:00,,34b,Yi,2000000,,,GGUF,Brucethemoose,Nexesenex, - Yi-34B-200K-DARE-merge-v7-AR-b1952-iMat-c32_ch2500-Q4_K_M.gguf,-,Arc-Challenge,57.19063545,,299,2024-01-26 05:40:00,,34b,Yi,2000000,,,GGUF,Brucethemoose,Nexesenex, - Yi-34B-200K-DARE-merge-v7-AR-b1952-iMat-c32_ch2500-Q4_K_M.gguf,-,Arc-Easy,79.12280702,,570,2024-01-26 05:40:00,,34b,Yi,2000000,,,GGUF,Brucethemoose,Nexesenex, - Yi-34B-200K-DARE-merge-v7-AR-b1952-iMat-c32_ch2500-Q4_K_M.gguf,-,MMLU,38.91285591,,1159,2024-01-26 05:40:00,,34b,Yi,2000000,,,GGUF,Brucethemoose,Nexesenex, - Yi-34B-200K-DARE-merge-v7-AR-b1952-iMat-c32_ch2500-Q4_K_M.gguf,-,Thruthful-QA,33.41493268,19.8590,817,2024-01-26 05:40:00,,34b,Yi,2000000,,,GGUF,Brucethemoose,Nexesenex, - Yi-34B-200K-DARE-merge-v7-AR-b1952-iMat-c32_ch2500-Q4_K_M.gguf,-,Winogrande,78.1373,,1267,2024-01-26 05:40:00,,34b,Yi,2000000,,,GGUF,Brucethemoose,Nexesenex, - Yi-34B-200K-DARE-merge-v7-AR-b1952-iMat-c32_ch2500-Q4_K_M.gguf,-,wikitext,5.1353,512,512,2024-01-26 00:00:00,,34b,Yi,2000000,,,GGUF,Brucethemoose,Nexesenex, - Yi-34B-200K-DARE-merge-v7-AR-b1952-iMat-c32_ch2500-Q4_K_M.gguf,-,wikitext,4.5414,2048,2048,2024-01-26 00:00:00,,34b,Yi,2000000,,,GGUF,Brucethemoose,Nexesenex, - Yi-34B-200K-DARE-merge-v7-AR-b1952-iMat-c32_ch2500-Q4_K_M.gguf,-,wikitext,4.3967,4096,4096,2024-01-26 00:00:00,,34b,Yi,2000000,,,GGUF,Brucethemoose,Nexesenex, - Yi-34B-200K-DARE-merge-v7-AR-b1952-iMat-c32_ch2500-Q4_K_M.gguf,-,wikitext,4.4457,8192,8192,2024-01-26 00:00:00,,34b,Yi,2000000,,,GGUF,Brucethemoose,Nexesenex,
TalkTix/roberta-base-service-type-generator-28k
TalkTix
2024-02-05T12:18:50Z
89
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-19T17:00:14Z
--- license: mit language: - en metrics: - confusion_matrix --- ## Model Details This model is designed to classify customer service inquiries into nine services: SAP ERP, Atlassian, Adobe, Salesforce, Reporting, Microsoft Power Platform, Microsoft SharePoint, Snowflake, and Microsoft Office. ## Training Data The model was trained on a balanced dataset of 28,000 entries composed of anonymized customer service inquiries. Each category contained a similar number of examples to prevent class imbalance. https://github.com/amosproj/amos2023ws01-ticket-chat-ai/tree/main/Backend/app/model/test_data/test_data_with_gpt ## Training Procedure The model was fine-tuned over four epochs for a sequence classification task. We utilized a batch size of 4 and an Adam optimizer with a learning rate of 2e-5. ## Model Performance The model's performance was evaluated using a confusion matrix and a learning curve, as detailed below: - ### Confusion Matrix Analysis ![image/png](https://cdn-uploads.huggingface.co/production/uploads/654f564ebebf0c4c51d7290a/VX1kloNWTcAkjG5kjAOSd.png) - **SAP ERP**: Most instances are classified correctly (400), with a few misclassifications as Adobe (1) and Microsoft Office (1). - **Atlassian**: This category has perfect classification with all instances (410) correctly identified. - **Adobe**: Also has high accuracy with 390 instances correctly classified and a single misclassification as SAP ERP (1). - **Salesforce**: There are 450 correctly classified instances, but there is some confusion with Reporting (16), Microsoft Power Platform (1), and Microsoft SharePoint (1). - **Reporting**: There are 59 correct predictions. However, a significant number of Reporting instances are misclassified as Salesforce (36). - **Microsoft Power Platform**: This category has 320 correct classifications, with a few instances misclassified as Reporting (5), Snowflake (2), and Microsoft SharePoint (2). - **Microsoft SharePoint**: Most instances are correctly classified (390), with minimal confusion with other services. - **Snowflake**: There are 300 instances correctly identified, with a single instance misclassified as Microsoft Power Platform (1). - **Microsoft Office**: This category has 30 instances, all correctly classified with no misclassifications. - ### Learning Curve Analysis ![image/png](https://cdn-uploads.huggingface.co/production/uploads/654f564ebebf0c4c51d7290a/2_pFL3pRe_X3cve8ef3yr.png) - **Training Loss**: The training loss starts at approximately 0.12 and decreases to about 0.065. The steady decline indicates the model is learning effectively from the training data. - **Validation Loss**: The validation loss starts around 0.11, decreases, then increases slightly at epoch 2 before declining again, ending around 0.075. This pattern suggests some variation in model performance on the validation set, but overall, the validation loss follows a downward trend, indicating improving model generalization. - ### Interpreting the Model's Output: - LABEL_0 stands for Adobe - LABEL_1 stands for Atlassian - LABEL_2 stands for Microsoft Office - LABEL_3 stands for Microsoft Power Platform - LABEL_4 stands for Microsoft SharePoint - LABEL_5 stands for Reporting - LABEL_6 stands for SAP ERP - LABEL_7 stands for Salesforce - LABEL_8 stands for Snowflake
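A minimal way to query the classifier and map the raw `LABEL_*` outputs back to service names, assuming the standard `transformers` text-classification pipeline; the example ticket text is made up.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="TalkTix/roberta-base-service-type-generator-28k")

# Mapping taken from the "Interpreting the Model's Output" section above.
label_to_service = {
    "LABEL_0": "Adobe",
    "LABEL_1": "Atlassian",
    "LABEL_2": "Microsoft Office",
    "LABEL_3": "Microsoft Power Platform",
    "LABEL_4": "Microsoft SharePoint",
    "LABEL_5": "Reporting",
    "LABEL_6": "SAP ERP",
    "LABEL_7": "Salesforce",
    "LABEL_8": "Snowflake",
}

ticket = "I cannot log in to our Jira board since this morning, the page keeps timing out."
prediction = clf(ticket)[0]
print(label_to_service[prediction["label"]], round(prediction["score"], 3))
```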
cykim/distilbert-base-uncased-finetuned-emotions
cykim
2024-02-05T12:17:07Z
16
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-15T02:37:16Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotions results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.921 - name: F1 type: f1 value: 0.9208288097625511 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotions This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2140 - Accuracy: 0.921 - F1: 0.9208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8121 | 1.0 | 250 | 0.3099 | 0.9105 | 0.9099 | | 0.2479 | 2.0 | 500 | 0.2140 | 0.921 | 0.9208 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
DaOppaiLoli/Llama2-TwAddr-LoRA
DaOppaiLoli
2024-02-05T12:12:55Z
0
0
peft
[ "peft", "safetensors", "base_model:TheBloke/Llama-2-7B-Chat-fp16", "base_model:adapter:TheBloke/Llama-2-7B-Chat-fp16", "license:mit", "region:us" ]
null
2024-02-05T11:08:59Z
--- license: mit library_name: peft base_model: TheBloke/Llama-2-7B-Chat-fp16 --- # Model Card for Model ID A simple model for parsing Taiwanese road names into JSON format. For details on the training data sources and the training method, see the following articles: 1. [LLM Note Day 24 - 語言模型微調 LLM Finetuning](https://ithelp.ithome.com.tw/articles/10336323) 2. [LLM Note Day 25 - PEFT & LoRA 訓練框架](https://ithelp.ithome.com.tw/articles/10336491) ## Model Details ### Model Description - **Developed by:** Penut Chen - **Model type:** Llama - **Language(s) (NLP):** Traditional Chinese - **License:** MIT - **Finetuned from model:** [TheBloke/Llama-2-7B-Chat-fp16](https://huggingface.co/TheBloke/Llama-2-7B-Chat-fp16) ## Usage - For the training data, see the `data` folder. - For model fine-tuning, see [this script](scripts/step1_finetuning.py). - For merging the weights, see [this script](scripts/step2_merge.py). - For testing and evaluation, see [this script](scripts/step3_evaluation.py). ## Training Details ### Training Data [政府資料開放平台 - 112 全國路名資料](https://data.gov.tw/dataset/35321) (national road-name data for 2023 from Taiwan's Government Open Data Platform) ### Framework versions - PEFT 0.8.2
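A minimal sketch of loading the adapter on top of the base model with 🤗 Transformers and PEFT. The prompt below is only an illustration and may not match the template used during fine-tuning; see the linked articles and the `scripts` folder for the exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Llama-2-7B-Chat-fp16"
adapter_id = "DaOppaiLoli/Llama2-TwAddr-LoRA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative input only; the real prompt template is defined in the training scripts.
prompt = "臺北市中正區重慶南路一段122號"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```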
TalkTix/roberta-base-priority-type-generator-55k
TalkTix
2024-02-05T12:05:59Z
89
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-03T21:09:19Z
--- license: mit language: - en metrics: - confusion_matrix --- ## Model Details This model is designed to classify customer service inquiries into four priorities: Low, Medium, High, and Very High. ## Training Data The model was trained on a balanced dataset composed of anonymized customer service inquiries. Each category contained a similar number of examples to prevent class imbalance. https://github.com/amosproj/amos2023ws01-ticket-chat-ai/tree/main/Backend/app/model/test_data/test_data_with_gpt ## Training Procedure The model was fine-tuned over four epochs for a sequence classification task. We utilized a batch size of 4 and an Adam optimizer with a learning rate of 2e-5. ## Model Performance The model's performance was evaluated using a confusion matrix and a learning curve, as detailed below: - ### Confusion Matrix Analysis ![image/png](https://cdn-uploads.huggingface.co/production/uploads/654f564ebebf0c4c51d7290a/g3U0MM2r--qQeuHqsXLO8.png) - **High**: The model has performed well in classifying high-priority items, with 1800 correct predictions. However, there are 28 instances where high priority is confused with low, 460 with medium, and 84 with very high. - **Low**: There is some confusion in the low-priority classification, with 140 instances classified correctly, but 360 instances confused with medium priority and 200 with very high. The model rarely misclassifies low as high priority. - **Medium**: The model has classified medium priority with moderate accuracy, with 700 correct predictions. However, there is notable confusion with high priority (150 instances) and very high priority (36 instances). - **Very High**: This category shows significant confusion. While the model correctly identifies 410 very high priority instances, it also confuses 5 with low, 160 with high, and 200 with medium. - ### Learning Curve Analysis ![image/png](https://cdn-uploads.huggingface.co/production/uploads/654f564ebebf0c4c51d7290a/GNeprSSqqPTg5aJmXxRqt.png) - **Training Loss**: This line starts at approximately 0.94 and steadily decreases to around 0.82, indicating that the model is effectively learning from the training data. - **Validation Loss**: The validation loss begins just below 0.90 and decreases slightly after the first epoch, then levels off around 0.86. This behavior suggests that the model is not overfitting since the validation loss is not increasing as the model trains. However, the plateauing of the validation loss also suggests that the model may not be improving significantly after the first epoch. - ### Interpreting the Model's Output: - LABEL_0 stands for High - LABEL_1 stands for Low - LABEL_2 stands for Medium - LABEL_3 stands for Very High
TalkTix/roberta-base-category-type-generator-43k
TalkTix
2024-02-05T12:04:54Z
89
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-03T17:10:24Z
--- license: mit language: - en metrics: - confusion_matrix --- ## Model Details This model is designed to classify customer service inquiries into five categories: Technical Issues, Billing & Payment, Product Inquiries, Account Management, and Policy Questions. ## Training Data The model was trained on a balanced dataset of 43000 entries composed of anonymized customer service inquiries. Each category contained a similar number of examples to prevent class imbalance. https://github.com/amosproj/amos2023ws01-ticket-chat-ai/tree/main/Backend/app/model/test_data/test_data_with_gpt ## Training Procedure The model was fine-tuned over four epochs for a sequence classification task. We utilized a batch size of 4 and an Adam optimizer with a learning rate of 2e-5. ## Model Performance The model's performance was evaluated using a confusion matrix and a learning curve, as detailed below: - ### Confusion Matrix Analysis ![image/png](https://cdn-uploads.huggingface.co/production/uploads/654f564ebebf0c4c51d7290a/5YFdtj0PW1GATfr6ANhVZ.png) - ### Learning Curve Analysis ![image/png](https://cdn-uploads.huggingface.co/production/uploads/654f564ebebf0c4c51d7290a/b9jaay1BKYoVnMD4YUhDK.png) - **Training Loss**: The training loss starts at approximately 0.42 and decreases steadily to around 0.32. This is a good sign as it suggests that the model is learning and improving its prediction on the training data with each epoch. - **Validation Loss**: The validation loss starts around 0.38 and decreases slightly after the first epoch but then flattens out and remains almost constant around 0.36. The flattening of the validation loss indicates that further learning improvements are marginal, and the model is not gaining additional predictive power from further training on this dataset.
ucheokechukwu/rl_course_vizdoom_health_gathering_supreme
ucheokechukwu
2024-02-05T12:04:08Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-05T11:54:35Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 20.40 +/- 1.79 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r ucheokechukwu/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
mtc/stabilityai-stablelm-2-1_6b-xsum-with-explanation-local-save-test_merged
mtc
2024-02-05T12:01:05Z
4
0
transformers
[ "transformers", "safetensors", "stablelm_epoch", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2024-02-05T11:34:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jcjo/cat2
jcjo
2024-02-05T12:00:56Z
1
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-02-05T12:00:49Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of PJY cat license: openrail++ --- # SDXL LoRA DreamBooth - jcjo/cat2 <Gallery /> ## Model description These are jcjo/cat2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of PJY cat` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](jcjo/cat2/tree/main) them in the Files & versions tab.
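A minimal generation sketch with 🤗 Diffusers, assuming the standard SDXL pipeline plus `load_lora_weights`; the prompt simply embeds the trigger phrase above, and the fp16-fix VAE mirrors the VAE mentioned in the model description.

```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

# Optional: the VAE named in the model description, which avoids fp16 decoding artifacts.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Load the LoRA adapter from this repository.
pipe.load_lora_weights("jcjo/cat2")

image = pipe(
    prompt="a photo of PJY cat sitting on a windowsill, soft morning light",
    num_inference_steps=30,
).images[0]
image.save("pjy_cat.png")
```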
sarulab-speech/hubert-base-jtube
sarulab-speech
2024-02-05T11:49:57Z
1,203
16
transformers
[ "transformers", "pytorch", "hubert", "feature-extraction", "ja", "arxiv:2106.07447", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2024-02-02T04:15:22Z
--- license: mit language: - ja library_name: transformers --- # hubert-base-jtube This repo provides model weights for the [hubert-base model](https://arxiv.org/abs/2106.07447) trained on the [JTubeSpeech](https://github.com/sarulab-speech/jtubespeech) corpus. Scroll down for the model usage. # FAQ Q. What does this model do?<br> A. It embeds speech into latent representations. It can be used for recognition-style tasks such as speech recognition (transcription). Q. Is a speech language model a voice version of ChatGPT?<br> A. There are two kinds of Transformers: encoder-type and decoder-type. Roughly speaking, encoders are for recognition (they map input data to latent representations) and decoders are for generation (they reconstruct the original data). The HuBERT released here is an encoder-type (recognition) model, unlike decoder-type (generative) models such as ChatGPT. Q. So it cannot create voices?<br> A. It is a recognition model, not a voice-generation model, and it cannot be used for generation. Q. Are there plans to release a decoder-type (generative) model in the future?<br> A. No. Releasing a generative model could infringe on individuals' rights, so we do not plan to do so. Rather, we believe that developing technology to protect individuals' rights over their voices is a key task for speech engineers (this speech language model is a first step in that direction). ## Dataset We extracted approximately 2720 hours of Japanese speech from the single-speaker subset of the JTubeSpeech corpus. The training data includes approximately 6,000,000 utterances from a total of about 55,000 speakers. ## How to use ```python from transformers import AutoFeatureExtractor, HubertModel from datasets import load_dataset import soundfile as sf model_name = "sarulab-speech/hubert-base-jtube" processor = AutoFeatureExtractor.from_pretrained(model_name) model = HubertModel.from_pretrained(model_name) def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") ds = ds.map(map_to_array) input_values = processor(ds["speech"][0], return_tensors="pt", sampling_rate=16_000).input_values # Batch size 1 hidden_states = model(input_values).last_hidden_state ``` # Contributors * [Wataru Nakata/中田 亘](https://wataru-nakata.github.io) * [Kentaro Seki/関 健太郎](https://trgkpc.github.io/) * [Hitomi Yanaka/谷中 瞳](https://hitomiyanaka.mystrikingly.com/) * [Takaaki Saeki/佐伯 高明](https://takaaki-saeki.github.io/) * [Yuki Saito/齋藤 佑樹](https://sython.org/) * [Shinnosuke Takamichi/高道 慎之介](https://sites.google.com/site/shinnosuketakamichi/home) # Acknowledgements This work was supported by the AIST KAKUSEI project (FY2023).
thisiswooyeol/Reinforce-Pixelcopter-PLE-v0
thisiswooyeol
2024-02-05T11:42:07Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-05T11:42:04Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 20.70 +/- 12.07 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
athmurikarthik/videomae-base-finetuned-ucf101-subset
athmurikarthik
2024-02-05T11:38:26Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-02-05T10:55:07Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8953 - Accuracy: 0.6590 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 148 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0692 | 0.26 | 38 | 0.9795 | 0.5211 | | 1.0828 | 1.26 | 76 | 0.9425 | 0.5211 | | 1.0734 | 2.26 | 114 | 0.9658 | 0.6552 | | 0.8549 | 3.23 | 148 | 0.8953 | 0.6590 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
krishnareddy/asr_example
krishnareddy
2024-02-05T11:37:16Z
4
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-05T10:54:30Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - wer model-index: - name: asr_example results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # asr_example This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8808 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 3.2136 | 50.0 | 500 | 3.1954 | 1.0 | | 2.8884 | 100.0 | 1000 | 2.9321 | 1.0 | | 2.7653 | 150.0 | 1500 | 2.8864 | 1.0 | | 2.7109 | 200.0 | 2000 | 2.8808 | 1.0 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
Skier8402/bert-finetuned-ner
Skier8402
2024-02-05T11:36:30Z
10
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "en", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-02-03T09:58:59Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: [] datasets: - conll2003 language: - en library_name: transformers --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [CoNLL-2003](https://huggingface.co/datasets/conll2003) dataset. It achieves the following results on the evaluation set: - Loss: 0.0597 - Precision: 0.9322 - Recall: 0.9482 - F1: 0.9401 - Accuracy: 0.9863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0793 | 1.0 | 1756 | 0.0771 | 0.9107 | 0.9342 | 0.9223 | 0.9805 | | 0.0384 | 2.0 | 3512 | 0.0583 | 0.9301 | 0.9455 | 0.9377 | 0.9858 | | 0.0255 | 3.0 | 5268 | 0.0597 | 0.9322 | 0.9482 | 0.9401 | 0.9863 | ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
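A short usage sketch with the `transformers` token-classification pipeline; the example sentence is made up, and `aggregation_strategy="simple"` merges word pieces into whole entities.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Skier8402/bert-finetuned-ner",
    aggregation_strategy="simple",
)

text = "Angela Merkel visited the Google offices in Zurich last spring."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```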
mtc/stabilityai-stablelm-2-1_6b-xsum-with-explanation-local-save-test-qlora-4bit-adapter
mtc
2024-02-05T11:31:07Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-05T11:31:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
newbie-geek/tinyllama-v1-training
newbie-geek
2024-02-05T11:29:16Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-31T06:22:38Z
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 tags: - trl - sft - generated_from_trainer model-index: - name: tinyllama-v1-training results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-v1-training This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
simonycl/llama-2-7b-hf-cohere-Random-0.05
simonycl
2024-02-05T11:27:05Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-02-05T11:26:39Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
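The "How to Get Started" section above is left as [More Information Needed]. Based on the repo metadata (a PEFT adapter for `meta-llama/Llama-2-7b-hf`, PEFT 0.8.2), here is an editor-added, hedged sketch of attaching the adapter; the base model is gated on the Hub and the prompt below is purely illustrative.

```python
# Hedged sketch: attach the adapter in this repo to its Llama-2-7b base with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # gated base model
adapter_id = "simonycl/llama-2-7b-hf-cohere-Random-0.05"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # loads adapter weights on top

# Illustrative prompt - the card does not document an expected format.
inputs = tokenizer(
    "The benefits of careful data selection for instruction tuning are",
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```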
simonycl/llama-2-7b-hf-cohere-KCenterGreedyDeita-0.05-Llama-2-7b-hf
simonycl
2024-02-05T11:23:25Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-02-05T11:23:13Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
LitiGious/my_first_model
LitiGious
2024-02-05T11:23:11Z
4
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T11:15:17Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: my_first_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_first_model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7224 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 459 | 2.7405 | | 2.9597 | 2.0 | 918 | 2.7174 | | 2.5937 | 3.0 | 1377 | 2.7224 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
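Since the card above omits a usage snippet, here is an editor-added minimal example (not from the original card) of sampling from the fine-tuned distilgpt2 checkpoint with the high-level pipeline API.

```python
from transformers import pipeline

# Text-generation pipeline over the fine-tuned checkpoint listed in this record.
generator = pipeline("text-generation", model="LitiGious/my_first_model")
print(generator("The quick brown fox", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```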
PoungPoung/tuto_one
PoungPoung
2024-02-05T11:22:28Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-01T22:48:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hiraltalsaniya/phi2-task-classification-demo
hiraltalsaniya
2024-02-05T11:14:34Z
33
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T05:12:49Z
--- library_name: transformers pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
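The card above is an empty template, but the repo tags identify a Phi text-generation model with custom code. The sketch below is editor-added and hedged: `trust_remote_code=True` is inferred from the `custom_code` tag, and the classification-style prompt is purely illustrative since the card does not document a prompt format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hiraltalsaniya/phi2-task-classification-demo"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

# Illustrative prompt only - the expected format is not documented in the card.
prompt = "Classify the task type of this request: 'Book a flight to Berlin.'\nTask:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```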
okandemirel/sdxl-turbo
okandemirel
2024-02-05T11:07:07Z
4
0
diffusers
[ "diffusers", "onnx", "safetensors", "text-to-image", "license:other", "autotrain_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-02-05T11:07:06Z
--- pipeline_tag: text-to-image inference: false license: other license_name: sai-nc-community license_link: https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.TXT --- # SDXL-Turbo Model Card <!-- Provide a quick summary of what the model is/does. --> ![row01](output_tile.jpg) SDXL-Turbo is a fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation. A real-time demo is available here: http://clipdrop.co/stable-diffusion-turbo ## Model Details ### Model Description SDXL-Turbo is a distilled version of [SDXL 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), trained for real-time synthesis. SDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the [technical report](https://stability.ai/research/adversarial-diffusion-distillation)), which allows sampling large-scale foundational image diffusion models in 1 to 4 steps at high image quality. This approach uses score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal and combines this with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. - **Developed by:** Stability AI - **Funded by:** Stability AI - **Model type:** Generative text-to-image model - **Finetuned from model:** [SDXL 1.0 Base](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) ### Model Sources For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference). - **Repository:** https://github.com/Stability-AI/generative-models - **Paper:** https://stability.ai/research/adversarial-diffusion-distillation - **Demo:** http://clipdrop.co/stable-diffusion-turbo ## Evaluation ![comparison1](image_quality_one_step.png) ![comparison2](prompt_alignment_one_step.png) The charts above evaluate user preference for SDXL-Turbo over other single- and multi-step models. SDXL-Turbo evaluated at a single step is preferred by human voters in terms of image quality and prompt following over LCM-XL evaluated at four (or fewer) steps. In addition, we see that using four steps for SDXL-Turbo further improves performance. For details on the user study, we refer to the [research paper](https://stability.ai/research/adversarial-diffusion-distillation). ## Uses ### Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Research on generative models. - Research on real-time applications of generative models. - Research on the impact of real-time generative models. - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. Excluded uses are described below. ### Diffusers ``` pip install diffusers transformers accelerate --upgrade ``` - **Text-to-image**: SDXL-Turbo does not make use of `guidance_scale` or `negative_prompt`, we disable it with `guidance_scale=0.0`. Preferably, the model generates images of size 512x512 but higher image sizes work as well. A **single step** is enough to generate high quality images. 
```py from diffusers import AutoPipelineForText2Image import torch pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") pipe.to("cuda") prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe." image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0] ``` - **Image-to-image**: When using SDXL-Turbo for image-to-image generation, make sure that `num_inference_steps` * `strength` is larger or equal to 1. The image-to-image pipeline will run for `int(num_inference_steps * strength)` steps, *e.g.* 0.5 * 2.0 = 1 step in our example below. ```py from diffusers import AutoPipelineForImage2Image from diffusers.utils import load_image import torch pipe = AutoPipelineForImage2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") pipe.to("cuda") init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png").resize((512, 512)) prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" image = pipe(prompt, image=init_image, num_inference_steps=2, strength=0.5, guidance_scale=0.0).images[0] ``` ### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy). ## Limitations and Bias ### Limitations - The generated images are of a fixed resolution (512x512 pix), and the model does not achieve perfect photorealism. - The model cannot render legible text. - Faces and people in general may not be generated properly. - The autoencoding part of the model is lossy. ### Recommendations The model is intended for research purposes only. ## How to Get Started with the Model Check out https://github.com/Stability-AI/generative-models
doceoSoftware/donut-rvlcdip-clicars-04022024-1
doceoSoftware
2024-02-05T10:53:22Z
4
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-02-05T10:52:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
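The card above is an empty template; the tags indicate a Donut-style vision-encoder-decoder checkpoint derived from an RVL-CDIP document classifier. The sketch below is editor-added and hedged: it assumes the repo ships processor files and that the `<s_rvlcdip>` task prompt convention of the base `donut-base-finetuned-rvlcdip` model still applies.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "doceoSoftware/donut-rvlcdip-clicars-04022024-1"
processor = DonutProcessor.from_pretrained(model_id)  # assumes processor files are in the repo
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("document_scan.png").convert("RGB")  # any document scan
pixel_values = processor(image, return_tensors="pt").pixel_values

# Task prompt borrowed from the base RVL-CDIP Donut checkpoint - an assumption.
task_prompt = "<s_rvlcdip>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=32)
print(processor.batch_decode(outputs)[0])
```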
OpenBuddy/openbuddy-deepseek-67b-v15.3-4k
OpenBuddy
2024-02-05T10:51:23Z
56
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "fi", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-04T05:11:46Z
--- language: - zh - en - fr - de - ja - ko - it - ru - fi pipeline_tag: text-generation inference: false library_name: transformers license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) Evaluation result of this model: [Evaluation.txt](Evaluation.txt) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice Base model: https://huggingface.co/deepseek-ai/deepseek-llm-67b-base License: [deepseek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL) ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
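The card above links to an external usage guide but includes no inline snippet. The following is an editor-added, minimal sketch only: a 67B model needs several GPUs or quantization, and the chat prompt shown is a placeholder - follow OpenBuddy's linked GitHub guide for the recommended prompt format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenBuddy/openbuddy-deepseek-67b-v15.3-4k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # expects multiple GPUs
)

# Placeholder prompt - see the OpenBuddy usage guide for the proper template.
inputs = tokenizer("User: Introduce yourself briefly.\nAssistant:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```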
AromaticHydrocarbon/ppo-LunarLander-v2
AromaticHydrocarbon
2024-02-05T10:48:01Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-05T10:47:41Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 256.33 +/- 27.73 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
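The usage section above is left as a TODO by the card author. Here is an editor-added, self-contained sketch that downloads the checkpoint, loads it, and re-evaluates it against the reported mean reward; the checkpoint filename is an assumption based on the usual `package_to_hub` naming.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Filename assumed to follow the default "<algo>-<env>.zip" naming.
checkpoint = load_from_hub("AromaticHydrocarbon/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")  # card reports 256.33 +/- 27.73
```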
MaziyarPanahi/MetaMath-Bagel-DPO-34B-GGUF
MaziyarPanahi
2024-02-05T10:38:28Z
35
3
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "llama", "text-generation", "dataset:abacusai/MetaMathFewshot", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "has_space", "base_model:abacusai/MetaMath-Bagel-DPO-34B", "base_model:quantized:abacusai/MetaMath-Bagel-DPO-34B", "conversational" ]
text-generation
2024-02-05T09:48:33Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - llama - text-generation - dataset:abacusai/MetaMathFewshot - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - has_space model_name: MetaMath-Bagel-DPO-34B-GGUF base_model: abacusai/MetaMath-Bagel-DPO-34B inference: false model_creator: abacusai pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/MetaMath-Bagel-DPO-34B-GGUF](https://huggingface.co/MaziyarPanahi/MetaMath-Bagel-DPO-34B-GGUF) - Model creator: [abacusai](https://huggingface.co/abacusai) - Original model: [abacusai/MetaMath-Bagel-DPO-34B](https://huggingface.co/abacusai/MetaMath-Bagel-DPO-34B) ## Description [MaziyarPanahi/MetaMath-Bagel-DPO-34B-GGUF](https://huggingface.co/MaziyarPanahi/MetaMath-Bagel-DPO-34B-GGUF) contains GGUF format model files for [abacusai/MetaMath-Bagel-DPO-34B](https://huggingface.co/abacusai/MetaMath-Bagel-DPO-34B). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. 
Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/MetaMath-Bagel-DPO-34B-GGUF](https://huggingface.co/MaziyarPanahi/MetaMath-Bagel-DPO-34B-GGUF) and below it, a specific filename to download, such as: MetaMath-Bagel-DPO-34B-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/MetaMath-Bagel-DPO-34B-GGUF MetaMath-Bagel-DPO-34B-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/MetaMath-Bagel-DPO-34B-GGUF](https://huggingface.co/MaziyarPanahi/MetaMath-Bagel-DPO-34B-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/MetaMath-Bagel-DPO-34B-GGUF MetaMath-Bagel-DPO-34B-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m MetaMath-Bagel-DPO-34B-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. 
Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./MetaMath-Bagel-DPO-34B-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./MetaMath-Bagel-DPO-34B-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
vsrinivas/falconlite2
vsrinivas
2024-02-05T10:31:19Z
14
0
transformers
[ "transformers", "pytorch", "RefinedWeb", "text-generation", "custom_code", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-06T16:08:16Z
--- license: apache-2.0 inference: false --- # FalconLite2 Model FalconLit2 is a fine-tuned and quantized [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) language model, capable of processing long (up to 24K tokens) input sequences. By utilizing 4-bit [GPTQ quantization](https://github.com/PanQiWei/AutoGPTQ) and adapted RotaryEmbedding, FalconLite2 is able to process 10x longer contexts while consuming 4x less GPU memory than the original model. FalconLite2 is useful for applications such as topic retrieval, summarization, and question-answering. FalconLite2 can be deployed on a single AWS `g5.12x` instance with [TGI 1.0.3](https://github.com/huggingface/text-generation-inference/tree/v1.0.3), making it suitable for applications that require high performance in resource-constrained environments. You can also deploy FalconLite2 directly on SageMaker endpoints. FalconLite2 evolves from [FalconLite](https://huggingface.co/amazon/FalconLite), and their similarities and differences are summarized below: |Model|Fine-tuned on long contexts| Quantization | Max context length| RotaryEmbedding adaptation| Inference framework| |----------|-------------:|-------------:|------------:|-----------:|-----------:| | FalconLite | No | 4-bit GPTQ |12K | [dNTK](https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/) | TGI 0.9.2 | | FalconLite2 | Yes | 4-bit GPTQ |24K | rope_theta = 1000000 | TGI 1.0.3 | ## Model Details - **Developed by:** [AWS Contributors](https://github.com/orgs/aws-samples/teams/aws-prototype-ml-apac) - **Model type:** [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) - **Language:** English - **Finetuned from weights:** [Falcon 40B SFT OASST-TOP1 model](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560) - **Finetuned on data:** [SLidingEncoder and Decoder (SLED)](https://huggingface.co/datasets/tau/sled) and [(Long) Natural Questions (NQ)](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections#multi-passage-qa-from-natural-questions) - **Served using framework:** [Text-Generation-Inference 1.0.3](https://github.com/huggingface/text-generation-inference/tree/v1.0.3) - **Model License:** Apache 2.0 - **Contact:** [GitHub issues](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/issues) ## Deploy FalconLite2 on EC2 ## SSH login to an AWS `g5.12x` instance with the [Deep Learning AMI](https://aws.amazon.com/releasenotes/aws-deep-learning-ami-gpu-pytorch-2-0-ubuntu-20-04/). ### Start TGI server ```bash git clone https://github.com/awslabs/extending-the-context-length-of-open-source-llms.git falconlite-dev cd falconlite-dev/falconlite2 # this may take a while to build updated vLLM CUDA kernels ./docker_build.sh ./start_falconlite.sh ``` ### Perform inference ```bash # after FalconLite has been completely started pip install -r ../script/requirements-client.txt # test short context python falconlite_client.py # test long context of 13400 tokens, # which are copied from [Amazon Aurora FAQs](https://aws.amazon.com/rds/aurora/faqs/) python falconlite_client.py -l ``` **Important** - Use the prompt template below for FalconLite2: ``` <|prompter|>What are the main challenges to support a long context for LLM?<|endoftext|><|assistant|> ``` **Important** - When using FalconLite2 for inference for the first time, it may require a brief 'warm-up' period that can take 10s of seconds. However, subsequent inferences should be faster and return results in a more timely manner. 
This warm-up period is normal and should not affect the overall performance of the system once the initialisation period has been completed. ## Deploy FalconLite2 on Amazon SageMaker ## To deploy FalconLite2 on a SageMaker endpoint, please follow [this notebook](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/falconlite2/sm_deploy.ipynb) running on a SageMaker Notebook instance (e.g. `g5.xlarge`). ## Evaluation Result ## We evaluated FalconLite2 against benchmarks that are specifically designed to assess the capabilities of LLMs in handling longer contexts. ### Accuracy ### |Eval task|Input length| Input length | Input length| Input length| Input length| |----------|-------------:|-------------:|------------:|-----------:|-----------:| | | 2851| 5568 |8313 | 11044 | 13780 | [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/) | 100% | 100% | 100% | 100% | 90% | |Eval task|Input length| Input length | Input length| Input length| Input length|Input length| |----------|-------------:|-------------:|------------:|-----------:|-----------:|-----------:| | | 3818| 5661 |7505 | 9354 | 11188 | 12657 | [Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/#longeval-results) | 84% | 82% | 66% | 56% | 62% | 34% | |Eval task|Input length| Input length | Input length| Input length| |----------|-------------:|-------------:|------------:|-----------:| | | 3264| 5396 |8329 | 10197 | | [Pass key Retrieval](https://github.com/epfml/landmark-attention/blob/main/llama/run_test.py#L101) | 100% | 100% | 100% | 100% | |Eval task| Test set Accuracy | Hard subset Accuracy| |----------|-------------:|-------------:| | [Question Answering with Long Input Texts](https://nyu-mll.github.io/quality/) | 53.4% | 45.4% | ## Limitations ## Before using the FalconLite model, it is important to perform your own independent assessment, and take measures to ensure that your use would comply with your own specific quality control practices and standards, and that your use would comply with the local rules, laws, regulations, licenses and terms that apply to you, and your content.
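To complement the deployment instructions above, here is an editor-added, hedged sketch of querying a running FalconLite2 TGI server from Python. The endpoint URL and port are assumptions - use whatever `start_falconlite.sh` exposes on your instance; the prompt template is the one given in the card.

```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="http://127.0.0.1:8080")  # assumed local TGI endpoint

# Prompt template taken from the card above.
prompt = "<|prompter|>What are the main challenges to support a long context for LLM?<|endoftext|><|assistant|>"
print(client.text_generation(prompt, max_new_tokens=256))
```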
manishtanwar/gpt2-reuters-tokenizer
manishtanwar
2024-02-05T10:26:47Z
0
0
null
[ "en", "license:mit", "region:us" ]
null
2024-01-11T14:06:27Z
--- license: mit language: - en ---
LoneStriker/Midnight-Rose-103B-v1.0-5.0bpw-h6-exl2
LoneStriker
2024-02-05T10:24:05Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2307.11760", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T09:56:21Z
--- license: llama2 language: - en --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview This model is a frankenmerge of [Midnight-Rose-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v1.0) with itself. (See that model card for details on what's in the blend.) It features 120 layers and should weigh in at 103b parameters. Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It wants to produce lengthy output by default and is the best creative writing merge I have produced so far. This model is uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas. ### Sampler Tips I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/). I find this model performs reasonably well at 8192 context but you will likely get better results at 4096. Experiment with any and all of the settings below, but trust me on a few points: * I think this model performs best with Min-P in a range of 0.6 - 0.8 with temperature around 1.0 - 1.2. * Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01. If you save the below settings as a .json file, you can import them directly into Silly Tavern. ``` { "temp": 1.15, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.8, "rep_pen": 1.08, "rep_pen_range": 0, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0.01, "presence_pen": 0, "do_sample": true, "early_stopping": false, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 5, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "type": "ooba", "legacy_api": false, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "n": 1, "rep_pen_size": 0, "genamt": 550, "max_length": 4096 } ``` ### Prompt Tips Try the following prompt template as a starting point for your own customizations. 'This is very important to my career' is from [this paper](https://arxiv.org/abs/2307.11760). I recommend the Tulu instruction format. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": false, "system_prompt": "You are an award-winning fiction author engaging in an uncensored literary roleplaying experience between a human (me) and a creative, obedient, and helpful AI assistant (you). 
It is VITAL that you follow all these instructions because this roleplay is very important to my career and I'll be fired from my job if it isn't good.\nROLEPLAY RULES:\n> It is vital that ONLY the human provides responses for {{user}}.\n> Reply as {{char}} using authentic, vivid, varied, explicit, accurate, creative, fresh, and descriptive responses that follow ALL provided narrative instructions. Stay in character as {{char}} and only write text for {{char}}.\n> Describe the scene and {{char}}'s sensory perceptions in vivid detail to immerse the reader in the story.\n> Keep your responses scoped to the current story beat and current scene.\n> Consider all available contextual information when narrating so that all the story details remain consistent between scenes.\n> Demonstrate {{char}}'s goals and motivations, and use subtle cues to hint at {{char}}'s mental state unless delving into {{char}}'s thoughts satisfies an explicit instruction or enhances the vividness of the scene.\n> When quoting {{char}}'s internal first-person thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose the thoughts in asterisks like this*. Only use asterisks for thoughts.\n> Use strong action verbs and varied descriptions to produce dynamic, high-quality prose.", "system_sequence": "", "stop_sequence": "", "input_sequence": "<|user|>\n", "output_sequence": "<|assistant|>\n", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "<|assistant (provide varied, creative, and vivid narration; follow all narrative instructions; include all necessary possessive pronouns; maintain consistent story details; only roleplay as {{char}})|>\n", "activation_regex": "", "name": "Aurora-Nights" } ``` ### Licence and usage restrictions Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b). ### Tools Used * [mergekit](https://github.com/cg123/mergekit) ``` slices: - sources: - model: midnight-rose-70b-v1.0 layer_range: [0, 40] # 40 - sources: - model: midnight-rose-70b-v1.0 layer_range: [20, 60] # 40 - sources: - model: midnight-rose-70b-v1.0 layer_range: [40, 80] # 40 merge_method: passthrough dtype: float16 ```
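Editor-added helper (not from the card): a small Python sketch that assembles a Tulu-style prompt string from the template fields above, for frontends or scripts that accept raw prompts; exact whitespace may need adjusting to match your backend.

```python
def build_tulu_prompt(system_prompt: str, user_message: str) -> str:
    # Mirrors the SillyTavern template above: plain system text, then
    # "<|user|>" / "<|assistant|>" turn markers, each followed by a newline.
    return f"{system_prompt}\n<|user|>\n{user_message}\n<|assistant|>\n"

prompt = build_tulu_prompt(
    "You are an award-winning fiction author...",  # trimmed; full text in the JSON above
    "Write the opening scene of a mystery set in a rose garden at midnight.",
)
print(prompt)
```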
sridhar1111111111111111/mistralbase_travel_2epochs_4batch
sridhar1111111111111111
2024-02-05T10:17:42Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-05T10:17:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DouglasPontes/2020-Q2-90p-filtered
DouglasPontes
2024-02-05T10:15:18Z
16
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-2019-90m", "base_model:finetune:cardiffnlp/twitter-roberta-base-2019-90m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-02-03T16:04:43Z
--- license: mit base_model: cardiffnlp/twitter-roberta-base-2019-90m tags: - generated_from_trainer model-index: - name: 2020-Q2-90p-filtered results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2020-Q2-90p-filtered This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2019-90m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.5439 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.1e-07 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2400000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | No log | 0.17 | 8000 | 4.0640 | | 4.2654 | 0.34 | 16000 | 3.9414 | | 4.2654 | 0.51 | 24000 | 3.8956 | | 4.0459 | 0.67 | 32000 | 3.8527 | | 4.0459 | 0.84 | 40000 | 3.8232 | | 3.9781 | 1.01 | 48000 | 3.7806 | | 3.9781 | 1.18 | 56000 | 3.7861 | | 3.9323 | 1.35 | 64000 | 3.7930 | | 3.9323 | 1.52 | 72000 | 3.7814 | | 3.9224 | 1.68 | 80000 | 3.7815 | | 3.9224 | 1.85 | 88000 | 3.7403 | | 3.8924 | 2.02 | 96000 | 3.7468 | | 3.8924 | 2.19 | 104000 | 3.7400 | | 3.879 | 2.36 | 112000 | 3.7283 | | 3.879 | 2.53 | 120000 | 3.7381 | | 3.8806 | 2.69 | 128000 | 3.7073 | | 3.8806 | 2.86 | 136000 | 3.7083 | | 3.8659 | 3.03 | 144000 | 3.6992 | | 3.8659 | 3.2 | 152000 | 3.6956 | | 3.8634 | 3.37 | 160000 | 3.6745 | | 3.8634 | 3.54 | 168000 | 3.7017 | | 3.8632 | 3.71 | 176000 | 3.6960 | | 3.8632 | 3.87 | 184000 | 3.7202 | | 3.8416 | 4.04 | 192000 | 3.7109 | | 3.8416 | 4.21 | 200000 | 3.6942 | | 3.8368 | 4.38 | 208000 | 3.6944 | | 3.8368 | 4.55 | 216000 | 3.6751 | | 3.8359 | 4.72 | 224000 | 3.6815 | | 3.8359 | 4.88 | 232000 | 3.6915 | | 3.8411 | 5.05 | 240000 | 3.6796 | | 3.8411 | 5.22 | 248000 | 3.6847 | | 3.8359 | 5.39 | 256000 | 3.6988 | | 3.8359 | 5.56 | 264000 | 3.6799 | | 3.8268 | 5.73 | 272000 | 3.6810 | | 3.8268 | 5.89 | 280000 | 3.6639 | | 3.8172 | 6.06 | 288000 | 3.6663 | | 3.8172 | 6.23 | 296000 | 3.6838 | | 3.8263 | 6.4 | 304000 | 3.6756 | | 3.8263 | 6.57 | 312000 | 3.6507 | | 3.8215 | 6.74 | 320000 | 3.6409 | | 3.8215 | 6.91 | 328000 | 3.6790 | | 3.8189 | 7.07 | 336000 | 3.6679 | | 3.8189 | 7.24 | 344000 | 3.6443 | | 3.8155 | 7.41 | 352000 | 3.6588 | | 3.8155 | 7.58 | 360000 | 3.6448 | | 3.8075 | 7.75 | 368000 | 3.6520 | | 3.8075 | 7.92 | 376000 | 3.6541 | | 3.8064 | 8.08 | 384000 | 3.6569 | | 3.8064 | 8.25 | 392000 | 3.6586 | | 3.8092 | 8.42 | 400000 | 3.6701 | | 3.8092 | 8.59 | 408000 | 3.6544 | | 3.8032 | 8.76 | 416000 | 3.6668 | | 3.8032 | 8.93 | 424000 | 3.6631 | | 3.8062 | 9.09 | 432000 | 3.6481 | | 3.8062 | 9.26 | 440000 | 3.6392 | | 3.7987 | 9.43 | 448000 | 3.6482 | | 3.7987 | 9.6 | 456000 | 3.6357 | | 3.7954 | 9.77 | 464000 | 3.6333 | | 3.7954 | 9.94 | 472000 | 3.6653 | | 3.7938 | 10.11 | 480000 | 3.6267 | | 3.7938 | 10.27 | 488000 | 3.6490 | | 3.7901 | 10.44 | 496000 | 3.6417 | | 3.7901 | 10.61 | 504000 | 3.6263 | | 3.7935 | 10.78 | 512000 | 3.6523 | | 3.7935 | 10.95 | 520000 | 3.6444 | | 3.7951 | 11.12 | 528000 | 
3.6226 | | 3.7951 | 11.28 | 536000 | 3.6347 | | 3.7861 | 11.45 | 544000 | 3.6372 | | 3.7861 | 11.62 | 552000 | 3.6163 | | 3.7846 | 11.79 | 560000 | 3.6299 | | 3.7846 | 11.96 | 568000 | 3.6330 | | 3.7778 | 12.13 | 576000 | 3.6371 | | 3.7778 | 12.29 | 584000 | 3.6343 | | 3.777 | 12.46 | 592000 | 3.6242 | | 3.777 | 12.63 | 600000 | 3.6119 | | 3.778 | 12.8 | 608000 | 3.6167 | | 3.778 | 12.97 | 616000 | 3.6191 | | 3.7795 | 13.14 | 624000 | 3.6225 | | 3.7795 | 13.3 | 632000 | 3.6056 | | 3.7766 | 13.47 | 640000 | 3.6135 | | 3.7766 | 13.64 | 648000 | 3.6169 | | 3.7729 | 13.81 | 656000 | 3.6035 | | 3.7729 | 13.98 | 664000 | 3.6109 | | 3.7846 | 14.15 | 672000 | 3.6180 | | 3.7846 | 14.32 | 680000 | 3.6171 | | 3.7726 | 14.48 | 688000 | 3.6182 | | 3.7726 | 14.65 | 696000 | 3.6086 | | 3.7717 | 14.82 | 704000 | 3.5852 | | 3.7717 | 14.99 | 712000 | 3.5883 | | 3.7713 | 15.16 | 720000 | 3.6056 | | 3.7713 | 15.33 | 728000 | 3.6004 | | 3.7745 | 15.49 | 736000 | 3.6059 | | 3.7745 | 15.66 | 744000 | 3.6156 | | 3.7557 | 15.83 | 752000 | 3.6029 | | 3.7557 | 16.0 | 760000 | 3.6099 | | 3.7628 | 16.17 | 768000 | 3.6016 | | 3.7628 | 16.34 | 776000 | 3.6008 | | 3.7717 | 16.5 | 784000 | 3.5972 | | 3.7717 | 16.67 | 792000 | 3.5838 | | 3.7616 | 16.84 | 800000 | 3.5868 | | 3.7616 | 17.01 | 808000 | 3.5834 | | 3.7608 | 17.18 | 816000 | 3.6066 | | 3.7608 | 17.35 | 824000 | 3.5911 | | 3.7625 | 17.52 | 832000 | 3.5997 | | 3.7625 | 17.68 | 840000 | 3.5855 | | 3.7634 | 17.85 | 848000 | 3.5861 | | 3.7634 | 18.02 | 856000 | 3.6021 | | 3.75 | 18.19 | 864000 | 3.5966 | | 3.75 | 18.36 | 872000 | 3.5761 | | 3.7492 | 18.53 | 880000 | 3.5757 | | 3.7492 | 18.69 | 888000 | 3.6123 | | 3.7522 | 18.86 | 896000 | 3.5841 | | 3.7522 | 19.03 | 904000 | 3.5831 | | 3.7482 | 19.2 | 912000 | 3.5860 | | 3.7482 | 19.37 | 920000 | 3.5804 | | 3.75 | 19.54 | 928000 | 3.5730 | | 3.75 | 19.7 | 936000 | 3.5955 | | 3.755 | 19.87 | 944000 | 3.5868 | | 3.755 | 20.04 | 952000 | 3.5992 | | 3.7549 | 20.21 | 960000 | 3.5657 | | 3.7549 | 20.38 | 968000 | 3.5780 | | 3.743 | 20.55 | 976000 | 3.5828 | | 3.743 | 20.72 | 984000 | 3.5676 | | 3.75 | 20.88 | 992000 | 3.5724 | | 3.75 | 21.05 | 1000000 | 3.5850 | | 3.7483 | 21.22 | 1008000 | 3.5873 | | 3.7483 | 21.39 | 1016000 | 3.5799 | | 3.7523 | 21.56 | 1024000 | 3.5974 | | 3.7523 | 21.73 | 1032000 | 3.5790 | | 3.7458 | 21.89 | 1040000 | 3.5884 | | 3.7458 | 22.06 | 1048000 | 3.5904 | | 3.7498 | 22.23 | 1056000 | 3.5851 | | 3.7498 | 22.4 | 1064000 | 3.5776 | | 3.7496 | 22.57 | 1072000 | 3.5685 | | 3.7496 | 22.74 | 1080000 | 3.5731 | | 3.7395 | 22.9 | 1088000 | 3.5858 | | 3.7395 | 23.07 | 1096000 | 3.5931 | | 3.7466 | 23.24 | 1104000 | 3.5614 | | 3.7466 | 23.41 | 1112000 | 3.5456 | | 3.7503 | 23.58 | 1120000 | 3.5895 | | 3.7503 | 23.75 | 1128000 | 3.5608 | | 3.7484 | 23.92 | 1136000 | 3.5696 | | 3.7484 | 24.08 | 1144000 | 3.5653 | | 3.7435 | 24.25 | 1152000 | 3.5721 | | 3.7435 | 24.42 | 1160000 | 3.5510 | | 3.7348 | 24.59 | 1168000 | 3.5631 | | 3.7348 | 24.76 | 1176000 | 3.5727 | | 3.7341 | 24.93 | 1184000 | 3.5835 | | 3.7341 | 25.09 | 1192000 | 3.5766 | | 3.7435 | 25.26 | 1200000 | 3.5606 | | 3.7435 | 25.43 | 1208000 | 3.5497 | | 3.732 | 25.6 | 1216000 | 3.5433 | | 3.732 | 25.77 | 1224000 | 3.5420 | | 3.7343 | 25.94 | 1232000 | 3.5987 | | 3.7343 | 26.1 | 1240000 | 3.5956 | | 3.7336 | 26.27 | 1248000 | 3.5673 | | 3.7336 | 26.44 | 1256000 | 3.5643 | | 3.7444 | 26.61 | 1264000 | 3.5848 | | 3.7444 | 26.78 | 1272000 | 3.5693 | | 3.7395 | 26.95 | 1280000 | 3.5745 | | 3.7395 | 27.12 | 1288000 | 3.5758 | | 3.7389 | 27.28 | 
1296000 | 3.5685 | | 3.7389 | 27.45 | 1304000 | 3.5712 | | 3.7416 | 27.62 | 1312000 | 3.5693 | | 3.7416 | 27.79 | 1320000 | 3.5740 | | 3.7305 | 27.96 | 1328000 | 3.5803 | | 3.7305 | 28.13 | 1336000 | 3.5682 | | 3.7268 | 28.29 | 1344000 | 3.5928 | | 3.7268 | 28.46 | 1352000 | 3.5608 | | 3.7363 | 28.63 | 1360000 | 3.5587 | | 3.7363 | 28.8 | 1368000 | 3.5603 | | 3.7325 | 28.97 | 1376000 | 3.5711 | | 3.7325 | 29.14 | 1384000 | 3.5828 | | 3.7337 | 29.3 | 1392000 | 3.5790 | | 3.7337 | 29.47 | 1400000 | 3.5795 | | 3.7367 | 29.64 | 1408000 | 3.5528 | | 3.7367 | 29.81 | 1416000 | 3.5766 | | 3.7313 | 29.98 | 1424000 | 3.5610 | | 3.7313 | 30.15 | 1432000 | 3.5834 | | 3.7277 | 30.32 | 1440000 | 3.5546 | | 3.7277 | 30.48 | 1448000 | 3.5534 | | 3.7296 | 30.65 | 1456000 | 3.5646 | | 3.7296 | 30.82 | 1464000 | 3.5436 | | 3.7411 | 30.99 | 1472000 | 3.5778 | | 3.7411 | 31.16 | 1480000 | 3.5541 | | 3.7233 | 31.33 | 1488000 | 3.5720 | | 3.7233 | 31.49 | 1496000 | 3.5567 | | 3.7291 | 31.66 | 1504000 | 3.5477 | | 3.7291 | 31.83 | 1512000 | 3.5557 | | 3.7265 | 32.0 | 1520000 | 3.5643 | | 3.7265 | 32.17 | 1528000 | 3.5739 | | 3.7352 | 32.34 | 1536000 | 3.5628 | | 3.7352 | 32.5 | 1544000 | 3.5542 | | 3.7353 | 32.67 | 1552000 | 3.5496 | | 3.7353 | 32.84 | 1560000 | 3.5737 | | 3.7243 | 33.01 | 1568000 | 3.5788 | | 3.7243 | 33.18 | 1576000 | 3.5631 | | 3.7192 | 33.35 | 1584000 | 3.5438 | | 3.7192 | 33.52 | 1592000 | 3.5554 | | 3.7266 | 33.68 | 1600000 | 3.5748 | | 3.7266 | 33.85 | 1608000 | 3.5620 | | 3.73 | 34.02 | 1616000 | 3.5464 | | 3.73 | 34.19 | 1624000 | 3.5670 | | 3.7264 | 34.36 | 1632000 | 3.5626 | | 3.7264 | 34.53 | 1640000 | 3.5640 | | 3.7317 | 34.69 | 1648000 | 3.5650 | | 3.7317 | 34.86 | 1656000 | 3.5458 | | 3.7332 | 35.03 | 1664000 | 3.5567 | | 3.7332 | 35.2 | 1672000 | 3.5610 | | 3.7248 | 35.37 | 1680000 | 3.5650 | | 3.7248 | 35.54 | 1688000 | 3.5580 | | 3.7232 | 35.7 | 1696000 | 3.5829 | | 3.7232 | 35.87 | 1704000 | 3.5532 | | 3.729 | 36.04 | 1712000 | 3.5723 | | 3.729 | 36.21 | 1720000 | 3.5454 | | 3.7273 | 36.38 | 1728000 | 3.5623 | | 3.7273 | 36.55 | 1736000 | 3.5462 | | 3.7261 | 36.72 | 1744000 | 3.5743 | | 3.7261 | 36.88 | 1752000 | 3.5638 | | 3.7208 | 37.05 | 1760000 | 3.5519 | | 3.7208 | 37.22 | 1768000 | 3.5584 | | 3.7183 | 37.39 | 1776000 | 3.5308 | | 3.7183 | 37.56 | 1784000 | 3.5549 | | 3.7193 | 37.73 | 1792000 | 3.5409 | | 3.7193 | 37.89 | 1800000 | 3.5396 | | 3.7271 | 38.06 | 1808000 | 3.5536 | | 3.7271 | 38.23 | 1816000 | 3.5452 | | 3.7284 | 38.4 | 1824000 | 3.5582 | | 3.7284 | 38.57 | 1832000 | 3.5668 | | 3.714 | 38.74 | 1840000 | 3.5673 | | 3.714 | 38.9 | 1848000 | 3.5477 | | 3.7105 | 39.07 | 1856000 | 3.5662 | | 3.7105 | 39.24 | 1864000 | 3.5498 | | 3.7189 | 39.41 | 1872000 | 3.5493 | | 3.7189 | 39.58 | 1880000 | 3.5676 | | 3.7203 | 39.75 | 1888000 | 3.5640 | | 3.7203 | 39.91 | 1896000 | 3.5747 | | 3.7271 | 40.08 | 1904000 | 3.5592 | | 3.7271 | 40.25 | 1912000 | 3.5515 | | 3.7237 | 40.42 | 1920000 | 3.5704 | | 3.7237 | 40.59 | 1928000 | 3.5642 | | 3.723 | 40.76 | 1936000 | 3.5300 | | 3.723 | 40.93 | 1944000 | 3.5482 | | 3.7224 | 41.09 | 1952000 | 3.5586 | | 3.7224 | 41.26 | 1960000 | 3.5463 | | 3.715 | 41.43 | 1968000 | 3.5323 | | 3.715 | 41.6 | 1976000 | 3.5426 | | 3.7209 | 41.77 | 1984000 | 3.5513 | | 3.7209 | 41.94 | 1992000 | 3.5614 | | 3.7183 | 42.1 | 2000000 | 3.5678 | | 3.7183 | 42.27 | 2008000 | 3.5304 | | 3.7161 | 42.44 | 2016000 | 3.5631 | | 3.7161 | 42.61 | 2024000 | 3.5589 | | 3.7215 | 42.78 | 2032000 | 3.5639 | | 3.7215 | 42.95 | 2040000 | 3.5376 | | 3.7205 | 43.11 | 
2048000 | 3.5478 | | 3.7205 | 43.28 | 2056000 | 3.5511 | | 3.7178 | 43.45 | 2064000 | 3.5285 | | 3.7178 | 43.62 | 2072000 | 3.5428 | | 3.7232 | 43.79 | 2080000 | 3.5347 | | 3.7232 | 43.96 | 2088000 | 3.5501 | | 3.7167 | 44.13 | 2096000 | 3.5422 | | 3.7167 | 44.29 | 2104000 | 3.5487 | | 3.7253 | 44.46 | 2112000 | 3.5540 | | 3.7253 | 44.63 | 2120000 | 3.5432 | | 3.7139 | 44.8 | 2128000 | 3.5502 | | 3.7139 | 44.97 | 2136000 | 3.5450 | | 3.7194 | 45.14 | 2144000 | 3.5564 | | 3.7194 | 45.3 | 2152000 | 3.5441 | | 3.7167 | 45.47 | 2160000 | 3.5549 | | 3.7167 | 45.64 | 2168000 | 3.5429 | | 3.7202 | 45.81 | 2176000 | 3.5613 | | 3.7202 | 45.98 | 2184000 | 3.5469 | | 3.7193 | 46.15 | 2192000 | 3.5467 | | 3.7193 | 46.31 | 2200000 | 3.5493 | | 3.717 | 46.48 | 2208000 | 3.5652 | | 3.717 | 46.65 | 2216000 | 3.5669 | | 3.7164 | 46.82 | 2224000 | 3.5755 | | 3.7164 | 46.99 | 2232000 | 3.5580 | | 3.715 | 47.16 | 2240000 | 3.5403 | | 3.715 | 47.33 | 2248000 | 3.5521 | | 3.7091 | 47.49 | 2256000 | 3.5604 | | 3.7091 | 47.66 | 2264000 | 3.5401 | | 3.7199 | 47.83 | 2272000 | 3.5408 | | 3.7199 | 48.0 | 2280000 | 3.5509 | | 3.7238 | 48.17 | 2288000 | 3.5348 | | 3.7238 | 48.34 | 2296000 | 3.5530 | | 3.7193 | 48.5 | 2304000 | 3.5447 | | 3.7193 | 48.67 | 2312000 | 3.5453 | | 3.7195 | 48.84 | 2320000 | 3.5487 | | 3.7195 | 49.01 | 2328000 | 3.5357 | | 3.7187 | 49.18 | 2336000 | 3.5404 | | 3.7187 | 49.35 | 2344000 | 3.5247 | | 3.7157 | 49.51 | 2352000 | 3.5557 | | 3.7157 | 49.68 | 2360000 | 3.5532 | | 3.7144 | 49.85 | 2368000 | 3.5453 | | 3.7144 | 50.02 | 2376000 | 3.5421 | | 3.715 | 50.19 | 2384000 | 3.5183 | | 3.715 | 50.36 | 2392000 | 3.5473 | | 3.7208 | 50.53 | 2400000 | 3.5386 | ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.14.0
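The card above doesn't include a usage snippet. Here is a minimal fill-mask sketch, assuming the checkpoint loads with the standard 🤗 Transformers pipeline like its base model (the example sentence is illustrative):

```python
from transformers import pipeline

# Assumption: the fine-tuned checkpoint works with the stock fill-mask pipeline,
# as its base model cardiffnlp/twitter-roberta-base-2019-90m does.
fill_mask = pipeline("fill-mask", model="DouglasPontes/2020-Q2-90p-filtered")

# RoBERTa-style tokenizers use <mask> as the mask token.
for prediction in fill_mask("I spent the whole weekend watching <mask>."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```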
saswata1809/tiny-llama-1.1B-gsm8k_QA
saswata1809
2024-02-05T10:03:45Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "llama", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-02-05T05:50:36Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 model-index: - name: tiny-llama-1.1B-gsm8k_QA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-llama-1.1B-gsm8k_QA This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 1000 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
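Because this repo holds a PEFT adapter rather than a full checkpoint, it has to be loaded on top of the base model named in the card. A minimal sketch, assuming a standard LoRA-style adapter and an illustrative GSM8K-style prompt (the exact training prompt format isn't documented):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"       # base model from the card
adapter_id = "saswata1809/tiny-llama-1.1B-gsm8k_QA"  # this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
model.eval()

# Illustrative question; not a documented prompt template.
prompt = "Question: A baker bakes 24 rolls and sells 15 of them. How many rolls are left?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```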
RafaelZequeira/starcoderbase-1b-cucumber-copilot
RafaelZequeira
2024-02-05T10:01:39Z
12
0
transformers
[ "transformers", "safetensors", "gpt_bigcode", "text-generation", "generated_from_trainer", "base_model:bigcode/starcoderbase-1b", "base_model:finetune:bigcode/starcoderbase-1b", "license:bigcode-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-04T09:49:35Z
--- license: bigcode-openrail-m tags: - generated_from_trainer base_model: bigcode/starcoderbase-1b model-index: - name: starcoderbase-1b-cucumber-copilot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # starcoderbase-1b-cucumber-copilot This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6377 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6697 | 0.25 | 250 | 0.6523 | | 0.4537 | 0.5 | 500 | 0.6328 | | 0.3829 | 0.75 | 750 | 0.6309 | | 0.3245 | 1.0 | 1000 | 0.6377 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
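The card above lacks a usage example. A minimal text-generation sketch, assuming the model completes Cucumber/Gherkin-style text the way its StarCoder base completes code (the prompt is illustrative, not a documented training format):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RafaelZequeira/starcoderbase-1b-cucumber-copilot"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Illustrative Gherkin prompt for the model to continue.
prompt = "Feature: User login\n  Scenario: Successful login\n    Given "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```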
TURKCELL/gibberish-sentence-detection-model-tr
TURKCELL
2024-02-05T09:59:50Z
108
5
transformers
[ "transformers", "pytorch", "bert", "text-classification", "tr", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
2024-02-05T07:51:02Z
---
license: mit
language:
- tr
pipeline_tag: text-classification
tags:
- text-classification
---

## Model Description

This model was fine-tuned from [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased). It was created to detect gibberish sentences such as "adssnfjnfjn". It is a simple binary classification model that labels a sentence as either gibberish or real.

## Usage

```python
import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = AutoModelForSequenceClassification.from_pretrained("TURKCELL/gibberish-detection-model-tr")
tokenizer = AutoTokenizer.from_pretrained("TURKCELL/gibberish-detection-model-tr", do_lower_case=True, use_fast=True)
model.to(device)

def get_result_for_one_sample(model, tokenizer, device, sample):
    # Map class indices to human-readable labels.
    d = {
        1: 'gibberish',
        0: 'real'
    }
    test_sample = tokenizer([sample], padding=True, truncation=True, max_length=256, return_tensors='pt').to(device)
    output = model(**test_sample)
    y_pred = np.argmax(output.logits.detach().to('cpu').numpy(), axis=1)
    return d[y_pred[0]]

sentence = "nabeer rdahdaajdajdnjnjf"
result = get_result_for_one_sample(model, tokenizer, device, sentence)
print(result)
```
kavg/LiLT-RE-ZH
kavg
2024-02-05T09:59:12Z
5
0
transformers
[ "transformers", "safetensors", "lilt", "generated_from_trainer", "dataset:xfun", "base_model:nielsr/lilt-xlm-roberta-base", "base_model:finetune:nielsr/lilt-xlm-roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-02-05T09:58:21Z
--- license: mit base_model: nielsr/lilt-xlm-roberta-base tags: - generated_from_trainer datasets: - xfun metrics: - precision - recall - f1 model-index: - name: checkpoints results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # checkpoints This model is a fine-tuned version of [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base) on the xfun dataset. It achieves the following results on the evaluation set: - Precision: 0.3911 - Recall: 0.6703 - F1: 0.4940 - Loss: 0.1352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 6 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | F1 | Validation Loss | Precision | Recall | |:-------------:|:------:|:-----:|:------:|:---------------:|:---------:|:------:| | 0.1469 | 20.83 | 500 | 0 | 0.1467 | 0 | 0 | | 0.0896 | 41.67 | 1000 | 0.0837 | 0.1454 | 0.2946 | 0.0487 | | 0.1027 | 62.5 | 1500 | 0.1225 | 0.1353 | 0.3333 | 0.0750 | | 0.0485 | 83.33 | 2000 | 0.3536 | 0.1571 | 0.3364 | 0.3727 | | 0.0597 | 104.17 | 2500 | 0.4448 | 0.1546 | 0.3535 | 0.5997 | | 0.0367 | 125.0 | 3000 | 0.4940 | 0.1352 | 0.3911 | 0.6703 | | 0.033 | 145.83 | 3500 | 0.4977 | 0.1749 | 0.3902 | 0.6870 | | 0.0176 | 166.67 | 4000 | 0.5087 | 0.2262 | 0.4034 | 0.6883 | | 0.0123 | 187.5 | 4500 | 0.5050 | 0.2358 | 0.3978 | 0.6915 | | 0.0194 | 208.33 | 5000 | 0.5173 | 0.2976 | 0.4090 | 0.7037 | | 0.0118 | 171.88 | 5500 | 0.4159 | 0.6863 | 0.5179 | 0.2836 | | 0.0054 | 187.5 | 6000 | 0.4356 | 0.6703 | 0.5280 | 0.3100 | | 0.01 | 203.12 | 6500 | 0.4229 | 0.6979 | 0.5266 | 0.3430 | | 0.0062 | 218.75 | 7000 | 0.4272 | 0.7062 | 0.5324 | 0.3652 | | 0.0051 | 234.38 | 7500 | 0.4306 | 0.6947 | 0.5317 | 0.3496 | | 0.0048 | 250.0 | 8000 | 0.4400 | 0.6940 | 0.5386 | 0.3943 | | 0.0087 | 265.62 | 8500 | 0.4290 | 0.6992 | 0.5317 | 0.3782 | | 0.0077 | 281.25 | 9000 | 0.4394 | 0.7049 | 0.5414 | 0.3855 | | 0.0014 | 296.88 | 9500 | 0.4363 | 0.7004 | 0.5377 | 0.3933 | | 0.0035 | 312.5 | 10000 | 0.4350 | 0.6992 | 0.5363 | 0.4045 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
basab1142/ppo-LunarLander-v2
basab1142
2024-02-05T09:53:25Z
6
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-05T09:27:48Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 278.25 +/- 17.63 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
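The usage block above is left as a TODO by the card generator. A minimal sketch of the usual loading pattern with `huggingface_sb3`; note the checkpoint filename inside the repo is an assumption (check the repo's file list), and `gymnasium[box2d]` is needed for LunarLander:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption: SB3 hub uploads are typically named "<model-name>.zip".
checkpoint = load_from_hub(repo_id="basab1142/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```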
SteRoh/microsoft-xtremedistil-l12-h384-uncased
SteRoh
2024-02-05T09:53:14Z
13
0
transformers
[ "transformers", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:microsoft/xtremedistil-l12-h384-uncased", "base_model:finetune:microsoft/xtremedistil-l12-h384-uncased", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2024-02-05T08:23:02Z
--- license: mit base_model: microsoft/xtremedistil-l12-h384-uncased tags: - generated_from_trainer model-index: - name: result results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # result This model is a fine-tuned version of [microsoft/xtremedistil-l12-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
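The card doesn't say which dataset this was trained on. Assuming an extractive (SQuAD-style) QA head, as the question-answering pipeline tag suggests, a minimal sketch with illustrative inputs:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="SteRoh/microsoft-xtremedistil-l12-h384-uncased")

# Illustrative question/context pair.
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of microsoft/xtremedistil-l12-h384-uncased.",
)
print(result["answer"], round(result["score"], 3))
```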
diksha13/arrivae-foreign-test
diksha13
2024-02-05T09:46:23Z
1
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-05T08:48:55Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA text2image fine-tuning - diksha13/arrivae-foreign-test

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the diksha13/xustom-dual dataset. Some example images are shown below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
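A minimal sketch for generating with these weights, assuming the diffusers LoRA-loading API (`load_lora_weights`) and an illustrative prompt (the card doesn't document any trigger words):

```python
import torch
from diffusers import StableDiffusionPipeline

# A CUDA GPU is assumed here; drop torch_dtype and .to("cuda") to run on CPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("diksha13/arrivae-foreign-test")  # attach the LoRA weights from this repo

# Illustrative prompt; adjust to your use case.
image = pipe("a modern living room interior", num_inference_steps=30).images[0]
image.save("example.png")
```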
haturusinghe/1st_0.62586651429139_05_02-0944_xlm-roberta-base_mrp_2e-05_8_937.ckpt
haturusinghe
2024-02-05T09:45:27Z
4
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-05T09:44:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hojzas/proj4-all-labs
hojzas
2024-02-05T09:26:09Z
6
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "dataset:hojzas/proj4-all-labs", "arxiv:2209.11055", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "co2_eq_emissions", "region:us" ]
text-classification
2024-02-05T09:25:48Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer datasets: - hojzas/proj4-all-labs metrics: - accuracy widget: - text: return list(dict.fromkeys(sorted(it))) - text: ' perms = all_permutations_substrings(string)\n result = perms & set(words)\n return set(i for i in words if i in perms)' - text: return [l for i, l in enumerate(it) if i == it.index(l)] - text: " unique_items = set(it)\n return sorted(list(unique_items))" - text: " seen = set()\n result = []\n for word in it:\n if word not\ \ in seen:\n result.append(word)\n seen.add(word)\n return\ \ result" pipeline_tag: text-classification inference: true co2_eq_emissions: emissions: 6.0133985248367114 source: codecarbon training_type: fine-tuning on_cloud: false cpu_model: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz ram_total_size: 251.49161911010742 hours_used: 0.019 hardware_used: 4 x NVIDIA RTX A5000 base_model: sentence-transformers/all-mpnet-base-v2 --- # SetFit with sentence-transformers/all-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [hojzas/proj4-all-labs](https://huggingface.co/datasets/hojzas/proj4-all-labs) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. 
## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 384 tokens - **Number of Classes:** 7 classes - **Training Dataset:** [hojzas/proj4-all-labs](https://huggingface.co/datasets/hojzas/proj4-all-labs) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | <ul><li>" perms = all_permutations_substrings(string)\\n return set(''.join(perm) for word in words for perm in perms if word == perm)"</li><li>' perms = all_permutations_substrings(string)\\n out = set()\\n for w in words:\\n for s in perms:\\n if w == s:\\n out.add(w)\\n return out'</li><li>' perms = all_permutations_substrings(string)\\n return set(word for word in words if word in perms)'</li></ul> | | 1 | <ul><li>' perms = all_permutations_substrings(string)\\n return perms.intersection(words)'</li><li>' perms = all_permutations_substrings(string)\\n return set.intersection(perms,words)'</li><li>' perms = all_permutations_substrings(string)\\n return set(perms).intersection(words)'</li></ul> | | 3 | <ul><li>' it = list(dict.fromkeys(it))\n it.sort()\n return it'</li><li>' sequence = []\n for i in it:\n if i in sequence:\n pass\n else:\n sequence.append(i)\n sequence.sort()\n return sequence'</li><li>' unique = list(set(it))\n unique.sort()\n return unique'</li></ul> | | 2 | <ul><li>'return sorted(list({word : it.count(word) for (word) in set(it)}.keys())) '</li><li>'return list(dict.fromkeys(sorted(it)))'</li><li>'return sorted((list(dict.fromkeys(it)))) '</li></ul> | | 4 | <ul><li>' unique_items = set(it)\n return sorted(list(unique_items))'</li><li>' letters = set(it)\n sorted_letters = sorted(letters)\n return sorted_letters'</li><li>'return list(sorted(set(it)))'</li></ul> | | 5 | <ul><li>' outputSequence = []\n for input in it:\n found = 0\n for output in outputSequence:\n if output == input:\n found = 1\n break\n if not found:\n outputSequence.append(input)\n return outputSequence'</li><li>' uniq = []\n for char in it:\n if not char in uniq:\n uniq.append(char)\n return uniq'</li><li>'return sorted(set(it), key=lambda y: it.index(y)) '</li></ul> | | 6 | <ul><li>'return [tmp for tmp in dict.fromkeys(it).keys()]'</li><li>'return [i for i in dict.fromkeys(it)]'</li><li>'return list(dict.fromkeys(it))'</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("hojzas/proj4-all-labs") # Run inference preds = model("return list(dict.fromkeys(sorted(it)))") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 2 | 25.0515 | 140 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 35 | | 1 | 14 | | 2 | 8 | | 3 | 10 | | 4 | 9 | | 5 | 13 | | 6 | 8 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0041 | 1 | 0.1745 | - | | 0.2058 | 50 | 0.0355 | - | | 0.4115 | 100 | 0.0168 | - | | 0.6173 | 150 | 0.0042 | - | | 0.8230 | 200 | 0.0075 | - | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Carbon Emitted**: 0.006 kg of CO2 - **Hours Used**: 0.019 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 4 x NVIDIA RTX A5000 - **CPU Model**: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz - **RAM Size**: 251.49 GB ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.2.2 - Transformers: 4.36.1 - PyTorch: 2.1.2+cu121 - Datasets: 2.14.7 - Tokenizers: 0.15.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
haturusinghe/1st_0.6190252486584438_05_02-0915_xlm-roberta-base_mrp_2e-05_4_1875.ckpt
haturusinghe
2024-02-05T09:21:05Z
4
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-05T09:15:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> {'pretrained_model': 'xlm-roberta-base', 'exp_date': '02/05/2024-14:30', 'batch_size': 4, 'epochs': 3, 'lr': 2e-05, 'val_int': 1875, 'patience': 10, 'intermediate': 'mrp', 'mask_ratio': 0.5, 'n_tk_label': 2, 'dir_result': '/content/mrp_pipeline/', 'test': False, 'device': 'cuda', 'waiting': 0, 'n_eval': 0, 'optimizer': "optim.AdamW(list(emb_layer.parameters())+list(model.parameters()), lr=args['lr'], betas=(0.9, 0.99))", 'embedding_dim': 768} ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. 
--> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoneStriker/Midnight-Rose-103B-v1.0-2.4bpw-h6-exl2
LoneStriker
2024-02-05T09:20:18Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2307.11760", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T09:06:53Z
--- license: llama2 language: - en --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview This model is a frankenmerge of [Midnight-Rose-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v1.0) with itself. (See that model card for details on what's in the blend.) It features 120 layers and should weigh in at 103b parameters. Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It wants to produce lengthy output by default and is the best creative writing merge I have produced so far. This model is uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas. ### Sampler Tips I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/). I find this model performs reasonably well at 8192 context but you will likely get better results at 4096. Experiment with any and all of the settings below, but trust me on a few points: * I think this model performs best with Min-P in a range of 0.6 - 0.8 with temperature around 1.0 - 1.2. * Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01. If you save the below settings as a .json file, you can import them directly into Silly Tavern. ``` { "temp": 1.15, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.8, "rep_pen": 1.08, "rep_pen_range": 0, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0.01, "presence_pen": 0, "do_sample": true, "early_stopping": false, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 5, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "type": "ooba", "legacy_api": false, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "n": 1, "rep_pen_size": 0, "genamt": 550, "max_length": 4096 } ``` ### Prompt Tips Try the following prompt template as a starting point for your own customizations. 'This is very important to my career' is from [this paper](https://arxiv.org/abs/2307.11760). I recommend the Tulu instruction format. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": false, "system_prompt": "You are an award-winning fiction author engaging in an uncensored literary roleplaying experience between a human (me) and a creative, obedient, and helpful AI assistant (you). 
It is VITAL that you follow all these instructions because this roleplay is very important to my career and I'll be fired from my job if it isn't good.\nROLEPLAY RULES:\n> It is vital that ONLY the human provides responses for {{user}}.\n> Reply as {{char}} using authentic, vivid, varied, explicit, accurate, creative, fresh, and descriptive responses that follow ALL provided narrative instructions. Stay in character as {{char}} and only write text for {{char}}.\n> Describe the scene and {{char}}'s sensory perceptions in vivid detail to immerse the reader in the story.\n> Keep your responses scoped to the current story beat and current scene.\n> Consider all available contextual information when narrating so that all the story details remain consistent between scenes.\n> Demonstrate {{char}}'s goals and motivations, and use subtle cues to hint at {{char}}'s mental state unless delving into {{char}}'s thoughts satisfies an explicit instruction or enhances the vividness of the scene.\n> When quoting {{char}}'s internal first-person thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose the thoughts in asterisks like this*. Only use asterisks for thoughts.\n> Use strong action verbs and varied descriptions to produce dynamic, high-quality prose.", "system_sequence": "", "stop_sequence": "", "input_sequence": "<|user|>\n", "output_sequence": "<|assistant|>\n", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "<|assistant (provide varied, creative, and vivid narration; follow all narrative instructions; include all necessary possessive pronouns; maintain consistent story details; only roleplay as {{char}})|>\n", "activation_regex": "", "name": "Aurora-Nights" } ``` ### Licence and usage restrictions Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b). ### Tools Used * [mergekit](https://github.com/cg123/mergekit) ``` slices: - sources: - model: midnight-rose-70b-v1.0 layer_range: [0, 40] # 40 - sources: - model: midnight-rose-70b-v1.0 layer_range: [20, 60] # 40 - sources: - model: midnight-rose-70b-v1.0 layer_range: [40, 80] # 40 merge_method: passthrough dtype: float16 ```
Archit001a/model
Archit001a
2024-02-05T09:19:14Z
32
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "next-sentence-prediction", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-01-16T10:27:12Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: Model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Tokenizers 0.15.1
treshnanda/TestGEC
treshnanda
2024-02-05T09:09:27Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:LazarusNLP/indo-t5-base-v2", "base_model:finetune:LazarusNLP/indo-t5-base-v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-05T09:06:58Z
--- license: apache-2.0 base_model: LazarusNLP/indo-t5-base-v2 tags: - generated_from_trainer metrics: - rouge model-index: - name: TestGEC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TestGEC This model is a fine-tuned version of [LazarusNLP/indo-t5-base-v2](https://huggingface.co/LazarusNLP/indo-t5-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 3.3289 - Rouge2: 0.1246 - Rougel: 3.1352 - Rougelsum: 3.1321 - Gen Len: 18.9835 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0008 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0 | 1.0 | 4500 | nan | 3.3289 | 0.1246 | 3.1352 | 3.1321 | 18.9835 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
LoneStriker/Midnight-Rose-103B-v1.0-GGUF
LoneStriker
2024-02-05T08:59:24Z
4
1
null
[ "gguf", "en", "arxiv:2307.11760", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-02-05T06:35:51Z
--- license: llama2 language: - en --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview This model is a frankenmerge of [Midnight-Rose-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v1.0) with itself. (See that model card for details on what's in the blend.) It features 120 layers and should weigh in at 103b parameters. Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It wants to produce lengthy output by default and is the best creative writing merge I have produced so far. This model is uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas. ### Sampler Tips I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/). I find this model performs reasonably well at 8192 context but you will likely get better results at 4096. Experiment with any and all of the settings below, but trust me on a few points: * I think this model performs best with Min-P in a range of 0.6 - 0.8 with temperature around 1.0 - 1.2. * Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01. If you save the below settings as a .json file, you can import them directly into Silly Tavern. ``` { "temp": 1.15, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.8, "rep_pen": 1.08, "rep_pen_range": 0, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0.01, "presence_pen": 0, "do_sample": true, "early_stopping": false, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 5, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "type": "ooba", "legacy_api": false, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "n": 1, "rep_pen_size": 0, "genamt": 550, "max_length": 4096 } ``` ### Prompt Tips Try the following prompt template as a starting point for your own customizations. 'This is very important to my career' is from [this paper](https://arxiv.org/abs/2307.11760). I recommend the Tulu instruction format. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": false, "system_prompt": "You are an award-winning fiction author engaging in an uncensored literary roleplaying experience between a human (me) and a creative, obedient, and helpful AI assistant (you). 
It is VITAL that you follow all these instructions because this roleplay is very important to my career and I'll be fired from my job if it isn't good.\nROLEPLAY RULES:\n> It is vital that ONLY the human provides responses for {{user}}.\n> Reply as {{char}} using authentic, vivid, varied, explicit, accurate, creative, fresh, and descriptive responses that follow ALL provided narrative instructions. Stay in character as {{char}} and only write text for {{char}}.\n> Describe the scene and {{char}}'s sensory perceptions in vivid detail to immerse the reader in the story.\n> Keep your responses scoped to the current story beat and current scene.\n> Consider all available contextual information when narrating so that all the story details remain consistent between scenes.\n> Demonstrate {{char}}'s goals and motivations, and use subtle cues to hint at {{char}}'s mental state unless delving into {{char}}'s thoughts satisfies an explicit instruction or enhances the vividness of the scene.\n> When quoting {{char}}'s internal first-person thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose the thoughts in asterisks like this*. Only use asterisks for thoughts.\n> Use strong action verbs and varied descriptions to produce dynamic, high-quality prose.", "system_sequence": "", "stop_sequence": "", "input_sequence": "<|user|>\n", "output_sequence": "<|assistant|>\n", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "<|assistant (provide varied, creative, and vivid narration; follow all narrative instructions; include all necessary possessive pronouns; maintain consistent story details; only roleplay as {{char}})|>\n", "activation_regex": "", "name": "Aurora-Nights" } ``` ### Licence and usage restrictions Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b). ### Tools Used * [mergekit](https://github.com/cg123/mergekit) ``` slices: - sources: - model: midnight-rose-70b-v1.0 layer_range: [0, 40] # 40 - sources: - model: midnight-rose-70b-v1.0 layer_range: [20, 60] # 40 - sources: - model: midnight-rose-70b-v1.0 layer_range: [40, 80] # 40 merge_method: passthrough dtype: float16 ```
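Since this record distributes GGUF quantizations and the card above spells out concrete sampler recommendations (Min-P around 0.6–0.8, temperature around 1.0–1.2, a light frequency penalty), a minimal Python sketch of applying those settings with llama-cpp-python may help readers outside Silly Tavern. The GGUF filename, prompt text, and context size below are placeholders, not taken from the card, and the `min_p` argument assumes a reasonably recent llama-cpp-python release.

```python
# Minimal sketch, assuming a locally downloaded GGUF quant of this model.
from llama_cpp import Llama

llm = Llama(
    model_path="midnight-rose-103b-v1.0.Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,  # the card suggests 4096 context tends to work best
)

# Tulu-style prompt format, matching the input/output sequences in the card.
prompt = (
    "<|user|>\n"
    "Write the opening paragraph of a gothic short story.\n"
    "<|assistant|>\n"
)

out = llm(
    prompt,
    max_tokens=550,          # mirrors "genamt": 550 from the settings JSON
    temperature=1.15,
    min_p=0.8,
    repeat_penalty=1.08,
    frequency_penalty=0.01,  # the "dash of salt" the card recommends
)
print(out["choices"][0]["text"])
```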
Draon/tinyllama-colorist-v1
Draon
2024-02-05T08:57:33Z
0
0
null
[ "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-02-05T08:55:01Z
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 tags: - trl - sft - generated_from_trainer model-index: - name: tinyllama-colorist-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-colorist-v1 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
r3m3c3/english-to-kanji-c48500_model_3_v_0
r3m3c3
2024-02-05T08:57:31Z
19
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-03T06:00:49Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Thanasapon/tinyllama-colorist-v1
Thanasapon
2024-02-05T08:57:27Z
0
0
null
[ "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-02-05T08:31:04Z
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 tags: - trl - sft - generated_from_trainer model-index: - name: tinyllama-colorist-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-colorist-v1 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
LuckyTemmie/tinyllama-colorist-v1
LuckyTemmie
2024-02-05T08:52:34Z
0
0
null
[ "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-02-05T08:52:31Z
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 tags: - trl - sft - generated_from_trainer model-index: - name: tinyllama-colorist-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-colorist-v1 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
UnaiGurbindo/whisper-small-af-ZA
UnaiGurbindo
2024-02-05T08:48:30Z
9
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-04T23:08:03Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-small-af-ZA results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: af_za split: train+validation args: af_za metrics: - name: Wer type: wer value: 0.02925243770314193 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-af-ZA This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.0415 - Wer Ortho: 0.0529 - Wer: 0.0293 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 5 - training_steps: 700 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.0054 | 1.45 | 100 | 0.0312 | 0.0449 | 0.0228 | | 0.0025 | 2.9 | 200 | 0.0345 | 0.0456 | 0.0231 | | 0.0021 | 4.35 | 300 | 0.0325 | 0.0445 | 0.0206 | | 0.0018 | 5.8 | 400 | 0.0325 | 0.0449 | 0.0202 | | 0.0033 | 7.25 | 500 | 0.0390 | 0.0905 | 0.0654 | | 0.0043 | 8.7 | 600 | 0.0415 | 0.0577 | 0.0347 | | 0.0026 | 10.14 | 700 | 0.0415 | 0.0529 | 0.0293 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
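The card above reports WER on the Afrikaans split of PolyAI/minds14 but no usage snippet; a hedged sketch of transcribing audio with the transformers ASR pipeline follows. The audio path is a placeholder, and decoding audio files this way assumes ffmpeg is available locally.

```python
# Usage sketch, assuming the checkpoint loads like any other Whisper fine-tune.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="UnaiGurbindo/whisper-small-af-ZA",
)

# "sample.wav" is a placeholder path to a local Afrikaans audio clip.
result = asr("sample.wav")
print(result["text"])
```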
Aharneish/llama-2-final
Aharneish
2024-02-05T08:47:38Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "feature-extraction", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:quantized:meta-llama/Llama-2-7b-hf", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
feature-extraction
2023-11-16T05:04:43Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: llama-2-7b-spiritual_test_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-2-7b-spiritual_test_1 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - epoch: 4.77 - eval_loss: 0.3259 - eval_runtime: 141.2201 - eval_samples_per_second: 5.941 - eval_steps_per_second: 0.744 - step: 9000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
NovoCode/Phi-2-DPO
NovoCode
2024-02-05T08:40:30Z
15
4
transformers
[ "transformers", "pytorch", "safetensors", "phi", "text-generation", "generated_from_trainer", "custom_code", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T23:45:35Z
--- license: mit base_model: microsoft/phi-2 tags: - generated_from_trainer model-index: - name: phi-sft-out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: microsoft/phi-2 model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: Intel/orca_dpo_pairs type: system_prompt: "" field_system: system field_instruction: question field_output: rejected field_output: chosen dataset_prepared_path: val_set_size: 0.05 output_dir: ./phi-sft-out sequence_len: 2048 sample_packing: true pad_to_sequence_len: true adapter: lora_model_dir: lora_r: lora_alpha: lora_dropout: lora_target_linear: lora_fan_in_fan_out: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 1 micro_batch_size: 2 num_epochs: 2 optimizer: adamw_torch adam_beta2: 0.95 adam_epsilon: 0.00001 max_grad_norm: 1.0 lr_scheduler: cosine learning_rate: 0.000003 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: true gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: True early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.1 fsdp: fsdp_config: resize_token_embeddings_to_32x: true special_tokens: pad_token: "<|endoftext|>" ``` </details><br> # phi-sft-out This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the Intel/orca_dpo_pairs dataset. It achieves the following results on the evaluation set: - Loss: 1.2999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3053 | 0.0 | 1 | 1.3288 | | 1.2314 | 0.25 | 287 | 1.3183 | | 1.1664 | 0.5 | 574 | 1.3090 | | 1.4349 | 0.75 | 861 | 1.3034 | | 1.4875 | 1.0 | 1148 | 1.3012 | | 1.3461 | 1.23 | 1435 | 1.3006 | | 1.3247 | 1.48 | 1722 | 1.2998 | | 1.2906 | 1.73 | 2009 | 1.2999 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
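Because the card above documents the axolotl training config but not inference, a short Python sketch of loading the resulting checkpoint may be useful. The repo id comes from this record; `trust_remote_code=True` is assumed from the "custom_code" tag, and the "Instruct:/Output:" prompt shape is borrowed from the base phi-2 card rather than stated here.

```python
# Illustrative sketch under the assumptions named above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NovoCode/Phi-2-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Prompt format is an assumption carried over from microsoft/phi-2.
inputs = tokenizer("Instruct: Explain what DPO training does.\nOutput:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```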
newknp/tinyllama-colorist-v1
newknp
2024-02-05T08:36:05Z
0
0
null
[ "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-02-05T08:36:02Z
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 tags: - trl - sft - generated_from_trainer model-index: - name: tinyllama-colorist-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-colorist-v1 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
Augustya07/Mistral-7B-Instruct-v0.2-sft-test-push_2
Augustya07
2024-02-05T08:33:50Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-05T08:29:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
s3nh/WSB-GPT-7B-GGUF
s3nh
2024-02-05T08:31:22Z
3
0
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T08:01:23Z
--- license: openrail pipeline_tag: text-generation library_name: transformers language: - zh - en --- ## Original model card Buy me a coffee if you like this project ;) <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a> #### Description GGUF Format model files for [This project](https://huggingface.co/Sentdex/WSB-GPT-7B). ### GGUF Specs GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired: Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information. Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models. mmap compatibility: models can be loaded using mmap for fast loading and saving. Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used. Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user. The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model. ### inference User: Tell me story about what is an quantization and what do we need to build. everyone on earth to use the internet ### REPLY: Quantization? Oh, you mean quantilization. As for your second question... We need 5G! ### END. ### END. ### END. ### END. ### END. ### END. ### END. ### END. ### END. ### END. ### END. ### END. ### END. ### END # Original model card
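The GGUF description above stresses single-file deployment and mmap loading; a hedged sketch of fetching one quant from this repo and running it with llama-cpp-python illustrates that workflow. The exact GGUF filename is not listed in the card, so the one below is a placeholder to replace after checking the repository's file list.

```python
# Sketch under assumptions: placeholder filename, llama-cpp-python installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="s3nh/WSB-GPT-7B-GGUF",
    filename="WSB-GPT-7B.Q4_K_M.gguf",  # placeholder; pick an actual quant file
)

# Single-file deployment: the .gguf carries everything needed for inference,
# and llama.cpp memory-maps it for fast loading.
llm = Llama(model_path=gguf_path)
print(llm("User: Hello\n### REPLY:", max_tokens=64)["choices"][0]["text"])
```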
ryusangwon/billsum_236_t5-base
ryusangwon
2024-02-05T08:30:39Z
4
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-05T02:37:30Z
--- license: apache-2.0 base_model: google-t5/t5-base tags: - generated_from_trainer metrics: - rouge model-index: - name: billsum_236_t5-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # billsum_236_t5-base This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1143 - Rouge1: 0.1513 - Rouge2: 0.0546 - Rougel: 0.1244 - Rougelsum: 0.1245 - Gen Len: 18.979 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 1.2695 | 1.69 | 500 | 2.1398 | 0.157 | 0.0611 | 0.1279 | 0.1279 | 19.0 | | 1.1033 | 3.38 | 1000 | 2.1182 | 0.1582 | 0.0629 | 0.1296 | 0.1297 | 18.9984 | | 1.1178 | 5.07 | 1500 | 2.1133 | 0.1551 | 0.0594 | 0.1275 | 0.1277 | 18.979 | | 1.0399 | 6.75 | 2000 | 2.1171 | 0.1538 | 0.058 | 0.1266 | 0.1266 | 18.9887 | | 1.0364 | 8.44 | 2500 | 2.1143 | 0.1513 | 0.0546 | 0.1244 | 0.1245 | 18.979 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1+cu117 - Datasets 2.15.0 - Tokenizers 0.15.0
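The card above lists ROUGE scores for this billsum fine-tune but no inference example; a hedged usage sketch with the summarization pipeline follows. Whether a "summarize: " prefix must be added manually depends on the saved task-specific config, which the card does not state, so that detail is an assumption.

```python
# Usage sketch, assuming the checkpoint behaves like a standard T5 summarizer.
from transformers import pipeline

summarizer = pipeline("summarization", model="ryusangwon/billsum_236_t5-base")

text = "The bill amends the Internal Revenue Code to ..."  # placeholder input
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```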
Imran263/Mistral7Bfine
Imran263
2024-02-05T08:30:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-05T08:29:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mankook/detr-resnet-50_finetuned_cppe5
mankook
2024-02-05T08:23:18Z
31
0
transformers
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-02-05T07:45:37Z
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: detr-resnet-50_finetuned_cppe5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
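The card above records the DETR fine-tuning hyperparameters but no inference snippet; a minimal sketch with the object-detection pipeline follows. It assumes the checkpoint keeps DETR's standard image processor config, and "street.jpg" is a placeholder image path.

```python
# Minimal sketch under the assumptions named above.
from transformers import pipeline

detector = pipeline("object-detection", model="mankook/detr-resnet-50_finetuned_cppe5")

for det in detector("street.jpg"):
    print(det["label"], round(det["score"], 3), det["box"])
```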
alexgastev/dqn-SpaceInvadersNoFrameskip-v4
alexgastev
2024-02-05T08:17:43Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-05T08:17:19Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 329.00 +/- 157.97 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alexgastev -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alexgastev -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alexgastev ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 100000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
mjm4dl/mstrl_slt_v03
mjm4dl
2024-02-05T08:14:36Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T08:11:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
giprime/OOM-7B_01
giprime
2024-02-05T08:12:11Z
0
0
adapter-transformers
[ "adapter-transformers", "safetensors", "llama", "en", "ko", "license:apache-2.0", "region:us" ]
null
2024-02-05T01:46:04Z
--- license: apache-2.0 language: - en - ko library_name: adapter-transformers --- Model Architecture OOM-7B_01 is a language model that uses an optimized transformer architecture based on Llama-2. ## Model description Based on "beomi/llama-2-ko-7b" ## Intended uses & limitations T.B.D. ## Training and evaluation data T.B.D. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-04 - train_batch_size: 2 - eval_batch_size: 8 - seed: 24 - gradient_accumulation_steps: 1 - total_train_batch_size: - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu118 - Datasets 2.16.1 - Tokenizers 0.15.1
ThanhNX/falcon_7b-FT1
ThanhNX
2024-02-05T08:05:46Z
1
0
peft
[ "peft", "arxiv:1910.09700", "base_model:tiiuae/falcon-7b-instruct", "base_model:adapter:tiiuae/falcon-7b-instruct", "region:us" ]
null
2024-02-05T08:05:39Z
--- library_name: peft base_model: tiiuae/falcon-7b-instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
incomprehensible/01
incomprehensible
2024-02-05T08:05:29Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T08:02:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
deepseek-ai/deepseek-moe-16b-chat
deepseek-ai
2024-02-05T08:02:28Z
21,880
126
transformers
[ "transformers", "safetensors", "deepseek", "text-generation", "conversational", "custom_code", "arxiv:2401.06066", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2024-01-09T04:55:35Z
--- license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/LICENSE-MODEL --- <p align="center"> <img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true"> </p> <p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://chat.deepseek.com/">[🤖 Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[Wechat(微信)]</a> </p> <p align="center"> <a href="https://arxiv.org/pdf/2401.06066.pdf"><b>Paper Link</b>👁️</a> </p> <hr> ### 1. Introduction to DeepSeekMoE See the [Introduction](https://github.com/deepseek-ai/DeepSeek-MoE/blob/main) for more details. ### 2. How to Use Here are some examples of how to use our model. **Chat Completion** ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig model_name = "deepseek-ai/deepseek-moe-16b-chat" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") model.generation_config = GenerationConfig.from_pretrained(model_name) model.generation_config.pad_token_id = model.generation_config.eos_token_id messages = [ {"role": "user", "content": "Who are you?"} ] input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt") outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100) result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True) print(result) ``` If you prefer not to use the provided `apply_chat_template` function, you can also interact with our model by following the sample template below. Note that `messages` should be replaced by your input. ``` User: {messages[0]['content']} Assistant: {messages[1]['content']}<|end▁of▁sentence|>User: {messages[2]['content']} Assistant: ``` **Note:** By default (`add_special_tokens=True`), our tokenizer automatically adds a `bos_token` (`<|begin▁of▁sentence|>`) before the input text. Additionally, since the system prompt is not compatible with this version of our models, we DO NOT RECOMMEND including the system prompt in your input. ### 3. License This code repository is licensed under the MIT License. The use of DeepSeekMoE models is subject to the Model License. DeepSeekMoE supports commercial use. See the [LICENSE-MODEL](https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/LICENSE-MODEL) for more details. ### 4. Contact If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
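As a usage illustration for the manual template described in section 2, here is a minimal sketch (not part of the original card). It assumes the `tokenizer` and `model` objects from the Chat Completion example above; the question text is illustrative.

```python
# Minimal sketch (assumes `tokenizer` and `model` from the Chat Completion example above).
# The prompt string follows the sample template; by default the tokenizer prepends the bos token.
prompt = "User: Who are you?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs.input_ids.to(model.device), max_new_tokens=100)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```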
nesteggs/deepseek-moe-16b-chat
nesteggs
2024-02-05T08:02:28Z
4
0
transformers
[ "transformers", "safetensors", "deepseek", "text-generation", "conversational", "custom_code", "arxiv:2401.06066", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2024-02-16T23:37:13Z
--- license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/LICENSE-MODEL --- <p align="center"> <img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true"> </p> <p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://chat.deepseek.com/">[🤖 Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[Wechat(微信)]</a> </p> <p align="center"> <a href="https://arxiv.org/pdf/2401.06066.pdf"><b>Paper Link</b>👁️</a> </p> <hr> ### 1. Introduction to DeepSeekMoE See the [Introduction](https://github.com/deepseek-ai/DeepSeek-MoE/blob/main) for more details. ### 2. How to Use Here are some examples of how to use our model. **Chat Completion** ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig model_name = "deepseek-ai/deepseek-moe-16b-chat" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") model.generation_config = GenerationConfig.from_pretrained(model_name) model.generation_config.pad_token_id = model.generation_config.eos_token_id messages = [ {"role": "user", "content": "Who are you?"} ] input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt") outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100) result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True) print(result) ``` If you prefer not to use the provided `apply_chat_template` function, you can also interact with our model by following the sample template below. Note that `messages` should be replaced by your input. ``` User: {messages[0]['content']} Assistant: {messages[1]['content']}<|end▁of▁sentence|>User: {messages[2]['content']} Assistant: ``` **Note:** By default (`add_special_tokens=True`), our tokenizer automatically adds a `bos_token` (`<|begin▁of▁sentence|>`) before the input text. Additionally, since the system prompt is not compatible with this version of our models, we DO NOT RECOMMEND including the system prompt in your input. ### 3. License This code repository is licensed under the MIT License. The use of DeepSeekMoE models is subject to the Model License. DeepSeekMoE supports commercial use. See the [LICENSE-MODEL](https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/LICENSE-MODEL) for more details. ### 4. Contact If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
Dev2410/LLAMA_Abhishek_7b
Dev2410
2024-02-05T07:55:53Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-05T07:55:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pxltd/world_v1
pxltd
2024-02-05T07:51:52Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T07:49:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
s3nh/Faraday-7B-GGUF
s3nh
2024-02-05T07:51:35Z
0
0
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T07:02:36Z
--- license: openrail pipeline_tag: text-generation library_name: transformers language: - zh - en --- ## Original model card Buy me a coffee if you like this project ;) <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a> #### Description GGUF format model files for [FelixChao/Faraday-7B](https://huggingface.co/FelixChao/Faraday-7B). ### GGUF Specs GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired: Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information. Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models. mmap compatibility: models can be loaded using mmap for fast loading and saving. Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used. Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user. The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model. ### Inference Example prompt and (truncated) model output: User: Tell me story about what is an quantization and what do we need to build. I'm a software developer with no EE background. I'll try my best to explain Quantization in a simple manner without much math, as if you were explaining it to a child... Think of a world where everything exists only in whole numbers (integers) - like 1 apple, 2 apples, no half apples. This simplified world makes calculations much easier because everything fits neatly into whole number buckets. Now imagine the real world, full of continuous variations and shades - we live in an analog world. Electronic devices deal with these continuous variations, but for many reasons # Original model card
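As a usage illustration for the GGUF files described above, here is a minimal loading sketch (not part of the original card). It assumes `llama-cpp-python` is installed and that a quantized file has already been downloaded from this repository; the file name below is illustrative.

```python
# Minimal sketch: loading a downloaded GGUF file with llama-cpp-python (assumed installed).
# The file name is illustrative; substitute the quantization variant you actually downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Faraday-7B.Q4_K_M.gguf", n_ctx=2048)
out = llm("Tell me a story about what quantization is.", max_tokens=128)
print(out["choices"][0]["text"])
```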
rtracey/test-trainer
rtracey
2024-02-05T07:43:49Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-05T07:43:14Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: test-trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-trainer This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.15.1
lucasjin/LLava-Qwen-1_8B-Base
lucasjin
2024-02-05T07:36:12Z
6
1
transformers
[ "transformers", "safetensors", "llava", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T04:50:40Z
--- license: other license_name: qwen license_link: LICENSE --- Qwen's non-commercial research license applies. I used the script below to build the model, with the tokenizer of CausalLM, as suggested in the comments of the script. https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py
thinkKenya/wav2vec2-large-xls-r-300m-sw
thinkKenya
2024-02-05T07:34:56Z
25
0
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "sw", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-21T14:12:16Z
--- license: apache-2.0 datasets: - mozilla-foundation/common_voice_11_0 language: - sw metrics: - wer tags: - generated_from_trainer pipeline_tag: automatic-speech-recognition --- # Swahili Automatic Speech Recognition (ASR) ## Model details The Swahili ASR is an end-to-end automatic speech recognition system that was finetuned on the Common Voice Corpus 11.0 Swahili dataset. This repository provides the necessary tools to perform ASR using this model, allowing for high-quality speech-to-text conversions in Swahili. ## Example Usage Here's an example of how you can use this model for speech-to-text conversion: ```python from datasets import load_dataset from transformers import pipeline # replace following lines to load an audio file of your choice commonvoice_sw = load_dataset("mozilla-foundation/common_voice_11_0", "sw", split="test") audio_file = commonvoice_sw[0]["audio"] asr = pipeline("automatic-speech-recognition", model="thinkKenya/wav2vec2-large-xls-r-300m-sw", feature_extractor="thinkKenya/wav2vec2-large-xls-r-300m-sw") translation = asr(audio_file) ``` | EVAL_LOSS | EVAL_WER | EVAL_RUNTIME | EVAL_SAMPLES_PER_SECOND | EVAL_STEPS_PER_SECOND | EPOCH | |-------------------|--------------------|--------------|-------------------------|-----------------------|-------| | 0.345414400100708 | 0.2602372795622284 | 578.4006 | 17.701 | 2.213 | 4.17 | ## Intended Use This model is intended for any application requiring Swahili speech-to-text conversion, including but not limited to transcription services, voice assistants, and accessibility technology. It can be particularly beneficial in any context where demographic metadata (age, sex, accent) is significant, as these features have been taken into account during training. ## Dataset The model was trained on the Common Voice Corpus 11.0 Swahili dataset, which consists of unique MP3 files and corresponding text files, totaling 16,413 validated hours. Additionally, much of the dataset includes valuable demographic metadata, such as age, sex, and accent, contributing to a more accurate and contextually-aware ASR model. [Dataset link](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) ## Training Procedure ### Pipeline Description The ASR system has two interconnected stages: the Tokenizer (unigram) and the Acoustic model (wav2vec2.0 + CTC). 1. **Tokenizer (unigram):** It transforms words into subword units, using a vocabulary extracted from the training and test datasets. The resulting `Wav2Vec2CTCTokenizer` is then pushed to the Hugging Face model hub. 2. **Acoustic model (wav2vec2.0 + CTC):** Utilizes a pretrained wav2vec 2.0 model (`facebook/wav2vec2-base`), which is fine-tuned on the dataset. The processed audio data is passed through the CTC (Connectionist Temporal Classification) decoder, which converts the acoustic representations into a sequence of tokens/characters. The trained model is then also pushed to the Hugging Face model hub. ### Technical Specifications The ASR system uses the Wav2Vec2ForCTC model architecture from the Hugging Face's Transformers library. This model, with a built-in Connectionist Temporal Classification (CTC) layer, provides an optimal solution for speech recognition tasks. The model includes a pretrained wav2vec 2.0 model and a linear layer for CTC, which are trained together in an end-to-end manner. The ASR system's performance is measured using the Word Error Rate (WER) during the training process. 
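For reference, WER can also be computed offline; the following is a brief sketch (not part of the original card) using the `evaluate` library, where the reference and prediction strings are illustrative placeholders rather than outputs of this model.

```python
# Brief sketch: computing Word Error Rate (WER) with the `evaluate` library.
# The reference/prediction strings below are illustrative placeholders.
import evaluate

wer_metric = evaluate.load("wer")
references = ["habari ya asubuhi", "asante sana"]
predictions = ["habari ya asubui", "asante sana"]
print(wer_metric.compute(references=references, predictions=predictions))
```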
### Compute Infrastructure The training was performed using the following compute infrastructure: | [Compute](https://instances.vantage.sh/aws/ec2/g5.8xlarge#Compute) | Value | | ------------------------------------------------------------------------------------------ | ------------- | | vCPUs | 32 | | Memory (GiB) | 128.0 | | Memory per vCPU (GiB) | 4.0 | | Physical Processor | AMD EPYC 7R32 | | Clock Speed (GHz) | 2.8 | | CPU Architecture | x86_64 | | GPU | 1 | | GPU Architecture | nvidia a10g | | Video Memory (GiB) | 24 | | GPU Compute Capability [(?)](https://handbook.vantage.sh/aws/reference/aws-gpu-instances/) | 7.5 | | FPGA | 0 | ### Training procedure #### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 #### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3 ## About THiNK THiNK is a technology initiative driven by a community of innovators and businesses. It brings together a collaborative platform that provides services to assist businesses in all sectors, particularly in their digital transformation journey.
manishtanwar/bart-cnn-samsum-peft
manishtanwar
2024-02-05T07:28:17Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:ingeniumacademy/bart-cnn-samsum-finetuned", "base_model:finetune:ingeniumacademy/bart-cnn-samsum-finetuned", "license:mit", "region:us" ]
null
2024-01-30T12:46:33Z
--- license: mit base_model: ingeniumacademy/bart-cnn-samsum-finetuned tags: - generated_from_trainer model-index: - name: bart-cnn-samsum-peft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-cnn-samsum-peft This model is a fine-tuned version of [ingeniumacademy/bart-cnn-samsum-finetuned](https://huggingface.co/ingeniumacademy/bart-cnn-samsum-finetuned) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2334 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0678 | 1.0 | 74 | 0.2392 | | 0.0886 | 2.0 | 148 | 0.2317 | | 0.0803 | 3.0 | 222 | 0.2285 | | 0.0866 | 4.0 | 296 | 0.2327 | | 0.0876 | 5.0 | 370 | 0.2334 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
Ankush-Chander/code-search-net-tokenizer
Ankush-Chander
2024-02-05T07:27:03Z
0
0
null
[ "region:us" ]
null
2024-02-05T07:25:38Z
Tokenizer trained on code-search-net Python data (as part of the [Hugging Face NLP course](https://huggingface.co/learn/nlp-course/chapter6/2?fw=pt)).
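A minimal usage sketch (not part of the original card), assuming the `transformers` library is installed:

```python
# Minimal sketch: loading the tokenizer from the Hub and tokenizing a Python snippet.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Ankush-Chander/code-search-net-tokenizer")
print(tokenizer.tokenize("def add(a, b):\n    return a + b"))
```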
thenlper/gte-large-zh
thenlper
2024-02-05T07:15:13Z
52,422
99
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "mteb", "sentence-similarity", "Sentence Transformers", "en", "arxiv:2308.03281", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-11-07T07:51:20Z
--- tags: - mteb - sentence-similarity - sentence-transformers - Sentence Transformers model-index: - name: gte-large-zh results: - task: type: STS dataset: type: C-MTEB/AFQMC name: MTEB AFQMC config: default split: validation revision: None metrics: - type: cos_sim_pearson value: 48.94131905219026 - type: cos_sim_spearman value: 54.58261199731436 - type: euclidean_pearson value: 52.73929210805982 - type: euclidean_spearman value: 54.582632097533676 - type: manhattan_pearson value: 52.73123295724949 - type: manhattan_spearman value: 54.572941830465794 - task: type: STS dataset: type: C-MTEB/ATEC name: MTEB ATEC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 47.292931669579005 - type: cos_sim_spearman value: 54.601019783506466 - type: euclidean_pearson value: 54.61393532658173 - type: euclidean_spearman value: 54.60101865708542 - type: manhattan_pearson value: 54.59369555606305 - type: manhattan_spearman value: 54.601098593646036 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.233999999999995 - type: f1 value: 45.68998446563349 - task: type: STS dataset: type: C-MTEB/BQ name: MTEB BQ config: default split: test revision: None metrics: - type: cos_sim_pearson value: 62.55033151404683 - type: cos_sim_spearman value: 64.40573802644984 - type: euclidean_pearson value: 62.93453281081951 - type: euclidean_spearman value: 64.40574149035828 - type: manhattan_pearson value: 62.839969210895816 - type: manhattan_spearman value: 64.30837945045283 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringP2P name: MTEB CLSClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 42.098169316685045 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringS2S name: MTEB CLSClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 38.90716707051822 - task: type: Reranking dataset: type: C-MTEB/CMedQAv1-reranking name: MTEB CMedQAv1 config: default split: test revision: None metrics: - type: map value: 86.09191911031553 - type: mrr value: 88.6747619047619 - task: type: Reranking dataset: type: C-MTEB/CMedQAv2-reranking name: MTEB CMedQAv2 config: default split: test revision: None metrics: - type: map value: 86.45781885502122 - type: mrr value: 89.01591269841269 - task: type: Retrieval dataset: type: C-MTEB/CmedqaRetrieval name: MTEB CmedqaRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 24.215 - type: map_at_10 value: 36.498000000000005 - type: map_at_100 value: 38.409 - type: map_at_1000 value: 38.524 - type: map_at_3 value: 32.428000000000004 - type: map_at_5 value: 34.664 - type: mrr_at_1 value: 36.834 - type: mrr_at_10 value: 45.196 - type: mrr_at_100 value: 46.214 - type: mrr_at_1000 value: 46.259 - type: mrr_at_3 value: 42.631 - type: mrr_at_5 value: 44.044 - type: ndcg_at_1 value: 36.834 - type: ndcg_at_10 value: 43.146 - type: ndcg_at_100 value: 50.632999999999996 - type: ndcg_at_1000 value: 52.608999999999995 - type: ndcg_at_3 value: 37.851 - type: ndcg_at_5 value: 40.005 - type: precision_at_1 value: 36.834 - type: precision_at_10 value: 9.647 - type: precision_at_100 value: 1.574 - type: precision_at_1000 value: 0.183 - type: precision_at_3 value: 21.48 - type: precision_at_5 value: 15.649 - type: recall_at_1 value: 24.215 - type: recall_at_10 value: 54.079 - type: recall_at_100 
value: 84.943 - type: recall_at_1000 value: 98.098 - type: recall_at_3 value: 38.117000000000004 - type: recall_at_5 value: 44.775999999999996 - task: type: PairClassification dataset: type: C-MTEB/CMNLI name: MTEB Cmnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 82.51352976548407 - type: cos_sim_ap value: 89.49905141462749 - type: cos_sim_f1 value: 83.89334489486234 - type: cos_sim_precision value: 78.19761567993534 - type: cos_sim_recall value: 90.48398410100538 - type: dot_accuracy value: 82.51352976548407 - type: dot_ap value: 89.49108293121158 - type: dot_f1 value: 83.89334489486234 - type: dot_precision value: 78.19761567993534 - type: dot_recall value: 90.48398410100538 - type: euclidean_accuracy value: 82.51352976548407 - type: euclidean_ap value: 89.49904709975154 - type: euclidean_f1 value: 83.89334489486234 - type: euclidean_precision value: 78.19761567993534 - type: euclidean_recall value: 90.48398410100538 - type: manhattan_accuracy value: 82.48947684906794 - type: manhattan_ap value: 89.49231995962901 - type: manhattan_f1 value: 83.84681215233205 - type: manhattan_precision value: 77.28258726089528 - type: manhattan_recall value: 91.62964694879588 - type: max_accuracy value: 82.51352976548407 - type: max_ap value: 89.49905141462749 - type: max_f1 value: 83.89334489486234 - task: type: Retrieval dataset: type: C-MTEB/CovidRetrieval name: MTEB CovidRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 78.583 - type: map_at_10 value: 85.613 - type: map_at_100 value: 85.777 - type: map_at_1000 value: 85.77900000000001 - type: map_at_3 value: 84.58 - type: map_at_5 value: 85.22800000000001 - type: mrr_at_1 value: 78.925 - type: mrr_at_10 value: 85.667 - type: mrr_at_100 value: 85.822 - type: mrr_at_1000 value: 85.824 - type: mrr_at_3 value: 84.651 - type: mrr_at_5 value: 85.299 - type: ndcg_at_1 value: 78.925 - type: ndcg_at_10 value: 88.405 - type: ndcg_at_100 value: 89.02799999999999 - type: ndcg_at_1000 value: 89.093 - type: ndcg_at_3 value: 86.393 - type: ndcg_at_5 value: 87.5 - type: precision_at_1 value: 78.925 - type: precision_at_10 value: 9.789 - type: precision_at_100 value: 1.005 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 30.769000000000002 - type: precision_at_5 value: 19.031000000000002 - type: recall_at_1 value: 78.583 - type: recall_at_10 value: 96.891 - type: recall_at_100 value: 99.473 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 91.438 - type: recall_at_5 value: 94.152 - task: type: Retrieval dataset: type: C-MTEB/DuRetrieval name: MTEB DuRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 25.604 - type: map_at_10 value: 77.171 - type: map_at_100 value: 80.033 - type: map_at_1000 value: 80.099 - type: map_at_3 value: 54.364000000000004 - type: map_at_5 value: 68.024 - type: mrr_at_1 value: 89.85 - type: mrr_at_10 value: 93.009 - type: mrr_at_100 value: 93.065 - type: mrr_at_1000 value: 93.068 - type: mrr_at_3 value: 92.72500000000001 - type: mrr_at_5 value: 92.915 - type: ndcg_at_1 value: 89.85 - type: ndcg_at_10 value: 85.038 - type: ndcg_at_100 value: 88.247 - type: ndcg_at_1000 value: 88.837 - type: ndcg_at_3 value: 85.20299999999999 - type: ndcg_at_5 value: 83.47 - type: precision_at_1 value: 89.85 - type: precision_at_10 value: 40.275 - type: precision_at_100 value: 4.709 - type: precision_at_1000 value: 0.486 - type: precision_at_3 value: 76.36699999999999 - type: precision_at_5 value: 63.75999999999999 - 
type: recall_at_1 value: 25.604 - type: recall_at_10 value: 85.423 - type: recall_at_100 value: 95.695 - type: recall_at_1000 value: 98.669 - type: recall_at_3 value: 56.737 - type: recall_at_5 value: 72.646 - task: type: Retrieval dataset: type: C-MTEB/EcomRetrieval name: MTEB EcomRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 51.800000000000004 - type: map_at_10 value: 62.17 - type: map_at_100 value: 62.649 - type: map_at_1000 value: 62.663000000000004 - type: map_at_3 value: 59.699999999999996 - type: map_at_5 value: 61.23499999999999 - type: mrr_at_1 value: 51.800000000000004 - type: mrr_at_10 value: 62.17 - type: mrr_at_100 value: 62.649 - type: mrr_at_1000 value: 62.663000000000004 - type: mrr_at_3 value: 59.699999999999996 - type: mrr_at_5 value: 61.23499999999999 - type: ndcg_at_1 value: 51.800000000000004 - type: ndcg_at_10 value: 67.246 - type: ndcg_at_100 value: 69.58 - type: ndcg_at_1000 value: 69.925 - type: ndcg_at_3 value: 62.197 - type: ndcg_at_5 value: 64.981 - type: precision_at_1 value: 51.800000000000004 - type: precision_at_10 value: 8.32 - type: precision_at_100 value: 0.941 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 23.133 - type: precision_at_5 value: 15.24 - type: recall_at_1 value: 51.800000000000004 - type: recall_at_10 value: 83.2 - type: recall_at_100 value: 94.1 - type: recall_at_1000 value: 96.8 - type: recall_at_3 value: 69.39999999999999 - type: recall_at_5 value: 76.2 - task: type: Classification dataset: type: C-MTEB/IFlyTek-classification name: MTEB IFlyTek config: default split: validation revision: None metrics: - type: accuracy value: 49.60369372835706 - type: f1 value: 38.24016248875209 - task: type: Classification dataset: type: C-MTEB/JDReview-classification name: MTEB JDReview config: default split: test revision: None metrics: - type: accuracy value: 86.71669793621012 - type: ap value: 55.75807094995178 - type: f1 value: 81.59033162805417 - task: type: STS dataset: type: C-MTEB/LCQMC name: MTEB LCQMC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 69.50947272908907 - type: cos_sim_spearman value: 74.40054474949213 - type: euclidean_pearson value: 73.53007373987617 - type: euclidean_spearman value: 74.40054474732082 - type: manhattan_pearson value: 73.51396571849736 - type: manhattan_spearman value: 74.38395696630835 - task: type: Reranking dataset: type: C-MTEB/Mmarco-reranking name: MTEB MMarcoReranking config: default split: dev revision: None metrics: - type: map value: 31.188333827724108 - type: mrr value: 29.84801587301587 - task: type: Retrieval dataset: type: C-MTEB/MMarcoRetrieval name: MTEB MMarcoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 64.685 - type: map_at_10 value: 73.803 - type: map_at_100 value: 74.153 - type: map_at_1000 value: 74.167 - type: map_at_3 value: 71.98 - type: map_at_5 value: 73.21600000000001 - type: mrr_at_1 value: 66.891 - type: mrr_at_10 value: 74.48700000000001 - type: mrr_at_100 value: 74.788 - type: mrr_at_1000 value: 74.801 - type: mrr_at_3 value: 72.918 - type: mrr_at_5 value: 73.965 - type: ndcg_at_1 value: 66.891 - type: ndcg_at_10 value: 77.534 - type: ndcg_at_100 value: 79.106 - type: ndcg_at_1000 value: 79.494 - type: ndcg_at_3 value: 74.13499999999999 - type: ndcg_at_5 value: 76.20700000000001 - type: precision_at_1 value: 66.891 - type: precision_at_10 value: 9.375 - type: precision_at_100 value: 1.0170000000000001 - type: precision_at_1000 value: 0.105 - type: 
precision_at_3 value: 27.932000000000002 - type: precision_at_5 value: 17.86 - type: recall_at_1 value: 64.685 - type: recall_at_10 value: 88.298 - type: recall_at_100 value: 95.426 - type: recall_at_1000 value: 98.48700000000001 - type: recall_at_3 value: 79.44200000000001 - type: recall_at_5 value: 84.358 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.30531271015468 - type: f1 value: 70.88091430578575 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.7128446536651 - type: f1 value: 75.06125593532262 - task: type: Retrieval dataset: type: C-MTEB/MedicalRetrieval name: MTEB MedicalRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 52.7 - type: map_at_10 value: 59.532 - type: map_at_100 value: 60.085 - type: map_at_1000 value: 60.126000000000005 - type: map_at_3 value: 57.767 - type: map_at_5 value: 58.952000000000005 - type: mrr_at_1 value: 52.900000000000006 - type: mrr_at_10 value: 59.648999999999994 - type: mrr_at_100 value: 60.20100000000001 - type: mrr_at_1000 value: 60.242 - type: mrr_at_3 value: 57.882999999999996 - type: mrr_at_5 value: 59.068 - type: ndcg_at_1 value: 52.7 - type: ndcg_at_10 value: 62.883 - type: ndcg_at_100 value: 65.714 - type: ndcg_at_1000 value: 66.932 - type: ndcg_at_3 value: 59.34700000000001 - type: ndcg_at_5 value: 61.486 - type: precision_at_1 value: 52.7 - type: precision_at_10 value: 7.340000000000001 - type: precision_at_100 value: 0.8699999999999999 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 21.3 - type: precision_at_5 value: 13.819999999999999 - type: recall_at_1 value: 52.7 - type: recall_at_10 value: 73.4 - type: recall_at_100 value: 87.0 - type: recall_at_1000 value: 96.8 - type: recall_at_3 value: 63.9 - type: recall_at_5 value: 69.1 - task: type: Classification dataset: type: C-MTEB/MultilingualSentiment-classification name: MTEB MultilingualSentiment config: default split: validation revision: None metrics: - type: accuracy value: 76.47666666666667 - type: f1 value: 76.4808576632057 - task: type: PairClassification dataset: type: C-MTEB/OCNLI name: MTEB Ocnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 77.58527341635084 - type: cos_sim_ap value: 79.32131557636497 - type: cos_sim_f1 value: 80.51948051948052 - type: cos_sim_precision value: 71.7948717948718 - type: cos_sim_recall value: 91.65786694825766 - type: dot_accuracy value: 77.58527341635084 - type: dot_ap value: 79.32131557636497 - type: dot_f1 value: 80.51948051948052 - type: dot_precision value: 71.7948717948718 - type: dot_recall value: 91.65786694825766 - type: euclidean_accuracy value: 77.58527341635084 - type: euclidean_ap value: 79.32131557636497 - type: euclidean_f1 value: 80.51948051948052 - type: euclidean_precision value: 71.7948717948718 - type: euclidean_recall value: 91.65786694825766 - type: manhattan_accuracy value: 77.15213860314023 - type: manhattan_ap value: 79.26178519246496 - type: manhattan_f1 value: 80.22028453418999 - type: manhattan_precision value: 70.94155844155844 - type: manhattan_recall value: 92.29144667370645 - type: max_accuracy value: 77.58527341635084 - type: max_ap value: 
79.32131557636497 - type: max_f1 value: 80.51948051948052 - task: type: Classification dataset: type: C-MTEB/OnlineShopping-classification name: MTEB OnlineShopping config: default split: test revision: None metrics: - type: accuracy value: 92.68 - type: ap value: 90.78652757815115 - type: f1 value: 92.67153098230253 - task: type: STS dataset: type: C-MTEB/PAWSX name: MTEB PAWSX config: default split: test revision: None metrics: - type: cos_sim_pearson value: 35.301730226895955 - type: cos_sim_spearman value: 38.54612530948101 - type: euclidean_pearson value: 39.02831131230217 - type: euclidean_spearman value: 38.54612530948101 - type: manhattan_pearson value: 39.04765584936325 - type: manhattan_spearman value: 38.54455759013173 - task: type: STS dataset: type: C-MTEB/QBQTC name: MTEB QBQTC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 32.27907454729754 - type: cos_sim_spearman value: 33.35945567162729 - type: euclidean_pearson value: 31.997628193815725 - type: euclidean_spearman value: 33.3592386340529 - type: manhattan_pearson value: 31.97117833750544 - type: manhattan_spearman value: 33.30857326127779 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh) config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.53712784446981 - type: cos_sim_spearman value: 62.975074386224286 - type: euclidean_pearson value: 61.791207731290854 - type: euclidean_spearman value: 62.975073716988064 - type: manhattan_pearson value: 62.63850653150875 - type: manhattan_spearman value: 63.56640346497343 - task: type: STS dataset: type: C-MTEB/STSB name: MTEB STSB config: default split: test revision: None metrics: - type: cos_sim_pearson value: 79.52067424748047 - type: cos_sim_spearman value: 79.68425102631514 - type: euclidean_pearson value: 79.27553959329275 - type: euclidean_spearman value: 79.68450427089856 - type: manhattan_pearson value: 79.21584650471131 - type: manhattan_spearman value: 79.6419242840243 - task: type: Reranking dataset: type: C-MTEB/T2Reranking name: MTEB T2Reranking config: default split: dev revision: None metrics: - type: map value: 65.8563449629786 - type: mrr value: 75.82550832339254 - task: type: Retrieval dataset: type: C-MTEB/T2Retrieval name: MTEB T2Retrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 27.889999999999997 - type: map_at_10 value: 72.878 - type: map_at_100 value: 76.737 - type: map_at_1000 value: 76.836 - type: map_at_3 value: 52.738 - type: map_at_5 value: 63.726000000000006 - type: mrr_at_1 value: 89.35600000000001 - type: mrr_at_10 value: 92.622 - type: mrr_at_100 value: 92.692 - type: mrr_at_1000 value: 92.694 - type: mrr_at_3 value: 92.13799999999999 - type: mrr_at_5 value: 92.452 - type: ndcg_at_1 value: 89.35600000000001 - type: ndcg_at_10 value: 81.932 - type: ndcg_at_100 value: 86.351 - type: ndcg_at_1000 value: 87.221 - type: ndcg_at_3 value: 84.29100000000001 - type: ndcg_at_5 value: 82.279 - type: precision_at_1 value: 89.35600000000001 - type: precision_at_10 value: 39.511 - type: precision_at_100 value: 4.901 - type: precision_at_1000 value: 0.513 - type: precision_at_3 value: 72.62100000000001 - type: precision_at_5 value: 59.918000000000006 - type: recall_at_1 value: 27.889999999999997 - type: recall_at_10 value: 80.636 - type: recall_at_100 value: 94.333 - type: recall_at_1000 value: 98.39099999999999 - type: recall_at_3 value: 54.797 - type: recall_at_5 value: 67.824 - task: type: 
Classification dataset: type: C-MTEB/TNews-classification name: MTEB TNews config: default split: validation revision: None metrics: - type: accuracy value: 51.979000000000006 - type: f1 value: 50.35658238894168 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringP2P name: MTEB ThuNewsClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 68.36477832710159 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringS2S name: MTEB ThuNewsClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 62.92080622759053 - task: type: Retrieval dataset: type: C-MTEB/VideoRetrieval name: MTEB VideoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 59.3 - type: map_at_10 value: 69.299 - type: map_at_100 value: 69.669 - type: map_at_1000 value: 69.682 - type: map_at_3 value: 67.583 - type: map_at_5 value: 68.57799999999999 - type: mrr_at_1 value: 59.3 - type: mrr_at_10 value: 69.299 - type: mrr_at_100 value: 69.669 - type: mrr_at_1000 value: 69.682 - type: mrr_at_3 value: 67.583 - type: mrr_at_5 value: 68.57799999999999 - type: ndcg_at_1 value: 59.3 - type: ndcg_at_10 value: 73.699 - type: ndcg_at_100 value: 75.626 - type: ndcg_at_1000 value: 75.949 - type: ndcg_at_3 value: 70.18900000000001 - type: ndcg_at_5 value: 71.992 - type: precision_at_1 value: 59.3 - type: precision_at_10 value: 8.73 - type: precision_at_100 value: 0.9650000000000001 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 25.900000000000002 - type: precision_at_5 value: 16.42 - type: recall_at_1 value: 59.3 - type: recall_at_10 value: 87.3 - type: recall_at_100 value: 96.5 - type: recall_at_1000 value: 99.0 - type: recall_at_3 value: 77.7 - type: recall_at_5 value: 82.1 - task: type: Classification dataset: type: C-MTEB/waimai-classification name: MTEB Waimai config: default split: test revision: None metrics: - type: accuracy value: 88.36999999999999 - type: ap value: 73.29590829222836 - type: f1 value: 86.74250506247606 language: - en license: mit --- # gte-large-zh General Text Embeddings (GTE) model. [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281) The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer different sizes of models for both Chinese and English Languages. The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc. 
## Model List | Models | Language | Max Sequence Length | Dimension | Model Size | |:-----: | :-----: |:-----: |:-----: |:-----: | |[GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 0.67GB | |[GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.21GB | |[GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.10GB | |[GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 0.67GB | |[GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB | |[GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB | ## Metrics We compared the performance of the GTE models with other popular text embedding models on the MTEB (CMTEB for Chinese language) benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard). - Evaluation results on CMTEB | Model | Model Size (GB) | Embedding Dimensions | Sequence Length | Average (35 datasets) | Classification (9 datasets) | Clustering (4 datasets) | Pair Classification (2 datasets) | Reranking (4 datasets) | Retrieval (8 datasets) | STS (8 datasets) | | ------------------- | -------------- | -------------------- | ---------------- | --------------------- | ------------------------------------ | ------------------------------ | --------------------------------------- | ------------------------------ | ---------------------------- | ------------------------ | | **gte-large-zh** | 0.65 | 1024 | 512 | **66.72** | 71.34 | 53.07 | 81.14 | 67.42 | 72.49 | 57.82 | | gte-base-zh | 0.20 | 768 | 512 | 65.92 | 71.26 | 53.86 | 80.44 | 67.00 | 71.71 | 55.96 | | stella-large-zh-v2 | 0.65 | 1024 | 1024 | 65.13 | 69.05 | 49.16 | 82.68 | 66.41 | 70.14 | 58.66 | | stella-large-zh | 0.65 | 1024 | 1024 | 64.54 | 67.62 | 48.65 | 78.72 | 65.98 | 71.02 | 58.3 | | bge-large-zh-v1.5 | 1.3 | 1024 | 512 | 64.53 | 69.13 | 48.99 | 81.6 | 65.84 | 70.46 | 56.25 | | stella-base-zh-v2 | 0.21 | 768 | 1024 | 64.36 | 68.29 | 49.4 | 79.96 | 66.1 | 70.08 | 56.92 | | stella-base-zh | 0.21 | 768 | 1024 | 64.16 | 67.77 | 48.7 | 76.09 | 66.95 | 71.07 | 56.54 | | piccolo-large-zh | 0.65 | 1024 | 512 | 64.11 | 67.03 | 47.04 | 78.38 | 65.98 | 70.93 | 58.02 | | piccolo-base-zh | 0.2 | 768 | 512 | 63.66 | 66.98 | 47.12 | 76.61 | 66.68 | 71.2 | 55.9 | | gte-small-zh | 0.1 | 512 | 512 | 60.04 | 64.35 | 48.95 | 69.99 | 66.21 | 65.50 | 49.72 | | bge-small-zh-v1.5 | 0.1 | 512 | 512 | 57.82 | 63.96 | 44.18 | 70.4 | 60.92 | 61.77 | 49.1 | | m3e-base | 0.41 | 768 | 512 | 57.79 | 67.52 | 47.68 | 63.99 | 59.54| 56.91 | 50.47 | |text-embedding-ada-002(openai) | - | 1536| 8192 | 53.02 | 64.31 | 45.68 | 69.56 | 54.28 | 52.0 | 43.35 | ## Usage Code example ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel input_texts = [ "中国的首都是哪里", "你喜欢去哪里旅游", "北京", "今天中午吃什么" ] tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-large-zh") model = AutoModel.from_pretrained("thenlper/gte-large-zh") # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = outputs.last_hidden_state[:, 0] # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:1] @ embeddings[1:].T) * 100 print(scores.tolist()) ``` Use with sentence-transformers: ```python from sentence_transformers 
import SentenceTransformer from sentence_transformers.util import cos_sim sentences = ['That is a happy person', 'That is a very happy person'] model = SentenceTransformer('thenlper/gte-large-zh') embeddings = model.encode(sentences) print(cos_sim(embeddings[0], embeddings[1])) ``` ### Limitation This model exclusively caters to Chinese texts, and any lengthy texts will be truncated to a maximum of 512 tokens. ### Citation If you find our paper or models helpful, please consider citing them as follows: ``` @article{li2023towards, title={Towards general text embeddings with multi-stage contrastive learning}, author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan}, journal={arXiv preprint arXiv:2308.03281}, year={2023} } ```
kinshuk-h/flan-t5-retacred-kg-direct-peft-qlora-bnb-w-context-large-finetuned
kinshuk-h
2024-02-05T07:06:28Z
3
0
peft
[ "peft", "base_model:google/flan-t5-large", "base_model:adapter:google/flan-t5-large", "region:us" ]
null
2023-06-15T14:20:41Z
--- library_name: peft base_model: google/flan-t5-large license: mit language: - en pipeline_tag: text2text-generation tags: - legal --- # flan-t5-retacred-kg-direct-peft-qlora-bnb-w-context-large-finetuned [flan-t5-large](https://huggingface.co/google/flan-t5-large) finetuned over the TACRED corpus patched as per the [Re-TACRED proposal](https://github.com/gstoica27/re-tacred) using direct concise query prompts with additional context alongside the prompts.
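A minimal loading sketch (not part of the original card), assuming the `peft` and `transformers` libraries are installed; the input prompt is illustrative, since the exact prompt format used for fine-tuning is not documented here.

```python
# Minimal sketch: loading the adapter on top of the flan-t5-large base model with PEFT.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model = PeftModel.from_pretrained(
    base, "kinshuk-h/flan-t5-retacred-kg-direct-peft-qlora-bnb-w-context-large-finetuned"
)
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")

# Illustrative query; adapt it to the prompt style used during fine-tuning.
inputs = tokenizer(
    "Identify the relation between the entities in: Alice works for Acme Corp.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```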