modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
Talha01/LLMModel | Talha01 | 2024-11-05T17:20:35Z | 126 | 0 | transformers | ["transformers", "tensorboard", "onnx", "safetensors", "llama", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-05T17:11:12Z |
---
library_name: transformers
license: apache-2.0
language:
- en
---
# TalhaLLM
## Table of Contents
1. [Model Summary](#model-summary)
2. [Limitations](#limitations)
3. [Training](#training)
4. [License](#license)
5. [Citation](#citation)
## How to use
### Transformers
```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-135M-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is gravity?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
|
deepnet/SN29-C00-llama-HK13-1 | deepnet | 2024-11-05T17:20:12Z | 34 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "transformers.js", "tokenizers", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-10-19T20:20:51Z |
---
library_name: transformers
tags:
- transformers.js
- tokenizers
---
# GPT-4 Tokenizer
A 🤗-compatible version of the **GPT-4 tokenizer** (adapted from [openai/tiktoken](https://github.com/openai/tiktoken)). This means it can be used with Hugging Face libraries including [Transformers](https://github.com/huggingface/transformers), [Tokenizers](https://github.com/huggingface/tokenizers), and [Transformers.js](https://github.com/xenova/transformers.js).
## Example usage:
### Transformers/Tokenizers
```py
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('Xenova/gpt-4')
assert tokenizer.encode('hello world') == [15339, 1917]
```
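The heading covers both libraries; with the standalone [Tokenizers](https://github.com/huggingface/tokenizers) library, a minimal sketch (assuming the repo ships a `tokenizer.json`, as 🤗-compatible tokenizers do) is:
```py
# Hedged sketch using the standalone `tokenizers` library; assumes the repo
# provides a tokenizer.json file.
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained('Xenova/gpt-4')
print(tokenizer.encode('hello world').ids)  # [15339, 1917]
```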
### Transformers.js
```js
import { AutoTokenizer } from '@xenova/transformers';
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/gpt-4');
const tokens = tokenizer.encode('hello world'); // [15339, 1917]
```
|
sentence-transformers/multi-qa-distilbert-cos-v1 | sentence-transformers | 2024-11-05T17:18:43Z | 238,518 | 23 | sentence-transformers | ["sentence-transformers", "pytorch", "onnx", "safetensors", "openvino", "distilbert", "fill-mask", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:search_qa", "dataset:eli5", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/QQP", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/Amazon-QA", "dataset:embedding-data/WikiAnswers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-03-02T23:29:05Z |
---
language:
- en
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- search_qa
- eli5
- natural_questions
- trivia_qa
- embedding-data/QQP
- embedding-data/PAQ_pairs
- embedding-data/Amazon-QA
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# multi-qa-distilbert-cos-v1
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util

query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load the model
model = SentenceTransformer('sentence-transformers/multi-qa-distilbert-cos-v1')

# Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)

# Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean pooling - take the attention mask into account when averaging token embeddings
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output.last_hidden_state  # all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Encode text
def encode(texts):
    # Tokenize sentences
    encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input, return_dict=True)
    # Perform pooling
    embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
    # Normalize embeddings
    embeddings = F.normalize(embeddings, p=2, dim=1)
    return embeddings

# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-distilbert-cos-v1")
model = AutoModel.from_pretrained("sentence-transformers/multi-qa-distilbert-cos-v1")

# Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)

# Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
```
## Technical Details
Some technical details on how this model should be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 768 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |
Note: when loaded with `sentence-transformers`, this model produces normalized embeddings of length 1. In that case, dot-product and cosine similarity are equivalent; dot-product is preferred because it is faster. On normalized embeddings, Euclidean distance produces the same ranking as dot-product and can also be used.
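As a quick check of this note, a minimal sketch (assuming `sentence-transformers` is installed) showing that the two score functions agree on these normalized embeddings:
```python
# Minimal sketch, not from the original card: on unit-length embeddings,
# dot-product and cosine similarity match up to floating-point error.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/multi-qa-distilbert-cos-v1')
emb = model.encode(["How many people live in London?",
                    "Around 9 Million people live in London"],
                   convert_to_tensor=True)

print(util.dot_score(emb[0], emb[1]).item())
print(util.cos_sim(emb[0], emb[1]).item())  # same value as above
```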
----
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used for semantic search: it encodes queries/questions and text paragraphs in a dense vector space and finds relevant passages for a given query.
Note that there is a limit of 512 word pieces: text longer than that will be truncated. Further note that the model was only trained on input text up to 250 word pieces; it might not work well for longer text.
## Training procedure
The full training script is accessible in this current repository: `train_script.py`.
### Pre-training
We use the pretrained [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
#### Training
We use the concatenation of multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using mean pooling, cosine similarity as the similarity function, and a scale of 20; a hedged code sketch of this setup follows the dataset table below.
| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** |
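As referenced above, here is a minimal sketch of that loss setup; it is an approximation, and the authoritative version is `train_script.py` in this repository:
```python
# Approximate sketch of the described setup: mean pooling (the default head
# sentence-transformers builds around a plain transformer), cosine similarity,
# scale 20. Real training used 215M pairs, not this toy data.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

model = SentenceTransformer('distilbert-base-uncased')  # pre-trained starting point
train_examples = [
    InputExample(texts=["How many people live in London?",
                        "Around 9 Million people live in London"]),
    InputExample(texts=["What is gravity?",
                        "Gravity is the force by which masses attract each other"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20, similarity_fct=util.cos_sim)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```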
|
pszemraj/flan-t5-xl-grammar-synthesis | pszemraj | 2024-11-05T17:18:18Z | 93 | 10 | transformers | ["transformers", "pytorch", "safetensors", "gguf", "t5", "text2text-generation", "grammar", "spelling", "punctuation", "error-correction", "grammar synthesis", "FLAN", "dataset:jfleg", "base_model:google/flan-t5-xl", "base_model:quantized:google/flan-t5-xl", "license:cc-by-nc-sa-4.0", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2023-03-05T16:27:40Z |
---
license:
- cc-by-nc-sa-4.0
- apache-2.0
tags:
- grammar
- spelling
- punctuation
- error-correction
- grammar synthesis
- FLAN
datasets:
- jfleg
languages:
- en
widget:
- text: There car broke down so their hitching a ride to they're class.
example_title: compound-1
- text: i can has cheezburger
example_title: cheezburger
- text: so em if we have an now so with fito ringina know how to estimate the tren
given the ereafte mylite trend we can also em an estimate is nod s i again tort
watfettering an we have estimated the trend an called wot to be called sthat of
exty right now we can and look at wy this should not hare a trend i becan we just
remove the trend an and we can we now estimate tesees ona effect of them exty
example_title: Transcribed Audio Example 2
- text: My coworker said he used a financial planner to help choose his stocks so
he wouldn't loose money.
example_title: incorrect word choice (context)
- text: good so hve on an tadley i'm not able to make it to the exla session on monday
this week e which is why i am e recording pre recording an this excelleision and
so to day i want e to talk about two things and first of all em i wont em wene
give a summary er about ta ohow to remove trents in these nalitives from time
series
example_title: lowercased audio transcription output
- text: Frustrated, the chairs took me forever to set up.
example_title: dangling modifier
- text: I would like a peice of pie.
example_title: simple miss-spelling
- text: Which part of Zurich was you going to go hiking in when we were there for
the first time together? ! ?
example_title: chatbot on Zurich
- text: Most of the course is about semantic or content of language but there are
also interesting topics to be learned from the servicefeatures except statistics
in characters in documents. At this point, Elvthos introduces himself as his native
English speaker and goes on to say that if you continue to work on social scnce,
example_title: social science ASR summary output
- text: they are somewhat nearby right yes please i'm not sure how the innish is tepen
thut mayyouselect one that istatte lo variants in their property e ere interested
and anyone basical e may be applyind reaching the browing approach were
  example_title: medical course audio transcription
inference:
parameters:
max_length: 96
min_length: 4
num_beams: 2
repetition_penalty: 1.15
length_penalty: 1
early_stopping: true
base_model: google/flan-t5-xl
---
# grammar-synthesis: flan-t5-xl
<a href="https://colab.research.google.com/gist/pszemraj/43fc6a5c5acd94a3d064384dd1f3654c/demo-flan-t5-xl-grammar-synthesis.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
This model is a fine-tuned version of [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) on an extended version of the `JFLEG` dataset.
- [here is a custom class wrapper](https://gist.github.com/pszemraj/14f7b13bd2d953176db2371e5d320915) that makes using this with `bitsandbytes` easier
- the API can be slow due to model size, try [the notebook](https://colab.research.google.com/gist/pszemraj/43fc6a5c5acd94a3d064384dd1f3654c/demo-flan-t5-xl-grammar-synthesis.ipynb)!
<br>
<img src="https://i.imgur.com/5QGGF0Z.png" alt="ex">
<br>
## Model description
The intent is to create a text2text language model that successfully performs "single-shot grammar correction" on potentially grammatically incorrect text **that may contain many errors**, with the important qualifier that **it does not semantically change text/information that IS grammatically correct**.
Compare some of the more severe error examples on [other grammar correction models](https://huggingface.co/models?dataset=dataset:jfleg) to see the difference :)
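A hedged usage sketch, not taken from the original card, via the standard 🤗 `text2text-generation` pipeline, reusing the widget's inference parameters from this card's metadata:
```python
# Minimal sketch, assuming the standard transformers pipeline API; generation
# settings mirror the widget parameters declared above.
from transformers import pipeline

corrector = pipeline("text2text-generation", model="pszemraj/flan-t5-xl-grammar-synthesis")
result = corrector(
    "There car broke down so their hitching a ride to they're class.",
    max_length=96, num_beams=2, repetition_penalty=1.15, early_stopping=True,
)
print(result[0]["generated_text"])
```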
## Limitations
- Data set: `cc-by-nc-sa-4.0`
- Model: `apache-2.0`
- currently **a work in progress**! While probably useful for "single-shot grammar correction" in many cases, **check the output for correctness, ok?**
## Training procedure
### Training hyperparameters
#### Session One
- TODO: add this. It was a single epoch at higher LR
#### Session Two
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2.0
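For orientation, the Session Two hyperparameters above map onto 🤗 `Seq2SeqTrainingArguments` roughly as follows; this is a hedged reconstruction, not the author's actual script:
```python
# Hedged sketch: the card lists hyperparameters but not the training code.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-xl-grammar-synthesis",  # hypothetical output path
    learning_rate=4e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,  # 4 * 16 (* GPUs) gives the total batch of 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=2.0,
)
```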
|
mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF | mradermacher | 2024-11-05T17:18:11Z | 12 | 0 | transformers | ["transformers", "gguf", "merge", "en", "base_model:xxx777xxxASD/10.7B-Loyal-Toppy-Maid", "base_model:quantized:xxx777xxxASD/10.7B-Loyal-Toppy-Maid", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix"] | null | 2024-11-05T15:33:27Z |
---
base_model: xxx777xxxASD/10.7B-Loyal-Toppy-Maid
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/xxx777xxxASD/10.7B-Loyal-Toppy-Maid
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
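As one concrete option, a hedged sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (my assumption; any GGUF-capable runtime works), pulling a quant from the table below straight from the Hub:
```python
# Hedged sketch: the filename must match one of the quants listed below.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF",
    filename="10.7B-Loyal-Toppy-Maid.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
)
print(llm("Q: What is a GGUF file?\nA:", max_tokens=64)["choices"][0]["text"])
```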
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 6.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 6.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 6.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/10.7B-Loyal-Toppy-Maid-GGUF | mradermacher | 2024-11-05T17:18:09Z | 19 | 0 | transformers | ["transformers", "gguf", "merge", "en", "base_model:xxx777xxxASD/10.7B-Loyal-Toppy-Maid", "base_model:quantized:xxx777xxxASD/10.7B-Loyal-Toppy-Maid", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-11-04T16:05:49Z |
---
base_model: xxx777xxxASD/10.7B-Loyal-Toppy-Maid
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/xxx777xxxASD/10.7B-Loyal-Toppy-Maid
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.Q4_0_4_4.gguf) | Q4_0_4_4 | 6.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/10.7B-Loyal-Toppy-Maid-GGUF/resolve/main/10.7B-Loyal-Toppy-Maid.f16.gguf) | f16 | 21.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sentence-transformers/multi-qa-MiniLM-L6-cos-v1 | sentence-transformers | 2024-11-05T17:17:16Z | 7,441,945 | 122 | sentence-transformers | ["sentence-transformers", "pytorch", "tf", "onnx", "safetensors", "openvino", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:search_qa", "dataset:eli5", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/QQP", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/Amazon-QA", "dataset:embedding-data/WikiAnswers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-03-02T23:29:05Z |
---
language:
- en
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- search_qa
- eli5
- natural_questions
- trivia_qa
- embedding-data/QQP
- embedding-data/PAQ_pairs
- embedding-data/Amazon-QA
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# multi-qa-MiniLM-L6-cos-v1
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util

query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load the model
model = SentenceTransformer('sentence-transformers/multi-qa-MiniLM-L6-cos-v1')

# Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)

# Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
```
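For retrieval over a larger corpus, a minimal sketch (same assumptions as above) using `util.semantic_search` to get the top-k passages directly:
```python
# Hedged sketch: semantic_search returns, per query, a ranked list of
# {'corpus_id', 'score'} dicts over the encoded corpus.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/multi-qa-MiniLM-L6-cos-v1')
corpus = ["Around 9 Million people live in London",
          "London is known for its financial district"]
corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode("How many people live in London?", convert_to_tensor=True)

hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(hit['score'], corpus[hit['corpus_id']])
```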
## PyTorch Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean pooling - take the attention mask into account when averaging token embeddings
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output.last_hidden_state
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Encode text
def encode(texts):
    # Tokenize sentences
    encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input, return_dict=True)
    # Perform pooling
    embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
    # Normalize embeddings
    embeddings = F.normalize(embeddings, p=2, dim=1)
    return embeddings

# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
model = AutoModel.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")

# Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)

# Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
```
## TensorFlow Usage (HuggingFace Transformers)
Similarly to the PyTorch example above, to use the model with TensorFlow you pass your input through the transformer model and then apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, TFAutoModel
import tensorflow as tf

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output.last_hidden_state
    input_mask_expanded = tf.cast(tf.tile(tf.expand_dims(attention_mask, -1), [1, 1, token_embeddings.shape[-1]]), tf.float32)
    return tf.math.reduce_sum(token_embeddings * input_mask_expanded, 1) / tf.math.maximum(tf.math.reduce_sum(input_mask_expanded, 1), 1e-9)

# Encode text
def encode(texts):
    # Tokenize sentences
    encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='tf')
    # Compute token embeddings
    model_output = model(**encoded_input, return_dict=True)
    # Perform pooling
    embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
    # Normalize embeddings
    embeddings = tf.math.l2_normalize(embeddings, axis=1)
    return embeddings

# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
model = TFAutoModel.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")

# Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)

# Compute dot score between query and all document embeddings
scores = (query_emb @ tf.transpose(doc_emb))[0].numpy().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
```
## Technical Details
Some technical details on how this model should be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 384 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |
Note: when loaded with `sentence-transformers`, this model produces normalized embeddings of length 1. In that case, dot-product and cosine similarity are equivalent; dot-product is preferred because it is faster. On normalized embeddings, Euclidean distance produces the same ranking as dot-product and can also be used.
----
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used for semantic search: it encodes queries/questions and text paragraphs in a dense vector space and finds relevant passages for a given query.
Note that there is a limit of 512 word pieces: text longer than that will be truncated. Further note that the model was only trained on input text up to 250 word pieces; it might not work well for longer text.
## Training procedure
The full training script is accessible in this current repository: `train_script.py`.
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
#### Training
We use the concatenation of multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using mean pooling, cosine similarity as the similarity function, and a scale of 20.
| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** |
|
camidenecken/RoBERTa-RM1-v1-2-rm-v1 | camidenecken | 2024-11-05T17:13:49Z | 181 | 0 | transformers | ["transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-05T17:13:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
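Since the card leaves this blank, here is a hedged sketch based only on the repo's tags (`roberta`, `text-classification`); labels and intended use are undocumented:
```python
# Assumption: the checkpoint loads as a standard RoBERTa sequence-classification
# model, as the tags suggest. What the logits mean is not documented.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "camidenecken/RoBERTa-RM1-v1-2-rm-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An example sentence to score.", return_tensors="pt")
print(model(**inputs).logits)
```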
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MayBashendy/ASAP_FineTuningBERT_Aug_k1_task1_organization_fold0 | MayBashendy | 2024-11-05T17:07:15Z | 163 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-05T16:13:33Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k1_task1_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k1_task1_organization_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5628
- Qwk: 0.6860
- Mse: 0.5628
- Rmse: 0.7502
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
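Expressed as 🤗 `TrainingArguments`, the configuration above looks roughly like this (a hedged sketch; the author's script is not included):
```python
# Hedged reconstruction from the listed hyperparameters; the Adam settings
# below match both the list and the Trainer defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ASAP_FineTuningBERT_Aug_k1_task1_organization_fold0",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```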
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 0.0417 | 2 | 12.2581 | 0.0 | 12.2581 | 3.5012 |
| No log | 0.0833 | 4 | 10.7305 | 0.0 | 10.7305 | 3.2757 |
| No log | 0.125 | 6 | 9.3780 | 0.0 | 9.3780 | 3.0624 |
| No log | 0.1667 | 8 | 7.8940 | 0.0151 | 7.8940 | 2.8096 |
| No log | 0.2083 | 10 | 6.3407 | 0.0054 | 6.3407 | 2.5181 |
| No log | 0.25 | 12 | 4.9823 | 0.0 | 4.9823 | 2.2321 |
| No log | 0.2917 | 14 | 3.7871 | 0.0301 | 3.7871 | 1.9461 |
| No log | 0.3333 | 16 | 2.7847 | 0.0153 | 2.7847 | 1.6687 |
| No log | 0.375 | 18 | 2.1250 | 0.0115 | 2.1250 | 1.4577 |
| No log | 0.4167 | 20 | 1.6283 | 0.0115 | 1.6283 | 1.2761 |
| No log | 0.4583 | 22 | 1.2686 | 0.1303 | 1.2686 | 1.1263 |
| No log | 0.5 | 24 | 0.9786 | 0.0520 | 0.9786 | 0.9893 |
| No log | 0.5417 | 26 | 0.8287 | 0.0316 | 0.8287 | 0.9104 |
| No log | 0.5833 | 28 | 0.7935 | 0.0316 | 0.7935 | 0.8908 |
| No log | 0.625 | 30 | 0.7533 | 0.0316 | 0.7533 | 0.8679 |
| No log | 0.6667 | 32 | 0.6665 | 0.0753 | 0.6665 | 0.8164 |
| No log | 0.7083 | 34 | 0.6449 | 0.3184 | 0.6449 | 0.8031 |
| No log | 0.75 | 36 | 0.6085 | 0.1907 | 0.6085 | 0.7800 |
| No log | 0.7917 | 38 | 0.5721 | 0.4768 | 0.5721 | 0.7564 |
| No log | 0.8333 | 40 | 0.5803 | 0.2019 | 0.5803 | 0.7617 |
| No log | 0.875 | 42 | 0.5634 | 0.2306 | 0.5634 | 0.7506 |
| No log | 0.9167 | 44 | 0.5410 | 0.5480 | 0.5410 | 0.7356 |
| No log | 0.9583 | 46 | 0.6210 | 0.4826 | 0.6210 | 0.7880 |
| No log | 1.0 | 48 | 0.5538 | 0.5020 | 0.5538 | 0.7442 |
| No log | 1.0417 | 50 | 0.5771 | 0.1895 | 0.5771 | 0.7597 |
| No log | 1.0833 | 52 | 0.5687 | 0.2029 | 0.5687 | 0.7541 |
| No log | 1.125 | 54 | 0.5276 | 0.5275 | 0.5276 | 0.7264 |
| No log | 1.1667 | 56 | 0.5453 | 0.5648 | 0.5453 | 0.7384 |
| No log | 1.2083 | 58 | 0.5077 | 0.5160 | 0.5077 | 0.7126 |
| No log | 1.25 | 60 | 0.5021 | 0.3805 | 0.5021 | 0.7086 |
| No log | 1.2917 | 62 | 0.5300 | 0.4025 | 0.5300 | 0.7280 |
| No log | 1.3333 | 64 | 0.4410 | 0.5028 | 0.4410 | 0.6641 |
| No log | 1.375 | 66 | 0.4923 | 0.5735 | 0.4923 | 0.7016 |
| No log | 1.4167 | 68 | 0.5228 | 0.4128 | 0.5228 | 0.7230 |
| No log | 1.4583 | 70 | 0.5592 | 0.2526 | 0.5592 | 0.7478 |
| No log | 1.5 | 72 | 0.4490 | 0.5548 | 0.4490 | 0.6700 |
| No log | 1.5417 | 74 | 0.4208 | 0.5768 | 0.4208 | 0.6487 |
| No log | 1.5833 | 76 | 0.4057 | 0.5977 | 0.4057 | 0.6370 |
| No log | 1.625 | 78 | 0.3985 | 0.5970 | 0.3985 | 0.6312 |
| No log | 1.6667 | 80 | 0.4250 | 0.6164 | 0.4250 | 0.6519 |
| No log | 1.7083 | 82 | 0.3931 | 0.6120 | 0.3931 | 0.6270 |
| No log | 1.75 | 84 | 0.3950 | 0.6040 | 0.3950 | 0.6285 |
| No log | 1.7917 | 86 | 0.4534 | 0.5853 | 0.4534 | 0.6733 |
| No log | 1.8333 | 88 | 0.4305 | 0.6085 | 0.4305 | 0.6561 |
| No log | 1.875 | 90 | 0.4210 | 0.6151 | 0.4210 | 0.6488 |
| No log | 1.9167 | 92 | 0.3865 | 0.5706 | 0.3865 | 0.6217 |
| No log | 1.9583 | 94 | 0.3874 | 0.5898 | 0.3874 | 0.6224 |
| No log | 2.0 | 96 | 0.3856 | 0.5708 | 0.3856 | 0.6210 |
| No log | 2.0417 | 98 | 0.3949 | 0.5304 | 0.3949 | 0.6284 |
| No log | 2.0833 | 100 | 0.3864 | 0.5834 | 0.3864 | 0.6216 |
| No log | 2.125 | 102 | 0.4122 | 0.6112 | 0.4122 | 0.6421 |
| No log | 2.1667 | 104 | 0.3922 | 0.5980 | 0.3922 | 0.6263 |
| No log | 2.2083 | 106 | 0.3827 | 0.5633 | 0.3827 | 0.6187 |
| No log | 2.25 | 108 | 0.4362 | 0.5978 | 0.4362 | 0.6604 |
| No log | 2.2917 | 110 | 0.4792 | 0.6069 | 0.4792 | 0.6922 |
| No log | 2.3333 | 112 | 0.5797 | 0.5806 | 0.5797 | 0.7614 |
| No log | 2.375 | 114 | 0.4223 | 0.5988 | 0.4223 | 0.6499 |
| No log | 2.4167 | 116 | 0.4240 | 0.5275 | 0.4240 | 0.6512 |
| No log | 2.4583 | 118 | 0.3983 | 0.5674 | 0.3983 | 0.6311 |
| No log | 2.5 | 120 | 0.5535 | 0.5916 | 0.5535 | 0.7440 |
| No log | 2.5417 | 122 | 0.4987 | 0.5933 | 0.4987 | 0.7062 |
| No log | 2.5833 | 124 | 0.3885 | 0.5930 | 0.3885 | 0.6233 |
| No log | 2.625 | 126 | 0.3974 | 0.6135 | 0.3974 | 0.6304 |
| No log | 2.6667 | 128 | 0.3875 | 0.5881 | 0.3875 | 0.6225 |
| No log | 2.7083 | 130 | 0.4514 | 0.6044 | 0.4514 | 0.6718 |
| No log | 2.75 | 132 | 0.5186 | 0.5907 | 0.5186 | 0.7202 |
| No log | 2.7917 | 134 | 0.4316 | 0.6159 | 0.4316 | 0.6570 |
| No log | 2.8333 | 136 | 0.3962 | 0.5359 | 0.3962 | 0.6294 |
| No log | 2.875 | 138 | 0.4328 | 0.5041 | 0.4328 | 0.6579 |
| No log | 2.9167 | 140 | 0.3864 | 0.5561 | 0.3864 | 0.6216 |
| No log | 2.9583 | 142 | 0.4972 | 0.6064 | 0.4972 | 0.7051 |
| No log | 3.0 | 144 | 0.6056 | 0.5604 | 0.6056 | 0.7782 |
| No log | 3.0417 | 146 | 0.4761 | 0.5995 | 0.4761 | 0.6900 |
| No log | 3.0833 | 148 | 0.3775 | 0.5663 | 0.3775 | 0.6144 |
| No log | 3.125 | 150 | 0.3730 | 0.5665 | 0.3730 | 0.6107 |
| No log | 3.1667 | 152 | 0.4008 | 0.6126 | 0.4008 | 0.6331 |
| No log | 3.2083 | 154 | 0.4142 | 0.6154 | 0.4142 | 0.6436 |
| No log | 3.25 | 156 | 0.3880 | 0.5830 | 0.3880 | 0.6229 |
| No log | 3.2917 | 158 | 0.4069 | 0.5207 | 0.4069 | 0.6379 |
| No log | 3.3333 | 160 | 0.3738 | 0.5850 | 0.3738 | 0.6114 |
| No log | 3.375 | 162 | 0.4507 | 0.6037 | 0.4507 | 0.6714 |
| No log | 3.4167 | 164 | 0.5241 | 0.6019 | 0.5241 | 0.7240 |
| No log | 3.4583 | 166 | 0.4212 | 0.5970 | 0.4212 | 0.6490 |
| No log | 3.5 | 168 | 0.3993 | 0.6004 | 0.3993 | 0.6319 |
| No log | 3.5417 | 170 | 0.4117 | 0.6118 | 0.4117 | 0.6416 |
| No log | 3.5833 | 172 | 0.4920 | 0.6171 | 0.4920 | 0.7014 |
| No log | 3.625 | 174 | 0.4386 | 0.6154 | 0.4386 | 0.6622 |
| No log | 3.6667 | 176 | 0.3971 | 0.5804 | 0.3971 | 0.6302 |
| No log | 3.7083 | 178 | 0.4044 | 0.6160 | 0.4044 | 0.6359 |
| No log | 3.75 | 180 | 0.4749 | 0.6503 | 0.4749 | 0.6892 |
| No log | 3.7917 | 182 | 0.4711 | 0.6558 | 0.4711 | 0.6864 |
| No log | 3.8333 | 184 | 0.4122 | 0.6236 | 0.4122 | 0.6421 |
| No log | 3.875 | 186 | 0.4383 | 0.6486 | 0.4383 | 0.6620 |
| No log | 3.9167 | 188 | 0.5540 | 0.6602 | 0.5540 | 0.7443 |
| No log | 3.9583 | 190 | 0.5229 | 0.6323 | 0.5229 | 0.7231 |
| No log | 4.0 | 192 | 0.4143 | 0.6229 | 0.4143 | 0.6437 |
| No log | 4.0417 | 194 | 0.4092 | 0.6111 | 0.4092 | 0.6397 |
| No log | 4.0833 | 196 | 0.4004 | 0.5858 | 0.4004 | 0.6328 |
| No log | 4.125 | 198 | 0.4240 | 0.6261 | 0.4240 | 0.6511 |
| No log | 4.1667 | 200 | 0.4509 | 0.6569 | 0.4509 | 0.6715 |
| No log | 4.2083 | 202 | 0.4430 | 0.6487 | 0.4430 | 0.6656 |
| No log | 4.25 | 204 | 0.4532 | 0.6583 | 0.4532 | 0.6732 |
| No log | 4.2917 | 206 | 0.4805 | 0.6885 | 0.4805 | 0.6931 |
| No log | 4.3333 | 208 | 0.4514 | 0.6544 | 0.4514 | 0.6719 |
| No log | 4.375 | 210 | 0.4293 | 0.6459 | 0.4293 | 0.6552 |
| No log | 4.4167 | 212 | 0.4752 | 0.6891 | 0.4752 | 0.6893 |
| No log | 4.4583 | 214 | 0.4589 | 0.6740 | 0.4589 | 0.6775 |
| No log | 4.5 | 216 | 0.4621 | 0.6708 | 0.4621 | 0.6798 |
| No log | 4.5417 | 218 | 0.4957 | 0.7025 | 0.4957 | 0.7041 |
| No log | 4.5833 | 220 | 0.4422 | 0.6703 | 0.4422 | 0.6650 |
| No log | 4.625 | 222 | 0.4888 | 0.6952 | 0.4888 | 0.6991 |
| No log | 4.6667 | 224 | 0.5012 | 0.7037 | 0.5012 | 0.7080 |
| No log | 4.7083 | 226 | 0.4190 | 0.6442 | 0.4190 | 0.6473 |
| No log | 4.75 | 228 | 0.4185 | 0.6441 | 0.4185 | 0.6469 |
| No log | 4.7917 | 230 | 0.4844 | 0.7022 | 0.4844 | 0.6960 |
| No log | 4.8333 | 232 | 0.5227 | 0.7161 | 0.5227 | 0.7230 |
| No log | 4.875 | 234 | 0.4612 | 0.6805 | 0.4612 | 0.6791 |
| No log | 4.9167 | 236 | 0.4814 | 0.6974 | 0.4814 | 0.6938 |
| No log | 4.9583 | 238 | 0.5769 | 0.7178 | 0.5769 | 0.7596 |
| No log | 5.0 | 240 | 0.4859 | 0.7007 | 0.4859 | 0.6971 |
| No log | 5.0417 | 242 | 0.4442 | 0.6773 | 0.4442 | 0.6665 |
| No log | 5.0833 | 244 | 0.4165 | 0.6516 | 0.4165 | 0.6454 |
| No log | 5.125 | 246 | 0.4202 | 0.6602 | 0.4202 | 0.6482 |
| No log | 5.1667 | 248 | 0.5381 | 0.7025 | 0.5381 | 0.7335 |
| No log | 5.2083 | 250 | 0.5786 | 0.6944 | 0.5786 | 0.7606 |
| No log | 5.25 | 252 | 0.4437 | 0.6777 | 0.4437 | 0.6661 |
| No log | 5.2917 | 254 | 0.4171 | 0.6386 | 0.4171 | 0.6458 |
| No log | 5.3333 | 256 | 0.4688 | 0.6921 | 0.4689 | 0.6847 |
| No log | 5.375 | 258 | 0.6081 | 0.7018 | 0.6081 | 0.7798 |
| No log | 5.4167 | 260 | 0.5369 | 0.6948 | 0.5369 | 0.7327 |
| No log | 5.4583 | 262 | 0.4274 | 0.5976 | 0.4274 | 0.6538 |
| No log | 5.5 | 264 | 0.4447 | 0.5303 | 0.4447 | 0.6668 |
| No log | 5.5417 | 266 | 0.4336 | 0.6118 | 0.4336 | 0.6585 |
| No log | 5.5833 | 268 | 0.5903 | 0.6987 | 0.5903 | 0.7683 |
| No log | 5.625 | 270 | 0.7164 | 0.6834 | 0.7164 | 0.8464 |
| No log | 5.6667 | 272 | 0.6050 | 0.7011 | 0.6050 | 0.7778 |
| No log | 5.7083 | 274 | 0.4476 | 0.6450 | 0.4476 | 0.6690 |
| No log | 5.75 | 276 | 0.4332 | 0.5720 | 0.4332 | 0.6582 |
| No log | 5.7917 | 278 | 0.4307 | 0.6173 | 0.4307 | 0.6563 |
| No log | 5.8333 | 280 | 0.5598 | 0.6834 | 0.5598 | 0.7482 |
| No log | 5.875 | 282 | 0.6906 | 0.6884 | 0.6906 | 0.8310 |
| No log | 5.9167 | 284 | 0.5984 | 0.6804 | 0.5984 | 0.7735 |
| No log | 5.9583 | 286 | 0.4716 | 0.6710 | 0.4716 | 0.6867 |
| No log | 6.0 | 288 | 0.4506 | 0.6608 | 0.4506 | 0.6713 |
| No log | 6.0417 | 290 | 0.5083 | 0.6793 | 0.5083 | 0.7129 |
| No log | 6.0833 | 292 | 0.5147 | 0.6824 | 0.5147 | 0.7174 |
| No log | 6.125 | 294 | 0.5067 | 0.6828 | 0.5067 | 0.7119 |
| No log | 6.1667 | 296 | 0.4556 | 0.6576 | 0.4556 | 0.6750 |
| No log | 6.2083 | 298 | 0.4707 | 0.6685 | 0.4707 | 0.6861 |
| No log | 6.25 | 300 | 0.6061 | 0.6875 | 0.6061 | 0.7785 |
| No log | 6.2917 | 302 | 0.6728 | 0.6809 | 0.6728 | 0.8203 |
| No log | 6.3333 | 304 | 0.5539 | 0.6913 | 0.5539 | 0.7443 |
| No log | 6.375 | 306 | 0.4382 | 0.6209 | 0.4382 | 0.6619 |
| No log | 6.4167 | 308 | 0.4331 | 0.5703 | 0.4331 | 0.6581 |
| No log | 6.4583 | 310 | 0.4449 | 0.6556 | 0.4449 | 0.6670 |
| No log | 6.5 | 312 | 0.5719 | 0.6966 | 0.5719 | 0.7563 |
| No log | 6.5417 | 314 | 0.6454 | 0.7033 | 0.6454 | 0.8034 |
| No log | 6.5833 | 316 | 0.5645 | 0.6968 | 0.5645 | 0.7513 |
| No log | 6.625 | 318 | 0.4620 | 0.6619 | 0.4620 | 0.6797 |
| No log | 6.6667 | 320 | 0.4495 | 0.6195 | 0.4495 | 0.6705 |
| No log | 6.7083 | 322 | 0.4732 | 0.6789 | 0.4732 | 0.6879 |
| No log | 6.75 | 324 | 0.5300 | 0.6948 | 0.5300 | 0.7280 |
| No log | 6.7917 | 326 | 0.5037 | 0.6855 | 0.5037 | 0.7097 |
| No log | 6.8333 | 328 | 0.4607 | 0.6759 | 0.4607 | 0.6787 |
| No log | 6.875 | 330 | 0.5010 | 0.6861 | 0.5010 | 0.7078 |
| No log | 6.9167 | 332 | 0.5533 | 0.6893 | 0.5533 | 0.7439 |
| No log | 6.9583 | 334 | 0.5596 | 0.6945 | 0.5596 | 0.7481 |
| No log | 7.0 | 336 | 0.5392 | 0.6886 | 0.5392 | 0.7343 |
| No log | 7.0417 | 338 | 0.4774 | 0.6620 | 0.4774 | 0.6909 |
| No log | 7.0833 | 340 | 0.4694 | 0.6394 | 0.4694 | 0.6852 |
| No log | 7.125 | 342 | 0.5115 | 0.6768 | 0.5115 | 0.7152 |
| No log | 7.1667 | 344 | 0.6132 | 0.7095 | 0.6132 | 0.7831 |
| No log | 7.2083 | 346 | 0.5996 | 0.7072 | 0.5996 | 0.7743 |
| No log | 7.25 | 348 | 0.5277 | 0.6910 | 0.5277 | 0.7264 |
| No log | 7.2917 | 350 | 0.4747 | 0.6340 | 0.4747 | 0.6890 |
| No log | 7.3333 | 352 | 0.4738 | 0.6486 | 0.4738 | 0.6883 |
| No log | 7.375 | 354 | 0.5077 | 0.6793 | 0.5077 | 0.7126 |
| No log | 7.4167 | 356 | 0.5837 | 0.7007 | 0.5837 | 0.7640 |
| No log | 7.4583 | 358 | 0.5623 | 0.6994 | 0.5623 | 0.7498 |
| No log | 7.5 | 360 | 0.4817 | 0.6795 | 0.4817 | 0.6941 |
| No log | 7.5417 | 362 | 0.4693 | 0.6629 | 0.4693 | 0.6851 |
| No log | 7.5833 | 364 | 0.5108 | 0.6813 | 0.5108 | 0.7147 |
| No log | 7.625 | 366 | 0.5911 | 0.7010 | 0.5911 | 0.7688 |
| No log | 7.6667 | 368 | 0.5656 | 0.7047 | 0.5656 | 0.7520 |
| No log | 7.7083 | 370 | 0.5176 | 0.6799 | 0.5176 | 0.7194 |
| No log | 7.75 | 372 | 0.5084 | 0.6783 | 0.5084 | 0.7131 |
| No log | 7.7917 | 374 | 0.5234 | 0.6770 | 0.5234 | 0.7235 |
| No log | 7.8333 | 376 | 0.5778 | 0.6995 | 0.5778 | 0.7601 |
| No log | 7.875 | 378 | 0.5784 | 0.6920 | 0.5784 | 0.7605 |
| No log | 7.9167 | 380 | 0.5380 | 0.6839 | 0.5380 | 0.7335 |
| No log | 7.9583 | 382 | 0.5480 | 0.6838 | 0.5480 | 0.7403 |
| No log | 8.0 | 384 | 0.5856 | 0.6847 | 0.5856 | 0.7653 |
| No log | 8.0417 | 386 | 0.6271 | 0.6930 | 0.6271 | 0.7919 |
| No log | 8.0833 | 388 | 0.5795 | 0.6822 | 0.5795 | 0.7613 |
| No log | 8.125 | 390 | 0.5065 | 0.6551 | 0.5065 | 0.7117 |
| No log | 8.1667 | 392 | 0.4912 | 0.6515 | 0.4912 | 0.7009 |
| No log | 8.2083 | 394 | 0.5290 | 0.6635 | 0.5290 | 0.7273 |
| No log | 8.25 | 396 | 0.5962 | 0.6864 | 0.5962 | 0.7722 |
| No log | 8.2917 | 398 | 0.6260 | 0.6898 | 0.6260 | 0.7912 |
| No log | 8.3333 | 400 | 0.5698 | 0.6918 | 0.5698 | 0.7549 |
| No log | 8.375 | 402 | 0.4957 | 0.6631 | 0.4957 | 0.7040 |
| No log | 8.4167 | 404 | 0.4712 | 0.6402 | 0.4712 | 0.6864 |
| No log | 8.4583 | 406 | 0.4821 | 0.6604 | 0.4821 | 0.6943 |
| No log | 8.5 | 408 | 0.5369 | 0.6874 | 0.5369 | 0.7328 |
| No log | 8.5417 | 410 | 0.5864 | 0.6789 | 0.5864 | 0.7658 |
| No log | 8.5833 | 412 | 0.5693 | 0.6734 | 0.5693 | 0.7545 |
| No log | 8.625 | 414 | 0.5160 | 0.6800 | 0.5160 | 0.7184 |
| No log | 8.6667 | 416 | 0.4742 | 0.6562 | 0.4742 | 0.6886 |
| No log | 8.7083 | 418 | 0.4792 | 0.6613 | 0.4792 | 0.6922 |
| No log | 8.75 | 420 | 0.5247 | 0.6751 | 0.5247 | 0.7244 |
| No log | 8.7917 | 422 | 0.5716 | 0.6781 | 0.5716 | 0.7561 |
| No log | 8.8333 | 424 | 0.5928 | 0.6819 | 0.5928 | 0.7699 |
| No log | 8.875 | 426 | 0.5721 | 0.6802 | 0.5721 | 0.7563 |
| No log | 8.9167 | 428 | 0.5185 | 0.6805 | 0.5185 | 0.7201 |
| No log | 8.9583 | 430 | 0.4814 | 0.6604 | 0.4814 | 0.6938 |
| No log | 9.0 | 432 | 0.4823 | 0.6578 | 0.4823 | 0.6945 |
| No log | 9.0417 | 434 | 0.5074 | 0.6720 | 0.5074 | 0.7123 |
| No log | 9.0833 | 436 | 0.5592 | 0.6892 | 0.5592 | 0.7478 |
| No log | 9.125 | 438 | 0.6047 | 0.6928 | 0.6047 | 0.7776 |
| No log | 9.1667 | 440 | 0.6198 | 0.6935 | 0.6198 | 0.7873 |
| No log | 9.2083 | 442 | 0.5909 | 0.6912 | 0.5909 | 0.7687 |
| No log | 9.25 | 444 | 0.5490 | 0.6911 | 0.5490 | 0.7409 |
| No log | 9.2917 | 446 | 0.5281 | 0.6831 | 0.5281 | 0.7267 |
| No log | 9.3333 | 448 | 0.5187 | 0.6817 | 0.5187 | 0.7202 |
| No log | 9.375 | 450 | 0.5260 | 0.6816 | 0.5260 | 0.7253 |
| No log | 9.4167 | 452 | 0.5262 | 0.6816 | 0.5262 | 0.7254 |
| No log | 9.4583 | 454 | 0.5384 | 0.6894 | 0.5384 | 0.7338 |
| No log | 9.5 | 456 | 0.5464 | 0.6813 | 0.5464 | 0.7392 |
| No log | 9.5417 | 458 | 0.5426 | 0.6813 | 0.5426 | 0.7366 |
| No log | 9.5833 | 460 | 0.5340 | 0.6781 | 0.5340 | 0.7308 |
| No log | 9.625 | 462 | 0.5341 | 0.6781 | 0.5341 | 0.7308 |
| No log | 9.6667 | 464 | 0.5366 | 0.6763 | 0.5366 | 0.7326 |
| No log | 9.7083 | 466 | 0.5371 | 0.6745 | 0.5371 | 0.7328 |
| No log | 9.75 | 468 | 0.5408 | 0.6757 | 0.5408 | 0.7354 |
| No log | 9.7917 | 470 | 0.5518 | 0.6774 | 0.5518 | 0.7429 |
| No log | 9.8333 | 472 | 0.5634 | 0.6883 | 0.5634 | 0.7506 |
| No log | 9.875 | 474 | 0.5658 | 0.6883 | 0.5658 | 0.7522 |
| No log | 9.9167 | 476 | 0.5639 | 0.6860 | 0.5639 | 0.7509 |
| No log | 9.9583 | 478 | 0.5630 | 0.6860 | 0.5630 | 0.7503 |
| No log | 10.0 | 480 | 0.5628 | 0.6860 | 0.5628 | 0.7502 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
sentence-transformers/msmarco-distilbert-cos-v5
|
sentence-transformers
| 2024-11-05T17:05:46Z | 232,891 | 10 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"openvino",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"arxiv:1908.10084",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
language:
- en
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
# msmarco-distilbert-cos-v5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and was designed for **semantic search**. It has been trained on 500k (query, answer) pairs from the [MS MARCO Passages dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking). For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-cos-v5')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take average of all tokens, weighted by the attention mask
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output.last_hidden_state #last_hidden_state contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-distilbert-cos-v5")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-distilbert-cos-v5")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
The following table lists some technical details on how this model must be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 768 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |
Note: When loaded with `sentence-transformers`, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent; dot-product is preferred as it is faster. For normalized embeddings, squared Euclidean distance is a monotonic function of the dot-product (`||a - b||^2 = 2 - 2 * (a · b)`), so it yields the same ranking and can also be used.
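Because the embeddings are already unit-length, dot-product and cosine-similarity return identical scores. A minimal self-contained check, using the standard `util` helpers from sentence-transformers:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/msmarco-distilbert-cos-v5")
emb = model.encode(
    ["How many people live in London?", "Around 9 Million people live in London"],
    convert_to_tensor=True,
)

# Both calls print the same score, since the embeddings are normalized.
print(util.dot_score(emb[0], emb[1]))
print(util.cos_sim(emb[0], emb[1]))
```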
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
sentence-transformers/msmarco-MiniLM-L-6-v3
|
sentence-transformers
| 2024-11-05T16:56:21Z | 18,151 | 13 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"openvino",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
# sentence-transformers/msmarco-MiniLM-L-6-v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L-6-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-MiniLM-L-6-v3')
model = AutoModel.from_pretrained('sentence-transformers/msmarco-MiniLM-L-6-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-MiniLM-L-6-v3)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
sentence-transformers/msmarco-MiniLM-L-12-v3
|
sentence-transformers
| 2024-11-05T16:55:38Z | 64,413 | 23 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"openvino",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
# sentence-transformers/msmarco-MiniLM-L-12-v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L-12-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-MiniLM-L-12-v3')
model = AutoModel.from_pretrained('sentence-transformers/msmarco-MiniLM-L-12-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-MiniLM-L-12-v3)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
griffio/vit-base-patch16-224-in21k-rotated-dungeons-v001
|
griffio
| 2024-11-05T16:54:29Z | 217 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-05T16:54:14Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-rotated-dungeons-v001
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: rotated_maps
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8214285714285714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-rotated-dungeons-v001
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the rotated_maps dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1914
- Accuracy: 0.8214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.4924 | 0.5 |
| No log | 2.0 | 12 | 1.4798 | 0.5 |
| No log | 3.0 | 18 | 1.4518 | 0.5357 |
| No log | 4.0 | 24 | 1.4402 | 0.5357 |
| No log | 5.0 | 30 | 1.4052 | 0.5357 |
| No log | 6.0 | 36 | 1.3828 | 0.6786 |
| No log | 7.0 | 42 | 1.3588 | 0.6786 |
| No log | 8.0 | 48 | 1.3295 | 0.6786 |
| No log | 9.0 | 54 | 1.3263 | 0.7143 |
| No log | 10.0 | 60 | 1.3072 | 0.75 |
| No log | 11.0 | 66 | 1.2918 | 0.7143 |
| No log | 12.0 | 72 | 1.2718 | 0.8214 |
| No log | 13.0 | 78 | 1.2728 | 0.7857 |
| No log | 14.0 | 84 | 1.2628 | 0.75 |
| No log | 15.0 | 90 | 1.2333 | 0.7857 |
| No log | 16.0 | 96 | 1.2253 | 0.7857 |
| No log | 17.0 | 102 | 1.2240 | 0.7857 |
| No log | 18.0 | 108 | 1.2249 | 0.7857 |
| No log | 19.0 | 114 | 1.2177 | 0.7857 |
| No log | 20.0 | 120 | 1.2098 | 0.7857 |
| No log | 21.0 | 126 | 1.2029 | 0.8214 |
| No log | 22.0 | 132 | 1.1875 | 0.8571 |
| No log | 23.0 | 138 | 1.1873 | 0.8571 |
| No log | 24.0 | 144 | 1.2051 | 0.7857 |
| No log | 25.0 | 150 | 1.1914 | 0.8214 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
sdadas/mmlw-retrieval-e5-small
|
sdadas
| 2024-11-05T16:53:18Z | 10 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"information-retrieval",
"pl",
"arxiv:2402.13350",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-10-18T18:52:30Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- information-retrieval
language: pl
license: apache-2.0
widget:
- source_sentence: "query: Jak dożyć 100 lat?"
sentences:
- "passage: Trzeba zdrowo się odżywiać i uprawiać sport."
- "passage: Trzeba pić alkohol, imprezować i jeździć szybkimi autami."
- "passage: Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."
---
<h1 align="center">MMLW-retrieval-e5-small</h1>
MMLW (muszę mieć lepszą wiadomość, Polish for "I must have better news") are neural text encoders for Polish.
This model is optimized for information retrieval tasks. It can transform queries and passages to 384-dimensional vectors.
The model was developed using a two-step procedure:
- In the first step, it was initialized with the multilingual E5 checkpoint, and then trained with the [multilingual knowledge distillation method](https://aclanthology.org/2020.emnlp-main.365/) on a diverse corpus of 60 million Polish-English text pairs. We utilised [English FlagEmbeddings (BGE)](https://huggingface.co/BAAI/bge-small-en) as teacher models for distillation.
- The second step involved fine-tuning the obtained models with a contrastive loss on the [Polish MS MARCO](https://huggingface.co/datasets/clarin-knext/msmarco-pl) training split. In order to improve the efficiency of contrastive training, we used large batch sizes - 1152 for small, 768 for base, and 288 for large models. Fine-tuning was conducted on a cluster of 12 A100 GPUs. A minimal sketch of this kind of in-batch contrastive objective is shown below.
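To illustrate the kind of objective used in the second step, here is a minimal PyTorch sketch of a contrastive loss with in-batch negatives; the temperature value is an illustrative assumption, not the authors' exact setting:
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb, passage_emb, temperature=0.05):
    # query_emb, passage_emb: (batch, dim). Row i of passage_emb is the
    # positive for row i of query_emb; all other rows act as negatives,
    # which is why larger batch sizes strengthen the training signal.
    query_emb = F.normalize(query_emb, p=2, dim=1)
    passage_emb = F.normalize(passage_emb, p=2, dim=1)
    scores = query_emb @ passage_emb.T / temperature  # (batch, batch)
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```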
⚠️ **2023-12-26:** We have updated the model to a new version with improved results. You can still download the previous version using the **v1** tag: `AutoModel.from_pretrained("sdadas/mmlw-retrieval-e5-small", revision="v1")` ⚠️
## Usage (Sentence-Transformers)
⚠️ Our dense retrievers require the use of specific prefixes and suffixes when encoding texts. For this model, queries should be prefixed with **"query: "** and passages with **"passage: "** ⚠️
You can use the model like this with [sentence-transformers](https://www.SBERT.net):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
query_prefix = "query: "
answer_prefix = "passage: "
queries = [query_prefix + "Jak dożyć 100 lat?"]
answers = [
answer_prefix + "Trzeba zdrowo się odżywiać i uprawiać sport.",
answer_prefix + "Trzeba pić alkohol, imprezować i jeździć szybkimi autami.",
answer_prefix + "Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."
]
model = SentenceTransformer("sdadas/mmlw-retrieval-e5-small")
queries_emb = model.encode(queries, convert_to_tensor=True, show_progress_bar=False)
answers_emb = model.encode(answers, convert_to_tensor=True, show_progress_bar=False)
best_answer = cos_sim(queries_emb, answers_emb).argmax().item()
print(answers[best_answer])
# Trzeba zdrowo się odżywiać i uprawiać sport.
```
## Evaluation Results
The model achieves **NDCG@10** of **52.34** on the Polish Information Retrieval Benchmark. See [PIRB Leaderboard](https://huggingface.co/spaces/sdadas/pirb) for detailed results.
## Acknowledgements
This model was trained with the A100 GPU cluster support delivered by the Gdansk University of Technology within the TASK center initiative.
## Citation
```bibtex
@article{dadas2024pirb,
title={{PIRB}: A Comprehensive Benchmark of Polish Dense and Hybrid Text Retrieval Methods},
author={Sławomir Dadas and Michał Perełkiewicz and Rafał Poświata},
year={2024},
eprint={2402.13350},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Sayankotor/llama-2-7b_exp_gen_lora
|
Sayankotor
| 2024-11-05T16:49:23Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T16:46:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/Q25-1.5B-VeoLu-GGUF
|
QuantFactory
| 2024-11-05T16:45:57Z | 121 | 2 |
peft
|
[
"peft",
"gguf",
"mergekit",
"merge",
"llama-factory",
"lora",
"dataset:allura-org/fujin-cleaned-stage-1",
"dataset:Dampfinchen/Creative_Writing_Multiturn",
"dataset:ToastyPigeon/SpringDragon",
"dataset:allura-org/medquad_sharegpt",
"dataset:allura-org/scienceqa_sharegpt",
"dataset:Alignment-Lab-AI/orcamath-sharegpt",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-05T06:04:10Z |
---
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
library_name: peft
tags:
- mergekit
- merge
- llama-factory
- lora
datasets:
- allura-org/fujin-cleaned-stage-1
- Dampfinchen/Creative_Writing_Multiturn
- ToastyPigeon/SpringDragon
- allura-org/medquad_sharegpt
- allura-org/scienceqa_sharegpt
- Alignment-Lab-AI/orcamath-sharegpt
---
[](https://hf.co/QuantFactory)
# QuantFactory/Q25-1.5B-VeoLu-GGUF
This is quantized version of [Alfitaria/Q25-1.5B-VeoLu](https://huggingface.co/Alfitaria/Q25-1.5B-VeoLu) created using llama.cpp
# Original Model Card
# Q25-1.5-VeoLu-R2

[*A source of life and hope for the land.*](https://www.youtube.com/watch?v=TJRq1Ag2Wmw)
Q25-1.5B-Veo Lu is a tiny General-Purpose Creative model, made up of a merge of bespoke finetunes on Qwen 2.5-1.5B-Instruct.
Inspired by the success of [MN-12B-Mag Mell](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1) and [MS-Meadowlark-22B](https://huggingface.co/allura-org/MS-Meadowlark-22B), Veo Lu was trained on a healthy, balanced diet of Internet fiction, roleplaying, adventuring, and reasoning/general knowledge.
The components of Veo Lu are:
* Bard (pretrain, writing): [Fujin (Cleaned/extended Rosier)](https://huggingface.co/allura-org/fujin-cleaned-stage-1)
* Scribe (pretrain, roleplay): [Creative Writing Multiturn](https://huggingface.co/Dampfinchen/Creative_Writing_Multiturn)
* Cartographer (pretrain, adventuring): [SpringDragon](https://huggingface.co/ToastyPigeon/SpringDragon)
* Alchemist (SFT, science/reasoning): [ScienceQA,](https://huggingface.co/allura-org/scienceqa_sharegpt) [MedquadQA,](https://huggingface.co/allura-org/medquad_sharegpt) [Orca Math Word Problems](https://huggingface.co/Alignment-Lab-AI/orcamath-sharegpt)
This model is capable of carrying on a scene without going completely off the rails. That being said, it only has 1.5B parameters. So please, for the love of God, *manage your expectations.*
Since it's Qwen, use ChatML formatting. Turn the temperature down to ~0.7-0.8 and try a dash of rep-pen.
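A minimal sketch of the recommended setup with 🤗 Transformers (the sampler values come from the paragraph above; the prompt, token budget, and repetition penalty of 1.05 are illustrative assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alfitaria/Q25-1.5B-VeoLu"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Qwen tokenizers ship a ChatML chat template, so apply_chat_template
# produces the <|im_start|>...<|im_end|> framing automatically.
messages = [{"role": "user", "content": "Write the opening line of a pulp adventure."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True,
                         temperature=0.7, repetition_penalty=1.05)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```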
GGUFs coming soon, but honestly, the full-precision model is 3.5GB in size. You might wanna have a go at running this unquantized with vLLM.
```
pip install vllm
vllm serve Alfitaria/Q25-1.5B-VeoLu --max-model-len 16384 --max-num-seqs 1
```
Made by inflatebot.
Special thanks to our friends at [Allura](https://huggingface.co/allura-org), and especially to [Auri](https://huggingface.co/AuriAetherwiing), who basically held my hand through the whole process. Her effort and enthusiasm carried this project forward.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Qwen/Qwen2.5-1.5B-Instruct
dtype: bfloat16
merge_method: task_arithmetic
parameters:
normalize: 1.0
slices:
- sources:
- layer_range: [0, 28]
model: /home/asriel/AI/text/models/bard
parameters:
weight: 1.0
- layer_range: [0, 28]
model: /home/asriel/AI/text/models/scribe
parameters:
weight: 1.0
- layer_range: [0, 28]
model: /home/asriel/AI/text/models/cartographer
parameters:
weight: 1.0
- layer_range: [0, 28]
model: /home/asriel/AI/text/models/alchemist
parameters:
weight: 1.0
- layer_range: [0, 28]
model: Qwen/Qwen2.5-1.5B-Instruct
```
|
ShaySha/musicgen-large-lora-acid_techno_4ep_25da-colab
|
ShaySha
| 2024-11-05T16:39:19Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"text-to-audio",
"acid_data",
"generated_from_trainer",
"base_model:facebook/musicgen-large",
"base_model:adapter:facebook/musicgen-large",
"license:cc-by-nc-4.0",
"region:us"
] |
text-to-audio
| 2024-11-05T16:38:32Z |
---
base_model: facebook/musicgen-large
library_name: peft
license: cc-by-nc-4.0
tags:
- text-to-audio
- acid_data
- generated_from_trainer
model-index:
- name: musicgen-large-lora-acid_techno_4ep_25da-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# musicgen-large-lora-acid_techno_4ep_25da-colab
This model is a fine-tuned version of [facebook/musicgen-large](https://huggingface.co/facebook/musicgen-large) on the ShaySha/acid_data dataset.
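The sections below were left unfilled by the trainer. As an untested, minimal sketch, the PEFT adapter could be loaded on top of the base model roughly like this (the prompt, sampling flags, and token budget are illustrative assumptions):
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
from peft import PeftModel

# Load the frozen base model, then attach this repository's LoRA adapter.
base = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-large")
model = PeftModel.from_pretrained(base, "ShaySha/musicgen-large-lora-acid_techno_4ep_25da-colab")
processor = AutoProcessor.from_pretrained("facebook/musicgen-large")

inputs = processor(text=["acid techno loop with a squelching 303 bassline"],
                   padding=True, return_tensors="pt")
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```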
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.99) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.47.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 3.1.0
- Tokenizers 0.20.2
|
griffio/vit-base-patch16-224-in21k-rotated-dungeons-v18
|
griffio
| 2024-11-05T16:37:04Z | 195 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-04T17:06:46Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-rotated-dungeons-v18
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: rotated_maps
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.35714285714285715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-rotated-dungeons-v18
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the rotated_maps dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5308
- Accuracy: 0.3571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 8
- seed: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2464 | 12.5 | 100 | 1.6494 | 0.2857 |
| 0.9502 | 25.0 | 200 | 1.5193 | 0.3929 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
plj09876/japanesezephyr-7b-lora-8bit
|
plj09876
| 2024-11-05T16:35:09Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-05T16:30:49Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
backyardai/Rocinante-12B-v1.1-GGUF
|
backyardai
| 2024-11-05T16:32:41Z | 437 | 0 | null |
[
"gguf",
"base_model:TheDrummer/Rocinante-12B-v1.1",
"base_model:quantized:TheDrummer/Rocinante-12B-v1.1",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-11-05T16:08:38Z |
---
base_model: TheDrummer/Rocinante-12B-v1.1
license: other
model_name: Rocinante-12B-v1.1-GGUF
quantized_by: brooketh
parameter_count: 12247782400
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# Rocinante V1.1 12B
- **Creator:** [TheDrummer](https://huggingface.co/TheDrummer/)
- **Original:** [Rocinante V1.1 12B](https://huggingface.co/TheDrummer/Rocinante-12B-v1.1)
- **Date Created:** 2024-08-15
- **Trained Context:** 1024000 tokens
- **Description:** A versatile model for engaging and adventure-filled storytelling by TheDrummer. Similar to Cydonia but smaller in size, so it will run well on more machines. Based on Mistral Nemo.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher-end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
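As a toy illustration of the idea (not llama.cpp's actual quantization schemes, which are block-wise and considerably more sophisticated), mapping 32-bit floats to 4-bit integers can look like this:
```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    # Symmetric 4-bit quantization: map floats to integers in [-8, 7]
    # using a single per-tensor scale factor.
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale  # lossy: the discarded precision is gone

w = np.random.randn(4).astype(np.float32)
q, s = quantize_int4(w)
print(w)
print(dequantize(q, s))  # reconstructed values only approximate the originals
```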
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
***
|
luissattelmayer/nli-stance-finetuning-laurer
|
luissattelmayer
| 2024-11-05T16:27:19Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:mlburnham/Political_DEBATE_large_v1.0",
"base_model:finetune:mlburnham/Political_DEBATE_large_v1.0",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-25T15:46:11Z |
---
library_name: transformers
license: mit
base_model: mlburnham/Political_DEBATE_large_v1.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: nli-stance-finetuning-laurer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-stance-finetuning-laurer
This model is a fine-tuned version of [mlburnham/Political_DEBATE_large_v1.0](https://huggingface.co/mlburnham/Political_DEBATE_large_v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0075
- Accuracy: 0.8140
- F1 Macro: 0.7763
- Accuracy Balanced: 0.7736
- F1 Micro: 0.8140
- Precision Macro: 0.8120
- Recall Macro: 0.7736
- Precision Micro: 0.8140
- Recall Micro: 0.8140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 10
- seed: 1234
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Accuracy Balanced | F1 Micro | Precision Macro | Recall Macro | Precision Micro | Recall Micro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------------:|:--------:|:---------------:|:------------:|:---------------:|:------------:|
| 0.3663 | 1.0 | 538 | 1.0075 | 0.8140 | 0.7763 | 0.7736 | 0.8140 | 0.8120 | 0.7736 | 0.8140 | 0.8140 |
| 0.3854 | 2.0 | 1076 | 1.1733 | 0.7326 | 0.6903 | 0.7285 | 0.7326 | 0.6831 | 0.7285 | 0.7326 | 0.7326 |
| 0.1758 | 3.0 | 1614 | 1.4595 | 0.7326 | 0.6823 | 0.7215 | 0.7326 | 0.6760 | 0.7215 | 0.7326 | 0.7326 |
| 0.0743 | 4.0 | 2152 | 1.6271 | 0.7442 | 0.6692 | 0.6851 | 0.7442 | 0.6634 | 0.6851 | 0.7442 | 0.7442 |
| 0.0343 | 5.0 | 2690 | 1.7175 | 0.7326 | 0.6575 | 0.6750 | 0.7326 | 0.6531 | 0.6750 | 0.7326 | 0.7326 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
JhonMR/RoBertaLex_v12
|
JhonMR
| 2024-11-05T16:26:51Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:PlanTL-GOB-ES/RoBERTalex",
"base_model:finetune:PlanTL-GOB-ES/RoBERTalex",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T16:22:04Z |
---
library_name: transformers
license: apache-2.0
base_model: PlanTL-GOB-ES/RoBERTalex
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: RoBertaLex_v12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBertaLex_v12
This model is a fine-tuned version of [PlanTL-GOB-ES/RoBERTalex](https://huggingface.co/PlanTL-GOB-ES/RoBERTalex) on the None dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.8997
- F1: 0.8995
- Precision: 0.9028
- Recall: 0.9010
- Loss: 0.4588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 12
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
Grohv/bodypaint_lora
|
Grohv
| 2024-11-05T16:25:30Z | 9 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-05T16:25:01Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/bodypaint-lora_001225_00_20241105162125.png
text: Portrait of a beautiful woman in the style of bodypaint_lora
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: bodypaint_lora
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# bodypaint_lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `bodypaint_lora` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
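For diffusers users, a minimal, untested sketch (requires access to the gated FLUX.1-dev base model; the inference parameters are assumptions, adjust to taste):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Grohv/bodypaint_lora")  # this repository
pipe.to("cuda")

image = pipe(
    "Portrait of a beautiful woman in the style of bodypaint_lora",  # trigger word included
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("bodypaint.png")
```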
|
HoneyBadger2989/Llama-3-Groq-8B-Tool-Use-GGUF
|
HoneyBadger2989
| 2024-11-05T16:24:46Z | 15 | 0 | null |
[
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"groq",
"tool-use",
"function-calling",
"autoquant",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-05T14:47:13Z |
---
base_model: meta-llama/Meta-Llama-3-8B
language:
- en
license: llama3
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- groq
- tool-use
- function-calling
- autoquant
- gguf
---
# Llama-3-Groq-8B-Tool-Use
This is the 8B parameter version of the Llama 3 Groq Tool Use model, specifically designed for advanced tool use and function calling tasks.
## Model Details
- **Model Type:** Causal language model fine-tuned for tool use
- **Language(s):** English
- **License:** Meta Llama 3 Community License
- **Model Architecture:** Optimized transformer
- **Training Approach:** Full fine-tuning and Direct Preference Optimization (DPO) on Llama 3 8B base model
- **Input:** Text
- **Output:** Text, with enhanced capabilities for tool use and function calling
## Performance
- **Berkeley Function Calling Leaderboard (BFCL) Score:** 89.06% overall accuracy
- This score represents the best performance among all open-source 8B LLMs on the BFCL
## Usage and Limitations
This model is designed for research and development in tool use and function calling scenarios. It excels at tasks involving API interactions, structured data manipulation, and complex tool use. However, users should note:
- For general knowledge or open-ended tasks, a general-purpose language model may be more suitable
- The model may still produce inaccurate or biased content in some cases
- Users are responsible for implementing appropriate safety measures for their specific use case
Note the model is quite sensitive to the `temperature` and `top_p` sampling configuration. Start at `temperature=0.5, top_p=0.65` and move up or down as needed.
We'd like to give a special shoutout to [@NousResearch](https://x.com/NousResearch) for pushing open source tool use forward with their public & open exploration of tool use in LLMs.
Text prompt example:
```
<|start_header_id|>system<|end_header_id|>
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"name": <function-name>,"arguments": <args-dict>}
</tool_call>
Here are the available tools:
<tools> {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"properties": {
"location": {
"description": "The city and state, e.g. San Francisco, CA",
"type": "string"
},
"unit": {
"enum": [
"celsius",
"fahrenheit"
],
"type": "string"
}
},
"required": [
"location"
],
"type": "object"
}
} </tools><|eot_id|><|start_header_id|>user<|end_header_id|>
What is the weather like in San Francisco?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
<tool_call>
{"id":"call_deok","name":"get_current_weather","arguments":{"location":"San Francisco","unit":"celsius"}}
</tool_call><|eot_id|><|start_header_id|>tool<|end_header_id|>
<tool_response>
{"id":"call_deok","result":{"temperature":"72","unit":"celsius"}}
</tool_response><|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
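As a minimal sketch with `transformers` (assuming the repo ships a chat template that produces the format above — check the tokenizer config before relying on it), you can run the model with the recommended sampling settings:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Groq/Llama-3-Groq-8B-Tool-Use"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What is the weather like in San Francisco?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Recommended starting point: temperature=0.5, top_p=0.65
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.5, top_p=0.65)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```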
## Ethical Considerations
While fine-tuned for tool use, this model inherits the ethical considerations of the base Llama 3 model. Use responsibly and implement additional safeguards as needed for your application.
## Availability
The model is available through:
- [Groq API console](https://console.groq.com)
- [Hugging Face](https://huggingface.co/Groq/Llama-3-Groq-8B-Tool-Use)
For full details on responsible use, ethical considerations, and latest benchmarks, please refer to the [official Llama 3 documentation](https://llama.meta.com/) and the Groq model card.
|
emozilla/llama3-1.4b-init-2
|
emozilla
| 2024-11-05T16:23:55Z | 127 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T02:38:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jiangchengchengNLP/Chinese_resume_extract
|
jiangchengchengNLP
| 2024-11-05T16:22:06Z | 11 | 0 | null |
[
"safetensors",
"roberta",
"license:apache-2.0",
"region:us"
] | null | 2024-11-05T14:51:17Z |
---
license: apache-2.0
---
# Chinese_resume_extract
## Model Overview
This model fine-tunes RoBERTa for the task of extracting information from Chinese resumes. The fine-tuned model aims to improve the performance of BERT-family models on Chinese resume information extraction and to fill the gap left by the lack of open-source Chinese resume extraction models.
## Training Data
This model uses the dataset from a Tianchi competition project [competition page](https://tianchi.aliyun.com/competition/entrance/231771/information). The data processing pipeline includes:
1. Raw resume text extraction: text was pulled from the Word documents provided by the competition, via both paragraphs and tables, and then deduplicated.
2. Label generation: the required information fields were extracted recursively and mapped to category labels (no additional BIO tags); the raw text was then annotated using prefix-tree (trie) search.
3. Data synthesis: the training data was augmented manually, including resampling and random sampling to adjust the proportion of each category, ensuring that project-experience and project-responsibility data make up no more than 12% of the total, and adding noise data (negative samples completely unrelated to resumes), yielding 3,000 examples in total.
## Model Architecture
The base model is RoBERTa-base, using Benjamin's `roberta-base-wechsel-chinese`, which provides Chinese semantic pretraining.
## Training Parameters
- Optimizer: AdamW
- Learning rate: 3e-5
- Weight decay: bias, gamma, and beta weights are excluded from weight decay; all other weights use a decay rate of 0.01
- Loss function: nn.CrossEntropyLoss
## Training Procedure
1. The model was first trained on 500 randomly sampled examples, bringing the loss down to about 1.2.
2. It was then trained further on the 3,000 synthesized examples, for a total of 15 epochs.
3. Training hardware: a V100 32 GB GPU.
## Results
The model has not yet been evaluated on a test set; an evaluation will be considered later.
The minimum average loss achieved during training was 0.11216351033912765 (for reference only).
## Usage Example
```python
import torch
from transformers import RobertaForTokenClassification, AutoTokenizer

model = RobertaForTokenClassification.from_pretrained('jiangchengchengNLP/Chinese_resume_extract')
tokenizer = AutoTokenizer.from_pretrained('jiangchengchengNLP/Chinese_resume_extract', do_lower_case=True)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.eval()
model.to(device)
import json
label_list = {
    0: '其他',      # other
    1: '电话',      # phone number
    2: '毕业时间',  # graduation date
    3: '出生日期',  # date of birth
    4: '项目名称',  # project name
    5: '毕业院校',  # graduating school
    6: '职务',      # job title
    7: '籍贯',      # place of origin
    8: '学位',      # degree
    9: '性别',      # gender
    10: '姓名',     # name
    11: '工作时间', # employment period
    12: '落户市县', # registered city/county
    13: '项目时间', # project period
    14: '最高学历', # highest education level
    15: '工作单位', # employer
    16: '政治面貌', # political status
    17: '工作内容', # job duties
    18: '项目责任', # project responsibilities
}

def get_info(text):
    # Text preprocessing
    text = text.strip()
    text = text.replace('\n', ',')  # replace newlines with commas
    text = text.replace('\r', ',')  # replace carriage returns with commas
    text = text.replace('\t', ',')  # replace tabs with commas
    text = text.replace(' ', ',')   # replace spaces with commas
    # Collapse consecutive commas into one
    while ',,' in text:
        text = text.replace(',,', ',')
    block_list = []
    if len(text) > 300:
        # Chunking strategy: first split into sentences,
        # then join sentences into blocks of at most 300 characters;
        # once a block would exceed 300, the current sentence starts the next block
        sentence_list = text.split(',')
        boundary = 300
        block_list = []
        block = sentence_list[0]
        for i in range(1, len(sentence_list)):
            if len(block) + len(sentence_list[i]) <= boundary:
                block += sentence_list[i]
            else:
                block_list.append(block)
                block = sentence_list[i]
        block_list.append(block)
    else:
        block_list.append(text)
    _input = tokenizer(block_list, return_tensors='pt', padding=True, truncation=True)
    # Move inputs to the GPU if one is available
    input_ids = _input['input_ids'].to(device)
    attention_mask = _input['attention_mask'].to(device)
    # Model inference
    with torch.no_grad():
        logits = model(input_ids=input_ids, attention_mask=attention_mask)[0]
    # Get the predicted label IDs
    #print(logits.shape)
    ids = torch.argmax(logits, dim=-1)
    input_ids = input_ids.reshape(-1)
    # Flatten along the last dimension; label 0 acts as the span separator
    ids = ids.reshape(-1)
    # Group the extracted content by label
    extracted_info = {}
    word_list = []
    flag = None
    for idx, label_id in enumerate(ids):
        label_id = label_id.item()
        if label_id != 0 and (flag is None or flag == label_id):  # a non-zero label starts or continues a span
            if flag is None:
                flag = label_id
            label = label_list[label_id]  # look up the label name
            word_list.append(input_ids[idx].item())
            if label not in extracted_info:
                extracted_info[label] = []
        else:
            if word_list:
                # skip special tokens when decoding
                sentence = ''.join(tokenizer.decode(word_list, skip_special_tokens=True))
                extracted_info[label].append(sentence)
            flag = None
            word_list = []
            if label_id != 0:
                label = label_list[label_id]  # look up the label name
                word_list.append(input_ids[idx].item())
                if label not in extracted_info:
                    extracted_info[label] = []
    # Return the extracted content (a dict that is JSON-serializable)
    return extracted_info
```
```python
file_name = r'roberta-base-ner\example\test.docx'  # the test docx is included in the repository
from docx import Document

def read_docx(file_path):
    # Create a Document object
    doc = Document(file_path)
    # Collect text into a set (deduplicated)
    text = set()
    # Walk every paragraph in the document
    for para in doc.paragraphs:
        text.add(para.text)  # add the paragraph text
    for table in doc.tables:
        for row in table.rows:
            for cell in row.cells:
                text.add(cell.text)  # add the table cell text
    result = ''.join(text)  # join the collected text into a single string
    return result

document_text = read_docx(file_name)
print(document_text)
```
```python
print(get_info(document_text))
```
```python
{'项目责任': ['项目经历',
'、分季节/节庆提前进行平面主视觉设计有主题有亮点独特展现企业形象风格;2、了解当前基础制作/印刷材料材料市场行情熟悉作业方式及最终想表现效果3、平面设计主题明确表达精准能够新颖、直观表达企业主张4、良好的沟通协调与组织能力为商户提供有效形象服务5、完成领导交办的其他工作教育背景',
'良好的心态和责任感吃苦耐劳擅于管理时间勇于面对变化和挑战。',
'',
'1.负责设备系统管理平台的信息管理;2.负责牧场级《设备管理制度》的起草及修订;3.负责建立牧场所有基础设施、房屋建筑、车辆机具、机器设备等台账并及时更新后报送上级单位保证账物相符;4.负责报批牧场设备闲置、调拨、报废等业务流程;5.负责定期向上级单位报送牧场《闲置资产统计表》;6.负责按月向上级单位报批牧场《设备维修/服务申请》;7.负责向供应商下达牧场各类机备件、五金件、机辅油、维修服务供货订单;8.负责定期向上级单位报送《备件出入库明细表》、《备件库存明细表》、《闲置备件明细表》;9.负责报批牧场《闲置备件的调拨申请》;10.根据设备实际运行数据',
'统筹产品进程及时纠错跟进产品进度;5、负责各新产品上市的跟踪及产品的改良升级;6、负责竞争对手产品分析并作出合理的改进机制。良好的公共关系意识善于沟通具备一定的活动策划和组织协调能力。',
'1、接听电话接收传真按要求转接电话或记录信息确保及时准确。2、对来访客人做好接待、登记、引导工作及时通知被访人员。3、负责公司快递、信件、包裹的收发工作。4、负责办公用品的管理及采购。5、负责复印、传真和打印等设备的使用与管理工作合理使用降低材料消耗。6、做好会前准备、会议记录和会后内容整理工作。7、做好公司相关资料、档案管理工作。8、日常费用的申请',
'公司钉钉的维护管理。9、领导交办的其他人事行政工作'],
'项目时间': ['2001.07-2015.091', '2009年03月-2010年12月', '1997/12-2012/11'],
'项目名称': ['中国与印度关系研究软硬法视域下的廉政党内法规与国家法律衔接协调问题研究', '化学技术风险的伦理评估与社会治理研究'],
'工作时间': ['1993年05月-2012年07月', '2014.09-2018.09', '1998.11-2016.10'],
'工作内容': ['负责填写并向上级单位报送《设备故障率日报表》;11.负责牧场《能源设备日运行记录》、《设备日巡检记录》、《设备大修记录》存档工作;12.负责每月向上级部门报送牧场《月度实际发生维修费用汇总表》;13.负责向牧场财务及上级部门报送牧场《月度实际发生能耗费用汇总表》;14.负责牧场《设备自评估报告》按规定时间报送设备管理部;1、负责从前期市场调研到产品立项、策划到新品上市的开发及跟踪工作;2、及时洞察市场动态对行业发展趋势、竞品信息、消费者需求进行深度调查;3、新产品开发阶段结合市场分析从产品定位角度提出建议方案参与新品立项;4、在产品研发阶段能够与开发部沟通专业细节问题同时需要跟部门经理保持沟通',
'良好的学习能力习惯制定切实可行的学习计划勤于学习能不断提高',
'1通过为客户提供培训、定期和紧急的服务以及销售示范实现现有餐饮客户的销售增长;2了解客户的食品安全需求作为高端清洁卫生流程及计划方面的专家为客户定制解决方案;3安装、维修和维护客户的机器设备利用该项服务加强与客户的关系。',
'人事专员'],
'落户市县': [],
'籍贯': ['香港省香港', '香港省香港市'],
'工作单位': ['市昆明红星商业管理有限公司'],
'毕业院校': ['北京邮电大学世纪学院'],
'电话': ['15500671244'],
'出生日期': ['1924.06']}
```
|
emozilla/llama3-1.2b-init-2
|
emozilla
| 2024-11-05T16:20:28Z | 126 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T14:16:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jibala1022/easyrec-small
|
jibala1022
| 2024-11-05T16:16:32Z | 12 | 0 | null |
[
"pytorch",
"roberta",
"recommendation",
"collaborative filtering",
"sentence-similarity",
"en",
"arxiv:2408.08821",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:apache-2.0",
"region:us"
] |
sentence-similarity
| 2024-11-03T17:15:38Z |
---
license: apache-2.0
language: en
tags:
- recommendation
- collaborative filtering
metrics: recall@10
base_model: FacebookAI/roberta-base
pipeline_tag: sentence-similarity
repo: https://github.com/jibala-1022/EasyRec
---
# EasyRec-Small
## Overview
- **Description**: EasyRec is a series of language models designed for recommendations, trained to match the textual profiles of users and items with collaborative signals.
- **Usage**: You can use EasyRec to encode user and item text embeddings based on the textual profiles that reflect their preferences for various recommendation scenarios.
- **Evaluation**: We evaluate the performance of EasyRec in: (i) Text-based zero-shot recommendation and (ii) Text-enhanced collaborative filtering.
- **Finetuned from model:** EasyRec is finetuned from [RoBERTa](https://huggingface.co/FacebookAI/roberta-large) on English text.
For details, please refer to [💻[GitHub Code](https://github.com/jibala-1022/EasyRec)] and [📖[Paper](https://arxiv.org/abs/2408.08821)].
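As a minimal sketch of encoding and comparing profiles (assuming the checkpoint loads with `transformers`' `AutoModel`; the pooling strategy is an assumption here, since the card does not specify it — mean pooling over token embeddings is used for illustration):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jibala1022/easyrec-small")
model = AutoModel.from_pretrained("jibala1022/easyrec-small")

profiles = [
    "A user who enjoys fast-paced action movies and sci-fi thrillers.",  # user profile
    "A classic space-opera film with large-scale battle scenes.",        # item profile
]
inputs = tokenizer(profiles, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, dim)

# Mean-pool over non-padding tokens (assumed pooling), then L2-normalize
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)

# Cosine similarity between the user and item embeddings
print(float(embeddings[0] @ embeddings[1]))
```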
### Model List
We release a series of EasyRec checkpoints with varying sizes. You can easily load these models from Hugging Face by replacing the model name.
| Model | Size | Parameters | Recall@10 on Movies |
|:-----:|:----:|:----------:|:-------------------:|
| [jibala-1022/easyrec-small](https://huggingface.co/jibala-1022/easyrec-small) | 243 MB | 121,364,313 | 0.0086 |
| [jibala-1022/easyrec-base](https://huggingface.co/jibala-1022/easyrec-base) | 328 MB | 163,891,545 | 0.0166 |
| [jibala-1022/easyrec-large](https://huggingface.co/jibala-1022/easyrec-large) | 816 MB | 407,933,017 | 0.0166 |
## 🌟 Citation
```bibtex
@article{ren2024easyrec,
title={EasyRec: Simple yet Effective Language Models for Recommendation},
author={Ren, Xubin and Huang, Chao},
journal={arXiv preprint arXiv:2408.08821},
year={2024}
}
```
|
jibala1022/easyrec-base
|
jibala1022
| 2024-11-05T16:09:23Z | 8 | 0 | null |
[
"pytorch",
"roberta",
"recommendation",
"collaborative filtering",
"sentence-similarity",
"en",
"arxiv:2408.08821",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:apache-2.0",
"region:us"
] |
sentence-similarity
| 2024-11-03T10:13:33Z |
---
license: apache-2.0
language: en
tags:
- recommendation
- collaborative filtering
metrics: recall@10
base_model: FacebookAI/roberta-base
pipeline_tag: sentence-similarity
---
# EasyRec-Base
## Overview
- **Description**: EasyRec is a series of language models designed for recommendations, trained to match the textual profiles of users and items with collaborative signals.
- **Usage**: You can use EasyRec to encode user and item text embeddings based on the textual profiles that reflect their preferences for various recommendation scenarios.
- **Evaluation**: We evaluate the performance of EasyRec in: (i) Text-based zero-shot recommendation and (ii) Text-enhanced collaborative filtering.
- **Finetuned from model:** EasyRec is finetuned from [RoBERTa](https://huggingface.co/FacebookAI/roberta-large) on English text.
For details, please refer to [💻[GitHub Code](https://github.com/jibala-1022/EasyRec)] and [📖[Paper](https://arxiv.org/abs/2408.08821)].
### Model List
We release a series of EasyRec checkpoints with varying sizes. You can easily load these models from Hugging Face by replacing the model name.
| Model | Size | Parameters | Recall@10 on Movies |
|:-----:|:----:|:----------:|:-------------------:|
| [jibala-1022/easyrec-small](https://huggingface.co/jibala-1022/easyrec-small) | 243 MB | 121,364,313 | 0.0086 |
| [jibala-1022/easyrec-base](https://huggingface.co/jibala-1022/easyrec-base) | 328 MB | 163,891,545 | 0.0166 |
| [jibala-1022/easyrec-large](https://huggingface.co/jibala-1022/easyrec-large) | 816 MB | 407,933,017 | 0.0166 |
## 🌟 Citation
```bibtex
@article{ren2024easyrec,
title={EasyRec: Simple yet Effective Language Models for Recommendation},
author={Ren, Xubin and Huang, Chao},
journal={arXiv preprint arXiv:2408.08821},
year={2024}
}
```
|
PeterJinGo/llama3-8b-dpo
|
PeterJinGo
| 2024-11-05T16:06:10Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T14:58:19Z |
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
model_name: llama3-8b-dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for llama3-8b-dpo
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="PeterJinGo/llama3-8b-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/uiuc-dmg/huggingface/runs/1ov1vz7b)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
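For reference, the DPO objective from that paper maximizes the log-odds that the policy prefers the chosen response $y_w$ over the rejected response $y_l$, regularized against a frozen reference policy:

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)} \right) \right]
$$

Here $\beta$ controls how far the fine-tuned policy may drift from the reference model.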
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.1
- Pytorch: 2.1.2+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Sayankotor/SparseGradllama
|
Sayankotor
| 2024-11-05T16:03:46Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T15:41:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Trinity-13B-GGUF
|
mradermacher
| 2024-11-05T15:59:10Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:WhiteRabbitNeo/Trinity-13B",
"base_model:quantized:WhiteRabbitNeo/Trinity-13B",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-04T06:26:41Z |
---
base_model: WhiteRabbitNeo/Trinity-13B
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/WhiteRabbitNeo/Trinity-13B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Trinity-13B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
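As a minimal sketch (assuming the `llama-cpp-python` bindings with `huggingface-hub` installed; the filename below is one of the quants listed in the table), you can load a quant directly from this repo:
```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo and loads it with llama.cpp
llm = Llama.from_pretrained(
    repo_id="mradermacher/Trinity-13B-GGUF",
    filename="Trinity-13B.Q4_K_M.gguf",  # the "fast, recommended" quant from the table below
    n_ctx=4096,
)
print(llm("Write a haiku about quantization.", max_tokens=64)["choices"][0]["text"])
```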
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Trinity-13B-GGUF/resolve/main/Trinity-13B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mav23/Qwen2.5-1.5B-GGUF
|
mav23
| 2024-11-05T15:51:55Z | 122 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-05T15:34:45Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# Qwen2.5-1.5B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the base 1.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
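As a quick sanity check, here is a minimal text-completion sketch (note that this repo hosts GGUF files; the snippet assumes the original `Qwen/Qwen2.5-1.5B` weights with `transformers`, and plain completion rather than chat, since this is a base model):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# Base model: plain continuation, no chat template
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```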
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
zaursamedov1/npcgen
|
zaursamedov1
| 2024-11-05T15:48:59Z | 5 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-05T15:15:34Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NPC
---
# Npcgen
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NPC` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zaursamedov1/npcgen', weight_name='lora.safetensors')
# Remember to include the trigger word `NPC` in your prompt
image = pipeline('NPC, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
jdchang/rm_70b_soup
|
jdchang
| 2024-11-05T15:44:39Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pairwise_rm",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2024-11-05T15:08:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gozdenergiz/layoutlmv2-base-uncased_finetuned_docvqa
|
gozdenergiz
| 2024-11-05T15:44:07Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"base_model:microsoft/layoutlmv2-base-uncased",
"base_model:finetune:microsoft/layoutlmv2-base-uncased",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2024-10-20T10:38:05Z |
---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv2-base-uncased
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-base-uncased_finetuned_docvqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased_finetuned_docvqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 5.3224 | 0.2212 | 50 | 4.5586 |
| 4.5246 | 0.4425 | 100 | 4.1173 |
| 4.1619 | 0.6637 | 150 | 3.8601 |
| 3.7534 | 0.8850 | 200 | 3.6319 |
| 3.6105 | 1.1062 | 250 | 3.7778 |
| 3.3319 | 1.3274 | 300 | 3.1775 |
| 3.0645 | 1.5487 | 350 | 2.8592 |
| 2.8209 | 1.7699 | 400 | 2.7744 |
| 2.7174 | 1.9912 | 450 | 2.7408 |
| 2.0437 | 2.2124 | 500 | 2.7848 |
| 2.0063 | 2.4336 | 550 | 2.9319 |
| 1.9314 | 2.6549 | 600 | 2.3084 |
| 1.7939 | 2.8761 | 650 | 2.4124 |
| 1.7613 | 3.0973 | 700 | 2.5776 |
| 1.3099 | 3.3186 | 750 | 2.2375 |
| 1.4457 | 3.5398 | 800 | 2.7229 |
| 1.4964 | 3.7611 | 850 | 2.5109 |
| 1.428 | 3.9823 | 900 | 2.4552 |
| 0.9892 | 4.2035 | 950 | 3.2111 |
| 1.0568 | 4.4248 | 1000 | 2.3875 |
| 0.8754 | 4.6460 | 1050 | 2.8059 |
| 0.8201 | 4.8673 | 1100 | 2.5949 |
| 1.0239 | 5.0885 | 1150 | 2.8688 |
| 0.7348 | 5.3097 | 1200 | 2.8210 |
| 0.7866 | 5.5310 | 1250 | 2.4231 |
| 0.5954 | 5.7522 | 1300 | 2.8619 |
| 0.7299 | 5.9735 | 1350 | 2.8536 |
| 0.5132 | 6.1947 | 1400 | 2.6224 |
| 0.7035 | 6.4159 | 1450 | 3.2108 |
| 0.5626 | 6.6372 | 1500 | 2.8695 |
| 0.431 | 6.8584 | 1550 | 3.3508 |
| 0.4354 | 7.0796 | 1600 | 3.4196 |
| 0.3896 | 7.3009 | 1650 | 3.1219 |
| 0.4899 | 7.5221 | 1700 | 3.0649 |
| 0.5703 | 7.7434 | 1750 | 3.0621 |
| 0.435 | 7.9646 | 1800 | 3.3686 |
| 0.3251 | 8.1858 | 1850 | 3.2093 |
| 0.2464 | 8.4071 | 1900 | 3.9491 |
| 0.4524 | 8.6283 | 1950 | 3.4324 |
| 0.5715 | 8.8496 | 2000 | 3.5811 |
| 0.3552 | 9.0708 | 2050 | 3.9434 |
| 0.1147 | 9.2920 | 2100 | 4.5776 |
| 0.2613 | 9.5133 | 2150 | 4.0439 |
| 0.5679 | 9.7345 | 2200 | 3.4187 |
| 0.3372 | 9.9558 | 2250 | 3.3868 |
| 0.3143 | 10.1770 | 2300 | 4.2051 |
| 0.1989 | 10.3982 | 2350 | 3.7925 |
| 0.1859 | 10.6195 | 2400 | 4.1932 |
| 0.3882 | 10.8407 | 2450 | 4.1672 |
| 0.1824 | 11.0619 | 2500 | 4.3516 |
| 0.106 | 11.2832 | 2550 | 4.5112 |
| 0.2096 | 11.5044 | 2600 | 4.3784 |
| 0.1035 | 11.7257 | 2650 | 4.3866 |
| 0.2113 | 11.9469 | 2700 | 4.1279 |
| 0.2263 | 12.1681 | 2750 | 4.2749 |
| 0.1014 | 12.3894 | 2800 | 4.5176 |
| 0.1555 | 12.6106 | 2850 | 3.9479 |
| 0.1732 | 12.8319 | 2900 | 4.2414 |
| 0.1484 | 13.0531 | 2950 | 4.0296 |
| 0.1051 | 13.2743 | 3000 | 4.5086 |
| 0.1282 | 13.4956 | 3050 | 4.6194 |
| 0.1471 | 13.7168 | 3100 | 4.6707 |
| 0.1888 | 13.9381 | 3150 | 4.3906 |
| 0.0723 | 14.1593 | 3200 | 4.9790 |
| 0.0302 | 14.3805 | 3250 | 5.0363 |
| 0.1599 | 14.6018 | 3300 | 4.8371 |
| 0.1179 | 14.8230 | 3350 | 4.3327 |
| 0.1128 | 15.0442 | 3400 | 5.0618 |
| 0.0493 | 15.2655 | 3450 | 5.2469 |
| 0.0341 | 15.4867 | 3500 | 5.3640 |
| 0.0545 | 15.7080 | 3550 | 5.0736 |
| 0.0883 | 15.9292 | 3600 | 5.1372 |
| 0.0461 | 16.1504 | 3650 | 5.0354 |
| 0.0244 | 16.3717 | 3700 | 5.4353 |
| 0.0541 | 16.5929 | 3750 | 5.3114 |
| 0.0164 | 16.8142 | 3800 | 5.4107 |
| 0.0336 | 17.0354 | 3850 | 5.4258 |
| 0.0483 | 17.2566 | 3900 | 5.3555 |
| 0.0994 | 17.4779 | 3950 | 5.2090 |
| 0.0351 | 17.6991 | 4000 | 5.3768 |
| 0.0065 | 17.9204 | 4050 | 5.5076 |
| 0.0053 | 18.1416 | 4100 | 5.4823 |
| 0.0043 | 18.3628 | 4150 | 5.4850 |
| 0.0452 | 18.5841 | 4200 | 5.4849 |
| 0.0086 | 18.8053 | 4250 | 5.5881 |
| 0.0322 | 19.0265 | 4300 | 5.5167 |
| 0.0135 | 19.2478 | 4350 | 5.5502 |
| 0.0229 | 19.4690 | 4400 | 5.5385 |
| 0.042 | 19.6903 | 4450 | 5.5602 |
| 0.0404 | 19.9115 | 4500 | 5.5645 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
ProMeText/aquilign_spanish_segmenter
|
ProMeText
| 2024-11-05T15:28:34Z | 5 | 0 | null |
[
"safetensors",
"bert",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-11-05T15:22:05Z |
---
license: cc-by-nc-sa-4.0
---
|
ProMeText/aquilign_french_segmenter
|
ProMeText
| 2024-11-05T15:28:34Z | 5 | 0 | null |
[
"safetensors",
"bert",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-11-05T15:21:17Z |
---
license: cc-by-nc-sa-4.0
---
|
ProMeText/aquilign_italian_segmenter
|
ProMeText
| 2024-11-05T15:26:11Z | 5 | 0 | null |
[
"safetensors",
"bert",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-11-05T15:22:18Z |
---
license: cc-by-nc-sa-4.0
---
|
mradermacher/neuronal-7b-Mlab-GGUF
|
mradermacher
| 2024-11-05T15:26:08Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralDaredevil-7B",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"en",
"base_model:Kukedlc/neuronal-7b-Mlab",
"base_model:quantized:Kukedlc/neuronal-7b-Mlab",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-04T10:58:17Z |
---
base_model: Kukedlc/neuronal-7b-Mlab
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralDaredevil-7B
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Kukedlc/neuronal-7b-Mlab
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF/resolve/main/neuronal-7b-Mlab.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/neuronal-7b-Mlab-i1-GGUF
|
mradermacher
| 2024-11-05T15:26:08Z | 172 | 1 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralDaredevil-7B",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"en",
"base_model:Kukedlc/neuronal-7b-Mlab",
"base_model:quantized:Kukedlc/neuronal-7b-Mlab",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-05T12:38:56Z |
---
base_model: Kukedlc/neuronal-7b-Mlab
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralDaredevil-7B
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Kukedlc/neuronal-7b-Mlab
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/neuronal-7b-Mlab-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/neuronal-7b-Mlab-i1-GGUF/resolve/main/neuronal-7b-Mlab.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TornikeO/sanitas-insurance-bot-embed-v1
|
TornikeO
| 2024-11-05T15:25:48Z | 22 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"openvino",
"mpnet",
"fill-mask",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-01-12T10:11:38Z |
---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104)
organized by Hugging Face, as part of the project
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 384 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
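As a rough illustration, here is a simplified sketch of this in-batch objective (a stand-in for the actual `train_script.py`, not the real training code; the similarity scale is an assumed value):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarity between every anchor and every candidate in the batch
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    scores = anchor_emb @ positive_emb.T * scale  # shape: (batch, batch)
    # The true pair of anchor i is candidate i, so the targets are the diagonal
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```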
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps with a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We used a concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
KRX-Trader/qwen2.5-inst-test-v3
|
KRX-Trader
| 2024-11-05T15:22:50Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T12:00:45Z |
---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** KRX-Trader
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
salsarra/ConfliBERT-QA
|
salsarra
| 2024-11-05T15:17:44Z | 123 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"dataset:rajpurkar/squad",
"base_model:snowood1/ConfliBERT-cont-cased",
"base_model:finetune:snowood1/ConfliBERT-cont-cased",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-07-14T20:52:12Z |
---
datasets:
- rajpurkar/squad
base_model:
- snowood1/ConfliBERT-cont-cased
---
|
borisf/bestemma-bob
|
borisf
| 2024-11-05T15:16:52Z | 8 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-05T09:03:37Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: emma
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# bestemma-bob
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `emma` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
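A minimal [🧨 diffusers](https://github.com/huggingface/diffusers) loading sketch (not from the original card; the LoRA weight file name is an assumption, so check the repo for the actual `.safetensors` file):
```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')
# Weight file name below is assumed; adjust it to the file shipped in this repo
pipeline.load_lora_weights('borisf/bestemma-bob', weight_name='bestemma-bob.safetensors')
image = pipeline('a portrait photo of emma').images[0]  # `emma` is the trigger word
image.save('emma.png')
```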
|
homeb82784/Qwen2-7B-Instruct-it-v1.0-v2.0
|
homeb82784
| 2024-11-05T15:16:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T15:02:16Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sentence-transformers/multi-qa-mpnet-base-dot-v1
|
sentence-transformers
| 2024-11-05T15:15:42Z | 1,674,778 | 163 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"openvino",
"mpnet",
"fill-mask",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:search_qa",
"dataset:eli5",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/QQP",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/Amazon-QA",
"dataset:embedding-data/WikiAnswers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
language:
- en
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- search_qa
- eli5
- natural_questions
- trivia_qa
- embedding-data/QQP
- embedding-data/PAQ_pairs
- embedding-data/Amazon-QA
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# multi-qa-mpnet-base-dot-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/multi-qa-mpnet-base-dot-v1')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#CLS Pooling - Take output from first token
def cls_pooling(model_output):
return model_output.last_hidden_state[:,0]
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = cls_pooling(model_output)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-mpnet-base-dot-v1")
model = AutoModel.from_pretrained("sentence-transformers/multi-qa-mpnet-base-dot-v1")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
Some technical details on how this model must be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 768 |
| Produces normalized embeddings | No |
| Pooling-Method | CLS pooling |
| Suitable score functions | dot-product (e.g. `util.dot_score`) |
----
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We use a contrastive learning objective: given a sentence from a pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104)
organized by Hugging Face, as part of the project
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used for semantic search: it encodes queries / questions and text paragraphs in a dense vector space and finds relevant passages for a given query.
Note that there is a limit of 512 word pieces: text longer than that will be truncated. Further note that the model was only trained on input text up to 250 word pieces; it might not work well for longer text.
## Training procedure
The full training script is accessible in this repository: `train_script.py`.
### Pre-training
We use the pretrained [`mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
#### Training
We used a concatenation of multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using CLS-pooling, dot-product as similarity function, and a scale of 1.
| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** |
|
ZurichNLP/swiss-german-xlm-roberta-base
|
ZurichNLP
| 2024-11-05T15:15:08Z | 167 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"gsw",
"multilingual",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-18T17:59:45Z |
---
license: cc-by-nc-4.0
language:
- gsw
- multilingual
widget:
- text: "I cha etz au Schwiizerdütsch. <mask> zäme! 😊"
---
The [**xlm-roberta-base**](https://huggingface.co/xlm-roberta-base) model ([Conneau et al., ACL 2020](https://aclanthology.org/2020.acl-main.747/)) trained on Swiss German text data via continued pre-training.
## Training Data
For continued pre-training, we used the following two datasets of written Swiss German:
1. [SwissCrawl](https://icosys.ch/swisscrawl) ([Linder et al., LREC 2020](https://aclanthology.org/2020.lrec-1.329)), a collection of Swiss German web text (forum discussions, social media).
2. A custom dataset of Swiss German tweets
In addition, we trained the model on an equal amount of Standard German data. We used news articles retrieved from [Swissdox@LiRI](https://t.uzh.ch/1hI).
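## Usage
A minimal usage sketch (not part of the original card), running the widget example from the metadata above through the fill-mask pipeline:
```python
from transformers import pipeline

# Masked-word prediction for written Swiss German with this checkpoint
fill_mask = pipeline("fill-mask", model="ZurichNLP/swiss-german-xlm-roberta-base")
for prediction in fill_mask("I cha etz au Schwiizerdütsch. <mask> zäme! 😊"):
    print(prediction["token_str"], prediction["score"])
```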
## License
Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).
## Citation
```bibtex
@inproceedings{vamvas-etal-2024-modular,
title={Modular Adaptation of Multilingual Encoders to Written Swiss German Dialect},
author={Jannis Vamvas and No{\"e}mi Aepli and Rico Sennrich},
booktitle={First Workshop on Modular and Open Multilingual NLP},
year={2024},
}
```
|
appvoid/arco-reddit-v3
|
appvoid
| 2024-11-05T15:14:38Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:appvoid/arco-2",
"base_model:finetune:appvoid/arco-2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T15:14:19Z |
---
base_model: appvoid/arco-2
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** appvoid
- **License:** apache-2.0
- **Finetuned from model:** appvoid/arco-2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DJKPARIS/cyane2
|
DJKPARIS
| 2024-11-05T15:14:32Z | 5 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-05T14:37:43Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cyane2
---
# Cyane2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cyane2` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('DJKPARIS/cyane2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Xhubuser/cnss-Llama-3.1-8B-iter-2-v2_merged_16bit
|
Xhubuser
| 2024-11-05T15:14:29Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T15:10:20Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Xhubuser
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kaiest/Python_Code_Generation_GPT2
|
kaiest
| 2024-11-05T15:14:12Z | 208 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T15:00:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProMeText/aquilign_segmenter_latin
|
ProMeText
| 2024-11-05T15:12:55Z | 6 | 0 | null |
[
"safetensors",
"bert",
"doi:10.57967/hf/3454",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-11-05T14:20:19Z |
---
license: cc-by-nc-sa-4.0
---
|
featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF
|
featherless-ai-quants
| 2024-11-05T15:12:35Z | 5 | 0 | null |
[
"gguf",
"text-generation",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T14:02:21Z |
---
base_model: MaziyarPanahi-YamshadowInex12_Experiment26T3q
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# MaziyarPanahi-YamshadowInex12_Experiment26T3q GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [MaziyarPanahi-YamshadowInex12_Experiment26T3q-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF/blob/main/MaziyarPanahi-YamshadowInex12_Experiment26T3q-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF/blob/main/MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF/blob/main/MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF/blob/main/MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF/blob/main/MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF/blob/main/MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF/blob/main/MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF/blob/main/MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF/blob/main/MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF/blob/main/MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-YamshadowInex12_Experiment26T3q-GGUF/blob/main/MaziyarPanahi-YamshadowInex12_Experiment26T3q-Q8_0.gguf) | 7339.34 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF
|
featherless-ai-quants
| 2024-11-05T15:09:37Z | 10 | 0 | null |
[
"gguf",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-05T13:46:59Z |
---
base_model: lodrick-the-lafted-Fuselage-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# lodrick-the-lafted-Fuselage-8B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [lodrick-the-lafted-Fuselage-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF/blob/main/lodrick-the-lafted-Fuselage-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [lodrick-the-lafted-Fuselage-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF/blob/main/lodrick-the-lafted-Fuselage-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [lodrick-the-lafted-Fuselage-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF/blob/main/lodrick-the-lafted-Fuselage-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [lodrick-the-lafted-Fuselage-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF/blob/main/lodrick-the-lafted-Fuselage-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [lodrick-the-lafted-Fuselage-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF/blob/main/lodrick-the-lafted-Fuselage-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [lodrick-the-lafted-Fuselage-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF/blob/main/lodrick-the-lafted-Fuselage-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [lodrick-the-lafted-Fuselage-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF/blob/main/lodrick-the-lafted-Fuselage-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [lodrick-the-lafted-Fuselage-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF/blob/main/lodrick-the-lafted-Fuselage-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [lodrick-the-lafted-Fuselage-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF/blob/main/lodrick-the-lafted-Fuselage-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [lodrick-the-lafted-Fuselage-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF/blob/main/lodrick-the-lafted-Fuselage-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [lodrick-the-lafted-Fuselage-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Fuselage-8B-GGUF/blob/main/lodrick-the-lafted-Fuselage-8B-Q8_0.gguf) | 8145.11 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
Rookieez/detr_finetuned_cppe5
|
Rookieez
| 2024-11-05T15:07:40Z | 30 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-11-05T04:54:40Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: detr_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1426
- Map: 0.0
- Map 50: 0.0
- Map 75: 0.0
- Map Small: -1.0
- Map Medium: 0.0
- Map Large: -1.0
- Mar 1: 0.0
- Mar 10: 0.0
- Mar 100: 0.0
- Mar Small: -1.0
- Mar Medium: 0.0
- Mar Large: -1.0
- Map Grey Star: -1.0
- Mar 100 Grey Star: -1.0
- Map Insect: 0.0
- Mar 100 Insect: 0.0
- Map Moon: 0.0
- Mar 100 Moon: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
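Since no usage instructions are provided, here is a minimal inference sketch (an illustrative assumption, not from the original card) using the `transformers` object-detection pipeline with this checkpoint:
```python
from transformers import pipeline

detector = pipeline("object-detection", model="Rookieez/detr_finetuned_cppe5")
# "example.jpg" is a placeholder; pass a local path, URL, or PIL image
for detection in detector("example.jpg"):
    print(detection["label"], detection["score"], detection["box"])
```
Note that, given the evaluation results above (mAP of 0.0), detections from this checkpoint may not be useful in practice.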
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Black Star | Mar 100 Black Star | Map Cat | Mar 100 Cat | Map Grey Star | Mar 100 Grey Star | Map Insect | Mar 100 Insect | Map Moon | Mar 100 Moon | Map Unicorn Head | Mar 100 Unicorn Head | Map Unicorn Whole | Mar 100 Unicorn Whole |
|:-------------:|:-----:|:----:|:---------------:|:---:|:------:|:------:|:---------:|:----------:|:---------:|:-----:|:------:|:-------:|:---------:|:----------:|:---------:|:--------------:|:------------------:|:-------:|:-----------:|:-------------:|:-----------------:|:----------:|:--------------:|:--------:|:------------:|:----------------:|:--------------------:|:-----------------:|:---------------------:|
| No log | 1.0 | 9 | 3.0843 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 |
| No log | 2.0 | 18 | 2.8919 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 |
| No log | 3.0 | 27 | 2.7564 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 |
| No log | 4.0 | 36 | 2.7363 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 |
| No log | 5.0 | 45 | 2.6145 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 |
| No log | 6.0 | 54 | 2.5328 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 7.0 | 63 | 2.5044 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 |
| No log | 8.0 | 72 | 2.4637 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 9.0 | 81 | 2.5407 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 10.0 | 90 | 2.4309 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 11.0 | 99 | 2.4180 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 12.0 | 108 | 2.5998 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 13.0 | 117 | 2.4657 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | -1.0 | -1.0 |
| No log | 14.0 | 126 | 2.3398 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 15.0 | 135 | 2.3136 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 16.0 | 144 | 2.2952 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 17.0 | 153 | 2.3616 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 18.0 | 162 | 2.4010 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 19.0 | 171 | 2.3679 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 20.0 | 180 | 2.3450 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 21.0 | 189 | 2.3824 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 22.0 | 198 | 2.2668 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 23.0 | 207 | 2.1832 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 24.0 | 216 | 2.1715 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 25.0 | 225 | 2.1695 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 26.0 | 234 | 2.1456 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 27.0 | 243 | 2.1490 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 28.0 | 252 | 2.1432 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 29.0 | 261 | 2.1424 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 30.0 | 270 | 2.1426 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | -1.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.2
|
nizarmichaud/whisper-tiny-swiss-german
|
nizarmichaud
| 2024-11-05T15:00:34Z | 135 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-05T07:15:40Z |
---
license: mit
language:
- de
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
## Model Description
Whisper-tiny fine-tuned on the SwissDial-ZH dataset for Swiss German dialects.
## Model Details
- **Model Name**: nizarmichaud/whisper-tiny-swiss-german
- **Base Model**: Whisper-tiny-v3
- **Dataset**: SwissDial-ZH (8 Swiss German dialects): https://mtc.ethz.ch/publications/open-source/swiss-dial.html
- **Languages**: Swiss German
## Training
- **Duration**: 4 hours
- **Hardware**: NVIDIA RTX 3080
- **Batch Size**: 32
- **Train/Test Split**: 90%/10% (specific sentence selection)
## Performance
- **WER**: ~37% on test set
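For reference, WER can be reproduced with the `jiwer` package; the transcripts below are hypothetical placeholders, not samples from the actual test split:
```python
from jiwer import wer

references = ["ich gange hei", "das isch guet"]   # ground-truth transcripts (placeholder data)
hypotheses = ["ich gang hei", "das isch guet"]    # model transcriptions (placeholder data)
print(f"WER: {wer(references, hypotheses):.2%}")
```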
## Usage
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
model_name = "nizarmichaud/whisper-tiny-swiss-german"
model = WhisperForConditionalGeneration.from_pretrained(model_name)
processor = WhisperProcessor.from_pretrained(model_name)
audio_input = ...  # 1-D float waveform sampled at 16 kHz (see the loading sketch below)
inputs = processor(audio_input, return_tensors="pt", sampling_rate=16000)
generated_ids = model.generate(inputs["input_features"])
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(transcription)
```
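For a runnable version of the snippet above, the `audio_input` placeholder can be filled with any waveform resampled to 16 kHz mono, e.g. via `librosa` (the file path below is a hypothetical example, not a file shipped with this repository):
```python
import librosa

# Load an audio file as a 1-D float32 array at 16 kHz mono; "sample.wav" is a placeholder path.
audio_input, _ = librosa.load("sample.wav", sr=16000, mono=True)
```
`soundfile` or `torchaudio` work just as well, as long as the array handed to the processor is sampled at 16 kHz.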
|
AIStudioGPT/Llama-3.2-3B-Instruct-iski2000
|
AIStudioGPT
| 2024-11-05T14:54:08Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T00:39:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SamagraDataGov/embedding_finetuned
|
SamagraDataGov
| 2024-11-05T14:47:26Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:7033",
"loss:GISTEmbedLoss",
"arxiv:1908.10084",
"arxiv:2402.16829",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-08-19T23:10:45Z |
---
base_model: BAAI/bge-small-en-v1.5
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@5
- cosine_ndcg@10
- cosine_ndcg@100
- cosine_mrr@5
- cosine_mrr@10
- cosine_mrr@100
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@5
- dot_recall@10
- dot_ndcg@5
- dot_ndcg@10
- dot_ndcg@100
- dot_mrr@5
- dot_mrr@10
- dot_mrr@100
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:7033
- loss:GISTEmbedLoss
widget:
- source_sentence: How will the performance of CBBOs be assessed in the third and
fourth year?
sentences:
- ''' (iv) In third and fourth year, performance of the CBBOs will be assessed based
on - (a) issuing Share Certificates to each member in third year, if any; (b)
audited Financial Statements for FPOs for second year and third year in due time
and filing as required; (c) MoU and vendor registration as per Business Plan with
Marketing Agencies/Institutional Buyers; (d) trading/uploading of produce in e-NAM/other
sources, if any; (e) second tranche equity grant to FPOs, if any; and (f) second
tranche of credit guarantee facility, if any . (v) In the fifth year, performance
of the CBBOs will be assessed based on (a) audited Statements of accounts of
FPO and filing it; (b) 100% of agri-business plan executed and value chain developed; (c)
revenue model showing financial growth in last 3 consecutive years; (d) detailed
project completion Report; and (e) third tranche of credit guarantee facility
if any.'''
- '''5. Tussock caterpillar, Notolopus (=Orygyia) postica , Lymantriidae, Lepidoptera
Symptom of damage: Defoliation. Nature of damage: Caterpillars of the moth feed
on the leaves. Egg: Eggs are laid in clusters on the leaves and covered over with
hairs. Larva: Caterpillars are gregarious in young stages. Full grown larva possess
a brown head, a pair of long pencil of hairs projecting forwardly from the prothorax,
yellowish tuft of hairs arising from the lateral side of the first two abdominal
segment and long brownish hairs arising from 8 th abdominal segment. Pupa: Pupation
takes place in silken cocoon. Adult: Small adult with yellowish brown wings.
Female moth is wingless. Presence of bipectinate antenna.'''
- '''The Kisan Credit Card (KCC) scheme was introduced in 1998 for issue of Kisan
Credit Cards to farmers on the basis of their holdings for uniform adoption by
the banks so that farmers may use them to readily purchase agriculture inputs
such as seeds, fertilizers, pesticides etc. and draw cash for their production
needs. The scheme was further extended for the investment credit requirement of
farmers viz. allied and non-farm activities in the year 2004. The scheme was further
revisited in 2012 by a working Group under the Chairmanship of Shri T. M. Bhasin,
CMD, Indian Bank with a view to simplify the scheme and facilitate issue of Electronic
Kisan Credit Cards. The scheme provides broad guidelines to banks for operationalizing
the KCC scheme. Implementing banks will have the discretion to adopt the same
to suit institution/location specific requirements.'''
- source_sentence: How should State Government disclose ceiling premium rate for a
crop in the tender document?
sentences:
- '''However, in absence of insured area of last year/season for all proposed crops
or any crop, net sown area of that crop(s) will be considered for calculation
of weighted premium of district. This data will be used for calculation of L1
only. 7.1.5 Bidding **shall be done through e-tendering** and work order may
be released within 2 weeks of the opening of the Tender. 7.1.6 Depending on
the risk profile, historical loss cost and cost benefit analysis for the proposed
crop(s) in district(s) of any cluster, if the State Government feels that the
premium rate likely to be offered by bidding Insurance Companies would be abnormally
high, then the State Govt. can fix a ceiling on premium rates for such crop(s)
proposed to be included in the bidding evaluation for the bidding period. However,
recourse to this ceiling provision may be done only in well justified cases and
not as a general practice. The ceiling premium rate may be derived based on statistical
evaluation/actuarial premium analysis, loss cost, historical payout etc and name
of such crop should be disclosed by State Govt. compulsorily in the tender document. 7.1.7 In
such cases where a ceiling has been indicated, State government must call financial
bids in two step bidding or in two separate envelopes. First bid/envelop is for
disclosing the premium rate offered by each participating Insurance Company for
such ceiling crops and must be categorised under \''Ceiling Premium Rate\'' and 2nd bid
envelop is for bidding of crop wise premium rate for all crops included in tender.
Time interval for opening of both bid/envelop should be compulsorily mentioned
in the bidding documents and should preferably be on the same day. All participating
Insurance Companies have to submit the bid offer as per the procedure mentioned
above. 7.1.8 State Govt.'''
- '''| Chapters | Particulars |
Page No. |\n|---------------|------------------------------------------------------------|-------------|\n|
1 | Concept of Producer Organisation | 1 |\n|
2 | Producer Organisation Registered as Cooperative Society | 15 |\n|
3 | Producer Organisation Registered as Producer Company | 19 |\n|
4 | Producer Organisation Registered as Non-Profit Society | 33 |\n|
5 | Producer Organisation Registered as Trust | 36 |\n|
6 | Producer Organisation Registered as Section 8 Company | 39 |\n|
7 | Business Planning | 42 |\n|
8 | Financial Management | 55 |\n|
9 | Funding Arrangement | 60 |\n|
10 | Monitoring by the PO, POPI and Funding Agencies | 80 |\n|
Attachment | | |\n|
1 | Producer Company Act provisions | |\n|
2 | PRODUCE Fund Operational Guidelines | 106 |\n|
3 | SFAC Circular on Promoting / supporting Producer Companies | 114 |\n|
4 | Case Study on Bilaspur Model of PO | 125 |\n|
5 | Indicative Framework of the process of forming a PO | 131 |\n|
6 | References | 138 |\n|
7 | Memorandum of Agreement between NABARD and POPI | 139 |\n|
8 | Memorandum of Understanding between NABARD and RSA | 143 |\n|
9 | | |\n|
Abbreviations | | |\n| | | |\n|
146 | | |\n| | | |\n| | | |\n| | | |\n| | | |\n| | | |\n| | | |\n| | | |\n| | | |\n| | | |\n| | | |\n| | | |\n| | | |'''
- '''Agro-industries generate residues like husk, hull, shell, peel, testa, skin,
fibre, bran, linter, stone, seed, cob, prawn, head, frog legs, low grade fish,
leather waste, hair, bones, coir dust, saw dust, bamboo dust, etc. which could
be recycled or used efficiently through agro-processing centres. In the last three
decades, rice and sugarcane residues have increased by 162 and 172 %, respectively.
Their disposal problem needs serious rethinking (Vimal, 1981). To some extent
these organic residues are used as soil conditioner, animal feed, fuel, thatching
and packing materials. These can also be put to new uses for manufacture of various
chemicals and specific products (like silica, alcohol, tannins, glue, gelatine,
wax, etc), feed, pharmaceuticals (Iycogenin, antibiotics, vitamins, etc.), fertilizers,
energy, construction materials, paper pulp, handicraft materials etc. Residues
from fruit and vegetable industries, fish and marine industries and slaughter
o straw decrease their efficiency without pretreatment.'''
- source_sentence: What is the purpose of using pectolytic enzymes in fruit juice
processing?
sentences:
- '''Aggregating producers into collectives is one of the best mechanism to improve
access of small producers to investment, technology and market. The facilitating
agency should however keep the following factors in view: a. Types of small
scale producers in the target area, volume of production, socioeconomic status,
marketing arrangement b. Sufficient demand in the existing market to absorb the
additional production without significantly affecting the prices c. Willingness
of producers to invest and adopt new technology, if identified, to increase productivity
or quality of produce d. Challenges in the market chain and market environment
e. Vulnerability of the market to shocks, trends and seasonality f. Previous
experience of collective action (of any kind) in the community g. Key commodities,
processed products or semi-finished goods demanded by major retailers or processing
companies in the surrounding areas/districts h. Support from Government Departments,
NGOs, specialist support agencies and private companies for enterprise development i.
Incentives for members (also disincentives) for joining the PO Keeping in view
the sustainability of a Producer Organisation, a flow chart of activities along
with timeline, verifiable indicators and risk factors is provided at Attachment-5.'''
- '''2. Sampling method to be adopted – Random Size of the card including area for
label and other details = 20 x 30 cmm = 600 cm 2 No. of Grids = 30 Area of each
grid = 7 x 2 cm = 14 cm 2 Total No. of eggs / cm 2 to be accommodated = 96,000
– 1,08,000 Mean number of egg / cm 2 of the card in the grid area excluding area
for labeling = 200 – 250 Number of counts/ card of size 20 x 30 cm to be taken
No. of parasitised eggs = 12 • 3-4 days old parasitised egg card has to be selected
for examination • count the number of eggs and eggs parasitised in an area by
1 cm 2 • Per card of size 20 x 30 cm count randomly in 12 positions • Repeat the
process for three different cards of same age • Express the per cent parasitisation
. The result should fall in range of 85-90 per cent.'''
- '''Pectins are colloidal in nature, making solutions viscous and holding other
materials in suspension. Pectinesterase removes methyl groups from the pectin
molecules exposing carboxyl groups which in the presence of bi- or multivalent
cations, such as calcium, form insoluble salts which can readily be removed. At
the same time, polygalacturonase degrades macromolecular pectin, causing reduction
in viscosity and destroying the protective colloidal action so that suspended
materials will settle out. Extensive use of pectolytic enzymes is made in processing
fruit juices. Addition of pectic enzymes to grapes or other fruits during crushing
or grinding results in increased yields of juice on pressing. Wine from grapes
so treated will usually clear faster when fermentation is complete, and have better
color.'''
- source_sentence: What is the purpose of the PM-Kisan Portal?
sentences:
- ''' 2) In case of cultivable land in the State of Nagaland which is categorised
as Jhum land as per definition under Section–2(7) of the Nagaland Jhum Land Act,
1970 and which is owned by the community/clan/village council/village chieftan,
the identification of beneficiaries under PM-Kisan scheme, shall be on the basis
of certification of land holding by the village council/chief/head of the village,
duly verified by the administrative head of the circle/sub division and countersigned
by the Deputy Commissioner of the District. Provided that the name of the beneficiary
is included in the state of Nagaland''s Agriculture Census of 2015-16. This proviso
shall not be applicable in cases of succession and family partition. The list
of such beneficiaries shall be subject to the exclusions under the operational
guidelines. 5.6 For identification of *bona fide* beneficiary under PM-Kisan
Scheme in Jharkhand, the following proposal of Government of Jharkhand was considered
and approved by the Committee: \''The farmer will be asked to submit ''Vanshavali
(Lineage)'' linked to the entry of land record comprising his \\ her ancestor''s
name giving a chart of successor. This lineage chart shall be submitted before
the Gram Sabha for calling objections. After approval of the Gram Sabha, the village
level \\ circle level revenue officials will verify and authenticate the Vanshawali
and possession of holding. This authenticated list of farmers after due verification
of succession chart shall be countersigned by the District level revenue authority.
Farmers'' names, subject to the exclusion criterion after following the aforementioned
process, shall be uploaded on the PM-Kisan portal along with other required details
for this disbursement of benefit under the scheme.\'''''
- '''Deep summer ploughing should be done for field preparation for pulses,apply
FYM and compost @ 8-10 t/ha and mix well. Sowing of Pigeon pea should be done
by the end of June in rows at the spacing of 60-90x15-20 cm. Seed rate should
be 12-15 kg/ha Seed should be treated with Carbendazim or Thirum @3g/kg seed Fertilizer
dose should be scheduled as per the soil test results. In general, 20-25 kg N,
45-50 kg P and 15-20 kg K and 20 kg S should be given basal. Improved varieties
like Chhattisgarh Arhar -1, Chhattisgarh-2, Rajivlochan and TJT-501 should be
sown. Soybean and other pulse crops should be sown with proper drainage arrangement.
For this seed should be treated with culture before sowing. The quantity of Rhizobium
culture@5g + PSB @ 10 g/kg seed should be used for this seed treatment.'''
- '''Union Territory. The details of farmers are being maintained by the States
/ UTs either in electronic form or in manual register. To make integrated platform
available in the country to assist in benefit transfer, a platform named **PM-Kisan
Portal** available at URL (**http://pmkisan.gov.in**) has been be launched for
uploading the farmers'' details at a single web-portal in a uniform structure.
9.2 The PM-Kisan Portal has been created with the following objectives - i) To
provide verified and single source of truth on farmers'' details at the portal. ii) Timely
assistance to the farmers in farm operation iii) A unified e-platform for transferring
of cash benefits into farmer''s bank account through Public Financial Management
System (PFMS) integration. iv) Location wise availability of benefited farmers''
list. v) Ease of monitoring across the country on fund transaction details.'''
- source_sentence: What should be done before sowing pigeonpea in fields where it
is being sown for the first time after a long time?
sentences:
- '''The sole arbitrator shall be appointed by NABARD in case of dispute raised
by NABARD, from the panel of three persons nominated by RSA. Similarly, the sole
arbitrator shall be appointed by RSA if dispute is raised by RSA from the panel
of three persons nominated by NABARD. The language of the Arbitration shall be
English and the arbitrator shall be fluent in English. The arbitrator should be
person of repute and integrity and place of arbitration shall be Mumbai.\'' 9.
NABARD shall have the right to enter into similar MoU/agreements with any other RSA/Institution. 10.
Any notice required to be given under this MoU/Agreement shall be served on the
party at their respective address given below by hand delivery or by registered
post :'''
- '''y Firstly, Treat 1kg seeds with a mixture of 2 grams of thiram and one gram
of carbendazim or 4 grams of Trichoderma + 1 gram of carboxyne or carbendazim.
Before planting, treat each seed with a unique Rhizobium culture of pigeon pea.
A packet of this culture has to be sprinkled over 10 kg of seeds, then mix it
lightly with hands, so that a light layer is formed on the seeds. Sow this seed
immediately. There is a possibility of the death of culture organisms from strong
sunlight. In fields where pigeonpea is being sown for the first time after a long
time, it must use culture.'''
- '''Organic farming is one of the several approaches found to meet the objectives
of sustainable agriculture. Organic farming is often associated directly with,
\''Sustainable farming.\'' However, ‘organic farming’ and ‘sustainable farming’,
policy and ethics-wise are t wo different terms. Many techniques used in organic
farming like inter-cropping, mulching and integration of crops and livestock are
not alien to various agriculture systems including the traditional agriculture
practiced in old countries like India. However, organic farming is based on various
laws and certification programmes, which prohibit the use of almost all synthetic
inputs, and health of the soil is recognized as the central theme of the method.
Organic products are grown under a system of agriculture without the use of chemical
fertilizers and pesticides with an environmentally and socially responsible approach.
This is a method of farming that works at'''
model-index:
- name: SentenceTransformer based on BAAI/bge-small-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: val evaluator
type: val_evaluator
metrics:
- type: cosine_accuracy@1
value: 0.4680306905370844
name: Cosine Accuracy@1
- type: cosine_accuracy@5
value: 0.9092071611253197
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9603580562659847
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4680306905370844
name: Cosine Precision@1
- type: cosine_precision@5
value: 0.18184143222506394
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09603580562659846
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4680306905370844
name: Cosine Recall@1
- type: cosine_recall@5
value: 0.9092071611253197
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9603580562659847
name: Cosine Recall@10
- type: cosine_ndcg@5
value: 0.7079399335444153
name: Cosine Ndcg@5
- type: cosine_ndcg@10
value: 0.724527850349024
name: Cosine Ndcg@10
- type: cosine_ndcg@100
value: 0.732682390595948
name: Cosine Ndcg@100
- type: cosine_mrr@5
value: 0.6404518329070746
name: Cosine Mrr@5
- type: cosine_mrr@10
value: 0.6473191450493229
name: Cosine Mrr@10
- type: cosine_mrr@100
value: 0.649235332852707
name: Cosine Mrr@100
- type: cosine_map@100
value: 0.6492353328527082
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.46675191815856776
name: Dot Accuracy@1
- type: dot_accuracy@5
value: 0.9092071611253197
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.9603580562659847
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.46675191815856776
name: Dot Precision@1
- type: dot_precision@5
value: 0.18184143222506394
name: Dot Precision@5
- type: dot_precision@10
value: 0.09603580562659846
name: Dot Precision@10
- type: dot_recall@1
value: 0.46675191815856776
name: Dot Recall@1
- type: dot_recall@5
value: 0.9092071611253197
name: Dot Recall@5
- type: dot_recall@10
value: 0.9603580562659847
name: Dot Recall@10
- type: dot_ndcg@5
value: 0.7074679767075504
name: Dot Ndcg@5
- type: dot_ndcg@10
value: 0.7240558935121589
name: Dot Ndcg@10
- type: dot_ndcg@100
value: 0.7322104337590828
name: Dot Ndcg@100
- type: dot_mrr@5
value: 0.6398124467178163
name: Dot Mrr@5
- type: dot_mrr@10
value: 0.6466797588600646
name: Dot Mrr@10
- type: dot_mrr@100
value: 0.6485959466634487
name: Dot Mrr@100
- type: dot_map@100
value: 0.6485959466634499
name: Dot Map@100
---
# SentenceTransformer based on BAAI/bge-small-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("SamagraDataGov/embedding_finetuned")
# Run inference
sentences = [
'What should be done before sowing pigeonpea in fields where it is being sown for the first time after a long time?',
"'y Firstly, Treat 1kg seeds with a mixture of 2 grams of thiram and one gram of carbendazim or 4 grams of Trichoderma + 1 gram of carboxyne or carbendazim. Before planting, treat each seed with a unique Rhizobium culture of pigeon pea. A packet of this culture has to be sprinkled over 10 kg of seeds, then mix it lightly with hands, so that a light layer is formed on the seeds. Sow this seed immediately. There is a possibility of the death of culture organisms from strong sunlight. In fields where pigeonpea is being sown for the first time after a long time, it must use culture.'",
"'Organic farming is one of the several approaches found to meet the objectives of sustainable agriculture. Organic farming is often associated directly with, \\'Sustainable farming.\\' However, ‘organic farming’ and ‘sustainable farming’, policy and ethics-wise are t wo different terms. Many techniques used in organic farming like inter-cropping, mulching and integration of crops and livestock are not alien to various agriculture systems including the traditional agriculture practiced in old countries like India. However, organic farming is based on various laws and certification programmes, which prohibit the use of almost all synthetic inputs, and health of the soil is recognized as the central theme of the method. Organic products are grown under a system of agriculture without the use of chemical fertilizers and pesticides with an environmentally and socially responsible approach. This is a method of farming that works at'",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `val_evaluator`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.468 |
| cosine_accuracy@5 | 0.9092 |
| cosine_accuracy@10 | 0.9604 |
| cosine_precision@1 | 0.468 |
| cosine_precision@5 | 0.1818 |
| cosine_precision@10 | 0.096 |
| cosine_recall@1 | 0.468 |
| cosine_recall@5 | 0.9092 |
| cosine_recall@10 | 0.9604 |
| cosine_ndcg@5 | 0.7079 |
| cosine_ndcg@10 | 0.7245 |
| cosine_ndcg@100 | 0.7327 |
| cosine_mrr@5 | 0.6405 |
| cosine_mrr@10 | 0.6473 |
| cosine_mrr@100 | 0.6492 |
| cosine_map@100 | 0.6492 |
| dot_accuracy@1 | 0.4668 |
| dot_accuracy@5 | 0.9092 |
| dot_accuracy@10 | 0.9604 |
| dot_precision@1 | 0.4668 |
| dot_precision@5 | 0.1818 |
| dot_precision@10 | 0.096 |
| dot_recall@1 | 0.4668 |
| dot_recall@5 | 0.9092 |
| dot_recall@10 | 0.9604 |
| dot_ndcg@5 | 0.7075 |
| dot_ndcg@10 | 0.7241 |
| dot_ndcg@100 | 0.7322 |
| dot_mrr@5 | 0.6398 |
| dot_mrr@10 | 0.6467 |
| dot_mrr@100 | 0.6486 |
| **dot_map@100** | **0.6486** |
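These numbers were produced with the `InformationRetrievalEvaluator` linked above. A minimal sketch of how such an evaluation is wired up (the queries, corpus, and relevance judgments below are illustrative placeholders, not the actual validation split):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("SamagraDataGov/embedding_finetuned")

# Toy evaluation data; the real evaluator was built from a held-out split of the training corpus.
queries = {"q1": "What is the purpose of the PM-Kisan Portal?"}
corpus = {
    "d1": "The PM-Kisan Portal provides a single web-portal for uploading farmers' details.",
    "d2": "Deep summer ploughing should be done for field preparation for pulses.",
}
relevant_docs = {"q1": {"d1"}}  # maps each query id to the set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="val_evaluator")
print(evaluator(model))  # dict of accuracy@k, precision@k, recall@k, ndcg@k, mrr@k, map metrics
```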
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `gradient_accumulation_steps`: 4
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 1.0
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
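Piecing together the loss tag (`GISTEmbedLoss`) and the hyperparameters above, the fine-tuning run plausibly resembled the following sketch; the dataset wiring and the choice of guide model are assumptions, and the evaluation/checkpoint-selection setup is omitted for brevity:
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import GISTEmbedLoss

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
guide = SentenceTransformer("BAAI/bge-small-en-v1.5")  # guide model choice is an assumption

# Placeholder (anchor, positive) pairs standing in for the 7,033-example training set.
train_dataset = Dataset.from_dict({
    "anchor": ["What is the purpose of the PM-Kisan Portal?"],
    "positive": ["The PM-Kisan Portal provides a unified e-platform for transferring cash benefits."],
})

loss = GISTEmbedLoss(model, guide)
args = SentenceTransformerTrainingArguments(
    output_dir="embedding_finetuned",
    num_train_epochs=1.0,
    learning_rate=1e-5,
    weight_decay=0.01,
    warmup_ratio=0.1,
    gradient_accumulation_steps=4,
)
trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```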
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1.0
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | val_evaluator_dot_map@100 |
|:----------:|:-------:|:-------------:|:---------:|:-------------------------:|
| 0.0682 | 15 | 0.6463 | 0.3498 | 0.6152 |
| 0.1364 | 30 | 0.3071 | 0.1975 | 0.6212 |
| 0.2045 | 45 | 0.2023 | 0.1576 | 0.6248 |
| 0.2727 | 60 | 0.1457 | 0.1357 | 0.6321 |
| 0.3409 | 75 | 0.2456 | 0.1228 | 0.6370 |
| 0.4091 | 90 | 0.1407 | 0.1130 | 0.6365 |
| 0.4773 | 105 | 0.1727 | 0.1042 | 0.6393 |
| 0.5455 | 120 | 0.1311 | 0.0975 | 0.6428 |
| 0.6136 | 135 | 0.13 | 0.0910 | 0.6433 |
| 0.6818 | 150 | 0.0919 | 0.0872 | 0.6466 |
| 0.75 | 165 | 0.1587 | 0.0851 | 0.6490 |
| 0.8182 | 180 | 0.1098 | 0.0834 | 0.6481 |
| 0.8864 | 195 | 0.1013 | 0.0824 | 0.6461 |
| **0.9545** | **210** | **0.1144** | **0.082** | **0.6486** |
| 1.0 | 220 | - | 0.0820 | 0.6486 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.43.4
- PyTorch: 2.4.1+cu121
- Accelerate: 0.33.0
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### GISTEmbedLoss
```bibtex
@misc{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
year={2024},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Lekhansh/Llama-3.2-1B-Instruct-corrector-nonjson_overtrained-merged-16bit
|
Lekhansh
| 2024-11-05T14:39:29Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T14:37:07Z |
---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Lekhansh
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ihughes15234/phi35_kp_dpo2epoch
|
ihughes15234
| 2024-11-05T14:39:02Z | 85 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:ihughes15234/phi_3_5_mini_kp_12k_cfr_sft",
"base_model:finetune:ihughes15234/phi_3_5_mini_kp_12k_cfr_sft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-04T11:04:01Z |
---
base_model: ihughes15234/phi_3_5_mini_kp_12k_cfr_sft
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ihughes15234
- **License:** apache-2.0
- **Finetuned from model:** ihughes15234/phi_3_5_mini_kp_12k_cfr_sft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Damysus-2.7B-Chat-i1-GGUF
|
mradermacher
| 2024-11-05T14:38:08Z | 256 | 0 |
transformers
|
[
"transformers",
"gguf",
"nlp",
"phi",
"phi-2",
"instruct",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:prince-canuma/TinyOrca",
"base_model:prince-canuma/Damysus-2.7B-Chat",
"base_model:quantized:prince-canuma/Damysus-2.7B-Chat",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-05T13:11:17Z |
---
base_model: prince-canuma/Damysus-2.7B-Chat
datasets:
- Open-Orca/SlimOrca
- prince-canuma/TinyOrca
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- nlp
- phi
- phi-2
- instruct
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prince-canuma/Damysus-2.7B-Chat
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
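As one concrete option (an unofficial sketch, not part of this repository), a quant from the table below can be loaded with `llama-cpp-python`; the file name assumes the i1-Q4_K_M file has already been downloaded locally:
```python
from llama_cpp import Llama

# Assumes the i1-Q4_K_M file from the table below sits next to this script.
llm = Llama(model_path="Damysus-2.7B-Chat.i1-Q4_K_M.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what an imatrix quant is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```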
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-IQ1_S.gguf) | i1-IQ1_S | 0.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-IQ1_M.gguf) | i1-IQ1_M | 0.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-Q2_K.gguf) | i1-Q2_K | 1.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-IQ3_S.gguf) | i1-IQ3_S | 1.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF/resolve/main/Damysus-2.7B-Chat.i1-Q6_K.gguf) | i1-Q6_K | 2.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Damysus-2.7B-Chat-GGUF
|
mradermacher
| 2024-11-05T14:38:08Z | 28 | 0 |
transformers
|
[
"transformers",
"gguf",
"nlp",
"phi",
"phi-2",
"instruct",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:prince-canuma/TinyOrca",
"base_model:prince-canuma/Damysus-2.7B-Chat",
"base_model:quantized:prince-canuma/Damysus-2.7B-Chat",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-05T03:00:09Z |
---
base_model: prince-canuma/Damysus-2.7B-Chat
datasets:
- Open-Orca/SlimOrca
- prince-canuma/TinyOrca
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- nlp
- phi
- phi-2
- instruct
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prince-canuma/Damysus-2.7B-Chat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Damysus-2.7B-Chat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF/resolve/main/Damysus-2.7B-Chat.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF/resolve/main/Damysus-2.7B-Chat.Q3_K_S.gguf) | Q3_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF/resolve/main/Damysus-2.7B-Chat.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF/resolve/main/Damysus-2.7B-Chat.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF/resolve/main/Damysus-2.7B-Chat.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF/resolve/main/Damysus-2.7B-Chat.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF/resolve/main/Damysus-2.7B-Chat.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF/resolve/main/Damysus-2.7B-Chat.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF/resolve/main/Damysus-2.7B-Chat.Q5_K_M.gguf) | Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF/resolve/main/Damysus-2.7B-Chat.Q6_K.gguf) | Q6_K | 2.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Damysus-2.7B-Chat-GGUF/resolve/main/Damysus-2.7B-Chat.Q8_0.gguf) | Q8_0 | 3.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
milato/new02
|
milato
| 2024-11-05T14:30:16Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"qwen",
"qwen2.5",
"finetune",
"dpo",
"orpo",
"chat",
"conversational",
"instruct",
"storywriting",
"roleplay",
"novelwriting",
"en",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:Qwen/Qwen2.5-14B-Instruct",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2403.19522",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T14:30:15Z |
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- qwen
- qwen2.5
- finetune
- dpo
- orpo
- qwen2
- chat
- conversational
- instruct
- storywriting
- roleplay
- novelwriting
base_model:
- Qwen/Qwen2.5-14B-Instruct
- v000000/Qwen2.5-14B-Gutenberg-1e-Delta
- tanliboy/lambda-qwen2.5-14b-dpo-test
datasets:
- jondurbin/gutenberg-dpo-v0.1
- Qwen/Qwen2.5-14B-Instruct
- HuggingFaceH4/ultrafeedback_binarized
pipeline_tag: text-generation
model-index:
- name: Qwen2.5-Lumen-14B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 80.64
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 48.51
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.0
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.4
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.29
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.36
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
name: Open LLM Leaderboard
---
# Qwen2.5-Lumen-14B
* *Qwen2.5 finetuned with direct preference optimization for ~3 epochs.*

<b>A qwen2.5 preference finetune, targeting prompt adherence, storywriting and roleplay.</b>
-------------------------------------------------------------------------------
* *Llama.cpp*
# (GGUF) Thanks QuantFactory
* static - [GGUF](https://huggingface.co/QuantFactory/Qwen2.5-Lumen-14B-GGUF)
# (GGUF) Thanks mradermacher
* static - [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Lumen-14B-GGUF)
* imatrix - [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Lumen-14B-i1-GGUF)
# (GGUF) Thanks Triangle104
* static - [Q8_0](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q8_0-GGUF) - [Q6_K](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q6_K-GGUF) - [Q5_K_M](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q5_K_M-GGUF) - [Q5_K_S](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q5_K_S-GGUF) - [Q5_0](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q5_0-GGUF) - [Q4_K_M](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q4_K_M-GGUF) - [Q4_K_S](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q4_K_S-GGUF) - [Q4_0](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q4_0-GGUF)
*Other quant repositories also exist on huggingface and can be searched for.*
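## Usage (Transformers)
A minimal generation sketch; the model id is taken from the model-index above, bf16/GPU loading is an assumption, and the sampler values mirror the settings quoted with the examples below (TFS and smoothing factor are backend-specific knobs with no `generate()` equivalent):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v000000/Qwen2.5-Lumen-14B"  # assumed from the model-index metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write the opening line of a gothic romance."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=256, do_sample=True,
                     temperature=1.3, min_p=0.012, repetition_penalty=1.1)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```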
## Training Notes
Trained [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) for 2 epochs on an NVIDIA A100 with the dataset [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1), saving different checkpoints along the way (completely different runs at varying epochs and learning rates).
[Tanliboy](https://huggingface.co/tanliboy) trained [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) for 1 epoch on [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) (credit to Tanliboy; *check out the model [here](https://huggingface.co/tanliboy/lambda-qwen2.5-14b-dpo-test)*).
*All checkpoints were then mass-merged, with Qwen2.5-14B-Instruct as the base model.*
## Merge
* Merged *"Ultrafeedback-Binarized DPO"* and *"Gutenberg DPO"* with sophosympatheia's <b>SLERP</b> gradient.
* Merged *"Qwen2.5-14B-Instruct"* and *"Gutenberg DPO"* with sophosympatheia's <b>SLERP</b> gradient.
* Merged all <b>DPO checkpoints</b> and <b>SLERP</b> variations with <b>MODEL_STOCK</b> to exploit the geometric properties of the weights and retain the most *performant* aspects of all runs/merges. *Model Stock* was chosen because of the similarity between the merged models.
* This approach was taken because evaluation for *ORPO* runs is unclear, so it is hard to know which individual runs are best.
## One-Attempt generated example:
* *Temp 1.3 [1], Min_P 0.012 [4], TFS 0.97 [3], Smooth_Factor 0.3 [2], Smoothing_Curve 1.1, Rep 1.1, Rep Range 1000*
<details>
<summary> >[CLICK ME] Write the opening chapter of a romance novel set in the late 1800s, where the protagonist, a war-weary music star returning to England after an injury in Afghanistan service...</summary>
**Title: A Harp for the Heart**
*Chapter One: Homebound*
England, Winter 1893—
As the steam-powered ferry made its way across the turbulent waters of the Channel, each tremor that reverberated through the vessel felt like a calloused reminder to Edgar Thornton. His mind was already conjuring visions of his beloved London—the sprawling city draped in an ever-present blanket of melancholic grey. Yet within him lingered more pressing memories: the whizzing sound of artillery shells as they streaked across Afghan deserts, leaving behind billowing black smoke and devastating wreckage.
Edgar's injuries were visible, the ghostly scar of a saber cut marring one cheek; less evident, however, was the toll that service took on his spirit. Each warbling note from an opera played by some amateur on the ferry sliced into Edgar's hollow heart, echoing through him with a bitter lament—the song was *Adagio* and it meant *Slow down*. Only now did he dare allow himself to slow—decades since music had flowed through the seams of his body in this way.
For years he had been sought out, from England to New York, as the country's foremost harpist, not just for his fingers nimble on a range of scales but also for how each piece conveyed an insurmountable melancholy—a reflection of his solitary life and unending loneliness. Each recital saw thousands of faces that remembered his smiles or silences; yet today he had no one to return them to.
The ferry's horn blared as they approached Dover, cutting through Edgar’s reverie much the same as cannonade did in Kabul. He thought about London with mixed emotions—longing for the grandeur and splendor of its concert halls, a city vibrant under lamplight but devoid of comfort or real life; a home where he was no less alien than anywhere else.
A soft murmur drew his gaze to a young woman standing near the window, her back to him. Her blonde hair cascaded in waves, shimmering under the lantern light as if lit from within. Despite the chill of the approaching winter night, she appeared oblivious—entirely entranced by something on the far horizon or lost in thought.
His breath caught at a sensation that had been uncommon for years but familiar to his boyhood—a light flutter tickling just behind his ribcage. And as her shoulders drew back and her chin tipped upwards, he saw a hint of the same unspoken story reflected in her eyes: dreams big enough to leave them scarred by distance and time.
And then she spoke without ever turning around—
"Do you hear that?" she asked into the gathering darkness outside their window. "Is that London?"[...]
</details>
-----------------------------------
<details>
<summary> >[CLICK ME] Write the opening chapter of a dark fantasy book during the darkest age, where the protagonist, a war-weary woman is introduced returning to stand against the orcs and fight in what they falsely believe is the last battle. Introduce us as she is riding into her army to talk to some friends before they strike... </summary>
---
The night was pitch black with only the thinnest sliver of a moon slicing across the sky, barely enough to paint the horizon a sickly shade of greyish-blue. Below it stretched an expanse of rolling hills and forests that swayed like shadows in an eternal nightmare. The wind carried with it not the scents of living earth but the taint of the bloodied battlegrounds below, where for too long had men, women, and beasts clashed and bled.
Ariath rode amidst this gloomy spectacle atop a pale stallion named Ebon, a beast as weary and war-weary as she was. Her dark cloak billowed around her like an inky cloud, contrasting starkly with the barren landscape through which they traveled. She had not chosen such darkness—nor any choice at all; tonight, every decision from waking to fighting lay in someone else's cold, indifferent hands.
Ebon’s hooves stirred the heavy mud of long trampled fields, sinking silently into the earth as he treaded towards her rallying army. It was said that this night they would fight, not for honor nor country—no, those noble phantoms had vanished many a moon past—but to stand once more between the realm and its approaching destruction.
Her chestnut mare, a beauty in years past before every bone seemed like it could break from its own weight, whinnied quietly under Ariath's calming voice. Her friends awaited not far ahead; they were an eclectic bunch—half-brother Raen, the steadfast knight who fought by her side for almost all their battles, Elaeril the mage known more as a wielder of shadow spells than life-giving magic, and finally Meryn the roguish, whose sword was as quick as his wit.[...]
</details>
-----------------------------------
* *Temp 1.3, Min_P 0.012, Rep 1.1*
<details>
<summary> >[CLICK ME (<b>Disclaimer: highly developed prompt from dataset - best case scenario)</b>] Summary of previous portion of the novel: In the previous chapter, Anna Pavlovna is overwhelmed by a deep sense of fear and dread after her partner, Count Vronsky, has left. Despite trying to reach out to him with a note pleading for his return, she is informed that the count had driven off, and her message did not reach him. Struggling with her thoughts and barely able to find solace, Anna decides to send a telegram and visit Darya Alexandrovna to avoid succumbing to her overwhelming feelings. Her distress is palpable as she attempts to take action to distract herself from her anxiety and looks for any form of support or distraction to keep her from unraveling. Write the next chapter of a story in novel format that focuses on the internal and external conflicts of a woman named Anna. Detail her poignant reflections on her past, her struggle with her current sense of self-worth, and her interactions with family and society. Include flashbacks to her younger days, comparisons between her past and present, and vivid descriptions of her environment and society's norms that have led to her feelings of alienation. This chapter should showcase Anna's complex emotional state as she contemplates her next steps in life, hinting at recent life-altering decisions and their repercussions. It should contain interactions with characters such as Dolly and Kitty, reflecting the tensions and societal judgments she is grappling with.</summary>
### Chapter Seven
It was early morning, but the heavy clouds of melancholy still clung tightly to Anna's heart like a shroud over her spirit. The atmosphere outside mirrored this burden; rain fell softly, soaking the flower beds as mournful streaks rolled lazily across the gray sky in muted tones of despair.
Anna could not have taken her steps outwards to face the world until she had come to some kind of resolution within herself, even if it were no more than acceptance. She reflected deeply over how far away everything now seemed from what was once close and familiar. In every leaf trembling on a wet branch, in the low sighs of a wind that wandered lost without a purpose across the gray fields, she felt echoes of her inner turmoil.
She remembered her younger days. Life had then been filled with smiles, warmth and acceptance — her friends called her Annaarina, the spirit they saw in her so vividly, bright-eyed and kind. Their little gathering at Kitty’s had been an annual celebration for all four families: Oblonskys’, Shtcherbats’ys’, Karenins’, and Count Vronsky’s parents — each as familiar and loved a presence in that room as Anna.
But where were their faces now? Where was her happiness from those times, which only seemed to taunt the present?
"Here we are again,” she murmured in self-accusation with a mocking bitterness. The first snow had fallen softly upon them last winter then — and today it reminded her of something too lovely and cruel; something not allowed now in the rigid confines that life had chosen as its own path.
Determined to avoid being weighed down further, Anna prepared herself mentally to face Dolly, her dearest friend amongst women who seemed like a beacon among dim lights. In moments such as these she believed fully that somehow they could ease or even rid this dark fog from her heart — yet Anna felt apprehension nevertheless. Would their kinship and bond stand unchanged in light of recent circumstances? Perhaps now it would reveal cracks that Dolly was oblivious to, since the old Annaarina had been seen so rarely even then.
---
The carriage jolted through wet cobblestones as Anna finally made her way to Darya Alexandrovna’s house. The house stood there as warmly and welcomingly familiar with smoke still rising from chimneys, signaling signs of comfort within. Anna pulled off a shivering hand — dampness and cold seeped in merciless through fine silks.
Sneezing lightly into lace-lined white gloves, Anna steadied herself before rapping at the door. There would be no more room for her to hide; any attempt at self-preservation now lay broken between her heart and truth.
Anna stepped inside, greeted by the sight of Dolly busying about an unswept floor. For a fleeting moment, those familiarly busy hands paused — brows furrowed gently in concern even before their full attention met Anna.
"Why so early? Is there something...?" began Dolly warmly yet tentatively as she approached to take the delicate gloved hand.
Anna bit her lip at Darya's solicitous face. Would she now have to divulge her secrets of how utterly unhinged everything had become, or simply hint enough for suspicion?
"I — I need your advice," Anna whispered shyly, for there was no courage left to handle the world straight-up that day.
Dolly hugged the fragile figure and replied earnestly, "Speak with me here freely. No matter what is troubling you..."
That simple reassurance gave Anna courage. The despairing weight lessened as though carried a little by this soft-hearted soul.
“I have changed more than you would ever understand...” began Anna softly, but the words broke as unshed tears finally welled.
"Your eyes tell their story alone," soothed Dolly, patting gently her friend’s hand. “Do not be ashamed.”
In the wake of support and open hearts, Anna continued her revelation, pouring out all fears and hesitations — she who had been so content to live life sheltered inside, was suddenly exposed in full force by necessity alone; fragile glass cast into a brutal world.
---
At intervals throughout that day, Anna could scarcely believe how swiftly and faithfully Darya’s love proved the eternal rock to stand against crumbling shores. Amidst shared tea and comfort that night though shadows still lingered darkly between them.
When asked about Count Vronsky's presence with the family at gatherings or personal times now, Anna met this inquiry with a breathy sorrowful silence which spoke volumes to both the depth of their new estrangement, as well as how truly beloved the name was yet by Anna still in spite all waning prospects for resolution.
Daria continued to encourage her friend steadfastly; even shared letters and small consolations to penultimate messages destined to Count Vronsky alone. Each offered another grain's worth of hope for peace - both with him, herself, and beyond her personal dilemmas within that cloaked society so mercilessly turning its back against any form of irregularities in married life.
By the end of their meeting late evening shadows were upon them once more, and Anna reluctantly made to depart — a newly found strength (but still sorely tested) rippling across the troubled soul. However, as Anna prepared herself for returning to her solitary chambers, Dolly rose swiftly off the settee with concern laced in her tone.
“But we have barely scratched the surface of what might truly be affecting your heart.” The sympathy etched upon her face almost spoke a plea for understanding.
Anna reached over and took hands of compassionate warmth — eyes wide with gratefulness.
"I think perhaps it is all right to stay here for the night… while I gather myself," she confided quietly into Darya’s caring ears.[...]
</details>
-----------------------------------
<b>*As you can see, the model has mostly adapted to the intended response style from the Gutenberg dataset.*</b>
## Recipe
```yaml
models:
- model: v000000/Qwen2.5-14B-Gutenberg-1e-Delta
- model: v000000/Qwen2.5-14B-Gutenberg-0.6e-Sequential
- model: v000000/Qwen2.5-14B-Gutenberg-0.25e-Early
- model: v000000/Qwen2.5-14B-Gutenberg-2e-Sequential
- model: v000000/Qwen2.5-14B-Gutenberg-0.37e-Early
- model: v000000/Qwen2.5-14B-Gutenberg-2e-Zeta
- model: v000000/Qwen2.5-14B-Gutenberg-1e-Theta
- model: tanliboy/lambda-qwen2.5-14b-dpo-test
- model: v000000/Qwen2.5-14B-Gutenberg-1e-Delta
- model: tanliboy/lambda-qwen2.5-14b-dpo-test
- model: v000000/Qwen2.5-14B-Gutenberg-UltraLambda-Slerpeno
- model: v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno
base_model: v000000/Qwen2.5-14B-Gutenberg-1e-Delta
merge_method: model_stock
dtype: bfloat16
```
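The recipe above follows mergekit's YAML schema (`model_stock` is a mergekit merge method). Assuming mergekit, a minimal invocation sketch with illustrative file and output paths:

```bash
# Illustrative sketch: save the recipe above as recipe.yaml, then run the merge.
pip install mergekit
mergekit-yaml recipe.yaml ./Qwen2.5-Lumen-14B-merged --cuda
```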
*If your use case is character-based roleplay, please consider using the prompts below for an enhanced experience.*
* [For realistic RP/non-RPGs - MarinaraSpaghetti ChatML Customized](https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main/Customized/ChatML)
* [For freeform RP/RPGs - MarinaraSpaghetti ChatML Basic](https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main/Basic/ChatML)
### Fine-tune and merge
This model is a merge and fine-tune of pre-trained language models.
### Models Merged
[Arxiv 2403.19522](https://arxiv.org/abs/2403.19522)
The following models were included in the merge:
* v000000/Qwen2.5-14B-Gutenberg-1e-Delta
* v000000/Qwen2.5-14B-Gutenberg-0.6e-Sequential
* v000000/Qwen2.5-14B-Gutenberg-0.25e-Early
* v000000/Qwen2.5-14B-Gutenberg-2e-Sequential
* v000000/Qwen2.5-14B-Gutenberg-0.37e-Early
* v000000/Qwen2.5-14B-Gutenberg-2e-Zeta
* v000000/Qwen2.5-14B-Gutenberg-1e-Theta
* v000000/Qwen2.5-14B-Gutenberg-UltraLambda-Slerpeno
* v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno
* tanliboy/lambda-qwen2.5-14b-dpo-test
-------------------------------------------------------------------------------
- Context length: full 131,072 tokens, with generation up to 8,192 tokens
- Qwen2 (ChatML) prompt format
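A minimal 🤗 Transformers inference sketch is shown below; it assumes the ChatML chat template shipped with the tokenizer, and the prompt and sampling values are illustrative only:

```python
# Minimal sketch: ChatML inference with Qwen2.5-Lumen-14B.
# Prompt and sampling settings are illustrative, not recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v000000/Qwen2.5-Lumen-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write the opening line of a gothic novel."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=1.0, min_p=0.012)  # min_p needs a recent transformers release
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```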
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_v000000__Qwen2.5-Lumen-14B)
| Metric |Value|
|-------------------|----:|
|Avg. |32.20|
|IFEval (0-Shot) |80.64|
|BBH (3-Shot) |48.51|
|MATH Lvl 5 (4-Shot)| 0.00|
|GPQA (0-shot) |10.40|
|MuSR (0-shot) |10.29|
|MMLU-PRO (5-shot) |43.36|
|
jae-xe/fae_ink_flux_lora
|
jae-xe
| 2024-11-05T14:26:59Z | 66 | 3 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-03T19:14:36Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: fae_ink
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# fae_ink




<Gallery />
## Model description
Colorful & impactful fantasy ink illustrations.
## Trigger words
You should use `fae_ink` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/jae-xe/fae_ink_flux_lora/tree/main) them in the Files & versions tab.
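Pending an official snippet, a minimal 🧨 diffusers sketch mirroring other FLUX.1-dev LoRA cards; the weight filename is left to autodetection and the prompt is illustrative:

```py
# Sketch: apply the fae_ink LoRA to FLUX.1-dev.
# Weight filename autodetection is assumed; check the Files tab if loading fails.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('jae-xe/fae_ink_flux_lora')
image = pipeline('fae_ink, a fox spirit rendered in vivid fantasy ink').images[0]
image.save('fae_ink_sample.png')
```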
|
janzuromski/bert-finetuned-archeology
|
janzuromski
| 2024-11-05T14:21:36Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-05T13:55:17Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-archeology
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-archeology
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2525
- Precision: 0.5038
- Recall: 0.6903
- F1: 0.5825
- Accuracy: 0.9453
## Model description
More information needed
## Intended uses & limitations
More information needed
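Pending official documentation, a minimal inference sketch with the 🤗 `pipeline` API follows; the example sentence is illustrative, and the entity labels depend on the unspecified training data:

```python
# Sketch: token classification with this checkpoint.
# Example text is illustrative; label names depend on the training data.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="janzuromski/bert-finetuned-archeology",
    aggregation_strategy="simple",  # merge subword tokens into entity spans
)
print(ner("Excavations at Biskupin uncovered artefacts of the Lusatian culture."))
```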
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 249 | 0.2127 | 0.4992 | 0.6379 | 0.5601 | 0.9468 |
| No log | 2.0 | 498 | 0.2323 | 0.5254 | 0.6811 | 0.5932 | 0.9477 |
| 0.0737 | 3.0 | 747 | 0.2525 | 0.5038 | 0.6903 | 0.5825 | 0.9453 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.2.2
- Datasets 3.1.0
- Tokenizers 0.20.1
|
janbakker/o-conbart-cochraneauto
|
janbakker
| 2024-11-05T14:20:59Z | 33 | 0 |
transformers
|
[
"transformers",
"pytorch",
"context-bart",
"doc_simp",
"cochrane-auto",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-07-09T15:35:33Z |
---
language:
- en
tags:
- doc_simp
- cochrane-auto
---
# O->ConBART document simplification model
This is a plan-guided context-aware BART model pretrained on Cochrane-auto.
The [doc_simp](https://github.com/JanB100/doc_simp) library should be used to interface with this model.
|
dariast/prism
|
dariast
| 2024-11-05T14:19:59Z | 10 | 1 | null |
[
"safetensors",
"prism",
"ar",
"bg",
"bn",
"ca",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"eo",
"fi",
"fr",
"he",
"hr",
"hu",
"id",
"it",
"ja",
"kk",
"lt",
"lv",
"mk",
"nl",
"no",
"pl",
"pt",
"ro",
"ru",
"sk",
"sl",
"sq",
"sr",
"sv",
"tr",
"uk",
"vi",
"zh",
"license:mit",
"region:us"
] | null | 2024-10-26T17:35:25Z |
---
license: mit
language:
- ar
- bg
- bn
- ca
- cs
- da
- de
- el
- en
- es
- et
- eo
- fi
- fr
- he
- hr
- hu
- id
- it
- ja
- kk
- lt
- lv
- mk
- nl
- 'no'
- pl
- pt
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- tr
- uk
- vi
- zh
---
# PRISM Model for Multilingual Machine Translation
This repository contains the `Prism` model, a multilingual neural machine translation (NMT) system. The `Prism` model supports translation across 39 languages.
The model was trained with a focus on multilingual performance, excelling in tasks such as translation quality estimation and evaluation, making it a versatile choice for research and practical use in various language pairs.
It was introduced in this [paper](https://aclanthology.org/2020.emnlp-main.8.pdf) and first released in [this](https://github.com/thompsonb/prism/tree/master) repository.
## Model Description
The `Prism` model was designed to be a lexically/syntactically unbiased paraphraser. The core idea is to treat paraphrasing as a zero-shot translation task, which allows the model to cover a wide range of languages effectively.
### BLEU Score Performance
Based on the research paper, the `Prism` model achieved competitive or superior performance across various language pairs in the WMT 2019 shared metrics task. It outperformed existing evaluation metrics in many cases, showing robustness in both high-resource and low-resource settings.
## Installation
To use `PrismTokenizer`, ensure that the `sentencepiece` package is installed, as it is a required dependency for handling multilingual tokenization.
```bash
pip install sentencepiece
```
## Usage Example
```python
from transformers import PrismForConditionalGeneration, PrismTokenizer
uk_text = "Життя як коробка шоколаду"
ja_text = "人生はチョコレートの箱のようなもの。"
model = PrismForConditionalGeneration.from_pretrained("dariast/prism")
tokenizer = PrismTokenizer.from_pretrained("dariast/prism")
# Translate Ukrainian to French
tokenizer.src_lang = "uk"
encoded_uk = tokenizer(uk_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_uk, forced_bos_token_id=tokenizer.get_lang_id("fr"), max_new_tokens=20)
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
# => 'La vie comme une boîte de chocolat.'
# Translate Japanese to English
tokenizer.src_lang = "ja"
encoded_ja = tokenizer(ja_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_ja, forced_bos_token_id=tokenizer.get_lang_id("en"), max_new_tokens=20)
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
# => 'Life is like a box of chocolate.'
```
## Languages Covered
Albanian (sq), Arabic (ar), Bengali (bn), Bulgarian (bg), Catalan; Valencian (ca), Chinese (zh), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Esperanto (eo), Estonian (et), Finnish (fi), French (fr), German (de), Greek, Modern (el), Hebrew (modern) (he), Hungarian (hu), Indonesian (id), Italian (it), Japanese (ja), Kazakh (kk), Latvian (lv), Lithuanian (lt), Macedonian (mk), Norwegian (no), Polish (pl), Portuguese (pt), Romanian, Moldovan (ro), Russian (ru), Serbian (sr), Slovak (sk), Slovene (sl), Spanish; Castilian (es), Swedish (sv), Turkish (tr), Ukrainian (uk), Vietnamese (vi).
## Citation
If you use this model in your research, please cite the original paper:
```
@inproceedings{thompson-post-2020-automatic,
title={Automatic Machine Translation Evaluation in Many Languages via Zero-Shot Paraphrasing},
author={Brian Thompson and Matt Post},
year={2020},
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF
|
HuggingFaceTB
| 2024-11-05T14:19:42Z | 1,186 | 34 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:quantized:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-31T20:35:06Z |
---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- llama-cpp
- gguf-my-repo
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
pipeline_tag: text-generation
---
# ngxson/SmolLM2-1.7B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`HuggingFaceTB/SmolLM2-1.7B-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ngxson/SmolLM2-1.7B-Instruct-Q4_K_M-GGUF --hf-file smollm2-1.7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ngxson/SmolLM2-1.7B-Instruct-Q4_K_M-GGUF --hf-file smollm2-1.7b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ngxson/SmolLM2-1.7B-Instruct-Q4_K_M-GGUF --hf-file smollm2-1.7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ngxson/SmolLM2-1.7B-Instruct-Q4_K_M-GGUF --hf-file smollm2-1.7b-instruct-q4_k_m.gguf -c 2048
```
|
moreh/MoMo-72B-lora-1.8.7-DPO
|
moreh
| 2024-11-05T14:17:08Z | 2,629 | 68 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2305.18290",
"arxiv:2106.09685",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T13:13:18Z |
---
license: mit
language:
- en
metrics:
- accuracy
library_name: transformers
---
# 24/04/05 update
We introduce [Moreh AI Model Hub with AMD GPU](https://model-hub.moreh.io/), an AI model hosting platform powered by AMD MI250 GPUs.
You can now test live inference of this model at the Moreh AI Model Hub.
# **Introduction**
MoMo-72B-lora-1.8.7-DPO is trained via Direct Preference Optimization([DPO](https://arxiv.org/abs/2305.18290)) from [MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) as its base model, with several optimizations in hyperparameters.
[MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base-model.
Note that we did not exploit any form of weight merge.
For the leaderboard submission, the trained weights are realigned for compatibility with the Llama architecture.
MoMo-72B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models, and AMD's MI250 GPU.
## Details
### Used Libraries
- torch
- peft
### Used Datasets
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- [truthy](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1)
- [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- No other dataset was used
- No benchmark test set or its training set was used
- [data contamination check](https://github.com/swj0419/detect-pretrain-code-contamination) result
| Model | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **V1.8.7 (result < 0.1, %)** | TBU | TBU | 0.44 | 0.47 |
### Used Environments
- AMD MI250 & MoAI platform
- Please visit https://moreh.io/product for more information about MoAI platform
- Or, contact us directly [[email protected]](mailto:[email protected])
## How to use
```python
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-lora-1.8.7-DPO")
model = AutoModelForCausalLM.from_pretrained(
"moreh/MoMo-72B-lora-1.8.7-DPO"
)
```
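A follow-up generation sketch (the prompt is illustrative and not from the card; for a 72B model you will likely also want `device_map="auto"` and a reduced-precision dtype):

```python
# Illustrative continuation of the snippet above.
inputs = tokenizer("What is direct preference optimization?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```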
|
Atharva26/tiny-bert-finetuned-misselling
|
Atharva26
| 2024-11-05T14:06:54Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-04T06:53:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
raved99/flux-lora-argos1
|
raved99
| 2024-11-05T14:02:34Z | 5 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-05T14:02:31Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ARG1
---
# Flux Lora Argos1
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ARG1` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('raved99/flux-lora-argos1', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
DanJoshua/estudiante_MC318_VIOPERU
|
DanJoshua
| 2024-11-05T13:57:18Z | 51 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-11-05T12:32:50Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: estudiante_MC318_VIOPERU
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# estudiante_MC318_VIOPERU
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5975
- Accuracy: 0.75
- F1: 0.7499
- Precision: 0.7503
- Recall: 0.75
- Roc Auc: 0.8064
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 21
- training_steps: 210
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
| 0.6651 | 1.0095 | 10 | 0.7186 | 0.3214 | 0.2432 | 0.1957 | 0.3214 | 0.3023 |
| 0.6966 | 2.0190 | 20 | 0.7151 | 0.375 | 0.3267 | 0.3247 | 0.375 | 0.3724 |
| 0.6205 | 3.0286 | 30 | 0.7197 | 0.4643 | 0.4385 | 0.4562 | 0.4643 | 0.4554 |
| 0.6278 | 4.0381 | 40 | 0.7118 | 0.4821 | 0.4614 | 0.4789 | 0.4821 | 0.4994 |
| 0.5792 | 6.0095 | 50 | 0.6974 | 0.5536 | 0.5465 | 0.5571 | 0.5536 | 0.5357 |
| 0.5719 | 7.0190 | 60 | 0.6983 | 0.5893 | 0.5860 | 0.5922 | 0.5893 | 0.5638 |
| 0.5582 | 8.0286 | 70 | 0.6902 | 0.6071 | 0.6026 | 0.6123 | 0.6071 | 0.5912 |
| 0.5299 | 9.0381 | 80 | 0.6977 | 0.5893 | 0.5828 | 0.5952 | 0.5893 | 0.6059 |
| 0.5262 | 11.0095 | 90 | 0.6972 | 0.6429 | 0.6424 | 0.6436 | 0.6429 | 0.6224 |
| 0.4854 | 12.0190 | 100 | 0.6974 | 0.625 | 0.6239 | 0.6265 | 0.625 | 0.6416 |
| 0.463 | 13.0286 | 110 | 0.6561 | 0.5893 | 0.5892 | 0.5894 | 0.5893 | 0.6556 |
| 0.4543 | 14.0381 | 120 | 0.6376 | 0.625 | 0.6239 | 0.6265 | 0.625 | 0.6696 |
| 0.4127 | 16.0095 | 130 | 0.6716 | 0.6964 | 0.6916 | 0.7095 | 0.6964 | 0.6849 |
| 0.4125 | 17.0190 | 140 | 0.6945 | 0.6429 | 0.6424 | 0.6436 | 0.6429 | 0.6926 |
| 0.3985 | 18.0286 | 150 | 0.6841 | 0.6786 | 0.6769 | 0.6823 | 0.6786 | 0.7092 |
| 0.3954 | 19.0381 | 160 | 0.6239 | 0.6786 | 0.6786 | 0.6786 | 0.6786 | 0.7105 |
| 0.3474 | 21.0095 | 170 | 0.6424 | 0.7143 | 0.7139 | 0.7154 | 0.7143 | 0.7092 |
| 0.3339 | 22.0190 | 180 | 0.6594 | 0.7143 | 0.7139 | 0.7154 | 0.7143 | 0.7117 |
| 0.3295 | 23.0286 | 190 | 0.7352 | 0.6429 | 0.6424 | 0.6436 | 0.6429 | 0.7079 |
| 0.3323 | 24.0381 | 200 | 0.6903 | 0.6607 | 0.6606 | 0.6609 | 0.6607 | 0.7117 |
| 0.2761 | 26.0095 | 210 | 0.6788 | 0.6607 | 0.6606 | 0.6609 | 0.6607 | 0.7181 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.0.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.2
|
GiovaDag/donut-base-sroie
|
GiovaDag
| 2024-11-05T13:57:09Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:AdamCodd/donut-receipts-extract",
"base_model:finetune:AdamCodd/donut-receipts-extract",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-05-08T12:55:24Z |
---
license: cc-by-nc-4.0
base_model: AdamCodd/donut-receipts-extract
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [AdamCodd/donut-receipts-extract](https://huggingface.co/AdamCodd/donut-receipts-extract) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
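Pending official documentation, a minimal inference sketch based on the standard Donut API is shown below; the task prompt token is an assumption carried over from receipt-extraction Donut models and may differ for this fine-tune:

```python
# Sketch: document parsing with this Donut checkpoint.
# The task prompt "<s_receipt>" is an assumption; verify it against the tokenizer's special tokens.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("GiovaDag/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("GiovaDag/donut-base-sroie")

image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_receipt>"  # assumption, not confirmed by the card
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```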
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cpu
- Datasets 2.19.1
- Tokenizers 0.19.1
|
MarketLLM/krx_Qwen2-7B-Instruct-market_mod_004
|
MarketLLM
| 2024-11-05T13:49:08Z | 5 | 0 | null |
[
"safetensors",
"qwen2",
"krx",
"license:apache-2.0",
"region:us"
] | null | 2024-11-05T10:45:13Z |
---
license: apache-2.0
tags:
- krx
---
|
nhatminh/e5-pos-aware-fixed
|
nhatminh
| 2024-11-05T13:43:45Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-11-05T13:42:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Anteia/Qwen2-7B-Instruct-v3
|
Anteia
| 2024-11-05T13:39:22Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"krx",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T13:16:40Z |
---
library_name: transformers
tags:
- krx
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/KunoichiLake-2x7b-i1-GGUF
|
mradermacher
| 2024-11-05T13:38:09Z | 117 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:macadeliccc/KunoichiLake-2x7b",
"base_model:quantized:macadeliccc/KunoichiLake-2x7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-05T11:24:46Z |
---
base_model: macadeliccc/KunoichiLake-2x7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/macadeliccc/KunoichiLake-2x7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
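For example, a single-file quant from the table below can be fetched and run directly with a recent llama.cpp build (quant choice and prompt are illustrative):

```bash
# Illustrative: download and run the Q4_K_M imatrix quant with llama.cpp.
llama-cli --hf-repo mradermacher/KunoichiLake-2x7b-i1-GGUF \
  --hf-file KunoichiLake-2x7b.i1-Q4_K_M.gguf \
  -p "Write a short story about a wandering swordswoman."
```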
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-IQ1_M.gguf) | i1-IQ1_M | 3.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-IQ2_S.gguf) | i1-IQ2_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-IQ2_M.gguf) | i1-IQ2_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.4 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.4 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.4 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q4_0.gguf) | i1-Q4_0 | 7.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF/resolve/main/KunoichiLake-2x7b.i1-Q6_K.gguf) | i1-Q6_K | 10.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
glif-loradex-trainer/i12bp8_appelsiensam_light_art_v1
|
glif-loradex-trainer
| 2024-11-05T13:26:25Z | 39 | 3 |
diffusers
|
[
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] |
text-to-image
| 2024-11-05T13:25:47Z |
---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730813086984__000003000_0.jpg
text: LGHTRT_PPLSNSM skeleton
- output:
url: samples/1730813111889__000003000_1.jpg
text: LGHTRT_PPLSNSM a cat
- output:
url: samples/1730813136528__000003000_2.jpg
text: LGHTRT_PPLSNSM a bike
base_model: black-forest-labs/FLUX.1-dev
trigger: LGHTRT_PPLSNSM
instance_prompt: LGHTRT_PPLSNSM
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# appelsiensam_light_art_v1
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `i12bp8`.
<Gallery />
## Trigger words
You should use `LGHTRT_PPLSNSM` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/i12bp8_appelsiensam_light_art_v1/tree/main) them in the Files & versions tab.
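Pending an official snippet, a minimal 🧨 diffusers sketch using the trigger word; the weight filename is left to autodetection and the prompt mirrors the sample widgets above:

```py
# Sketch: apply this LoRA to FLUX.1-dev; the trigger word goes in the prompt.
# Weight filename autodetection is assumed; check the Files tab if loading fails.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('glif-loradex-trainer/i12bp8_appelsiensam_light_art_v1')
image = pipeline('LGHTRT_PPLSNSM a cat').images[0]
image.save('light_art_sample.png')
```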
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
jw-hf-test/jw-14B-215
|
jw-hf-test
| 2024-11-05T13:22:06Z | 463 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T13:18:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lelopees/opt-125m-gptq
|
lelopees
| 2024-11-05T13:18:51Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-05T13:18:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
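Until the authors fill this in, one plausible sketch for loading a 4-bit GPTQ OPT checkpoint with 🤗 Transformers (assumptions: `optimum` plus a GPTQ backend such as `auto-gptq` are installed and a CUDA GPU is available; untested against this specific repo):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPTQ weights are dequantized on the fly during the forward pass.
model = AutoModelForCausalLM.from_pretrained("lelopees/opt-125m-gptq", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("lelopees/opt-125m-gptq")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```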
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
janbakker/o-conbart-wikiauto
|
janbakker
| 2024-11-05T13:14:42Z | 36 | 0 |
transformers
|
[
"transformers",
"pytorch",
"context-bart",
"doc_simp",
"wiki-auto",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T14:19:32Z |
---
language:
- en
tags:
- doc_simp
- wiki-auto
---
# O->ConBART document simplification system
This is the plan-guided context-aware BART model pretrained on wiki-auto.
It accompanies the paper [Beyond Sentence-level Text Simplification: Reproducibility Study of Context-Aware Document Simplification](https://aclanthology.org/2024.determit-1.3/).
The [doc_simp](https://github.com/JanB100/doc_simp) library should be used to interface with this model.
|
danigambit/M_ep0_run0_llama3.1-8b_wiki_doc1000_tok25
|
danigambit
| 2024-11-05T13:14:19Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-05T13:11:37Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
janbakker/o-bartpara-wikiauto
|
janbakker
| 2024-11-05T13:13:15Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"doc_simp",
"wiki-auto",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-12T14:18:43Z |
---
language:
- en
tags:
- doc_simp
- wiki-auto
---
# O->BART_para document simplification system
This is the plan-guided paragraph-level BART model pretrained on wiki-auto.
It accompanies the paper [Beyond Sentence-level Text Simplification: Reproducibility Study of Context-Aware Document Simplification](https://aclanthology.org/2024.determit-1.3/).
The [doc_simp](https://github.com/JanB100/doc_simp) library should be used to interface with this model.
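For a quick smoke test without doc_simp, the checkpoint can also be loaded as a plain BART seq2seq model (a sketch; it assumes the repository ships tokenizer files, and plan guidance is not applied this way):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("janbakker/o-bartpara-wikiauto")
tokenizer = AutoTokenizer.from_pretrained("janbakker/o-bartpara-wikiauto")

# Simplify one paragraph; for plan-guided inference use doc_simp instead.
inputs = tokenizer("The aforementioned municipality was subsequently incorporated.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```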
|
janbakker/pg-dyn-wikiauto
|
janbakker
| 2024-11-05T13:12:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"doc_simp",
"wiki-auto",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-12T14:18:17Z |
---
language:
- en
tags:
- doc_simp
- wiki-auto
---
# PG_dyn document simplification system
This is the plan-guided sentence-level BART model pretrained on wiki-auto.
It accompanies the paper [Beyond Sentence-level Text Simplification: Reproducibility Study of Context-Aware Document Simplification](https://aclanthology.org/2024.determit-1.3/).
The [doc_simp](https://github.com/JanB100/doc_simp) library should be used to interface with this model.
|
danigambit/M_ep0_run0_llama3.1-8b_wiki_doc100_tok25
|
danigambit
| 2024-11-05T13:10:59Z | 130 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-05T13:08:14Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
janbakker/leddoc-wikiauto
|
janbakker
| 2024-11-05T13:06:25Z | 90 | 0 |
transformers
|
[
"transformers",
"pytorch",
"led",
"text2text-generation",
"doc_simp",
"wiki-auto",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-12T13:34:47Z |
---
language:
- en
tags:
- doc_simp
- wiki-auto
---
# LED_doc document simplification model
This is a document-level LED model pretrained on wiki-auto.
It accompanies the paper [Beyond Sentence-level Text Simplification: Reproducibility Study of Context-Aware Document Simplification](https://aclanthology.org/2024.determit-1.3/).
The [doc_simp](https://github.com/JanB100/doc_simp) library should be used to interface with this model.
|
GustawB/albert-finetuned-ner
|
GustawB
| 2024-11-05T12:59:21Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-05T12:59:10Z |
---
library_name: transformers
license: apache-2.0
base_model: albert-base-v2
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: albert-finetuned-ner-gbgb
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.5032151387102701
- name: Recall
type: recall
value: 0.46095590710198586
- name: F1
type: f1
value: 0.4811594202898551
- name: Accuracy
type: accuracy
value: 0.8898127980220168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-finetuned-ner-gbgb
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3371
- Precision: 0.5032
- Recall: 0.4610
- F1: 0.4812
- Accuracy: 0.8898
## Model description
More information needed
## Intended uses & limitations
More information needed
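A minimal usage sketch with the 🤗 `pipeline` API (the label set follows conll2003: PER, ORG, LOC, MISC):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="GustawB/albert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Hugging Face was founded in New York City."))
```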
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5379 | 1.0 | 1756 | 0.4843 | 0.4079 | 0.2740 | 0.3278 | 0.8502 |
| 0.3491 | 2.0 | 3512 | 0.3726 | 0.4903 | 0.3837 | 0.4305 | 0.8778 |
| 0.26 | 3.0 | 5268 | 0.3371 | 0.5032 | 0.4610 | 0.4812 | 0.8898 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cpu
- Datasets 3.1.0
- Tokenizers 0.20.2
|
kharshita590/qwen-careerr
|
kharshita590
| 2024-11-05T12:57:56Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T12:57:28Z |
---
base_model: unsloth/qwen2.5-0.5b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** kharshita590
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-0.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
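A minimal inference sketch with 🤗 Transformers (assumptions: the merged checkpoint ships the Qwen2.5 chat template and `accelerate` is installed for `device_map="auto"`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("kharshita590/qwen-careerr", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("kharshita590/qwen-careerr")

messages = [{"role": "user", "content": "Suggest career paths for a data analyst."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens and print only the generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```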
|
MLking2/data_helper
|
MLking2
| 2024-11-05T12:55:38Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T12:53:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
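Until the authors document usage, a hedged sketch for a Llama-family chat checkpoint (assumes the repo ships a tokenizer with a chat template; untested):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("MLking2/data_helper", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("MLking2/data_helper")

messages = [{"role": "user", "content": "What kind of data questions can you help with?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```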
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KR-X-AI/krx-qwen2-7b-instruct-v2-m
|
KR-X-AI
| 2024-11-05T12:54:46Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-04T11:55:39Z |
---
base_model: unsloth/qwen2-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KR-X-AI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KR-X-AI/krx-qwen2-7b-instruct-v2-m-20k
|
KR-X-AI
| 2024-11-05T12:54:24Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"conversational",
"en",
"base_model:KR-X-AI/krx-qwen2-7b-instruct-v2-m",
"base_model:finetune:KR-X-AI/krx-qwen2-7b-instruct-v2-m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-04T15:34:24Z |
---
base_model: KR-X-AI/krx-qwen2-7b-instruct-v2-m
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
---
# Uploaded model
- **Developed by:** KR-X-AI
- **License:** apache-2.0
- **Finetuned from model:** KR-X-AI/krx-qwen2-7b-instruct-v2-m
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shreyasdesaisuperU/whisper-large-attempt1
|
shreyasdesaisuperU
| 2024-11-05T12:53:58Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large",
"base_model:finetune:openai/whisper-large",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-05T10:14:16Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-large
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Large SSD superU
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large SSD superU
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2685
- Wer: 166.6349
## Model description
More information needed
## Intended uses & limitations
More information needed
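Given the evaluation WER above 100, transcriptions should be treated with caution. A minimal sketch using the ASR pipeline (`sample.wav` is a placeholder path; audio decoding requires ffmpeg):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="shreyasdesaisuperU/whisper-large-attempt1",
)
print(asr("sample.wav")["text"])  # transcribe a local audio file
```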
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 4.1121 | 3.125 | 100 | 3.5671 | 154.6120 |
| 2.6613 | 6.25 | 200 | 2.8860 | 158.7150 |
| 1.8679 | 9.375 | 300 | 2.8342 | 143.7977 |
| 1.1096 | 12.5 | 400 | 3.0283 | 167.7163 |
| 0.563 | 15.625 | 500 | 3.2773 | 167.3982 |
| 0.2032 | 18.75 | 600 | 3.4815 | 167.4618 |
| 0.0899 | 21.875 | 700 | 3.6164 | 151.9720 |
| 0.0431 | 25.0 | 800 | 3.7659 | 154.4211 |
| 0.0262 | 28.125 | 900 | 3.8327 | 188.4860 |
| 0.0264 | 31.25 | 1000 | 3.8547 | 173.1234 |
| 0.0118 | 34.375 | 1100 | 3.9458 | 184.9237 |
| 0.0076 | 37.5 | 1200 | 4.0480 | 178.3079 |
| 0.0036 | 40.625 | 1300 | 4.1518 | 159.7964 |
| 0.0014 | 43.75 | 1400 | 4.1739 | 164.6310 |
| 0.0011 | 46.875 | 1500 | 4.2014 | 173.6641 |
| 0.001 | 50.0 | 1600 | 4.2262 | 147.2646 |
| 0.001 | 53.125 | 1700 | 4.2510 | 159.1921 |
| 0.0009 | 56.25 | 1800 | 4.2570 | 168.0025 |
| 0.0009 | 59.375 | 1900 | 4.2650 | 166.7621 |
| 0.0008 | 62.5 | 2000 | 4.2685 | 166.6349 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mfuntowicz/SmolLM2-360M-Instruct-Q4_K_M-GGUF
|
mfuntowicz
| 2024-11-05T12:51:19Z | 12 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:HuggingFaceTB/SmolLM2-360M-Instruct",
"base_model:quantized:HuggingFaceTB/SmolLM2-360M-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-05T12:51:15Z |
---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- llama-cpp
- gguf-my-repo
base_model: HuggingFaceTB/SmolLM2-360M-Instruct
---
# mfuntowicz/SmolLM2-360M-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`HuggingFaceTB/SmolLM2-360M-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mfuntowicz/SmolLM2-360M-Instruct-Q4_K_M-GGUF --hf-file smollm2-360m-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mfuntowicz/SmolLM2-360M-Instruct-Q4_K_M-GGUF --hf-file smollm2-360m-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo mfuntowicz/SmolLM2-360M-Instruct-Q4_K_M-GGUF --hf-file smollm2-360m-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo mfuntowicz/SmolLM2-360M-Instruct-Q4_K_M-GGUF --hf-file smollm2-360m-instruct-q4_k_m.gguf -c 2048
```
|
featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF
|
featherless-ai-quants
| 2024-11-05T12:49:53Z | 5 | 0 | null |
[
"gguf",
"text-generation",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T11:41:16Z |
---
base_model: adonlee-Mistral_7B_SFT_DPO_v0
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# adonlee-Mistral_7B_SFT_DPO_v0 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [adonlee-Mistral_7B_SFT_DPO_v0-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF/blob/main/adonlee-Mistral_7B_SFT_DPO_v0-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [adonlee-Mistral_7B_SFT_DPO_v0-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF/blob/main/adonlee-Mistral_7B_SFT_DPO_v0-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [adonlee-Mistral_7B_SFT_DPO_v0-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF/blob/main/adonlee-Mistral_7B_SFT_DPO_v0-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [adonlee-Mistral_7B_SFT_DPO_v0-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF/blob/main/adonlee-Mistral_7B_SFT_DPO_v0-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [adonlee-Mistral_7B_SFT_DPO_v0-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF/blob/main/adonlee-Mistral_7B_SFT_DPO_v0-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [adonlee-Mistral_7B_SFT_DPO_v0-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF/blob/main/adonlee-Mistral_7B_SFT_DPO_v0-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [adonlee-Mistral_7B_SFT_DPO_v0-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF/blob/main/adonlee-Mistral_7B_SFT_DPO_v0-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [adonlee-Mistral_7B_SFT_DPO_v0-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF/blob/main/adonlee-Mistral_7B_SFT_DPO_v0-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [adonlee-Mistral_7B_SFT_DPO_v0-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF/blob/main/adonlee-Mistral_7B_SFT_DPO_v0-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [adonlee-Mistral_7B_SFT_DPO_v0-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF/blob/main/adonlee-Mistral_7B_SFT_DPO_v0-Q6_K.gguf) | 5666.79 MB |
| Q8_0 | [adonlee-Mistral_7B_SFT_DPO_v0-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF/blob/main/adonlee-Mistral_7B_SFT_DPO_v0-Q8_0.gguf) | 7339.34 MB |
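One way to fetch a single quant file programmatically (a sketch using `huggingface_hub`; the Q4_K_M file is picked here as a common size/quality trade-off):
```python
from huggingface_hub import hf_hub_download

# Download the ~4.1 GB Q4_K_M quant into the local HF cache.
path = hf_hub_download(
    repo_id="featherless-ai-quants/adonlee-Mistral_7B_SFT_DPO_v0-GGUF",
    filename="adonlee-Mistral_7B_SFT_DPO_v0-Q4_K_M.gguf",
)
print(path)  # pass this file to llama.cpp or another GGUF runtime
```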
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
OleehyO/TexTeller_en
|
OleehyO
| 2024-11-05T12:44:09Z | 53 | 0 | null |
[
"safetensors",
"vision-encoder-decoder",
"license:apache-2.0",
"region:us"
] | null | 2024-11-05T09:51:50Z |
---
license: apache-2.0
---
|
mradermacher/Wiederchat-7b-dpo-i1-GGUF
|
mradermacher
| 2024-11-05T12:41:07Z | 44 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/OmniTruthyBeagle-7B-v0",
"mayflowergmbh/Wiedervereinigung-7b-dpo-laser",
"cognitivecomputations/openchat-3.5-0106-laser",
"en",
"base_model:mayflowergmbh/Wiederchat-7b-dpo",
"base_model:quantized:mayflowergmbh/Wiederchat-7b-dpo",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-05T09:56:42Z |
---
base_model: mayflowergmbh/Wiederchat-7b-dpo
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/OmniTruthyBeagle-7B-v0
- mayflowergmbh/Wiedervereinigung-7b-dpo-laser
- cognitivecomputations/openchat-3.5-0106-laser
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mayflowergmbh/Wiederchat-7b-dpo
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Wiederchat-7b-dpo-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
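As a concrete starting point, a sketch that fetches the single-file Q4_K_M quant from this repo with `huggingface_hub` (multi-part files additionally need the concatenation step described in those READMEs):
```python
from huggingface_hub import hf_hub_download

# Download the "fast, recommended" i1-Q4_K_M quant from the table below.
path = hf_hub_download(
    repo_id="mradermacher/Wiederchat-7b-dpo-i1-GGUF",
    filename="Wiederchat-7b-dpo.i1-Q4_K_M.gguf",
)
print(path)  # load with llama.cpp or any GGUF-compatible runtime
```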
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Wiederchat-7b-dpo-i1-GGUF/resolve/main/Wiederchat-7b-dpo.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
winstonallo/e8-bert-prod-data-87938_rows-augmented-hardcore-preprocessed
|
winstonallo
| 2024-11-05T12:39:14Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T09:39:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
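Until the authors document usage, a hedged sketch with the `text-classification` pipeline (the label names are whatever the authors stored in the model config; untested):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="winstonallo/e8-bert-prod-data-87938_rows-augmented-hardcore-preprocessed",
)
print(clf("Example input sentence."))  # [{'label': ..., 'score': ...}]
```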
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ASIF-Mahmud1/finetuned_far_bert_model
|
ASIF-Mahmud1
| 2024-11-05T12:21:11Z | 575 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-11-05T12:13:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
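Since the card leaves this section unfilled, below is a minimal sketch of loading the checkpoint for feature extraction with 🤗 Transformers. Only the repo id is taken from the card's metadata; the example input and the mean-pooling step are illustrative assumptions, not documented behavior of this model.

```python
from transformers import AutoTokenizer, AutoModel
import torch

checkpoint = "ASIF-Mahmud1/finetuned_far_bert_model"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

inputs = tokenizer("Example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single sentence vector
# (a common default for feature extraction; assumed, not specified by the card).
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # (1, hidden_size)
```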
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
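The regime itself is unreported; purely as an illustration of one of the options listed above (bf16 mixed precision), it would be configured with the Transformers `Trainer` like this:

```python
from transformers import TrainingArguments

# Illustrative only: enables bf16 mixed precision; all other settings are defaults.
args = TrainingArguments(output_dir="out", bf16=True)
```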
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
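As a rough guide to what the calculator computes, emissions scale as power × time × regional carbon intensity. The sketch below uses placeholder numbers only, since the card lists all of these fields as "More Information Needed":

```python
# Back-of-the-envelope CO2eq estimate in the spirit of the ML Impact
# calculator (Lacoste et al., 2019). Every value here is an assumption.
gpu_power_kw = 0.3        # e.g. a single ~300 W accelerator
hours_used = 10.0         # assumed training time
carbon_intensity = 0.4    # kg CO2eq per kWh; varies by compute region

energy_kwh = gpu_power_kw * hours_used
emissions_kg = energy_kwh * carbon_intensity
print(f"{emissions_kg:.2f} kg CO2eq")  # 1.20 kg CO2eq for these assumptions
```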
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF
|
mradermacher
| 2024-11-05T12:20:30Z | 168 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:sophosympatheia/Midnight-Rose-70B-v1.0",
"base_model:quantized:sophosympatheia/Midnight-Rose-70B-v1.0",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-05T01:00:41Z |
---
base_model: sophosympatheia/Midnight-Rose-70B-v1.0
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v1.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
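As a concrete illustration, here is a minimal Python sketch of that concatenation step, functionally equivalent to a byte-wise `cat`. The part names are taken from the split i1-Q6_K entry in the table below:

```python
from pathlib import Path

# Part names from the "Provided Quants" table; order matters.
parts = [
    "Midnight-Rose-70B-v1.0.i1-Q6_K.gguf.part1of2",
    "Midnight-Rose-70B-v1.0.i1-Q6_K.gguf.part2of2",
]
out = Path("Midnight-Rose-70B-v1.0.i1-Q6_K.gguf")

with out.open("wb") as dst:
    for part in parts:
        with open(part, "rb") as src:
            while chunk := src.read(1 << 20):  # copy in 1 MiB chunks
                dst.write(chunk)
```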
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Rose-70B-v1.0-i1-GGUF/resolve/main/Midnight-Rose-70B-v1.0.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
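As a usage illustration (an assumption on my part, since this card defers to TheBloke's READMEs rather than prescribing a runtime), one way to load the "fast, recommended" i1-Q4_K_M file is with llama-cpp-python:

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above has been downloaded locally.
llm = Llama(model_path="Midnight-Rose-70B-v1.0.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Tell me a story about a midnight rose.", max_tokens=128)
print(out["choices"][0]["text"])
```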
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have, or if you want another model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|