modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (string, 245 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1–901k chars)
---|---|---|---|---|---|---|---|---|---|
iceman2434/xlm-roberta-base-ft-udpos213-top5langrandom | iceman2434 | 2024-06-30T20:33:54Z | 0 | 0 | null | [
"token-classification",
"tl",
"dataset:universal_dependencies",
"region:us"
] | token-classification | 2024-06-30T20:31:31Z | ---
datasets:
- universal_dependencies
language:
- tl
metrics:
- f1
pipeline_tag: token-classification
---
## Model Specification
- Model: XLM-RoBERTa (base-sized model)
- Randomized training order of languages
- Training Data:
- Combined Afrikaans, Norwegian, Vietnamese, Hebrew, & Bulgarian corpora (Top 5 Languages)
- Training Details:
- Base configurations with learning rate 5e-5
## Evaluation
- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)
- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 79.01% Accuracy)
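As a usage sketch (not part of the original card), the checkpoint can be loaded with the 🤗 Transformers token-classification pipeline; the aggregation strategy and the example sentence below are illustrative assumptions.
```python
from transformers import pipeline

# Hedged usage sketch: POS tagging with this checkpoint via the token-classification pipeline.
# The aggregation strategy and the Tagalog example sentence are assumptions, not from the card.
tagger = pipeline(
    "token-classification",
    model="iceman2434/xlm-roberta-base-ft-udpos213-top5langrandom",
    aggregation_strategy="simple",
)
print(tagger("Kumain ang bata ng mansanas."))  # prints one UPOS tag per word
```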
## POS Tags
- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB |
Litzy619/MIS0630T3 | Litzy619 | 2024-06-30T20:44:58Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:31:54Z | Entry not found |
DimensionSTP/Llama-3-KoEn-8B-scientificQA | DimensionSTP | 2024-06-30T23:58:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"llama-3-ko",
"conversational",
"en",
"ko",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T20:32:33Z | ---
language:
- en
- ko
license: cc-by-nc-sa-4.0
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-3-ko
pipeline_tag: text-generation
license_name: llama3
license_link: LICENSE
---
## Model Details
**This model is fine-tuned from beomi/Llama-3-KoEn-8B**
**Fine-tuning dataset: Scientific QA dataset**
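A minimal loading sketch (not part of the original card); the dtype, device settings, and prompt are assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged usage sketch for this text-generation checkpoint; dtype/device and the prompt are assumptions.
model_id = "DimensionSTP/Llama-3-KoEn-8B-scientificQA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Explain photosynthesis in one paragraph.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```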
|
lit9003code/melotts310 | lit9003code | 2024-06-30T20:33:10Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:32:50Z | Entry not found |
lit9003code/melotts311 | lit9003code | 2024-06-30T20:35:39Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:34:25Z | Entry not found |
asafi/Meta-Llama-3-medical-8B | asafi | 2024-06-30T20:35:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T20:34:50Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** asafi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
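A hedged loading sketch (not part of the original card), mirroring the 4-bit Unsloth setup of the base model; the sequence length and the assumption that merged weights (rather than only LoRA adapters) were uploaded are illustrative.
```python
from unsloth import FastLanguageModel

# Hedged sketch: load the fine-tuned checkpoint in 4-bit with Unsloth.
# max_seq_length and load_in_4bit mirror the unsloth/llama-3-8b-bnb-4bit base; both are assumptions here.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="asafi/Meta-Llama-3-medical-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode

inputs = tokenizer("List three common symptoms of dehydration.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```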
|
lit9003code/melotts312 | lit9003code | 2024-06-30T20:37:27Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:37:03Z | Entry not found |
lit9003code/melotts313 | lit9003code | 2024-06-30T20:38:59Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:38:39Z | Entry not found |
lit9003code/melotts314 | lit9003code | 2024-06-30T20:41:26Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:40:17Z | Entry not found |
iceman2434/roberta-tagalog-base-ft-udpos213-top2langrandom | iceman2434 | 2024-06-30T20:47:29Z | 0 | 0 | null | [
"token-classification",
"tl",
"dataset:universal_dependencies",
"region:us"
] | token-classification | 2024-06-30T20:41:11Z | ---
datasets:
- universal_dependencies
language:
- tl
metrics:
- f1
pipeline_tag: token-classification
---
## Model Specification
- Model: RoBERTa Tagalog Base (Jan Christian Blaise Cruz)
- Randomized training order of languages
- Training Data:
- Combined English & Serbian corpora (Top 2 Languages)
- Training Details:
- Base configurations with learning rate 5e-5
## Evaluation
- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)
- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 73.99% Accuracy)
## POS Tags
- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB |
iceman2434/roberta-tagalog-base-ft-udpos213-top3langrandom | iceman2434 | 2024-06-30T20:47:42Z | 0 | 0 | null | [
"token-classification",
"tl",
"dataset:universal_dependencies",
"region:us"
] | token-classification | 2024-06-30T20:43:44Z | ---
datasets:
- universal_dependencies
language:
- tl
metrics:
- f1
pipeline_tag: token-classification
---
## Model Specification
- Model: RoBERTa Tagalog Base (Jan Christian Blaise Cruz)
- Randomized training order of languages
- Training Data:
- Combined English, Serbian, & Slovenian corpora (Top 3 Languages)
- Training Details:
- Base configurations with learning rate 5e-5
## Evaluation
- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)
- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 71.91% Accuracy)
## POS Tags
- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB |
psimm/llama-3-8B-semeval2014-task | psimm | 2024-06-30T20:48:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T20:45:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/67797950 | habulaj | 2024-06-30T20:46:14Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:46:10Z | Entry not found |
iceman2434/roberta-tagalog-base-ft-udpos213-top4langrandom | iceman2434 | 2024-06-30T20:50:11Z | 0 | 0 | null | [
"token-classification",
"tl",
"dataset:universal_dependencies",
"region:us"
] | token-classification | 2024-06-30T20:48:01Z | ---
datasets:
- universal_dependencies
language:
- tl
metrics:
- f1
pipeline_tag: token-classification
---
## Model Specification
- Model: RoBERTa Tagalog Base (Jan Christian Blaise Cruz)
- Randomized training order of languages
- Training Data:
- Combined English, Serbian, Slovenian, & Naija corpora (Top 4 Languages)
- Training Details:
- Base configurations with learning rate 5e-5
## Evaluation
- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)
- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 72.97% Accuracy)
## POS Tags
- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB |
Litzy619/MIS0630T4 | Litzy619 | 2024-06-30T22:56:13Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:48:06Z | Entry not found |
habulaj/9917682106 | habulaj | 2024-06-30T20:50:19Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:50:17Z | Entry not found |
Loren85/Dick-Van-Dyke-2024-2023-voice | Loren85 | 2024-06-30T20:51:27Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-06-30T20:50:29Z | ---
license: openrail
---
|
iceman2434/roberta-tagalog-base-ft-udpos213-top5langrandom | iceman2434 | 2024-06-30T20:52:43Z | 0 | 0 | null | [
"token-classification",
"tl",
"dataset:universal_dependencies",
"region:us"
] | token-classification | 2024-06-30T20:50:55Z | ---
datasets:
- universal_dependencies
language:
- tl
metrics:
- f1
pipeline_tag: token-classification
---
## Model Specification
- Model: RoBERTa Tagalog Base (Jan Christian Blaise Cruz)
- Randomized training order of languages
- Training Data:
- Combined English, Serbian, Slovenian, Naija, & Manx-Cadhan corpora (Top 5 Languages)
- Training Details:
- Base configurations with learning rate 5e-5
## Evaluation
- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)
- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 72.52% Accuracy)
## POS Tags
- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB |
eriho/MobileNetV4_TensorFlow.js_feature_vector_small.e2400_r224 | eriho | 2024-06-30T21:03:19Z | 0 | 0 | null | [
"tensorflow.js",
"transfer learning",
"feature-extraction",
"license:cc-by-sa-4.0",
"region:us"
] | feature-extraction | 2024-06-30T20:51:29Z | ---
license: cc-by-sa-4.0
pipeline_tag: feature-extraction
tags:
- tensorflow.js
- transfer learning
---
Original model: https://huggingface.co/timm/mobilenetv4_conv_small.e2400_r224_in1k/tree/main
Input tensor shape: [1, 224, 224, 3]
hf gl |
steja/whisper-medium-english | steja | 2024-06-30T20:52:48Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:52:48Z | Entry not found |
habulaj/2003219747 | habulaj | 2024-06-30T20:52:54Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:52:51Z | Entry not found |
habulaj/195519346132 | habulaj | 2024-06-30T20:54:08Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:53:55Z | Entry not found |
42Antonio/Acb | 42Antonio | 2024-06-30T20:55:15Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:55:14Z | Entry not found |
odelz/hindi_fb1mms_timebalancedreg2 | odelz | 2024-06-30T20:55:18Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:55:18Z | Entry not found |
variante/llava-1.5-7b-llara-D-inBC-VIMA-80k | variante | 2024-07-01T04:42:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llava",
"text-generation",
"llara",
"robotics",
"vlm",
"image-text-to-text",
"dataset:VIMA/VIMA-Data",
"arxiv:2406.20095",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | image-text-to-text | 2024-06-30T20:57:42Z | ---
inference: false
pipeline_tag: image-text-to-text
license: apache-2.0
datasets:
- VIMA/VIMA-Data
tags:
- llara
- llava
- robotics
- vlm
---
<br>
<br>
# LLaRA Model Card
This model is released with paper **[LLaRA: Supercharging Robot Learning Data for Vision-Language Policy](https://arxiv.org/abs/2406.20095)**
[Xiang Li](https://xxli.me)<sup>1</sup>, [Cristina Mata](https://openreview.net/profile?id=~Cristina_Mata1)<sup>1</sup>, [Jongwoo Park](https://github.com/jongwoopark7978)<sup>1</sup>, [Kumara Kahatapitiya](https://www3.cs.stonybrook.edu/~kkahatapitiy)<sup>1</sup>, [Yoo Sung Jang](https://yjang43.github.io/)<sup>1</sup>, [Jinghuan Shang](https://elicassion.github.io/)<sup>1</sup>, [Kanchana Ranasinghe](https://kahnchana.github.io/)<sup>1</sup>, [Ryan Burgert](https://ryanndagreat.github.io/)<sup>1</sup>, [Mu Cai](https://pages.cs.wisc.edu/~mucai/)<sup>2</sup>, [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)<sup>2</sup>, and [Michael S. Ryoo](http://michaelryoo.com/)<sup>1</sup>
<sup>1</sup>Stony Brook University <sup>2</sup>University of Wisconsin-Madison
## Model details
**Model type:**
LLaRA is an open-source visuomotor policy trained by fine-tuning [LLaVA-7b-v1.5](https://huggingface.co/liuhaotian/llava-v1.5-7b) on instruction-following data `D-inBC`, converted from [VIMA-Data](https://huggingface.co/datasets/VIMA/VIMA-Data).
For the conversion code, please refer to [convert_vima.ipynb](https://github.com/LostXine/LLaRA/blob/main/datasets/convert_vima.ipynb)
**Model date:**
llava-1.5-7b-llara-D-inBC-VIMA-80k was trained in June 2024.
**Paper or resources for more information:**
https://github.com/LostXine/LLaRA
**Where to send questions or comments about the model:**
https://github.com/LostXine/LLaRA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaRA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
|
mohamedemam/Em2-Mistral-7b | mohamedemam | 2024-07-02T09:36:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"autograding",
"essay quetion",
"sentence similarity",
"en",
"dataset:mohamedemam/Essay-quetions-auto-grading",
"license:gpl",
"region:us"
] | null | 2024-06-30T20:57:51Z | ---
language:
- en
license: gpl
tags:
- autograding
- essay quetion
- sentence similarity
metrics:
- accuracy
library_name: peft
datasets:
- mohamedemam/Essay-quetions-auto-grading
---
# Model Card for Model ID
A fine-tuned version of Mistral on the Essay-quetions-auto-grading dataset.
### Model Description
<!-- Provide a longer summary of what this model is. -->
We are thrilled to introduce our graduation project, the EM2 model, designed for automated essay grading in both Arabic and English. 📝✨
To develop this model, we first created a custom dataset for training. We adapted the QuAC and OpenOrca datasets to make them suitable for our automated essay grading application.
Our project fine-tuned and evaluated the following models, with these scores:
- Mistral: 96%
- LLaMA: 93%
- FLAN-T5: 93%
- BLOOMZ (Arabic): 86%
- MT0 (Arabic): 84%
You can try our models for auto-grading on Hugging Face! 🌐
We then deployed these models for practical use. We are proud of our team's hard work and the potential impact of the EM2 model in the field of education. 🌟
#MachineLearning #AI #Education #EssayGrading #GraduationProject
- **Developed by:** mohamed emam
- **Model type:** decoder-only
- **Language(s) (NLP):** English
- **License:** gpl
- **Finetuned from model:** Mistral
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/mohamed-em2m/Automatic-Grading-AI
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Automatic grading of essay questions.
### How it works
- The model takes three inputs: a context (or reference answer), a question about that context, and the student's answer.
- The model then outputs the grading result.

### Training Data
- **mohamedemam/Essay-quetions-auto-grading-arabic**
### Training Procedure
Fine-tuned using the TRL (Transformer Reinforcement Learning) library.
### Pipeline
```python
from transformers import Pipeline
import torch.nn.functional as F
class MyPipeline:
def __init__(self,model,tokenizer):
self.model=model
self.tokenizer=tokenizer
def chat_Format(self,context, quetion, answer):
return "Instruction:/n check answer is true or false of next quetion using context below:\n" + "#context: " + context + f".\n#quetion: " + quetion + f".\n#student answer: " + answer + ".\n#response:"
def __call__(self, context, quetion, answer,generate=1,max_new_tokens=4, num_beams=2, do_sample=False,num_return_sequences=1):
inp=self.chat_Format(context, quetion, answer)
w = self.tokenizer(inp, add_special_tokens=True,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors='pt')
response=""
if(generate):
outputs = self.tokenizer.batch_decode(self.model.generate(input_ids=w['input_ids'].cuda(), attention_mask=w['attention_mask'].cuda(), max_new_tokens=max_new_tokens, num_beams=num_beams, do_sample=do_sample, num_return_sequences=num_return_sequences), skip_special_tokens=True)
response = outputs
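        # Score the answer: softmax the logits at the final position and pool the probability
        # mass of "True"-like vs "False"-like tokens (including Arabic variants) into a single score.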
s =self.model(input_ids=w['input_ids'].cuda(), attention_mask=w['attention_mask'].cuda())['logits'][0][-1]
s = F.softmax(s, dim=-1)
yes_token_id = self.tokenizer.convert_tokens_to_ids(self.tokenizer.tokenize("True")[0])
no_token_id = self.tokenizer.convert_tokens_to_ids(self.tokenizer.tokenize("False")[0])
for i in ["Yes", "yes", "True", "true","صحيح"]:
for word in self.tokenizer.tokenize(i):
s[yes_token_id] += s[self.tokenizer.convert_tokens_to_ids(word)]
for i in ["No", "no", "False", "false","خطأ"]:
for word in self.tokenizer.tokenize(i):
s[no_token_id] += s[self.tokenizer.convert_tokens_to_ids(word)]
true = (s[yes_token_id] / (s[no_token_id] + s[yes_token_id])).item()
return {"response": response, "true": true}
context="""Large language models, such as GPT-4, are trained on vast amounts of text data to understand and generate human-like text. The deployment of these models involves several steps:
Model Selection: Choosing a pre-trained model that fits the application's needs.
Infrastructure Setup: Setting up the necessary hardware and software infrastructure to run the model efficiently, including cloud services, GPUs, and necessary libraries.
Integration: Integrating the model into an application, which can involve setting up APIs or embedding the model directly into the software.
Optimization: Fine-tuning the model for specific tasks or domains and optimizing it for performance and cost-efficiency.
Monitoring and Maintenance: Ensuring the model performs well over time, monitoring for biases, and updating the model as needed."""
quetion="What are the key considerations when choosing a cloud service provider for deploying a large language model like GPT-4?"
answer="""When choosing a cloud service provider for deploying a large language model like GPT-4, the key considerations include:
Compute Power: Ensure the provider offers high-performance GPUs or TPUs capable of handling the computational requirements of the model.
Scalability: The ability to scale resources up or down based on the application's demand to handle varying workloads efficiently.
Cost: Analyze the pricing models to understand the costs associated with compute time, storage, data transfer, and any other services.
Integration and Support: Availability of tools and libraries that support easy integration of the model into your applications, along with robust technical support and documentation.
Security and Compliance: Ensure the provider adheres to industry standards for security and compliance, protecting sensitive data and maintaining privacy.
Latency and Availability: Consider the geographical distribution of data centers to ensure low latency and high availability for your end-users.
By evaluating these factors, you can select a cloud service provider that aligns with your deployment needs, ensuring efficient and cost-effective operation of your large language model."""
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM,AutoTokenizer
config = PeftConfig.from_pretrained("mohamedemam/Em2-llama-7b")
base_model = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base_model, "mohamedemam/Em2-llama-7b")
tokenizer = AutoTokenizer.from_pretrained("mohamedemam/Em2-llama-7b", trust_remote_code=True)
pipe=MyPipeline(model,tokenizer)
print(pipe(context,quetion,answer,generate=True,max_new_tokens=4, num_beams=2, do_sample=False,num_return_sequences=1))
```
- **output:**{'response': ["Instruction:/n check answer is true or false of next quetion using context below:\n#context: Large language models, such as GPT-4, are trained on vast amounts of text data to understand and generate human-like text. The deployment of these models involves several steps:\n\n Model Selection: Choosing a pre-trained model that fits the application's needs.\n Infrastructure Setup: Setting up the necessary hardware and software infrastructure to run the model efficiently, including cloud services, GPUs, and necessary libraries.\n Integration: Integrating the model into an application, which can involve setting up APIs or embedding the model directly into the software.\n Optimization: Fine-tuning the model for specific tasks or domains and optimizing it for performance and cost-efficiency.\n Monitoring and Maintenance: Ensuring the model performs well over time, monitoring for biases, and updating the model as needed..\n#quetion: What are the key considerations when choosing a cloud service provider for deploying a large language model like GPT-4?.\n#student answer: When choosing a cloud service provider for deploying a large language model like GPT-4, the key considerations include:\n Compute Power: Ensure the provider offers high-performance GPUs or TPUs capable of handling the computational requirements of the model.\n Scalability: The ability to scale resources up or down based on the application's demand to handle varying workloads efficiently.\n Cost: Analyze the pricing models to understand the costs associated with compute time, storage, data transfer, and any other services.\n Integration and Support: Availability of tools and libraries that support easy integration of the model into your applications, along with robust technical support and documentation.\n Security and Compliance: Ensure the provider adheres to industry standards for security and compliance, protecting sensitive data and maintaining privacy.\n Latency and Availability: Consider the geographical distribution of data centers to ensure low latency and high availability for your end-users.\n\nBy evaluating these factors, you can select a cloud service provider that aligns with your deployment needs, ensuring efficient and cost-effective operation of your large language model..\n#response: true the answer is"], 'true': 0.943033754825592}
### Chat Format Function
This function formats the input context, question, and answer into a specific structure for the model to process.
```python
def chat_Format(self, context, question, answer):
return "Instruction:/n check answer is true or false of next question using context below:\n" + "#context: " + context + f".\n#question: " + question + f".\n#student answer: " + answer + ".\n#response:"
```
## Configuration
### Dropout Probability for LoRA Layers
- **lora_dropout:** 0.05
### Quantization Settings
- **use_4bit:** True
- **bnb_4bit_compute_dtype:** "float16"
- **bnb_4bit_quant_type:** "nf4"
- **use_nested_quant:** False
### Output Directory
- **output_dir:** "./results"
### Training Parameters
- **num_train_epochs:** 1
- **fp16:** False
- **bf16:** False
- **per_device_train_batch_size:** 1
- **per_device_eval_batch_size:** 4
- **gradient_accumulation_steps:** 8
- **gradient_checkpointing:** True
- **max_grad_norm:** 0.3
- **learning_rate:** 5e-5
- **weight_decay:** 0.001
- **optim:** "paged_adamw_8bit"
- **lr_scheduler_type:** "constant"
- **max_steps:** -1
- **warmup_ratio:** 0.03
- **group_by_length:** True
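A hedged sketch (not part of the original card) of how the quantization and training settings above might be assembled with TRL's `SFTTrainer`; the base model id, dataset text column, and LoRA configuration details are assumptions.
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

# 4-bit quantization settings listed above (nf4, float16 compute, no nested quantization)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,
)

base_id = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint; the card only says "Mistral"
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
dataset = load_dataset("mohamedemam/Essay-quetions-auto-grading", split="train")

peft_config = LoraConfig(lora_dropout=0.05, task_type="CAUSAL_LM")  # LoRA dropout from the card

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=5e-5,
    weight_decay=0.001,
    optim="paged_adamw_8bit",
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    group_by_length=True,
    fp16=False,
    bf16=False,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    args=training_args,
    dataset_text_field="text",  # assumed column name
)
trainer.train()
```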
### Logging and Saving
- **save_steps:** 100
- **logging_steps:** 25
- **max_seq_length:** False |
variante/llava-1.5-7b-llara-D-inBC-Aux-B-VIMA-80k | variante | 2024-07-01T04:42:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llava",
"text-generation",
"llara",
"robotics",
"vlm",
"image-text-to-text",
"dataset:VIMA/VIMA-Data",
"arxiv:2406.20095",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | image-text-to-text | 2024-06-30T20:58:23Z | ---
inference: false
pipeline_tag: image-text-to-text
license: apache-2.0
datasets:
- VIMA/VIMA-Data
tags:
- llara
- llava
- robotics
- vlm
---
<br>
<br>
# LLaRA Model Card
This model is released with paper **[LLaRA: Supercharging Robot Learning Data for Vision-Language Policy](https://arxiv.org/abs/2406.20095)**
[Xiang Li](https://xxli.me)<sup>1</sup>, [Cristina Mata](https://openreview.net/profile?id=~Cristina_Mata1)<sup>1</sup>, [Jongwoo Park](https://github.com/jongwoopark7978)<sup>1</sup>, [Kumara Kahatapitiya](https://www3.cs.stonybrook.edu/~kkahatapitiy)<sup>1</sup>, [Yoo Sung Jang](https://yjang43.github.io/)<sup>1</sup>, [Jinghuan Shang](https://elicassion.github.io/)<sup>1</sup>, [Kanchana Ranasinghe](https://kahnchana.github.io/)<sup>1</sup>, [Ryan Burgert](https://ryanndagreat.github.io/)<sup>1</sup>, [Mu Cai](https://pages.cs.wisc.edu/~mucai/)<sup>2</sup>, [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)<sup>2</sup>, and [Michael S. Ryoo](http://michaelryoo.com/)<sup>1</sup>
<sup>1</sup>Stony Brook University <sup>2</sup>University of Wisconsin-Madison
## Model details
**Model type:**
LLaRA is an open-source visuomotor policy trained by fine-tuning [LLaVA-7b-v1.5](https://huggingface.co/liuhaotian/llava-v1.5-7b) on instruction-following data `D-inBC` and 4 auxiliary datasets, converted from [VIMA-Data](https://huggingface.co/datasets/VIMA/VIMA-Data).
For the conversion code, please refer to [convert_vima.ipynb](https://github.com/LostXine/LLaRA/blob/main/datasets/convert_vima.ipynb)
**Model date:**
llava-1.5-7b-llara-D-inBC-Aux-B-VIMA-80k was trained in June 2024.
**Paper or resources for more information:**
https://github.com/LostXine/LLaRA
**Where to send questions or comments about the model:**
https://github.com/LostXine/LLaRA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaRA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
|
variante/llava-1.5-7b-llara-D-inBC-Aux-D-VIMA-80k | variante | 2024-07-01T04:43:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llava",
"text-generation",
"llara",
"robotics",
"vlm",
"image-text-to-text",
"dataset:VIMA/VIMA-Data",
"arxiv:2406.20095",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | image-text-to-text | 2024-06-30T20:58:37Z | ---
inference: false
pipeline_tag: image-text-to-text
license: apache-2.0
datasets:
- VIMA/VIMA-Data
tags:
- llara
- llava
- robotics
- vlm
---
<br>
<br>
# LLaRA Model Card
This model is released with paper **[LLaRA: Supercharging Robot Learning Data for Vision-Language Policy](https://arxiv.org/abs/2406.20095)**
[Xiang Li](https://xxli.me)<sup>1</sup>, [Cristina Mata](https://openreview.net/profile?id=~Cristina_Mata1)<sup>1</sup>, [Jongwoo Park](https://github.com/jongwoopark7978)<sup>1</sup>, [Kumara Kahatapitiya](https://www3.cs.stonybrook.edu/~kkahatapitiy)<sup>1</sup>, [Yoo Sung Jang](https://yjang43.github.io/)<sup>1</sup>, [Jinghuan Shang](https://elicassion.github.io/)<sup>1</sup>, [Kanchana Ranasinghe](https://kahnchana.github.io/)<sup>1</sup>, [Ryan Burgert](https://ryanndagreat.github.io/)<sup>1</sup>, [Mu Cai](https://pages.cs.wisc.edu/~mucai/)<sup>2</sup>, [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)<sup>2</sup>, and [Michael S. Ryoo](http://michaelryoo.com/)<sup>1</sup>
<sup>1</sup>Stony Brook University <sup>2</sup>University of Wisconsin-Madison
## Model details
**Model type:**
LLaRA is an open-source visuomotor policy trained by fine-tuning [LLaVA-7b-v1.5](https://huggingface.co/liuhaotian/llava-v1.5-7b) on instruction-following data `D-inBC` and 6 auxiliary datasets, converted from [VIMA-Data](https://huggingface.co/datasets/VIMA/VIMA-Data).
For the conversion code, please refer to [convert_vima.ipynb](https://github.com/LostXine/LLaRA/blob/main/datasets/convert_vima.ipynb)
**Model date:**
llava-1.5-7b-llara-D-inBC-Aux-D-VIMA-80k was trained in June 2024.
**Paper or resources for more information:**
https://github.com/LostXine/LLaRA
**Where to send questions or comments about the model:**
https://github.com/LostXine/LLaRA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaRA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
|
Renee0v0/NeuralPipe-7B-slerp | Renee0v0 | 2024-06-30T21:05:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T21:00:59Z | ---
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Renee0v0/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
habulaj/297619439478 | habulaj | 2024-06-30T21:01:22Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:01:12Z | Entry not found |
habulaj/269179427694 | habulaj | 2024-06-30T21:03:11Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:02:54Z | Entry not found |
osouza/bert-large-ambiguidade-v3 | osouza | 2024-06-30T21:03:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-30T21:03:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
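Since the card leaves this section blank, here is a hedged sketch based only on the repo's `text-classification` pipeline tag; the example sentence is an assumption.
```python
from transformers import pipeline

# Hedged sketch: the repo is tagged text-classification, so the generic pipeline should apply.
# The example sentence is an assumption (Portuguese, guessing from the model name "ambiguidade").
classifier = pipeline("text-classification", model="osouza/bert-large-ambiguidade-v3")
print(classifier("Vi o homem com o telescópio."))
```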
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/347252492909 | habulaj | 2024-06-30T21:06:22Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:06:13Z | Entry not found |
Megnis/qdora2 | Megnis | 2024-06-30T21:06:45Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:06:45Z | Entry not found |
Sirok/sirok2 | Sirok | 2024-06-30T21:07:07Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:07:07Z | Entry not found |
habulaj/6877062242 | habulaj | 2024-06-30T21:08:49Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:08:43Z | Entry not found |
habulaj/11003984903 | habulaj | 2024-06-30T21:10:08Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:10:03Z | Entry not found |
odelz/eng_fb1mms_unbalanced | odelz | 2024-06-30T21:12:17Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:12:17Z | Entry not found |
habulaj/98715236960 | habulaj | 2024-06-30T21:15:07Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:15:01Z | Entry not found |
selvaa/segformer-b1-finetuned-cityscapes-1024-1024-with-after-demo-ds | selvaa | 2024-06-30T21:36:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/segformer-b1-finetuned-cityscapes-1024-1024",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T21:16:50Z | ---
license: other
base_model: nvidia/segformer-b1-finetuned-cityscapes-1024-1024
tags:
- generated_from_trainer
model-index:
- name: segformer-b1-finetuned-cityscapes-1024-1024-with-after-demo-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b1-finetuned-cityscapes-1024-1024-with-after-demo-ds
This model is a fine-tuned version of [nvidia/segformer-b1-finetuned-cityscapes-1024-1024](https://huggingface.co/nvidia/segformer-b1-finetuned-cityscapes-1024-1024) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0153
- Mean Iou: 0.9689
- Mean Accuracy: 0.9858
- Overall Accuracy: 0.9947
- Accuracy Default: 1e-06
- Accuracy Pipe: 0.9729
- Accuracy Floor: 0.9861
- Accuracy Background: 0.9985
- Iou Default: 1e-06
- Iou Pipe: 0.9305
- Iou Floor: 0.9802
- Iou Background: 0.9958
## Model description
More information needed
## Intended uses & limitations
More information needed
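A hedged inference sketch (not part of the original card); the input image is an assumption.
```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

# Hedged inference sketch; the image path is an assumption.
repo = "selvaa/segformer-b1-finetuned-cityscapes-1024-1024-with-after-demo-ds"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height/4, width/4)
pred = logits.argmax(dim=1)[0]  # per-pixel class ids (default / pipe / floor / background)
```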
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Default | Accuracy Pipe | Accuracy Floor | Accuracy Background | Iou Default | Iou Pipe | Iou Floor | Iou Background |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:-------------:|:--------------:|:-------------------:|:-----------:|:--------:|:---------:|:--------------:|
| 0.333 | 1.0 | 55 | 0.1193 | 0.8358 | 0.8688 | 0.9725 | 1e-06 | 0.6617 | 0.9467 | 0.9981 | 1e-06 | 0.5954 | 0.9420 | 0.9700 |
| 0.0978 | 2.0 | 110 | 0.0734 | 0.8938 | 0.9399 | 0.9817 | 1e-06 | 0.8567 | 0.9709 | 0.9921 | 1e-06 | 0.7472 | 0.9523 | 0.9818 |
| 0.0647 | 3.0 | 165 | 0.0529 | 0.9169 | 0.9580 | 0.9860 | 1e-06 | 0.9093 | 0.9696 | 0.9951 | 1e-06 | 0.8023 | 0.9617 | 0.9866 |
| 0.0519 | 4.0 | 220 | 0.0455 | 0.9175 | 0.9445 | 0.9861 | 1e-06 | 0.8663 | 0.9692 | 0.9979 | 1e-06 | 0.8031 | 0.9638 | 0.9855 |
| 0.0457 | 5.0 | 275 | 0.0413 | 0.9198 | 0.9687 | 0.9866 | 1e-06 | 0.9356 | 0.9786 | 0.9919 | 1e-06 | 0.8098 | 0.9614 | 0.9881 |
| 0.0407 | 6.0 | 330 | 0.0360 | 0.9283 | 0.9584 | 0.9882 | 1e-06 | 0.9010 | 0.9780 | 0.9962 | 1e-06 | 0.8320 | 0.9632 | 0.9897 |
| 0.0363 | 7.0 | 385 | 0.0318 | 0.9399 | 0.9698 | 0.9897 | 1e-06 | 0.9385 | 0.9737 | 0.9973 | 1e-06 | 0.8614 | 0.9680 | 0.9904 |
| 0.0335 | 8.0 | 440 | 0.0295 | 0.9423 | 0.9727 | 0.9904 | 1e-06 | 0.9443 | 0.9770 | 0.9969 | 1e-06 | 0.8652 | 0.9702 | 0.9915 |
| 0.0318 | 9.0 | 495 | 0.0288 | 0.9425 | 0.9746 | 0.9905 | 1e-06 | 0.9492 | 0.9784 | 0.9963 | 1e-06 | 0.8664 | 0.9694 | 0.9918 |
| 0.0292 | 10.0 | 550 | 0.0262 | 0.9478 | 0.9752 | 0.9912 | 1e-06 | 0.9510 | 0.9769 | 0.9976 | 1e-06 | 0.8803 | 0.9710 | 0.9922 |
| 0.0291 | 11.0 | 605 | 0.0270 | 0.9466 | 0.9720 | 0.9909 | 1e-06 | 0.9415 | 0.9765 | 0.9979 | 1e-06 | 0.8774 | 0.9708 | 0.9916 |
| 0.0275 | 12.0 | 660 | 0.0249 | 0.9496 | 0.9793 | 0.9916 | 1e-06 | 0.9625 | 0.9784 | 0.9971 | 1e-06 | 0.8835 | 0.9723 | 0.9929 |
| 0.0264 | 13.0 | 715 | 0.0246 | 0.9514 | 0.9716 | 0.9915 | 1e-06 | 0.9383 | 0.9782 | 0.9984 | 1e-06 | 0.8901 | 0.9720 | 0.9920 |
| 0.0255 | 14.0 | 770 | 0.0242 | 0.9500 | 0.9812 | 0.9917 | 1e-06 | 0.9677 | 0.9792 | 0.9967 | 1e-06 | 0.8846 | 0.9723 | 0.9932 |
| 0.0248 | 15.0 | 825 | 0.0230 | 0.9534 | 0.9785 | 0.9921 | 1e-06 | 0.9598 | 0.9777 | 0.9980 | 1e-06 | 0.8940 | 0.9732 | 0.9931 |
| 0.0241 | 16.0 | 880 | 0.0233 | 0.9523 | 0.9806 | 0.9920 | 1e-06 | 0.9666 | 0.9778 | 0.9975 | 1e-06 | 0.8906 | 0.9731 | 0.9932 |
| 0.023 | 17.0 | 935 | 0.0215 | 0.9562 | 0.9778 | 0.9925 | 1e-06 | 0.9553 | 0.9801 | 0.9982 | 1e-06 | 0.9015 | 0.9738 | 0.9934 |
| 0.0223 | 18.0 | 990 | 0.0212 | 0.9562 | 0.9780 | 0.9925 | 1e-06 | 0.9546 | 0.9816 | 0.9979 | 1e-06 | 0.9011 | 0.9737 | 0.9937 |
| 0.022 | 19.0 | 1045 | 0.0205 | 0.9558 | 0.9810 | 0.9927 | 1e-06 | 0.9640 | 0.9813 | 0.9975 | 1e-06 | 0.8995 | 0.9737 | 0.9941 |
| 0.0213 | 20.0 | 1100 | 0.0207 | 0.9582 | 0.9764 | 0.9926 | 1e-06 | 0.9504 | 0.9801 | 0.9986 | 1e-06 | 0.9069 | 0.9745 | 0.9932 |
| 0.0213 | 21.0 | 1155 | 0.0211 | 0.9566 | 0.9801 | 0.9927 | 1e-06 | 0.9624 | 0.9796 | 0.9981 | 1e-06 | 0.9014 | 0.9746 | 0.9937 |
| 0.0206 | 22.0 | 1210 | 0.0202 | 0.9589 | 0.9799 | 0.9929 | 1e-06 | 0.9608 | 0.9804 | 0.9983 | 1e-06 | 0.9078 | 0.9752 | 0.9938 |
| 0.0199 | 23.0 | 1265 | 0.0194 | 0.9596 | 0.9813 | 0.9931 | 1e-06 | 0.9644 | 0.9812 | 0.9981 | 1e-06 | 0.9096 | 0.9750 | 0.9942 |
| 0.0192 | 24.0 | 1320 | 0.0194 | 0.9590 | 0.9831 | 0.9932 | 1e-06 | 0.9710 | 0.9803 | 0.9981 | 1e-06 | 0.9070 | 0.9754 | 0.9945 |
| 0.019 | 25.0 | 1375 | 0.0189 | 0.9608 | 0.9834 | 0.9933 | 1e-06 | 0.9703 | 0.9820 | 0.9978 | 1e-06 | 0.9124 | 0.9754 | 0.9945 |
| 0.0189 | 26.0 | 1430 | 0.0195 | 0.9602 | 0.9822 | 0.9932 | 1e-06 | 0.9675 | 0.9808 | 0.9983 | 1e-06 | 0.9103 | 0.9758 | 0.9943 |
| 0.0185 | 27.0 | 1485 | 0.0204 | 0.9577 | 0.9804 | 0.9930 | 1e-06 | 0.9617 | 0.9815 | 0.9981 | 1e-06 | 0.9035 | 0.9754 | 0.9942 |
| 0.0185 | 28.0 | 1540 | 0.0188 | 0.9625 | 0.9808 | 0.9935 | 1e-06 | 0.9616 | 0.9822 | 0.9986 | 1e-06 | 0.9167 | 0.9766 | 0.9944 |
| 0.0178 | 29.0 | 1595 | 0.0186 | 0.9626 | 0.9801 | 0.9935 | 1e-06 | 0.9588 | 0.9829 | 0.9985 | 1e-06 | 0.9166 | 0.9768 | 0.9943 |
| 0.0176 | 30.0 | 1650 | 0.0192 | 0.9622 | 0.9802 | 0.9935 | 1e-06 | 0.9594 | 0.9826 | 0.9986 | 1e-06 | 0.9156 | 0.9766 | 0.9945 |
| 0.0175 | 31.0 | 1705 | 0.0175 | 0.9631 | 0.9839 | 0.9937 | 1e-06 | 0.9710 | 0.9827 | 0.9981 | 1e-06 | 0.9176 | 0.9769 | 0.9948 |
| 0.017 | 32.0 | 1760 | 0.0183 | 0.9615 | 0.9852 | 0.9936 | 1e-06 | 0.9761 | 0.9814 | 0.9981 | 1e-06 | 0.9130 | 0.9765 | 0.9949 |
| 0.0172 | 33.0 | 1815 | 0.0173 | 0.9646 | 0.9834 | 0.9938 | 1e-06 | 0.9690 | 0.9830 | 0.9984 | 1e-06 | 0.9218 | 0.9772 | 0.9948 |
| 0.0167 | 34.0 | 1870 | 0.0175 | 0.9625 | 0.9857 | 0.9938 | 1e-06 | 0.9768 | 0.9822 | 0.9981 | 1e-06 | 0.9156 | 0.9769 | 0.9951 |
| 0.0164 | 35.0 | 1925 | 0.0170 | 0.9643 | 0.9854 | 0.9940 | 1e-06 | 0.9749 | 0.9832 | 0.9981 | 1e-06 | 0.9200 | 0.9776 | 0.9952 |
| 0.016 | 36.0 | 1980 | 0.0166 | 0.9657 | 0.9844 | 0.9941 | 1e-06 | 0.9710 | 0.9837 | 0.9984 | 1e-06 | 0.9237 | 0.9782 | 0.9952 |
| 0.0161 | 37.0 | 2035 | 0.0169 | 0.9661 | 0.9830 | 0.9941 | 1e-06 | 0.9668 | 0.9834 | 0.9987 | 1e-06 | 0.9254 | 0.9780 | 0.9949 |
| 0.0156 | 38.0 | 2090 | 0.0172 | 0.9648 | 0.9840 | 0.9939 | 1e-06 | 0.9706 | 0.9829 | 0.9984 | 1e-06 | 0.9220 | 0.9774 | 0.9949 |
| 0.0156 | 39.0 | 2145 | 0.0170 | 0.9640 | 0.9857 | 0.9940 | 1e-06 | 0.9769 | 0.9817 | 0.9985 | 1e-06 | 0.9192 | 0.9774 | 0.9953 |
| 0.0152 | 40.0 | 2200 | 0.0164 | 0.9667 | 0.9845 | 0.9942 | 1e-06 | 0.9710 | 0.9839 | 0.9985 | 1e-06 | 0.9267 | 0.9783 | 0.9952 |
| 0.0153 | 41.0 | 2255 | 0.0164 | 0.9663 | 0.9854 | 0.9942 | 1e-06 | 0.9748 | 0.9830 | 0.9985 | 1e-06 | 0.9256 | 0.9780 | 0.9953 |
| 0.016 | 42.0 | 2310 | 0.0162 | 0.9662 | 0.9854 | 0.9942 | 1e-06 | 0.9744 | 0.9833 | 0.9985 | 1e-06 | 0.9254 | 0.9778 | 0.9954 |
| 0.0157 | 43.0 | 2365 | 0.0162 | 0.9670 | 0.9849 | 0.9943 | 1e-06 | 0.9724 | 0.9837 | 0.9986 | 1e-06 | 0.9269 | 0.9786 | 0.9953 |
| 0.0148 | 44.0 | 2420 | 0.0167 | 0.9671 | 0.9850 | 0.9943 | 1e-06 | 0.9719 | 0.9849 | 0.9983 | 1e-06 | 0.9273 | 0.9786 | 0.9953 |
| 0.0149 | 45.0 | 2475 | 0.0165 | 0.9660 | 0.9853 | 0.9943 | 1e-06 | 0.9730 | 0.9846 | 0.9983 | 1e-06 | 0.9235 | 0.9789 | 0.9955 |
| 0.0144 | 46.0 | 2530 | 0.0154 | 0.9670 | 0.9870 | 0.9945 | 1e-06 | 0.9784 | 0.9844 | 0.9983 | 1e-06 | 0.9260 | 0.9791 | 0.9958 |
| 0.0142 | 47.0 | 2585 | 0.0150 | 0.9685 | 0.9865 | 0.9946 | 1e-06 | 0.9762 | 0.9847 | 0.9985 | 1e-06 | 0.9302 | 0.9794 | 0.9957 |
| 0.0142 | 48.0 | 2640 | 0.0154 | 0.9672 | 0.9870 | 0.9945 | 1e-06 | 0.9784 | 0.9841 | 0.9984 | 1e-06 | 0.9268 | 0.9792 | 0.9957 |
| 0.0144 | 49.0 | 2695 | 0.0152 | 0.9677 | 0.9862 | 0.9945 | 1e-06 | 0.9754 | 0.9847 | 0.9985 | 1e-06 | 0.9284 | 0.9791 | 0.9957 |
| 0.0141 | 50.0 | 2750 | 0.0154 | 0.9681 | 0.9857 | 0.9946 | 1e-06 | 0.9729 | 0.9857 | 0.9984 | 1e-06 | 0.9289 | 0.9796 | 0.9957 |
| 0.0136 | 51.0 | 2805 | 0.0153 | 0.9690 | 0.9855 | 0.9947 | 1e-06 | 0.9728 | 0.9850 | 0.9987 | 1e-06 | 0.9317 | 0.9797 | 0.9957 |
| 0.0138 | 52.0 | 2860 | 0.0150 | 0.9691 | 0.9866 | 0.9947 | 1e-06 | 0.9767 | 0.9846 | 0.9986 | 1e-06 | 0.9320 | 0.9796 | 0.9957 |
| 0.014 | 53.0 | 2915 | 0.0158 | 0.9673 | 0.9853 | 0.9945 | 1e-06 | 0.9720 | 0.9855 | 0.9984 | 1e-06 | 0.9266 | 0.9798 | 0.9956 |
| 0.0136 | 54.0 | 2970 | 0.0154 | 0.9693 | 0.9857 | 0.9948 | 1e-06 | 0.9725 | 0.9863 | 0.9985 | 1e-06 | 0.9319 | 0.9802 | 0.9958 |
| 0.0138 | 55.0 | 3025 | 0.0154 | 0.9692 | 0.9853 | 0.9947 | 1e-06 | 0.9717 | 0.9855 | 0.9986 | 1e-06 | 0.9323 | 0.9798 | 0.9956 |
| 0.0134 | 56.0 | 3080 | 0.0153 | 0.9689 | 0.9857 | 0.9947 | 1e-06 | 0.9728 | 0.9860 | 0.9984 | 1e-06 | 0.9312 | 0.9797 | 0.9957 |
| 0.0135 | 57.0 | 3135 | 0.0154 | 0.9695 | 0.9863 | 0.9948 | 1e-06 | 0.9747 | 0.9855 | 0.9986 | 1e-06 | 0.9325 | 0.9800 | 0.9958 |
| 0.0133 | 58.0 | 3190 | 0.0154 | 0.9689 | 0.9859 | 0.9947 | 1e-06 | 0.9739 | 0.9854 | 0.9985 | 1e-06 | 0.9313 | 0.9798 | 0.9957 |
| 0.0134 | 59.0 | 3245 | 0.0152 | 0.9696 | 0.9862 | 0.9948 | 1e-06 | 0.9745 | 0.9856 | 0.9986 | 1e-06 | 0.9328 | 0.9801 | 0.9958 |
| 0.0138 | 60.0 | 3300 | 0.0153 | 0.9689 | 0.9858 | 0.9947 | 1e-06 | 0.9729 | 0.9861 | 0.9985 | 1e-06 | 0.9305 | 0.9802 | 0.9958 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1
- Datasets 2.15.0
- Tokenizers 0.15.0
|
habulaj/112492325026 | habulaj | 2024-06-30T21:18:44Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:18:43Z | Entry not found |
DraughtMonkeKZ/TibetanMacaque | DraughtMonkeKZ | 2024-06-30T21:19:00Z | 0 | 0 | null | [
"license:gpl-3.0",
"region:us"
] | null | 2024-06-30T21:19:00Z | ---
license: gpl-3.0
---
|
ramy21/yolounder | ramy21 | 2024-06-30T21:32:14Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:19:47Z | # My YOLO Model
This model is trained using PyTorch Lightning. |
habulaj/2388242475 | habulaj | 2024-06-30T21:21:20Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:21:13Z | Entry not found |
mgh6/TCS_MLM_SaProt | mgh6 | 2024-07-01T20:54:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"esm",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-06-30T21:22:34Z | Entry not found |
DewEfresh/Neo_7b-merge10 | DewEfresh | 2024-06-30T21:23:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"DewEfresh/neo_7b",
"conversational",
"base_model:DewEfresh/neo_7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T21:22:53Z | ---
base_model:
- DewEfresh/neo_7b
- DewEfresh/neo_7b
tags:
- merge
- mergekit
- lazymergekit
- DewEfresh/neo_7b
---
# Neo_7b-merge10
Neo_7b-merge10 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b)
* [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: DewEfresh/neo_7b
layer_range: [0, 0]
- model: DewEfresh/neo_7b
layer_range: [3, 3]
- sources:
- model: DewEfresh/neo_7b
layer_range: [1, 1]
- model: DewEfresh/neo_7b
layer_range: [3, 3]
- sources:
- model: DewEfresh/neo_7b
layer_range: [2, 2]
- model: DewEfresh/neo_7b
layer_range: [3, 3]
- sources:
- model: DewEfresh/neo_7b
layer_range: [4, 4]
- model: DewEfresh/neo_7b
layer_range: [7, 7]
- sources:
- model: DewEfresh/neo_7b
layer_range: [5, 5]
- model: DewEfresh/neo_7b
layer_range: [7, 7]
- sources:
- model: DewEfresh/neo_7b
layer_range: [6, 6]
- model: DewEfresh/neo_7b
layer_range: [7, 7]
- sources:
- model: DewEfresh/neo_7b
layer_range: [8, 8]
- model: DewEfresh/neo_7b
layer_range: [11, 11]
- sources:
- model: DewEfresh/neo_7b
layer_range: [9, 9]
- model: DewEfresh/neo_7b
layer_range: [11, 11]
- sources:
- model: DewEfresh/neo_7b
layer_range: [10, 10]
- model: DewEfresh/neo_7b
layer_range: [11, 11]
- sources:
- model: DewEfresh/neo_7b
layer_range: [12, 12]
- model: DewEfresh/neo_7b
layer_range: [15, 15]
- sources:
- model: DewEfresh/neo_7b
layer_range: [13, 13]
- model: DewEfresh/neo_7b
layer_range: [15, 15]
- sources:
- model: DewEfresh/neo_7b
layer_range: [14, 14]
- model: DewEfresh/neo_7b
layer_range: [15, 15]
- sources:
- model: DewEfresh/neo_7b
layer_range: [16, 16]
- model: DewEfresh/neo_7b
layer_range: [19, 19]
- sources:
- model: DewEfresh/neo_7b
layer_range: [17, 17]
- model: DewEfresh/neo_7b
layer_range: [19, 19]
- sources:
- model: DewEfresh/neo_7b
layer_range: [18, 18]
- model: DewEfresh/neo_7b
layer_range: [19, 19]
- sources:
- model: DewEfresh/neo_7b
layer_range: [20, 20]
- model: DewEfresh/neo_7b
layer_range: [23, 23]
- sources:
- model: DewEfresh/neo_7b
layer_range: [21, 21]
- model: DewEfresh/neo_7b
layer_range: [23, 23]
- sources:
- model: DewEfresh/neo_7b
layer_range: [22, 22]
- model: DewEfresh/neo_7b
layer_range: [23, 23]
- sources:
- model: DewEfresh/neo_7b
layer_range: [24, 24]
- model: DewEfresh/neo_7b
layer_range: [27, 27]
- sources:
- model: DewEfresh/neo_7b
layer_range: [25, 25]
- model: DewEfresh/neo_7b
layer_range: [27, 27]
- sources:
- model: DewEfresh/neo_7b
layer_range: [26, 26]
- model: DewEfresh/neo_7b
layer_range: [27, 27]
merge_method: slerp
base_model: DewEfresh/neo_7b
parameters:
t: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DewEfresh/Neo_7b-merge10"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Marcelojtc/bart-cnn-samsum-peft | Marcelojtc | 2024-06-30T21:38:35Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:samsum",
"base_model:ingeniumacademy/bart-cnn-samsum-finetuned",
"license:mit",
"region:us"
] | null | 2024-06-30T21:24:22Z | ---
base_model: ingeniumacademy/bart-cnn-samsum-finetuned
datasets:
- samsum
library_name: peft
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-cnn-samsum-peft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-peft
This model is a fine-tuned version of [ingeniumacademy/bart-cnn-samsum-finetuned](https://huggingface.co/ingeniumacademy/bart-cnn-samsum-finetuned) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1345
## Model description
More information needed
## Intended uses & limitations
More information needed
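A hedged loading sketch (not part of the original card): attach this PEFT adapter to the base summarization model; the example dialogue is an assumption.
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hedged sketch: load the base BART summarizer and attach this LoRA/PEFT adapter.
base_id = "ingeniumacademy/bart-cnn-samsum-finetuned"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForSeq2SeqLM.from_pretrained(base_id), "Marcelojtc/bart-cnn-samsum-peft")

dialogue = "Anna: Are we still on for lunch?\nTom: Yes, 12:30 at the usual place."
inputs = tokenizer(dialogue, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```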
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.078 | 1.0 | 19 | 0.1345 |
| 0.0865 | 2.0 | 38 | 0.1345 |
| 0.0768 | 3.0 | 57 | 0.1345 |
| 0.079 | 4.0 | 76 | 0.1345 |
| 0.0916 | 5.0 | 95 | 0.1345 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
habulaj/54258170221 | habulaj | 2024-06-30T21:26:44Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:26:38Z | Entry not found |
silveroxides/AnimagineXL31_X_AutismMixPony_mergeproject | silveroxides | 2024-06-30T22:17:54Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:30:21Z | Entry not found |
DewEfresh/Neo_7b-merge11 | DewEfresh | 2024-06-30T21:32:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"DewEfresh/neo_7b",
"conversational",
"base_model:DewEfresh/neo_7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T21:31:20Z | ---
base_model:
- DewEfresh/neo_7b
- DewEfresh/neo_7b
tags:
- merge
- mergekit
- lazymergekit
- DewEfresh/neo_7b
---
# Neo_7b-merge11
Neo_7b-merge11 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b)
* [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: DewEfresh/neo_7b
layer_range: [0, 0]
- model: DewEfresh/neo_7b
layer_range: [3, 3]
- sources:
- model: DewEfresh/neo_7b
layer_range: [1, 1]
- model: DewEfresh/neo_7b
layer_range: [3, 3]
- sources:
- model: DewEfresh/neo_7b
layer_range: [2, 2]
- model: DewEfresh/neo_7b
layer_range: [3, 3]
- sources:
- model: DewEfresh/neo_7b
layer_range: [4, 4]
- model: DewEfresh/neo_7b
layer_range: [7, 7]
- sources:
- model: DewEfresh/neo_7b
layer_range: [5, 5]
- model: DewEfresh/neo_7b
layer_range: [7, 7]
- sources:
- model: DewEfresh/neo_7b
layer_range: [6, 6]
- model: DewEfresh/neo_7b
layer_range: [7, 7]
- sources:
- model: DewEfresh/neo_7b
layer_range: [8, 8]
- model: DewEfresh/neo_7b
layer_range: [11, 11]
- sources:
- model: DewEfresh/neo_7b
layer_range: [9, 9]
- model: DewEfresh/neo_7b
layer_range: [11, 11]
- sources:
- model: DewEfresh/neo_7b
layer_range: [10, 10]
- model: DewEfresh/neo_7b
layer_range: [11, 11]
- sources:
- model: DewEfresh/neo_7b
layer_range: [12, 12]
- model: DewEfresh/neo_7b
layer_range: [15, 15]
- sources:
- model: DewEfresh/neo_7b
layer_range: [13, 13]
- model: DewEfresh/neo_7b
layer_range: [15, 15]
- sources:
- model: DewEfresh/neo_7b
layer_range: [14, 14]
- model: DewEfresh/neo_7b
layer_range: [15, 15]
- sources:
- model: DewEfresh/neo_7b
layer_range: [16, 16]
- model: DewEfresh/neo_7b
layer_range: [19, 19]
- sources:
- model: DewEfresh/neo_7b
layer_range: [17, 17]
- model: DewEfresh/neo_7b
layer_range: [19, 19]
- sources:
- model: DewEfresh/neo_7b
layer_range: [18, 18]
- model: DewEfresh/neo_7b
layer_range: [19, 19]
- sources:
- model: DewEfresh/neo_7b
layer_range: [20, 20]
- model: DewEfresh/neo_7b
layer_range: [23, 23]
- sources:
- model: DewEfresh/neo_7b
layer_range: [21, 21]
- model: DewEfresh/neo_7b
layer_range: [23, 23]
- sources:
- model: DewEfresh/neo_7b
layer_range: [22, 22]
- model: DewEfresh/neo_7b
layer_range: [23, 23]
- sources:
- model: DewEfresh/neo_7b
layer_range: [24, 24]
- model: DewEfresh/neo_7b
layer_range: [27, 27]
- sources:
- model: DewEfresh/neo_7b
layer_range: [25, 25]
- model: DewEfresh/neo_7b
layer_range: [27, 27]
- sources:
- model: DewEfresh/neo_7b
layer_range: [26, 26]
- model: DewEfresh/neo_7b
layer_range: [27, 27]
merge_method: slerp
base_model: DewEfresh/neo_7b
parameters:
t: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DewEfresh/Neo_7b-merge11"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ramy21/newyolos | ramy21 | 2024-06-30T21:35:18Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:35:18Z | Entry not found |
variante/llara-maskrcnn | variante | 2024-07-01T04:43:20Z | 0 | 0 | null | [
"llara",
"robotics",
"vlm",
"object-detection",
"dataset:VIMA/VIMA-Data",
"arxiv:2406.20095",
"license:apache-2.0",
"region:us"
] | object-detection | 2024-06-30T21:36:03Z | ---
inference: false
license: apache-2.0
datasets:
- VIMA/VIMA-Data
tags:
- llara
- robotics
- vlm
pipeline_tag: object-detection
---
<br>
<br>
# Model Card
This model is released with paper **[LLaRA: Supercharging Robot Learning Data for Vision-Language Policy](https://arxiv.org/abs/2406.20095)**
[Xiang Li](https://xxli.me)<sup>1</sup>, [Cristina Mata](https://openreview.net/profile?id=~Cristina_Mata1)<sup>1</sup>, [Jongwoo Park](https://github.com/jongwoopark7978)<sup>1</sup>, [Kumara Kahatapitiya](https://www3.cs.stonybrook.edu/~kkahatapitiy)<sup>1</sup>, [Yoo Sung Jang](https://yjang43.github.io/)<sup>1</sup>, [Jinghuan Shang](https://elicassion.github.io/)<sup>1</sup>, [Kanchana Ranasinghe](https://kahnchana.github.io/)<sup>1</sup>, [Ryan Burgert](https://ryanndagreat.github.io/)<sup>1</sup>, [Mu Cai](https://pages.cs.wisc.edu/~mucai/)<sup>2</sup>, [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)<sup>2</sup>, and [Michael S. Ryoo](http://michaelryoo.com/)<sup>1</sup>
<sup>1</sup>Stony Brook University <sup>2</sup>University of Wisconsin-Madison
## Model details
**Model type:**
This repository contains three models trained on three subsets respectively, converted from [VIMA-Data](https://huggingface.co/datasets/VIMA/VIMA-Data).
For the conversion code, please refer to [convert_vima.ipynb](https://github.com/LostXine/LLaRA/blob/main/datasets/convert_vima.ipynb)
**Paper or resources for more information:**
https://github.com/LostXine/LLaRA
**Where to send questions or comments about the model:**
https://github.com/LostXine/LLaRA/issues
|
Frixi/ROA_PR | Frixi | 2024-06-30T21:36:20Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-06-30T21:36:05Z | ---
license: openrail
---
|
tom1-ll/Lilyallroundv2 | tom1-ll | 2024-06-30T21:39:00Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-06-30T21:37:24Z | ---
license: openrail
---
|
imelike/new-turkishReviews-tokenizer | imelike | 2024-06-30T21:37:29Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T21:37:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/3855139265 | habulaj | 2024-06-30T21:39:57Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:39:49Z | Entry not found |
gkMSDA/FinChat_Mistral7B_DPO_V2 | gkMSDA | 2024-06-30T21:52:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T21:46:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/368315333833 | habulaj | 2024-06-30T21:49:04Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:49:01Z | Entry not found |
Adam3/Michael-Kranz | Adam3 | 2024-06-30T21:50:27Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-3-medium",
"region:us"
] | text-to-image | 2024-06-30T21:49:02Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/1001125444.jpg
base_model: stabilityai/stable-diffusion-3-medium
instance_prompt: null
---
# Michael Kranz
<Gallery />
## Download model
[Download](/Adam3/Michael-Kranz/tree/main) them in the Files & versions tab.
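A hedged loading sketch follows; it assumes the repository ships a diffusers-compatible Stable Diffusion 3 LoRA `.safetensors` file (the exact file name and any trigger words are not documented here, so `weight_name=` may need to be passed to `load_lora_weights`).
```python
# pip install -q diffusers transformers accelerate safetensors
import torch
from diffusers import StableDiffusion3Pipeline

# Base checkpoint named in the card; the "-diffusers" repo holds the diffusers layout.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

# Attach this LoRA (assumes a diffusers-format weight file in the repo root).
pipe.load_lora_weights("Adam3/Michael-Kranz")

image = pipe("portrait photo of Michael Kranz", num_inference_steps=28).images[0]
image.save("michael_kranz.png")
```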
|
Mome757mome/moka | Mome757mome | 2024-06-30T21:49:51Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-06-30T21:49:51Z | ---
license: bigscience-bloom-rail-1.0
---
|
habulaj/135763111248 | habulaj | 2024-06-30T21:50:18Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:50:16Z | Entry not found |
MHRDYN7/dnv2-base | MHRDYN7 | 2024-06-30T21:55:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"dnv2",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T21:52:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/5320041046 | habulaj | 2024-06-30T21:53:56Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:53:49Z | Entry not found |
habulaj/137258123550 | habulaj | 2024-06-30T21:57:21Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T21:57:16Z | Entry not found |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e4_dec1e3_bs16 | nsugianto | 2024-07-02T14:34:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"table-transformer",
"object-detection",
"generated_from_trainer",
"base_model:nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e5_dec1e4_bs12",
"license:mit",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-06-30T21:58:38Z | ---
license: mit
base_model: nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e5_dec1e4_bs12
tags:
- generated_from_trainer
model-index:
- name: tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e4_dec1e3_bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e4_dec1e3_bs16
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e5_dec1e4_bs12](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e5_dec1e4_bs12) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
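No usage snippet is provided, so a minimal inference sketch is given below; it assumes the checkpoint follows the standard Table Transformer object-detection interface and that the repository includes an image processor config. The input file name is illustrative.
```python
# pip install -q transformers torch pillow
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

repo = "nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e4_dec1e3_bs16"
processor = AutoImageProcessor.from_pretrained(repo)
model = TableTransformerForObjectDetection.from_pretrained(repo)

image = Image.open("table_crop.png").convert("RGB")  # a cropped table image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a confidence threshold and map label ids to names.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.6, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```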
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
|
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e4_dec1e3_bs16 | nsugianto | 2024-07-03T01:30:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"table-transformer",
"object-detection",
"generated_from_trainer",
"base_model:nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr5e5_dec1e4_bs12",
"license:mit",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-06-30T21:59:38Z | ---
license: mit
base_model: nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr5e5_dec1e4_bs12
tags:
- generated_from_trainer
model-index:
- name: tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e4_dec1e3_bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e4_dec1e3_bs16
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr5e5_dec1e4_bs12](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr5e5_dec1e4_bs12) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
|
khanhnn55/naschainv8 | khanhnn55 | 2024-07-02T23:19:43Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:03:31Z | Entry not found |
NiluferUcar/CNN_Convolutional_Neural_Network_Models | NiluferUcar | 2024-06-30T22:08:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-30T22:03:50Z | ---
license: apache-2.0
---
|
nicolebar/3d | nicolebar | 2024-06-30T22:04:38Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-30T22:04:38Z | ---
license: apache-2.0
---
|
JEFFERSONMUSIC/JKGOLDENERAV3 | JEFFERSONMUSIC | 2024-06-30T22:08:25Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-30T22:07:09Z | ---
license: apache-2.0
---
|
chaanks/UTMOS | chaanks | 2024-06-30T22:14:47Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:13:36Z | Entry not found |
Coolwowsocoolwow/Random_Assertive_EGirl | Coolwowsocoolwow | 2024-06-30T22:20:01Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-06-30T22:16:56Z | ---
license: openrail
---
|
habulaj/7282053427 | habulaj | 2024-06-30T22:18:22Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:18:20Z | Entry not found |
Intaa/Lucas | Intaa | 2024-07-02T22:51:53Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:18:35Z | Entry not found |
TheDima/resnet50-dog-breed-identification | TheDima | 2024-06-30T22:20:03Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2024-06-30T22:19:27Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
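The template above leaves usage blank; a hedged loading sketch is shown below, assuming the repository was pushed with `push_to_hub_fastai` so that `from_pretrained_fastai` can rebuild the Learner (class names and input expectations are not documented here, and the image path is illustrative).
```python
# pip install -q huggingface_hub fastai
from huggingface_hub import from_pretrained_fastai

# Rebuild the exported fastai Learner from the Hub repository.
learner = from_pretrained_fastai("TheDima/resnet50-dog-breed-identification")

# Predict the breed for a single image file.
pred_class, pred_idx, probs = learner.predict("some_dog.jpg")
print(pred_class, float(probs[pred_idx]))
```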
|
chopchopchuck/mts100 | chopchopchuck | 2024-06-30T22:21:12Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:20:47Z | Entry not found |
habulaj/2436026491 | habulaj | 2024-06-30T22:21:04Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:20:52Z | Entry not found |
chopchopchuck/mts101 | chopchopchuck | 2024-06-30T22:22:46Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:22:27Z | Entry not found |
chopchopchuck/mts102 | chopchopchuck | 2024-06-30T22:25:13Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:24:03Z | Entry not found |
habulaj/7712156082 | habulaj | 2024-06-30T22:24:54Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:24:48Z | Entry not found |
rambaldi47/q-FrozenLake-v1-4x4-noSlippery | rambaldi47 | 2024-06-30T22:24:58Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-06-30T22:24:55Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="rambaldi47/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
slattybenzo/checkpoint | slattybenzo | 2024-06-30T22:26:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-30T22:26:09Z | ---
license: apache-2.0
---
|
chopchopchuck/mts105 | chopchopchuck | 2024-06-30T22:27:43Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:27:21Z | Entry not found |
habulaj/367577333123 | habulaj | 2024-06-30T22:28:44Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:28:24Z | Entry not found |
rambaldi47/Taxi-v3 | rambaldi47 | 2024-06-30T23:32:06Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-06-30T22:28:41Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="rambaldi47/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
chopchopchuck/mts106 | chopchopchuck | 2024-06-30T22:29:32Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:29:10Z | Entry not found |
chopchopchuck/mts107 | chopchopchuck | 2024-06-30T22:31:18Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:30:58Z | Entry not found |
habulaj/200255390394 | habulaj | 2024-06-30T22:32:24Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:31:50Z | Entry not found |
chopchopchuck/mts108 | chopchopchuck | 2024-06-30T22:32:57Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:32:34Z | Entry not found |
martimfasantos/tinyllama-1.1b-sum-sft-full_LR4e-5 | martimfasantos | 2024-07-01T00:26:23Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:martimfasantos/openai-summarize-tldr",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T22:33:27Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- martimfasantos/openai-summarize-tldr
model-index:
- name: tinyllama-1.1b-sum-sft-full_LR4e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-1.1b-sum-sft-full_LR4e-5
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the martimfasantos/openai-summarize-tldr dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1087
## Model description
More information needed
## Intended uses & limitations
More information needed
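A hedged generation sketch follows; the exact prompt format used during SFT is not documented in the card, so the `TL;DR:` suffix below is an assumption based on the dataset name, and the Reddit-style post is only an illustration.
```python
# pip install -q transformers accelerate torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "martimfasantos/tinyllama-1.1b-sum-sft-full_LR4e-5"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

post = "I adopted a senior dog last month and he finally learned to climb the stairs today..."
prompt = post + "\nTL;DR:"  # assumed prompt format for a summarize-tldr SFT model

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens (the summary).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```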
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
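For orientation only, the settings above map roughly onto `transformers.TrainingArguments` as sketched below; the actual run used the alignment-handbook SFT recipe, so the output directory and the precision flag are illustrative assumptions.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tinyllama-1.1b-sum-sft-full_LR4e-5",  # illustrative
    learning_rate=4e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 8 x 2 (x GPUs) -> total train batch size 16
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    # Adam betas/epsilon listed in the card are the library defaults, so they are not set here.
)
```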
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1044 | 0.9997 | 1476 | 2.1087 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
chopchopchuck/mts109 | chopchopchuck | 2024-06-30T22:34:38Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:34:15Z | Entry not found |
chopchopchuck/mts110 | chopchopchuck | 2024-06-30T22:36:13Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:35:52Z | Entry not found |
habulaj/1109616775 | habulaj | 2024-06-30T22:36:17Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:36:11Z | Entry not found |
AIGym/fast-mini-webtext-65536 | AIGym | 2024-06-30T22:36:47Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T22:36:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mikedata/q-FrozenLake-v1-4x4-noSlippery | mikedata | 2024-06-30T22:40:14Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-06-30T22:39:46Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mikedata/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Homiebear/R2-D2 | Homiebear | 2024-06-30T22:44:51Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-06-30T22:44:30Z | ---
license: openrail
---
|
Spbou4-hilma/HILMA-FIN-7B | Spbou4-hilma | 2024-06-30T23:03:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-30T22:45:31Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: meta-llama/Meta-Llama-3-8B-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
habulaj/244479215760 | habulaj | 2024-06-30T22:46:26Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T22:46:18Z | Entry not found |
EmoHugAI/xlm-roberta-base-finetuned-panx-de | EmoHugAI | 2024-06-30T23:34:41Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-06-30T22:46:34Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3159
- F1: 0.8543
## Model description
More information needed
## Intended uses & limitations
More information needed
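No usage example is given, so a short inference sketch follows; it assumes the checkpoint exposes the usual token-classification head with PAN-X-style entity labels, and the German example sentence is illustrative.
```python
# pip install -q transformers
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="EmoHugAI/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```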
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3578 | 1.0 | 525 | 0.2204 | 0.7870 |
| 0.1946 | 2.0 | 1050 | 0.2063 | 0.8072 |
| 0.1314 | 3.0 | 1575 | 0.2037 | 0.8318 |
| 0.0924 | 4.0 | 2100 | 0.2161 | 0.8363 |
| 0.0641 | 5.0 | 2625 | 0.2472 | 0.8418 |
| 0.046 | 6.0 | 3150 | 0.2754 | 0.8409 |
| 0.0306 | 7.0 | 3675 | 0.2718 | 0.8509 |
| 0.0205 | 8.0 | 4200 | 0.3045 | 0.8563 |
| 0.0128 | 9.0 | 4725 | 0.3148 | 0.8568 |
| 0.0091 | 10.0 | 5250 | 0.3159 | 0.8543 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
mikedata/q-Taxi-V3 | mikedata | 2024-06-30T22:48:55Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-06-30T22:47:40Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="mikedata/q-Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|