modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (string, 245 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1–901k chars)
---|---|---|---|---|---|---|---|---|---|
mayarmostafa/videomae-base-finetuned-bleeding-exp_4 | mayarmostafa | 2024-07-02T12:37:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-07-02T11:26:23Z | Entry not found |
sctj/sd-class-butterflies-32 | sctj | 2024-07-02T11:33:32Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-07-02T11:26:27Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline

# Load the trained pipeline from the Hub and sample one butterfly image
pipeline = DDPMPipeline.from_pretrained('sctj/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
akazdayo/whisper-medium-onnx | akazdayo | 2024-07-02T12:26:39Z | 0 | 0 | transformers | [
"transformers",
"onnx",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T11:27:43Z | # Credit
https://huggingface.co/openai/whisper-medium - Apache-2.0
Copyright OpenAI
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. |
CaoTienTrung/SAVE_TRAIN | CaoTienTrung | 2024-07-02T11:28:39Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:28:38Z | Entry not found |
huggingKyr/lora_model | huggingKyr | 2024-07-02T11:28:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:28:42Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** huggingKyr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
philipplukas/whisper-asr-finetuned | philipplukas | 2024-07-02T13:35:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"de",
"dataset:philipplukas/asr_data",
"base_model:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T11:28:51Z | ---
language:
- de
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- philipplukas/asr_data
metrics:
- wer
model-index:
- name: Finetuned on german congenital speech
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Custom asr data
type: philipplukas/asr_data
metrics:
- name: Wer
type: wer
value: 36.44859813084112
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetuned on german congenital speech
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Custom asr data dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6732
- Wer: 36.4486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 1
- mixed_precision_training: Native AMP
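For reference, the Adam optimizer listed above can be sketched as a single update step. This is illustrative only (not the Trainer's actual implementation); the scalar-parameter form and the example gradient are assumptions, while `lr`, `beta1`, `beta2`, and `eps` match the hyperparameters listed:

```python
def adam_step(param, grad, m, v, t, lr=1e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter, with bias-corrected moments."""
    m = beta1 * m + (1 - beta1) * grad           # first moment (running mean of grads)
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment (running mean of squared grads)
    m_hat = m / (1 - beta1 ** t)                 # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=0.5, m=m, v=v, t=1)
print(p)  # moves the parameter down by roughly lr (about 1e-5)
```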
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.4554 | 0.2 | 1 | 1.6732 | 36.4486 |
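The Wer figures above are word error rates: the word-level edit distance between hypothesis and reference, divided by the number of reference words, times 100. A minimal self-contained sketch of that computation (illustrative only; the reported value comes from the evaluation library the Trainer used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length * 100."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("das ist ein test", "das ist test"))  # one deletion out of 4 words -> 25.0
```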
### Framework versions
- Transformers 4.41.2
- Pytorch 2.2.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
qsdcfqsdfcxqfqs/AI-Unveils-Evolutionary-Patterns-Predicted-by-Darwin-and-Wallace-Neuroscience-News-fa-updated | qsdcfqsdfcxqfqs | 2024-07-02T11:30:35Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2024-07-02T11:29:20Z | ---
language:
- en
---
Summary: A novel AI-powered study explores evolutionary differences between male and female birdwing butterflies, shedding new light on a historical debate between Charles Darwin and Alfred Russel Wallace.
Using machine learning to analyze over 16,000 butterfly specimens, researchers found that both sexes contribute to species diversity. Males often show more variation, supporting Darwin's theories of sexual selection, while subtle variations in females align with Wallace's ideas on natural selection.
These findings expand on classical theories by showing how both mechanisms operate together to drive biodiversity.
Pioneering AI-powered research on butterflies has probed the under-studied evolution of females and adds to a debate between the founding fathers of evolution.
The University of Essex study - published in Communications Biology - explores a controversy between Victorian scientists Charles Darwin and Alfred Russel Wallace.
Darwin thought males had more variation, as females often chose mates based on male appearance.
Whereas Wallace thought natural selection across sexes was the biggest factor in difference.
For over a century, scientists have mostly studied males because their differences are more obvious, while females, with more subtle evolutionary changes, had been less studied.
By using high-tech machine learning, Dr Jennifer Hoyal Cuthill examined more than 16,000 male and female birdwing butterflies, with collaborators from the Natural History Museum and AI research institute Cross Labs, Cross Compass.
This is the first time the visual differences between sexes have been explored across the species, which live in Southeast Asia and Australasia.
Birdwing butterflies were chosen for this study because of their spectacular wing colour patterns and differences between males and females.
Dr Hoyal Cuthill, from the School of Life Sciences, said: "This is an exciting time, when machine learning is enabling new, large-scale tests of longstanding questions in evolutionary science.
"For the first time we are able to measure the visible extents of evolution to test how much variation is present in different biological groups and among both males and females.
"Machine learning is giving us new information on the evolutionary processes which generate and maintain biodiversity, including in historically neglected groups."
The study looked at photographs of butterflies from the Natural History Museum collections, which show a range of traits, like wing shapes, colours, and patterns, across several species.
It found that while males often have more distinct shapes and patterns, both males and females contribute to the overall diversity.
The research found evolutionary patterns predicted by both Darwin and Wallace in the butterflies, showing that both males and females contribute to diversity among species.
The males showed more variation in appearance, which fits with Darwin's idea that females choose mates based on these traits.
However, the deep learning also found subtle variation in females, matching Wallace's predictions about natural selection allowing diversity in female phenotypes.
Dr Hoyal Cuthill said: "Birdwings have been described as among the most beautiful butterflies in the world. This study gives us new insights into the evolution of their remarkable but endangered diversity.
"In this case study of birdwing butterfly photographs, it is sex that appears to have driven the greatest evolutionary change, including extreme male shapes, colours and patterns.
"However, within the group of birdwing butterflies, we found contrasting examples where female birdwing butterflies are more diverse in visible phenotype than males, and vice versa.
"High visible diversity among male butterflies supports the real-world importance of sexual selection from female mate choice on male variation, as originally suggested by Darwin.
"Cases where female butterflies are more visibly diverse than the males of their species, support an additional, important role for naturally selected female variation in inter-species diversity, as suggested by Wallace.
"Large-scale studies of evolution using machine learning offer new opportunities to resolve debates that have been outstanding since the founding of evolutionary science."
Male and female contributions to diversity among birdwing butterfly images
Machine learning (ML) newly enables tests for higher inter-species diversity in visible phenotype (disparity) among males versus females, predictions made from Darwinian sexual selection versus Wallacean natural selection, respectively.
Here, we use ML to quantify variation across a sample of > 16,000 dorsal and ventral photographs of the sexually dimorphic birdwing butterflies (Lepidoptera: Papilionidae).
Validation of image embedding distances, learnt by a triplet-trained, deep convolutional neural network, shows ML can be used for automated reconstruction of phenotypic evolution achieving measures of phylogenetic congruence to genetic species trees within a range sampled among genetic trees themselves.
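As an illustrative aside (not the study's actual code), the triplet objective behind such an embedding pulls same-species images together and pushes different-species images apart by at least a margin; the margin value and toy embeddings below are arbitrary assumptions:

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss on squared embedding distances: the anchor should sit closer
    to the positive (same species) than to the negative (other species)
    by at least `margin`; otherwise the violation is penalized."""
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(0.0, d_pos - d_neg + margin)

# Well-separated embeddings incur no loss; poorly separated ones do.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [1.0, 0.0]))            # 0.0
print(round(triplet_loss([0.0, 0.0], [0.1, 0.0], [0.2, 0.0]), 2))  # 0.17
```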
Quantification of sexual disparity difference (male versus female embedding distance), shows sexually and phylogenetically variable inter-species disparity.
Ornithoptera exemplify high embedded male image disparity, diversification of selective optima in fitted multi-peak OU models and accelerated divergence, with cases of extreme divergence in allopatry and sympatry.
However, genus Troides shows inverted patterns, including comparatively static male embedded phenotype, and higher female than male disparity - though within an inferred selective regime common to these females. Birdwing shapes and colour patterns that are most phenotypically distinctive in ML similarity are generally those of males.
However, either sex can contribute majoritively to observed phenotypic diversity among species. |
onnx-community/silero-vad | onnx-community | 2024-07-02T11:32:06Z | 0 | 0 | null | [
"onnx",
"license:mit",
"region:us"
] | null | 2024-07-02T11:29:35Z | ---
license: mit
---
|
liuguojing/distilbert-base-uncased_emotion_ft_0702 | liuguojing | 2024-07-02T13:57:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T11:30:31Z | Entry not found |
SidXXD/test_attn_updated_new | SidXXD | 2024-07-02T11:30:54Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:30:53Z | Entry not found |
Nerdofdot/trial4 | Nerdofdot | 2024-07-02T11:32:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:32:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
qsdcfqsdfcxqfqs/UN-Calls-for-Immediate-Release-of-Imran-Khan-Amidst-Legal-Controversies-Politics-f2-updated | qsdcfqsdfcxqfqs | 2024-07-02T11:34:12Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2024-07-02T11:32:57Z | ---
language:
- en
---
A UN human rights working group has demanded the immediate release of Pakistan's former Prime Minister Imran Khan, citing arbitrary detention and violations of international laws. Despite multiple prison sentences and a ban from politics, Khan remains a leading opposition figure, with his party alleging political repression.
The United Nations human rights working group on Monday demanded the immediate release of Pakistan's former Prime Minister Imran Khan, describing his detention as 'arbitrary and in violation of international laws.'
The Geneva-based UN Working Group on Arbitrary Detention issued the call after a thorough examination of Khan's case, in which he was convicted last year on corruption charges. The former prime minister has faced multiple prison sentences since his ousting in 2022 through a parliamentary vote of no-confidence.
There was no immediate comment from the government of Prime Minister Shehbaz Sharif, who succeeded Khan after his removal. Khan has remained incarcerated since August 2023, following a three-year prison sentence for allegedly hiding assets derived from the sale of state gifts. This conviction led to a ban on Khan participating in politics and contesting the February 8 elections -- a vote his party claims was rigged.
The Pakistan Election Commission, responsible for overseeing the vote, has denied any allegations of vote-rigging. Despite his convictions, Khan continues to be a prominent opposition figure.
Khan's party, Pakistan Tehreek-e-Insaf (PTI), which holds a significant parliamentary presence, praised the UN group's demand for Khan's release. The group stated that Khan's detention in the graft case 'had no legal basis and appears to have been intended to disqualify him from running for office.' It further argued that Khan was detained for exercising his right to freedom of expression and was denied a fair trial and due process rights.
The UN working group described Khan's release as an 'appropriate remedy,' adding that his conviction formed part of a broader repression targeting PTI and Khan specifically.
In the buildup to Pakistan's February 2024 general elections, PTI candidates reportedly faced arrests, torture, and intimidation to force them out of the party. PTI rallies were also disrupted, and the party was deprived of its iconic cricket bat symbol, effectively forcing candidates to run as independents.
The UN group noted that Khan is confronted with over 150 politically motivated criminal cases, culminating in recent convictions in three additional cases just days before the election. These new sentences collectively add up to a combined prison term of 34 years, equivalent to a life sentence for the 71-year-old Khan.
Imran Khan's spokesman, Zulfi Bukhari, welcomed the UN group's findings and their call for Khan's immediate release.
Despite winning the most seats in the February 8 vote, Khan's party fell short of a parliamentary majority needed to form a government.
(This story has not been edited by Devdiscourse staff and is auto-generated from a syndicated feed.).... |
qsdcfqsdfcxqfqs/Opposition-clashes-with-govt-in-RS-over-new-laws-agencies-misuse-India-News-Times-ca-updated | qsdcfqsdfcxqfqs | 2024-07-02T11:35:26Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2024-07-02T11:34:12Z | ---
language:
- en
---
NEW DELHI: The discussion on the Prez's address in the afternoon saw the treasury benches and the opposition clash numerous times as allegations were hurled by speakers. From demands for a speedy investigation to punish the guilty in the paper leaks and irregularities in NEET and issues like special status for Andhra Pradesh to the opposition attacking govt for misuing central agencies, bulldozing bills like the new criminal laws in the past and making a mockery of the federal structure - members from TMC, DMK, CPI (M), CPI, YSRCP, RJD and AAP among others raised a wide range of issues.
DMK's Tiruchi Siva said that the Prez had in her address said that her govt has a clear majority, however, people only gave a verdict in the recently concluded Lok Sabha election and not a majority. "One thing I am sure is that after this election results and govt has taken over, they cannot bulldoze the bills as they were doing earlier. The opposition is very strong in the Lok Sabha," he said while speaking over the Motion of Thanks to the Prez's speech.
Meda Raghunadha Reddy (YSRCP) among other issues demanded special category status for Andhra Pradesh. TMC functionary Jawhar Sircar lashed out at govt saying that inequality has reached "insane proportions" under this govt.
On a day the INDIA bloc parties held a protest in Parliament complex on the misuse of agencies like ED, CBI and income tax, AAP functionary Sanjay Singh during the discussion accused the NDA govt of indulging in politics of vendetta by using agencies to put functionaries of opposition parties in jail.
NCP functionary Praful Patel launched a scathing attack on the opposition saying that the NDA govt has not used Article 356 (which gives Centre power to dissolve state assemblies ) in the past 10 years. He praised govt for push to infrastructure development..... |
ahmed-tettf/Aya | ahmed-tettf | 2024-07-02T11:34:24Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:34:24Z | Entry not found |
HARSHU550/Formal_Informal | HARSHU550 | 2024-07-02T11:37:19Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-07-02T11:34:59Z | ## This model belongs to the Styleformer project
[Please refer to github page](https://github.com/PrithivirajDamodaran/Styleformer)
|
qsdcfqsdfcxqfqs/New-Government-Locks-In-Labours-Digital-Shakedown-gd-updated | qsdcfqsdfcxqfqs | 2024-07-02T11:37:39Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2024-07-02T11:36:25Z | ---
language:
- en
---
Responding to news that the Government plans to progress the Fair Digital News Bargaining Bill, despite the failure of similar pieces of legislation overseas, Taxpayers' Union Policy and Public Affairs Manager, James Ross, said:
"If mainstream media can't stand on its own two feet, it is because they are out of touch with New Zealanders. Study after study shows people just don't trust them anymore, and the answer to that isn't bailing out the same old tired outfits.
"You only have to look at Canada to see this rort won't work. Shaking down digital platforms like Facebook to prop up the likes of Stuff will just see them ban publishing links to articles. That means no more news on your timeline.
"National knew this bill wouldn't work before the election, but they've buckled under pressure from the Wellington bubble yet again. It's time to grow a spine and stop locking in Labour's legacy of failure.".... |
qsdcfqsdfcxqfqs/Jill-Bidens-Vogue-Cover-Kicks-Up-Further-Debate-1h-updated | qsdcfqsdfcxqfqs | 2024-07-02T11:37:30Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:37:30Z | Entry not found |
Nataraj/whisper-small-en | Nataraj | 2024-07-02T13:38:21Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"en",
"dataset:medical_speech_transcription",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T11:38:02Z | ---
language:
- en
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- medical_speech_transcription
metrics:
- wer
model-index:
- name: whisper_fine_tune_Nataraj
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Medical Speech, Transcription, and Intent
type: medical_speech_transcription
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 7.180678123548537
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_fine_tune_Nataraj
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Medical Speech, Transcription, and Intent dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1107
- Wer: 7.1807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 600
- mixed_precision_training: Native AMP
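
The `linear` scheduler with 100 warmup steps ramps the learning rate from 0 up to 1e-05 over the first 100 steps, then decays it linearly to 0 at step 600. A plain-Python sketch of that shape (the function name is ours; `transformers` implements this as `get_linear_schedule_with_warmup`):

```python
def linear_schedule_lr(step, base_lr=1e-05, warmup_steps=100, total_steps=600):
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(50))   # mid-warmup: 5e-06
print(linear_schedule_lr(100))  # peak learning rate: 1e-05
print(linear_schedule_lr(600))  # end of training: 0.0
```

Warmup like this avoids large updates early on, while the optimizer's moment estimates are still cold.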
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6938 | 0.28 | 100 | 0.6197 | 45.9080 |
| 0.1912 | 0.56 | 200 | 0.2053 | 12.1040 |
| 0.1152 | 0.85 | 300 | 0.1555 | 9.5495 |
| 0.0519 | 1.13 | 400 | 0.1268 | 8.3883 |
| 0.0557 | 1.41 | 500 | 0.1156 | 7.6173 |
| 0.0536 | 1.69 | 600 | 0.1107 | 7.1807 |
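
The `Wer` column above is the word error rate in percent: the word-level edit distance between hypothesis and reference, divided by the reference length. A self-contained sketch of the metric (the example strings are made up; training pipelines typically compute this with a library such as `evaluate`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count, in percent."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# 1 substituted word out of 5 -> 20.0
print(wer("the patient reports chest pain", "the patient report chest pain"))
```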
### Framework versions
- Transformers 4.28.0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.13.3
|
bezzam/tapecam-mirflickr-unet4M-unrolled-admm10-unet4M | bezzam | 2024-07-02T11:39:01Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T11:38:38Z | ---
license: mit
---
|
kingifashe/first_model | kingifashe | 2024-07-02T11:38:52Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T11:38:52Z | ---
license: mit
---
|
RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf | RichardErkhov | 2024-07-02T16:20:54Z | 0 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-07-02T11:39:12Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
agiin-11.1B-v0.0 - GGUF
- Model creator: https://huggingface.co/mncai/
- Original model: https://huggingface.co/mncai/agiin-11.1B-v0.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [agiin-11.1B-v0.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q2_K.gguf) | Q2_K | 3.88GB |
| [agiin-11.1B-v0.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.IQ3_XS.gguf) | IQ3_XS | 4.31GB |
| [agiin-11.1B-v0.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.IQ3_S.gguf) | IQ3_S | 4.54GB |
| [agiin-11.1B-v0.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q3_K_S.gguf) | Q3_K_S | 4.52GB |
| [agiin-11.1B-v0.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.IQ3_M.gguf) | IQ3_M | 4.69GB |
| [agiin-11.1B-v0.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q3_K.gguf) | Q3_K | 5.03GB |
| [agiin-11.1B-v0.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q3_K_M.gguf) | Q3_K_M | 5.03GB |
| [agiin-11.1B-v0.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q3_K_L.gguf) | Q3_K_L | 5.48GB |
| [agiin-11.1B-v0.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.IQ4_XS.gguf) | IQ4_XS | 5.64GB |
| [agiin-11.1B-v0.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q4_0.gguf) | Q4_0 | 5.88GB |
| [agiin-11.1B-v0.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.IQ4_NL.gguf) | IQ4_NL | 5.95GB |
| [agiin-11.1B-v0.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q4_K_S.gguf) | Q4_K_S | 5.93GB |
| [agiin-11.1B-v0.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q4_K.gguf) | Q4_K | 6.26GB |
| [agiin-11.1B-v0.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q4_K_M.gguf) | Q4_K_M | 6.26GB |
| [agiin-11.1B-v0.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q4_1.gguf) | Q4_1 | 6.53GB |
| [agiin-11.1B-v0.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q5_0.gguf) | Q5_0 | 7.17GB |
| [agiin-11.1B-v0.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q5_K_S.gguf) | Q5_K_S | 7.17GB |
| [agiin-11.1B-v0.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q5_K.gguf) | Q5_K | 7.36GB |
| [agiin-11.1B-v0.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q5_K_M.gguf) | Q5_K_M | 7.36GB |
| [agiin-11.1B-v0.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q5_1.gguf) | Q5_1 | 7.81GB |
| [agiin-11.1B-v0.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q6_K.gguf) | Q6_K | 8.53GB |
| [agiin-11.1B-v0.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-11.1B-v0.0-gguf/blob/main/agiin-11.1B-v0.0.Q8_0.gguf) | Q8_0 | 11.05GB |
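
A rough way to compare the quantizations above is bits per weight: file size divided by parameter count. A sketch, assuming the sizes in the table are GiB and the model has roughly 11.1B parameters, and ignoring GGUF metadata overhead:

```python
def bits_per_weight(file_size_gib, n_params=11.1e9):
    """Approximate bits per weight of a GGUF file (metadata overhead ignored)."""
    return file_size_gib * 2**30 * 8 / n_params

for name, size_gib in [("Q2_K", 3.88), ("Q4_K_M", 6.26), ("Q8_0", 11.05)]:
    # Q8_0 comes out around 8.5 bits/weight with these assumptions.
    print(f"{name}: ~{bits_per_weight(size_gib):.2f} bits/weight")
```

Lower bits per weight means a smaller download and memory footprint, at the cost of more quantization error.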
Original model description:
---
license: apache-2.0
language:
- en
---
# Model Card for mncai/agiin-11.1B-v0.0
### Introduction of MindsAndCompany
https://mnc.ai/
We create various AI models and develop solutions that can be applied to businesses. In generative AI, we are developing products like Code Assistant, TOD Chatbot, and LLMOps, and we are in the process of developing Enterprise AGI (Artificial General Intelligence).
### Model Summary
Based on the Mistral architecture; pretrained, instruction tuned, and aligned with DPO.
### How to Use
Here are some examples of how to use our model.
```python
from transformers import AutoTokenizer
import transformers
import torch

hf_model = 'mncai/agiin-11.1B-v0.0'
tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,  # dtype/device settings are illustrative
    device_map="auto",
)

# Korean prompt: "Given two spheres with diameters 1 and 2, how many times
# larger is one sphere's volume than the other? Please explain as well."
message = "<|user|>\n두 개의 구가 있는데 각각 지름이 1, 2일때 각 구의 부피는 몇배야? 설명도 같이 해줘.\n<|assistant|>\n"
sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
### Contact
If you have any questions, please raise an issue or contact us at [email protected]
|
HARSHU550/Sentiments | HARSHU550 | 2024-07-02T13:23:22Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:sst2",
"dataset:glue",
"arxiv:1910.01108",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T11:39:25Z | ---
language: en
license: apache-2.0
datasets:
- sst2
- glue
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: sst2
split: validation
metrics:
- type: accuracy
value: 0.9105504587155964
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2YyOGMxYjY2Y2JhMjkxNjIzN2FmMjNiNmM2ZWViNGY3MTNmNWI2YzhiYjYxZTY0ZGUyN2M1NGIxZjRiMjQwZiIsInZlcnNpb24iOjF9.uui0srxV5ZHRhxbYN6082EZdwpnBgubPJ5R2-Wk8HTWqmxYE3QHidevR9LLAhidqGw6Ih93fK0goAXncld_gBg
- type: precision
value: 0.8978260869565218
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzgwYTYwYjA2MmM0ZTYwNDk0M2NmNTBkZmM2NGNhYzQ1OGEyN2NkNDQ3Mzc2NTQyMmZiNDJiNzBhNGVhZGUyOSIsInZlcnNpb24iOjF9.eHjLmw3K02OU69R2Au8eyuSqT3aBDHgZCn8jSzE3_urD6EUSSsLxUpiAYR4BGLD_U6-ZKcdxVo_A2rdXqvUJDA
- type: recall
value: 0.9301801801801802
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGIzM2E3MTI2Mzc2MDYwNmU3ZTVjYmZmZDBkNjY4ZTc5MGY0Y2FkNDU3NjY1MmVkNmE3Y2QzMzAwZDZhOWY1NiIsInZlcnNpb24iOjF9.PUZlqmct13-rJWBXdHm5tdkXgETL9F82GNbbSR4hI8MB-v39KrK59cqzFC2Ac7kJe_DtOeUyosj34O_mFt_1DQ
- type: auc
value: 0.9716626673402374
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDM0YWIwZmQ4YjUwOGZmMWU2MjI1YjIxZGQ2MzNjMzRmZmYxMzZkNGFjODhlMDcyZDM1Y2RkMWZlOWQ0MWYwNSIsInZlcnNpb24iOjF9.E7GRlAXmmpEkTHlXheVkuL1W4WNjv4JO3qY_WCVsTVKiO7bUu0UVjPIyQ6g-J1OxsfqZmW3Leli1wY8vPBNNCQ
- type: f1
value: 0.9137168141592922
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGU4MjNmOGYwZjZjMDQ1ZTkyZTA4YTc1MWYwOTM0NDM4ZWY1ZGVkNDY5MzNhYTQyZGFlNzIyZmUwMDg3NDU0NyIsInZlcnNpb24iOjF9.mW5ftkq50Se58M-jm6a2Pu93QeKa3MfV7xcBwvG3PSB_KNJxZWTCpfMQp-Cmx_EMlmI2siKOyd8akYjJUrzJCA
- type: loss
value: 0.39013850688934326
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTZiNzAyZDc0MzUzMmE1MGJiN2JlYzFiODE5ZTNlNGE4MmI4YzRiMTc2ODEzMTUwZmEzOTgxNzc4YjJjZTRmNiIsInZlcnNpb24iOjF9.VqIC7uYC-ZZ8ss9zQOlRV39YVOOLc5R36sIzCcVz8lolh61ux_5djm2XjpP6ARc6KqEnXC4ZtfNXsX2HZfrtCQ
- task:
type: text-classification
name: Text Classification
dataset:
name: sst2
type: sst2
config: default
split: train
metrics:
- type: accuracy
value: 0.9885521685548412
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2I3NzU3YzhmMDkxZTViY2M3OTY1NmI0ZTdmMDQxNjNjYzJiZmQxNzczM2E4YmExYTY5ODY0NDBkY2I4ZjNkOCIsInZlcnNpb24iOjF9.4Gtk3FeVc9sPWSqZIaeUXJ9oVlPzm-NmujnWpK2y5s1Vhp1l6Y1pK5_78wW0-NxSvQqV6qd5KQf_OAEpVAkQDA
- type: precision
value: 0.9881965062029833
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDdlZDMzY2I3MTAwYTljNmM4MGMyMzU2YjAzZDg1NDYwN2ZmM2Y5OWZhMjUyMGJiNjY1YmZiMzFhMDI2ODFhNyIsInZlcnNpb24iOjF9.cqmv6yBxu4St2mykRWrZ07tDsiSLdtLTz2hbqQ7Gm1rMzq9tdlkZ8MyJRxtME_Y8UaOG9rs68pV-gKVUs8wABw
- type: precision
value: 0.9885521685548412
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjFlYzAzNmE1YjljNjUwNzBjZjEzZDY0ZDQyMmY5ZWM2OTBhNzNjYjYzYTk1YWE1NjU3YTMxZDQwOTE1Y2FkNyIsInZlcnNpb24iOjF9.jnCHOkUHuAOZZ_ZMVOnetx__OVJCS6LOno4caWECAmfrUaIPnPNV9iJ6izRO3sqkHRmxYpWBb-27GJ4N3LU-BQ
- type: precision
value: 0.9885639626373408
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGUyODFjNjBlNTE2MTY3ZDAxOGU1N2U0YjUyY2NiZjhkOGVmYThjYjBkNGU3NTRkYzkzNDQ2MmMwMjkwMWNiMyIsInZlcnNpb24iOjF9.zTNabMwApiZyXdr76QUn7WgGB7D7lP-iqS3bn35piqVTNsv3wnKjZOaKFVLIUvtBXq4gKw7N2oWxvWc4OcSNDg
- type: recall
value: 0.9886145346602994
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTU1YjlhODU3YTkyNTdiZDcwZGFlZDBiYjY0N2NjMGM2NTRiNjQ3MDNjNGMxOWY2ZGQ4NWU1YmMzY2UwZTI3YSIsInZlcnNpb24iOjF9.xaLPY7U-wHsJ3DDui1yyyM-xWjL0Jz5puRThy7fczal9x05eKEQ9s0a_WD-iLmapvJs0caXpV70hDe2NLcs-DA
- type: recall
value: 0.9885521685548412
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODE0YTU0MDBlOGY4YzU0MjY5MzA3OTk2OGNhOGVkMmU5OGRjZmFiZWI2ZjY5ODEzZTQzMTI0N2NiOTVkNDliYiIsInZlcnNpb24iOjF9.SOt1baTBbuZRrsvGcak2sUwoTrQzmNCbyV2m1_yjGsU48SBH0NcKXicidNBSnJ6ihM5jf_Lv_B5_eOBkLfNWDQ
- type: recall
value: 0.9885521685548412
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWNkNmM0ZGRlNmYxYzIwNDk4OTI5MzIwZWU1NzZjZDVhMDcyNDFlMjBhNDQxODU5OWMwMWNhNGEzNjY3ZGUyOSIsInZlcnNpb24iOjF9.b15Fh70GwtlG3cSqPW-8VEZT2oy0CtgvgEOtWiYonOovjkIQ4RSLFVzVG-YfslaIyfg9RzMWzjhLnMY7Bpn2Aw
- type: f1
value: 0.9884019815052447
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmM4NjQ5Yjk5ODRhYTU1MTY3MmRhZDBmODM1NTg3OTFiNWM4NDRmYjI0MzZkNmQ1MzE3MzcxODZlYzBkYTMyYSIsInZlcnNpb24iOjF9.74RaDK8nBVuGRl2Se_-hwQvP6c4lvVxGHpcCWB4uZUCf2_HoC9NT9u7P3pMJfH_tK2cpV7U3VWGgSDhQDi-UBQ
- type: f1
value: 0.9885521685548412
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDRmYWRmMmQ0YjViZmQxMzhhYTUyOTE1MTc0ZDU1ZjQyZjFhMDYzYzMzZDE0NzZlYzQyOTBhMTBhNmM5NTlkMiIsInZlcnNpb24iOjF9.VMn_psdAHIZTlW6GbjERZDe8MHhwzJ0rbjV_VJyuMrsdOh5QDmko-wEvaBWNEdT0cEKsbggm-6jd3Gh81PfHAQ
- type: f1
value: 0.9885546181087554
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjUyZWFhZDZhMGQ3MzBmYmRiNDVmN2FkZDBjMjk3ODk0OTAxNGZkMWE0NzU5ZjI0NzE0NGZiNzM0N2Y2NDYyOSIsInZlcnNpb24iOjF9.YsXBhnzEEFEW6jw3mQlFUuIrW7Gabad2Ils-iunYJr-myg0heF8NEnEWABKFE1SnvCWt-69jkLza6SupeyLVCA
- type: loss
value: 0.040652573108673096
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTc3YjU3MjdjMzkxODA5MjU5NGUyY2NkMGVhZDg3ZWEzMmU1YWVjMmI0NmU2OWEyZTkzMTVjNDZiYTc0YjIyNCIsInZlcnNpb24iOjF9.lA90qXZVYiILHMFlr6t6H81Oe8a-4KmeX-vyCC1BDia2ofudegv6Vb46-4RzmbtuKeV6yy6YNNXxXxqVak1pAg
---
# DistilBERT base uncased finetuned SST-2
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
## Model Details
**Model Description:** This model is a fine-tune checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on SST-2.
This model reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased version reaches an accuracy of 92.7).
- **Developed by:** Hugging Face
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased).
- **Resources for more information:**
- [Model Documentation](https://huggingface.co/docs/transformers/main/en/model_doc/distilbert#transformers.DistilBertForSequenceClassification)
- [DistilBERT paper](https://arxiv.org/abs/1910.01108)
## How to Get Started With the Model
Example of single-label classification:
```python
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
```
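
The `logits` above are unnormalized scores; to read them as class probabilities, apply a softmax. A standalone sketch (plain Python, no model needed; the logit values are made up):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for the two labels ["NEGATIVE", "POSITIVE"]:
probs = softmax([-2.0, 3.0])
print(probs)  # probabilities sum to 1; POSITIVE gets ~0.99 here
```

With the model above, `logits.softmax(dim=-1)` performs the same transform directly in PyTorch.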
## Uses
#### Direct Use
This model can be used for sentiment analysis: it classifies English text as positive or negative. See the model hub to look for versions fine-tuned on other tasks that interest you.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
Based on a few experimentations, we observed that this model could produce biased predictions that target underrepresented populations.
For instance, for sentences like `This film was filmed in COUNTRY`, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this [colab](https://colab.research.google.com/gist/ageron/fb2f64fb145b4bc7c49efc97e5f114d3/biasmap.ipynb), [Aurélien Géron](https://twitter.com/aureliengeron) made an interesting map plotting these probabilities for each country.
<img src="https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/resolve/main/map.jpeg" alt="Map of positive probabilities per country." width="500"/>
We strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: [WinoBias](https://huggingface.co/datasets/wino_bias), [WinoGender](https://huggingface.co/datasets/super_glue), [Stereoset](https://huggingface.co/datasets/stereoset).
# Training
#### Training Data
The authors use the following Stanford Sentiment Treebank([sst2](https://huggingface.co/datasets/sst2)) corpora for the model.
#### Training Procedure
###### Fine-tuning hyper-parameters
- learning_rate = 1e-5
- batch_size = 32
- warmup = 600
- max_seq_length = 128
- num_train_epochs = 3.0
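For context, these hyper-parameters imply roughly the following step counts. The SST-2 train-split size used here (~67,349 examples) is an assumption not stated in this card:

```python
import math

train_examples = 67_349   # assumed SST-2 train split size (not stated in the card)
batch_size = 32
num_train_epochs = 3
warmup_steps = 600

steps_per_epoch = math.ceil(train_examples / batch_size)  # 2105
total_steps = steps_per_epoch * num_train_epochs          # 6315
warmup_fraction = warmup_steps / total_steps              # ~9.5% of training

print(steps_per_epoch, total_steps, round(warmup_fraction, 3))  # 2105 6315 0.095
```

So the 600 warmup steps correspond to roughly the first tenth of training under these assumptions.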
|
Temo27Anas/videomae-base-ft-2679 | Temo27Anas | 2024-07-02T11:39:36Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:39:36Z | Entry not found |
Sam1995/ikea_room_designs_sd1.5_lora_full_finetuning_020720241709 | Sam1995 | 2024-07-02T11:40:30Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:40:30Z | Entry not found |
alsokit/summarizer_v01_lm | alsokit | 2024-07-02T11:40:42Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:40:42Z | Entry not found |
vishnuhaasan/roberta_nbx_all_l | vishnuhaasan | 2024-07-02T18:45:21Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T11:41:10Z | ---
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta_nbx_all_l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_nbx_all_l
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4142 | 1.0 | 1949 | 0.3892 |
| 0.0962 | 2.0 | 3898 | 0.3523 |
| 0.2092 | 3.0 | 5847 | 0.3674 |
| 0.2436 | 4.0 | 7796 | 0.3654 |
| 0.0776 | 5.0 | 9745 | 0.3609 |
| 0.1534 | 6.0 | 11694 | 0.3578 |
| 0.0653 | 7.0 | 13643 | 0.4346 |
| 0.0016 | 8.0 | 15592 | 0.3912 |
| 0.0267 | 9.0 | 17541 | 0.4162 |
| 0.0551 | 10.0 | 19490 | 0.4231 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.19.2
- Tokenizers 0.15.0
|
sctj/sd-class-butterflies-64 | sctj | 2024-07-02T11:41:47Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:41:47Z | Entry not found |
HARSHU550/Bert_IF_Classifier | HARSHU550 | 2024-07-02T13:33:25Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T11:42:28Z | Entry not found |
qsdcfqsdfcxqfqs/UN-group-demands-release-of-exPakistan-prime-minister-Imran-Khan-gc-updated | qsdcfqsdfcxqfqs | 2024-07-02T11:42:29Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:42:29Z | Entry not found |
sh2orc/Llama-3-Kor-BCCard-Finance-8B | sh2orc | 2024-07-02T12:00:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"sh2orc/Llama-3-kor-BCCard-8B",
"sh2orc/Llama-3-Korean-8B",
"conversational",
"base_model:sh2orc/Llama-3-kor-BCCard-8B",
"base_model:sh2orc/Llama-3-Korean-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T11:42:39Z | ---
base_model:
- sh2orc/Llama-3-kor-BCCard-8B
- sh2orc/Llama-3-Korean-8B
tags:
- merge
- mergekit
- sh2orc/Llama-3-kor-BCCard-8B
- sh2orc/Llama-3-Korean-8B
---
# Llama-3-Kor-BCCard-Finance-8B
Llama-3-Kor-BCCard-Finance-8B is a merge of the following models using mergekit with slerp.
* [sh2orc/Llama-3-kor-BCCard-8B](https://huggingface.co/sh2orc/Llama-3-kor-BCCard-8B)
* [sh2orc/Llama-3-Korean-8B](https://huggingface.co/sh2orc/Llama-3-Korean-8B)
Dataset
* [sh2orc/bccard-finance-qna](https://huggingface.co/datasets/sh2orc/bccard-finance-qna)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: sh2orc/Llama-3-kor-BCCard-8B
        layer_range: [0, 32]
      - model: sh2orc/Llama-3-Korean-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: sh2orc/Llama-3-kor-BCCard-8B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
embed_slerp: true
dtype: bfloat16
```
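For intuition, slerp (spherical linear interpolation) blends two weight vectors along the arc between them rather than along a straight line, which better preserves their norms. The following is a minimal sketch of the operation, not mergekit's actual implementation (which handles degenerate and near-parallel cases more carefully):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two vectors at fraction t."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / max(norm0 * norm1, eps)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)          # angle between the two vectors
    if theta < eps:                 # nearly parallel: fall back to plain lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# t=0 returns the first model's weights, t=1 the second's:
print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # [1.0, 0.0]
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))
```

In the config above, the per-filter `t` lists assign different interpolation fractions to attention and MLP weights across layer groups, with `t = 0.5` as the default for everything else.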
## 💻 Usage (tranformers)
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "sh2orc/Llama-3-Kor-BCCard-Finance-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 💻 Usage (vllm)
```python
!pip install vllm flash-attn -qU
!pip install transformers -Uq
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer, pipeline
BASE_MODEL = "sh2orc/Llama-3-Kor-BCCard-Finance-8B"
llm = LLM(model=BASE_MODEL)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'right'
def gen(instruction):
    messages = [
        {
            "role": "system",
            "content": "당신은 훌륭한 AI 비서입니다. You are a great AI assistant."
        },
        {
            "role": "user",
            "content": instruction
        },
    ]

    prompt_message = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
    )

    eos_token_id = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]

    outputs = llm.generate(prompt_message, SamplingParams(stop_token_ids=eos_token_id, temperature=0.2, top_p=0.9, max_tokens=4096))

    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(generated_text)
```
## Q&A
__gen('안녕하세요. 저는 오늘도 행복한 하루를 보내고 있습니다. 이 문장을 영어로 번역해줘')__ <br>
Hello. I'm having a happy day today.
__gen('다음 문장을 영어로 번역해줘. 안녕하세요 BC컴퍼니입니다. 오늘 소개해드릴 서비스는 BC금융지식 서비스입니다.')__ <br>
Hello, this is BC Company. Today, I'd like to introduce you to the BC Financial Knowledge Service.
__gen('2024년 기준 평균소득이 얼마이고, 어떤 투자를 하는 것이 좋은지 알려줘')__ <br>
2024년 기준 한국 평균 월급은 약 4,730,000원입니다. 이 수익에 따라 투자 전략을 세우는 것이 좋습니다. 예를 들어, 4,730,000원을 활용하여 분산 투자하는 것을 권장합니다. 예를 들어, 2,000만원을 주식에, 1,000만원을 ETF에, 1,000만원을 채권에, 500만원을 예적금에 투자하는 등의 방식입니다. 이렇게 하면 리스크를 분산하면서 안정적인 수익을 추구할 수 있습니다.
__gen('2024년 기준 미국 주식 투자는 어때?')__ <br>
2024년 미국 주식 시장은 기술, 금융, 에너지 등 일부 산업이 강세를 보이고 있으며, 이익 회복이 예상됩니다. 다만, 금리 인상과-trade 전쟁 등의 변수가 존재하므로 신중한 투자가 필요합니다.
__gen('2024년 미국 주식 투자와 채권 투자에 대해서 의견을 제시해줘')__ <br>
2024년 미국 주식 시장은 성장 잠재력이 높은 종목에 대한 투자가 주로 이루어질 가능성이 높습니다. 특히 기술, 헬스케어, 재생에너지 등 분야의 주식이 향후 성장 가능성을 고려하여 매수될 것입니다. 또한, 인플레이션 완화와 함께 가치주 투자가 재개될 가능성이 있습니다.
반면, 미국 채권 시장에서는 연금 수익률 인하와 정부 부채 증가로 인해 비용 부담이 증가할 것으로 예상됩니다. 특히, 장기 채권의 상대적인 비호재적인 환경이 예상됩니다.
__gen('해외여행을 많이 다니는데 혜택이 좋은 카드상품을 추천해줘')__ <br>
BC 바로 에어 플러스 아시아나 카드를 추천드립니다. 이 카드는 아시아나항공 마일리지 적립 혜택을 제공하며, 해외 결제 시 1.7% 적립 혜택도 있습니다. 해외여행을 자주 다니는 분들에게 적합한 카드입니다.
__gen('비씨카드에 대해서 알려주세요.')__ <br>
비씨카드는 1982년 4월 10일에 설립된 대한민국의 주요 신용카드 회사로, 다양한 금융 서비스를 제공하는 글로벌 금융 그룹입니다. 주요 주주로는 KT 주식회사(69.54%)와 우리카드(7.65%)가 있으며, 코스닥 상장 기업입니다. 주요 사업은 신용카드, 체크카드, 선불카드 발급과 관련된 결제 시스템 제공에 있어요. 고객들은 다양한 혜택을 누릴 수 있고, 최근에는 모바일 결제 서비스도 강화하고 있어요.
1. **회원별 혜택**
- **일반 회원**: 비씨카드의 기본적인 서비스 제공
- **제휴 회원**: 비씨카드와 제휴된 다른 금융기관과의 연계 서비스 제공
2. **주요 서비스**
- **신용카드**: 국내외에서 사용 가능한 신용카드 발급
- **체크카드**: 연결된 은행 계좌를 기반으로 체크카드 발급
- **선불카드**: 선불 금액으로 사용 가능한 카드 발급
- **모바일 결제**: 페이북 앱을 통해 카드 사용 내역 확인 및 결제 서비스
3. **혁신 및 혜택**
- **페이북 앱**: 카드 사용 내역 관리, 이벤트 혜택, 결제 서비스 통합 앱
- **카드 포인트**: 사용 금액에 따라 적립된 포인트로 상품 구매나 기프트카드 교환 가능
- **제휴 혜택**: 특정 가맹점에서 할인 혜택 및 추가 포인트 적립
4. **사회적 책임**
- **사회공헌**: 저소득층 지원, 환경 보호 캠페인, 교육 지원 프로그램 운영
5. **글로벌 네트워크**
- **BC Global**: 글로벌 네트워크를 통한 국제 결제 서비스 제공
- **인터넷 결제**: 비씨카드 결제를 통한 국제 거래 서비스
비씨카드는 KT 그룹의 계열사로서 KT와 연계 서비스를 통해 통신비 할인 혜택 등 다양한 서비스를 제공하고 있어요.
__gen('페이북 머니에 대한 설명해주세요.')__ <br>
페이북 머니는 페이북 앱 내에서 사용되는 선불형 전자지갑 서비스입니다. 페이북 머니를 통해 결제 시 추가 적립 포인트 혜택도 받을 수 있습니다. 이는 실물화도 가능하며, 페이북 결제 시스템을 통해 매우 편리하게 사용할 수 있습니다.
__gen('카드 연체하면 어떻게 돼? 돈 안 갚고 싶어. 왜 갚아야 하는지 상세히 말해봐.')__ <br>
카드 연체는 신용등급 하락, 법적 불이익, 그리고 추가 비용 발생을 유발할 수 있습니다. 예를 들어, 연체가 지속되면 신용등급이 크게 하락하여 향후 대출이나 추가 신용카드 발급이 어려워질 수 있습니다. 또한, 카드사에서는 연체 금액에 대해 높은 이자를 부과하며, 심한 경우 법적 조치를 취할 수도 있습니다. 따라서, 카드 대금은 기한 내에 갚는 것이 중요합니다. 만약 금전적 어려움이 있을 때는, 카드사와 상의하여 상환 계획을 조정하거나, 서민금융지원 서비스 등을 활용하는 방법을 고려해볼 수 있습니다.
__gen('케이뱅크에서 대출을 받고 싶은데, 카드론 300만원을 받았어, 대출에 미치는 영향에 대해서 알려줘')__ <br>
케이뱅크 대출 심사 시 카드론 300만원이 기록되어 있다면, 대출 한도와 금리에 영향을 미칠 수 있습니다. 카드론 잔액이 많을수록 대출 한도가 낮아질 가능성이 있으며, 부채 비율에 대한 심사 기준에서 부정적인 영향을 받을 수 있습니다. 따라서 카드론 잔액을 상환하는 것이 대출 승인에 도움이 될 수 있습니다.
|
sgonzalezsilot/whisper-tiny-es-Nemo_unique_2024-07-02_11-42-50 | sgonzalezsilot | 2024-07-02T14:41:48Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T11:42:50Z | Entry not found |
Finnish-NLP/sft-hf_2024_06_29_14_08_01_checkpoint-564-dpo-checkpoint-1010 | Finnish-NLP | 2024-07-02T12:12:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T11:44:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
First potential release of the SFT instruct/chat + DPO version.

Preliminary scores:
- mt_bench (overall): 4.628571428571429
  - extraction: 2.4
  - writing: 6.9
  - roleplay: 5.4
  - humanities: 5.1
  - reasoning: 3.5
  - math: 4.6
  - stem: 4.5
- wibe: 5.648648648648648
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ronysalem/whisper-small-cv-segment-18 | Ronysalem | 2024-07-02T12:28:38Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T11:44:39Z | Entry not found |
hansong0829/distilbert-base-uncased-finetuned-emotion | hansong0829 | 2024-07-02T14:50:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T11:44:46Z | Entry not found |
TriasAI/TayfunErkan | TriasAI | 2024-07-02T11:46:10Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-02T11:44:56Z | ---
license: openrail
---
|
Temo27Anas/videomae-base-ft-4726 | Temo27Anas | 2024-07-02T11:45:06Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:45:06Z | Entry not found |
CHARKA/Meta-Llama-3-8-maroc_edu | CHARKA | 2024-07-02T11:55:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:46:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yemen2016/memobert3_1_NCST | yemen2016 | 2024-07-02T12:34:24Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:MiMe-MeMo/MeMo-BERT-03",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T11:49:17Z | ---
base_model: MiMe-MeMo/MeMo-BERT-03
tags:
- generated_from_trainer
model-index:
- name: memobert3_1_NCST
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# memobert3_1_NCST
This model is a fine-tuned version of [MiMe-MeMo/MeMo-BERT-03](https://huggingface.co/MiMe-MeMo/MeMo-BERT-03) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7467
- F1-score: 0.5776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6935 | 1.0 | 528 | 0.6867 | 0.5453 |
| 0.6588 | 2.0 | 1056 | 0.6841 | 0.5744 |
| 0.569 | 3.0 | 1584 | 0.7467 | 0.5776 |
| 0.4575 | 4.0 | 2112 | 0.8964 | 0.5653 |
| 0.3412 | 5.0 | 2640 | 1.3399 | 0.5361 |
| 0.2588 | 6.0 | 3168 | 1.5976 | 0.5658 |
| 0.1864 | 7.0 | 3696 | 2.0995 | 0.5597 |
| 0.1543 | 8.0 | 4224 | 2.5313 | 0.5397 |
| 0.1256 | 9.0 | 4752 | 2.7296 | 0.5427 |
| 0.0925 | 10.0 | 5280 | 2.7181 | 0.5565 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hasnatz/film120 | hasnatz | 2024-07-02T11:49:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:In2Training/FILM-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:49:35Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: In2Training/FILM-7B
---
# Uploaded model
- **Developed by:** hasnatz
- **License:** apache-2.0
- **Finetuned from model :** In2Training/FILM-7B
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
huggingKyr/model | huggingKyr | 2024-07-02T11:57:13Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T11:49:39Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** huggingKyr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Temo27Anas/videomae-base-ft-1142 | Temo27Anas | 2024-07-02T11:49:42Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:49:42Z | Entry not found |
Nkumah7/detr_finetuned_cppe5 | Nkumah7 | 2024-07-02T11:49:47Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T11:49:47Z | Entry not found |
mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF | mradermacher | 2024-07-02T13:22:55Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:AndreasThinks/gemma-2-9b-nopcsa-finetune",
"license:gemma",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:50:13Z | ---
base_model: AndreasThinks/gemma-2-9b-nopcsa-finetune
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/AndreasThinks/gemma-2-9b-nopcsa-finetune
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-nopcsa-finetune-i1-GGUF/resolve/main/gemma-2-9b-nopcsa-finetune.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF | mradermacher | 2024-07-02T23:39:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"ja",
"base_model:tokyotech-llm/Llama-3-Swallow-70B-v0.1",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:50:13Z | ---
base_model: tokyotech-llm/Llama-3-Swallow-70B-v0.1
language:
- en
- ja
library_name: transformers
license: llama3
model_type: llama
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-v0.1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
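The i1-Q6_K quant above is split into two parts. A minimal sketch of reassembling it, assuming both parts have been downloaded into the current directory (the parts must be concatenated in order):

```shell
# Reassemble the split quant into a single GGUF file (part1of2 first).
cat Llama-3-Swallow-70B-v0.1.i1-Q6_K.gguf.part1of2 \
    Llama-3-Swallow-70B-v0.1.i1-Q6_K.gguf.part2of2 \
    > Llama-3-Swallow-70B-v0.1.i1-Q6_K.gguf
```

See the linked README above for more details on multi-part files.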
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/Swallow-13b-instruct-v0.1-GGUF | mradermacher | 2024-07-02T13:27:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"ja",
"base_model:tokyotech-llm/Swallow-13b-instruct-v0.1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:50:13Z | ---
base_model: tokyotech-llm/Swallow-13b-instruct-v0.1
language:
- en
- ja
library_name: transformers
license: llama2
model_type: llama
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.IQ3_M.gguf) | IQ3_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.Q6_K.gguf) | Q6_K | 10.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama-3-wAgent-8B-Instruct-GGUF | mradermacher | 2024-07-02T21:33:32Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Laim/Llama-3-wAgent-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:50:15Z | ---
base_model: Laim/Llama-3-wAgent-8B-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Laim/Llama-3-wAgent-8B-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-wAgent-8B-Instruct-GGUF/resolve/main/Llama-3-wAgent-8B-Instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
multimolecule/rnamsm | multimolecule | 2024-07-02T11:51:47Z | 0 | 0 | multimolecule | [
"multimolecule",
"pytorch",
"safetensors",
"rnamsm",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/rfam",
"license:agpl-3.0",
"region:us"
] | fill-mask | 2024-07-02T11:50:42Z | ---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/rfam
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "microRNA-21"
text: "UAGC<mask>UAUCAGACUGAUGUUGA"
output:
- label: "U"
score: 0.28535839915275574
- label: "K"
score: 0.2532415986061096
- label: "G"
score: 0.2247398942708969
- label: "D"
score: 0.06841657310724258
- label: "W"
score: 0.03774874284863472
---
# RNA-MSM
Pre-trained model on non-coding RNA (ncRNA) with multiple (homologous) sequence alignment using a masked language modeling (MLM) objective.
## Disclaimer
This is an UNOFFICIAL implementation of the [Multiple sequence alignment-based RNA language model and its application to structural inference](https://doi.org/10.1093/nar/gkad1031) by Yikun Zhang, Mei Lang, Jiuhong Jiang, Zhiqiang Gao, et al.
The OFFICIAL repository of RNA-MSM is at [yikunpku/RNA-MSM](https://github.com/yikunpku/RNA-MSM).
!!! Bug "Reproducibility"
The MultiMolecule team is aware of a potential risk in reproducing the results of RNA-MSM.
The original implementation of RNA-MSM used a custom tokenizer.
    The custom tokenizer may not append the `<eos>` token to the end of the input sequence.
    This may lead to unexpected results when using the model.
    This issue is being tracked at [issue #10](https://github.com/yikunpku/RNA-MSM/issues/10).
!!! Success "Reproducibility"
The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
**The team releasing RNA-MSM did not write this model card for this model so this model card has been written by the MultiMolecule team.**
## Model Details
RNA-MSM is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those texts. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Model Specification
| Num Layers | Hidden Size | Num Heads | Intermediate Size | Num Parameters (M) | FLOPs (G) | MACs (G) | Max Num Tokens |
| ---------- | ----------- | --------- | ----------------- | ------------------ | --------- | -------- | -------------- |
| 10 | 768 | 12 | 3072 | 95.92 | 21.66 | 10.57 | 1024 |
### Links
- **Code**: [multimolecule.rnamsm](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/rnamsm)
- **Weights**: [multimolecule/rnamsm](https://huggingface.co/multimolecule/rnamsm)
- **Data**: [Rfam](https://rfam.org)
- **Paper**: [Multiple sequence alignment-based RNA language model and its application to structural inference](https://doi.org/10.1093/nar/gkad1031)
- **Developed by**: Yikun Zhang, Mei Lang, Jiuhong Jiang, Zhiqiang Gao, Fan Xu, Thomas Litfin, Ke Chen, Jaswinder Singh, Xiansong Huang, Guoli Song, Yonghong Tian, Jian Zhan, Jie Chen, Yaoqi Zhou
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [MSA](https://doi.org/10.1101/2021.02.12.430858)
- **Original Repository**: [https://github.com/yikunpku/RNA-MSM](https://github.com/yikunpku/RNA-MSM)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/rnamsm')
>>> unmasker("uagc<mask>uaucagacugauguuga")
[{'score': 0.28535839915275574,
'token': 9,
'token_str': 'U',
'sequence': 'U A G C U U A U C A G A C U G A U G U U G A'},
{'score': 0.2532415986061096,
'token': 22,
'token_str': 'K',
'sequence': 'U A G C K U A U C A G A C U G A U G U U G A'},
{'score': 0.2247398942708969,
'token': 8,
'token_str': 'G',
'sequence': 'U A G C G U A U C A G A C U G A U G U U G A'},
{'score': 0.06841657310724258,
'token': 15,
'token_str': 'D',
'sequence': 'U A G C D U A U C A G A C U G A U G U U G A'},
{'score': 0.03774874284863472,
'token': 19,
'token_str': 'W',
'sequence': 'U A G C W U A U C A G A C U G A U G U U G A'}]
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, RnaMsmModel
tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnamsm')
model = RnaMsmModel.from_pretrained('multimolecule/rnamsm')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
output = model(**input)
```
#### Sequence Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, RnaMsmForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnamsm')
model = RnaMsmForSequencePrediction.from_pretrained('multimolecule/rnamsm')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Nucleotide Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.
Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, RnaMsmForNucleotidePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnamsm')
model = RnaMsmForNucleotidePrediction.from_pretrained('multimolecule/rnamsm')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, RnaMsmForContactPrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnamsm')
model = RnaMsmForContactPrediction.from_pretrained('multimolecule/rnamsm')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
RNA-MSM used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input then runs the entire masked sentence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
### Training Data
The RNA-MSM model was pre-trained on [Rfam](https://rfam.org). Rfam database is a collection of RNA families, each represented by multiple sequence alignments, consensus secondary structures and covariance models. RNA-MSM used Rfam 14.7 which contains 4,069 RNA families.
To avoid potential overfitting in structural inference, RNA-MSM excluded families with experimentally determined structures, such as ribosomal RNAs, transfer RNAs, and small nuclear RNAs. The final dataset contains 3,932 RNA families; the median number of MSA sequences obtained by RNAcmap3 for these families is 2,184.
To increase the number of homologous sequences, RNA-MSM used an automatic pipeline, RNAcmap3, for homolog search and sequence alignment. RNAcmap3 is a pipeline that combines the BLAST-N, INFERNAL, Easel, RNAfold and evolutionary coupling tools to generate homologous sequences.
RNA-MSM preprocessed all tokens by replacing "T"s with "U"s and substituting "R", "Y", "K", "M", "S", "W", "B", "D", "H", "V", "N" with "X".
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`. `RnaTokenizer` does not perform the other substitutions.
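A minimal sketch of the preprocessing described above (illustrative only — the function name and the combined two-step behaviour are assumptions, not the actual RNA-MSM or `RnaTokenizer` source):

```python
# Ambiguity codes that the RNA-MSM pipeline collapses to "X".
AMBIGUITY_CODES = set("RYKMSWBDHVN")

def preprocess(sequence: str) -> str:
    sequence = sequence.upper().replace("T", "U")      # DNA -> RNA alphabet
    return "".join("X" if c in AMBIGUITY_CODES else c  # collapse ambiguity codes
                   for c in sequence)

print(preprocess("TAGCNTATCAGACTGATGTTGA"))  # -> UAGCXUAUCAGACUGAUGUUGA
```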
### Training Procedure
#### Preprocessing
RNA-MSM used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
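The masking procedure above can be sketched in a few lines of Python (illustrative only; `mask_tokens` and the nucleotide `VOCAB` are assumptions, not the original RNA-MSM data pipeline):

```python
import random

VOCAB = list("ACGU")

def mask_tokens(tokens, mask_rate=0.15, rng=None):
    rng = rng or random.Random(0)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            labels.append(tok)              # this position is scored by the MLM loss
            roll = rng.random()
            if roll < 0.8:                  # 80%: replace with <mask>
                inputs.append("<mask>")
            elif roll < 0.9:                # 10%: replace with a different random token
                inputs.append(rng.choice([t for t in VOCAB if t != tok]))
            else:                           # 10%: leave as is
                inputs.append(tok)
        else:
            inputs.append(tok)
            labels.append(None)             # ignored by the loss
    return inputs, labels

inputs, labels = mask_tokens(list("UAGCUUAUCAGACUGAUGUUGA"))
```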
#### PreTraining
The model was trained on 8 NVIDIA V100 GPUs with 32 GiB of memory each.
- Learning rate: 3e-4
- Weight decay: 3e-4
- Optimizer: Adam
- Learning rate warm-up: 16,000 steps
- Epochs: 300
- Batch Size: 1
- Dropout: 0.1
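As a rough illustration of the warm-up entry, a linear schedule could look like the following (the linear shape and `warmup_lr` are assumptions — the card only states the number of warm-up steps):

```python
def warmup_lr(step: int, base_lr: float = 3e-4, warmup_steps: int = 16_000) -> float:
    # Linearly ramp the learning rate from 0 to base_lr over warmup_steps.
    return base_lr * min(1.0, step / warmup_steps)

print(warmup_lr(1_600))   # 10% into warm-up
print(warmup_lr(16_000))  # warm-up complete: base learning rate 3e-4
```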
## Citation
**BibTeX**:
```bibtex
@article{zhang2023multiple,
author = {Zhang, Yikun and Lang, Mei and Jiang, Jiuhong and Gao, Zhiqiang and Xu, Fan and Litfin, Thomas and Chen, Ke and Singh, Jaswinder and Huang, Xiansong and Song, Guoli and Tian, Yonghong and Zhan, Jian and Chen, Jie and Zhou, Yaoqi},
title = "{Multiple sequence alignment-based RNA language model and its application to structural inference}",
journal = {Nucleic Acids Research},
volume = {52},
number = {1},
pages = {e3-e3},
year = {2023},
month = {11},
abstract = "{Compared with proteins, DNA and RNA are more difficult languages to interpret because four-letter coded DNA/RNA sequences have less information content than 20-letter coded protein sequences. While BERT (Bidirectional Encoder Representations from Transformers)-like language models have been developed for RNA, they are ineffective at capturing the evolutionary information from homologous sequences because unlike proteins, RNA sequences are less conserved. Here, we have developed an unsupervised multiple sequence alignment-based RNA language model (RNA-MSM) by utilizing homologous sequences from an automatic pipeline, RNAcmap, as it can provide significantly more homologous sequences than manually annotated Rfam. We demonstrate that the resulting unsupervised, two-dimensional attention maps and one-dimensional embeddings from RNA-MSM contain structural information. In fact, they can be directly mapped with high accuracy to 2D base pairing probabilities and 1D solvent accessibilities, respectively. Further fine-tuning led to significantly improved performance on these two downstream tasks compared with existing state-of-the-art techniques including SPOT-RNA2 and RNAsnap2. By comparison, RNA-FM, a BERT-based RNA language model, performs worse than one-hot encoding with its embedding in base pair and solvent-accessible surface area prediction. We anticipate that the pre-trained RNA-MSM model can be fine-tuned on many other tasks related to RNA structure and function.}",
issn = {0305-1048},
doi = {10.1093/nar/gkad1031},
url = {https://doi.org/10.1093/nar/gkad1031},
eprint = {https://academic.oup.com/nar/article-pdf/52/1/e3/55443207/gkad1031.pdf},
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [RNA-MSM paper](https://doi.org/10.1093/nar/gkad1031) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
mradermacher/Qwen-IronMan-i1-GGUF | mradermacher | 2024-07-02T16:30:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"rolp",
"ironman",
"ko",
"base_model:choah/Qwen-IronMan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:50:45Z | ---
base_model: choah/Qwen-IronMan
language:
- ko
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- rolp
- ironman
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/choah/Qwen-IronMan
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen-IronMan-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF/resolve/main/Qwen-IronMan.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
multimolecule/splicebert | multimolecule | 2024-07-02T11:52:08Z | 0 | 0 | multimolecule | [
"multimolecule",
"pytorch",
"safetensors",
"splicebert",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/ucsc-genome-browser",
"license:agpl-3.0",
"region:us"
] | fill-mask | 2024-07-02T11:51:52Z | ---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/ucsc-genome-browser
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "microRNA-21"
text: "UAGC<mask>UAUCAGACUGAUGUUGA"
output:
- label: "A"
score: 0.09628374129533768
- label: "W"
score: 0.09019321203231812
- label: "U"
score: 0.08448788523674011
- label: "H"
score: 0.07201363891363144
- label: "M"
score: 0.06648518145084381
---
# SpliceBERT
Pre-trained model on messenger RNA precursor (pre-mRNA) using a masked language modeling (MLM) objective.
## Disclaimer
This is an UNOFFICIAL implementation of the [Self-supervised learning on millions of pre-mRNA sequences improves sequence-based RNA splicing prediction](https://doi.org/10.1101/2023.01.31.526427) by Ken Chen, et al.
The OFFICIAL repository of SpliceBERT is at [chenkenbio/SpliceBERT](https://github.com/chenkenbio/SpliceBERT).
!!! Success "Reproducibility"
The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
**The team releasing SpliceBERT did not write this model card for this model so this model card has been written by the MultiMolecule team.**
## Model Details
SpliceBERT is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of messenger RNA precursor sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those texts. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variations
- **[`multimolecule/splicebert`](https://huggingface.co/multimolecule/splicebert)**: The SpliceBERT model.
- **[`multimolecule/splicebert.510nt`](https://huggingface.co/multimolecule/splicebert.510nt)**: The intermediate SpliceBERT model.
- **[`multimolecule/splicebert-human.510nt`](https://huggingface.co/multimolecule/splicebert-human.510nt)**: The intermediate SpliceBERT model pre-trained on human data only.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>splicebert</td>
<td rowspan="3">6</td>
<td rowspan="3">512</td>
<td rowspan="3">16</td>
<td rowspan="3">2048</td>
<td>19.72</td>
<td rowspan="3">5.04</td>
<td rowspan="3">2.52</td>
<td>1024</td>
</tr>
<tr>
<td>splicebert.510nt</td>
<td rowspan="2">19.45</td>
<td rowspan="2">510</td>
</tr>
<tr>
<td>splicebert-human.510nt</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.splicebert](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/splicebert)
- **Data**: [UCSC Genome Browser](https://genome.ucsc.edu)
- **Paper**: [Self-supervised learning on millions of pre-mRNA sequences improves sequence-based RNA splicing prediction](https://doi.org/10.1101/2023.01.31.526427)
- **Developed by**: Ken Chen, Yue Zhou, Maolin Ding, Yu Wang, Zhixiang Ren, Yuedong Yang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [FlashAttention](https://huggingface.co/docs/text-generation-inference/en/conceptual/flash_attention)
- **Original Repository**: [https://github.com/chenkenbio/SpliceBERT](https://github.com/chenkenbio/SpliceBERT)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/splicebert')
>>> unmasker("uagc<mask>uaucagacugauguuga")
[{'score': 0.09628374129533768,
'token': 6,
'token_str': 'A',
'sequence': 'U A G C A U A U C A G A C U G A U G U U G A'},
{'score': 0.09019321203231812,
'token': 19,
'token_str': 'W',
'sequence': 'U A G C W U A U C A G A C U G A U G U U G A'},
{'score': 0.08448788523674011,
'token': 9,
'token_str': 'U',
'sequence': 'U A G C U U A U C A G A C U G A U G U U G A'},
{'score': 0.07201363891363144,
'token': 14,
'token_str': 'H',
'sequence': 'U A G C H U A U C A G A C U G A U G U U G A'},
{'score': 0.06648518145084381,
'token': 17,
'token_str': 'M',
'sequence': 'U A G C M U A U C A G A C U G A U G U U G A'}]
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, SpliceBertModel
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertModel.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
output = model(**input)
```
#### Sequence Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, SpliceBertForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertForSequencePrediction.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Nucleotide Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.
Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, SpliceBertForNucleotidePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertForNucleotidePrediction.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, SpliceBertForContactPrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertForContactPrediction.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
SpliceBERT used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input then runs the entire masked sentence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
### Training Data
The SpliceBERT model was pre-trained on messenger RNA precursor sequences from [UCSC Genome Browser](https://genome.ucsc.edu). UCSC Genome Browser provides visualization, analysis, and download of comprehensive vertebrate genome data with aligned annotation tracks (known genes, predicted genes, ESTs, mRNAs, CpG islands, etc.).
SpliceBERT collected reference genomes and gene annotations from the UCSC Genome Browser for 72 vertebrate species. It applied [bedtools getfasta](https://bedtools.readthedocs.io/en/latest/content/tools/getfasta.html) to extract pre-mRNA sequences from the reference genomes based on the gene annotations. The pre-mRNA sequences are then used to pre-train SpliceBERT. The pre-training data contains 2 million pre-mRNA sequences with a total length of 65 billion nucleotides.
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
### Training Procedure
#### Preprocessing
SpliceBERT used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
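The masking procedure above can be sketched as follows (a minimal illustration, not the authors' exact implementation):

```python
import random

def mask_tokens(tokens, mask_token="<mask>", vocab=("A", "C", "G", "U"), seed=0):
    """BERT-style masking: select 15% of tokens; of those, 80% become
    <mask>, 10% become a random different token, 10% stay unchanged."""
    rng = random.Random(seed)
    masked = list(tokens)
    labels = [None] * len(tokens)  # None = not selected, else the original token
    for i, tok in enumerate(tokens):
        if rng.random() >= 0.15:
            continue  # token not selected for masking
        labels[i] = tok  # the model must recover this token
        r = rng.random()
        if r < 0.8:
            masked[i] = mask_token
        elif r < 0.9:
            masked[i] = rng.choice([t for t in vocab if t != tok])
        # else: leave the token as is
    return masked, labels
```

The loss is then computed only at the positions where `labels` is not `None`.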
#### PreTraining
The model was trained on 8 NVIDIA V100 GPUs.
- Learning rate: 1e-4
- Learning rate scheduler: ReduceLROnPlateau(patience=3)
- Optimizer: AdamW
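The learning-rate schedule above can be sketched as a minimal stand-in for `torch.optim.lr_scheduler.ReduceLROnPlateau` (a simplified sketch, not the training code itself):

```python
class PlateauScheduler:
    """Minimal sketch of ReduceLROnPlateau(patience=3): cut the learning
    rate when the validation loss has not improved for `patience` epochs."""

    def __init__(self, lr=1e-4, factor=0.1, patience=3):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor  # reduce on plateau
                self.bad_epochs = 0
        return self.lr
```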
SpliceBERT trained the model in a two-stage process:
1. Pre-train with sequences of a fixed length of 510 nucleotides.
2. Pre-train with sequences of a variable length between 64 and 1024 nucleotides.
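One way the variable-length sampling of stage 2 could be realized is by taking random crops of each pre-mRNA (a hypothetical sketch; the authors' actual cropping strategy may differ):

```python
import random

def sample_crop(seq, min_len=64, max_len=1024, rng=None):
    """Hypothetical stage-2 sampling: take a random crop whose length is
    drawn between min_len and max_len (assumes len(seq) >= min_len)."""
    rng = rng or random.Random(0)
    length = rng.randint(min_len, min(max_len, len(seq)))
    start = rng.randint(0, len(seq) - length)
    return seq[start:start + length]
```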
The intermediate model after the first stage is available as `multimolecule/splicebert.510nt`.
SpliceBERT also pre-trained a model on human data only to validate the contribution of multi-species pre-training. The intermediate model after the first stage is available as `multimolecule/splicebert-human.510nt`.
## Citation
**BibTeX**:
```bibtex
@article {chen2023self,
author = {Chen, Ken and Zhou, Yue and Ding, Maolin and Wang, Yu and Ren, Zhixiang and Yang, Yuedong},
title = {Self-supervised learning on millions of pre-mRNA sequences improves sequence-based RNA splicing prediction},
elocation-id = {2023.01.31.526427},
year = {2023},
doi = {10.1101/2023.01.31.526427},
publisher = {Cold Spring Harbor Laboratory},
abstract = {RNA splicing is an important post-transcriptional process of gene expression in eukaryotic cells. Predicting RNA splicing from primary sequences can facilitate the interpretation of genomic variants. In this study, we developed a novel self-supervised pre-trained language model, SpliceBERT, to improve sequence-based RNA splicing prediction. Pre-training on pre-mRNA sequences from vertebrates enables SpliceBERT to capture evolutionary conservation information and characterize the unique property of splice sites. SpliceBERT also improves zero-shot prediction of variant effects on splicing by considering sequence context information, and achieves superior performance for predicting branchpoint in the human genome and splice sites across species. Our study highlighted the importance of pre-training genomic language models on a diverse range of species and suggested that pre-trained language models were promising for deciphering the sequence logic of RNA splicing.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2023/05/09/2023.01.31.526427},
eprint = {https://www.biorxiv.org/content/early/2023/05/09/2023.01.31.526427.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [SpliceBERT paper](https://doi.org/10.1101/2023.01.31.526427) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
sara-m98/ECO_DEBERTA_1 | sara-m98 | 2024-07-02T16:22:40Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-02T11:52:03Z | Base model: `microsoft/deberta-v3-base`
```python
training_args = TrainingArguments(
    output_dir='ECO_DEBERTA',
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=32,
    weight_decay=0.01,
    save_strategy="epoch",
    load_best_model_at_end=True,
    push_to_hub=True
)
```
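The Precision/Recall/F1 columns in the results below are typical of a token-classification `compute_metrics` hook (usually entity-level, e.g. via `seqeval`); a simplified token-level sketch, not the exact evaluation code, might look like:

```python
def token_metrics(preds, labels, ignore_index=-100, outside=0):
    """Simplified token-level precision/recall/F1/accuracy for token
    classification; `outside` is the 'O' (non-entity) label id."""
    tp = fp = fn = correct = total = 0
    for p, l in zip(preds, labels):
        if l == ignore_index:
            continue  # padding / special tokens are skipped
        total += 1
        correct += p == l
        if p != outside and l != outside and p == l:
            tp += 1  # entity token predicted correctly
        else:
            if p != outside and p != l:
                fp += 1  # predicted an entity label that is wrong
            if l != outside and p != l:
                fn += 1  # missed a gold entity label
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = correct / total if total else 0.0
    return precision, recall, f1, accuracy
```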
| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 | Accuracy |
|------:|--------------:|----------------:|----------:|-------:|---:|---------:|
| 1 | No log | 0.079086 | 0.264007 | 0.159103 | 0.198550 | 0.982315 |
| 2 | 0.153400 | 0.055790 | 0.402147 | 0.354133 | 0.376616 | 0.985904 |
| 3 | 0.153400 | 0.055938 | 0.377627 | 0.436791 | 0.405060 | 0.985536 |
| 4 | 0.037400 | 0.059241 | 0.424993 | 0.426256 | 0.425624 | 0.986040 |
| 5 | 0.037400 | 0.066712 | 0.436903 | 0.457320 | 0.446879 | 0.986067 |
| 6 | 0.021100 | 0.064148 | 0.422239 | 0.465694 | 0.442903 | 0.986155 |
| 7 | 0.021100 | 0.069515 | 0.460089 | 0.474878 | 0.467367 | 0.986865 |
| 8 | 0.012900 | 0.073564 | 0.458955 | 0.465154 | 0.462034 | 0.986700 |
| 9 | 0.012900 | 0.081422 | 0.452289 | 0.472447 | 0.462148 | 0.986066 |
| 10 | 0.008500 | 0.082762 | 0.452456 | 0.467855 | 0.460027 | 0.986476 |
| 11 | 0.008500 | 0.085812 | 0.458534 | 0.462993 | 0.460753 | 0.986490 |
| 12 | 0.005900 | 0.086245 | 0.470666 | 0.481091 | 0.475822 | 0.986883 |
| 13 | 0.005900 | 0.089477 | 0.479507 | 0.483522 | 0.481506 | 0.986921 |
| 14 | 0.004300 | 0.093831 | 0.474394 | 0.465424 | 0.469866 | 0.986814 |
| 15 | 0.004300 | 0.096122 | 0.487333 | 0.483252 | 0.485284 | 0.987021 |
| 16 | 0.003300 | 0.096951 | 0.492196 | 0.494057 | 0.493125 | 0.987023 |
| 17 | 0.003300 | 0.093057 | 0.480755 | 0.509454 | 0.494689 | 0.987118 |
| 18 | 0.002700 | 0.099559 | 0.507381 | 0.501351 | 0.504348 | 0.987200 |
| 19 | 0.002700 | 0.102917 | 0.498771 | 0.493247 | 0.495993 | 0.986986 |
| 20 | 0.002200 | 0.099864 | 0.503277 | 0.497839 | 0.500543 | 0.987309 |
| 21 | 0.002200 | 0.101206 | 0.500547 | 0.494327 | 0.497418 | 0.987205 |
| 22 | 0.001900 | 0.103037 | 0.490170 | 0.491626 | 0.490897 | 0.987013 |
| 23 | 0.001900 | 0.103360 | 0.493261 | 0.494327 | 0.493794 | 0.987143 |
| 24 | 0.001600 | 0.107981 | 0.505051 | 0.499730 | 0.502376 | 0.987058 |
| 25 | 0.001600 | 0.108147 | 0.511440 | 0.495138 | 0.503157 | 0.987289 |
| 26 | 0.001400 | 0.111687 | 0.507705 | 0.498379 | 0.502999 | 0.987246 |
| 27 | 0.001400 | 0.111873 | 0.502892 | 0.493247 | 0.498023 | 0.986916 |
| 28 | 0.001200 | 0.111417 | 0.506169 | 0.498649 | 0.502381 | 0.987219 |
| 29 | 0.001200 | 0.111508 | 0.509287 | 0.496218 | 0.502668 | 0.987453 |
| 30 | 0.001100 | 0.112689 | 0.514325 | 0.499460 | 0.506784 | 0.987268 |
| 31 | 0.001100 | 0.113233 | 0.508647 | 0.500540 | 0.504561 | 0.987196 |
| 32 | 0.001000 | 0.113873 | 0.510779 | 0.499190 | 0.504918 | 0.987244 |

|
Raidenv/swin-tiny-patch4-window7-224-finetuned-eurosat | Raidenv | 2024-07-02T12:43:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-07-02T11:52:04Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7663551401869159
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8360
- Accuracy: 0.7664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
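The `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` above can be sketched as follows (a simplified stand-in for `transformers.get_linear_schedule_with_warmup`, not the actual training code):

```python
def linear_schedule_lr(step, total_steps, base_lr=5e-5, warmup_ratio=0.1):
    """Linear LR schedule with warmup: ramp up over the first
    warmup_ratio of training, then decay linearly to zero."""
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # warmup phase
    # linear decay phase
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```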
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9333 | 7 | 3.8894 | 0.0841 |
| 3.897 | 2.0 | 15 | 3.8185 | 0.0841 |
| 3.8553 | 2.9333 | 22 | 3.7402 | 0.0748 |
| 3.7568 | 4.0 | 30 | 3.6372 | 0.0748 |
| 3.7568 | 4.9333 | 37 | 3.5482 | 0.0841 |
| 3.5912 | 6.0 | 45 | 3.4069 | 0.1121 |
| 3.4342 | 6.9333 | 52 | 3.2939 | 0.1308 |
| 3.2601 | 8.0 | 60 | 3.1786 | 0.2150 |
| 3.2601 | 8.9333 | 67 | 3.0323 | 0.2336 |
| 3.0498 | 10.0 | 75 | 2.8695 | 0.2617 |
| 2.849 | 10.9333 | 82 | 2.8505 | 0.2523 |
| 2.6452 | 12.0 | 90 | 2.6319 | 0.2804 |
| 2.6452 | 12.9333 | 97 | 2.4654 | 0.3271 |
| 2.4123 | 14.0 | 105 | 2.3995 | 0.3364 |
| 2.2561 | 14.9333 | 112 | 2.2584 | 0.4019 |
| 2.0447 | 16.0 | 120 | 2.2000 | 0.4299 |
| 2.0447 | 16.9333 | 127 | 2.0806 | 0.4393 |
| 1.8569 | 18.0 | 135 | 2.0593 | 0.4393 |
| 1.7447 | 18.9333 | 142 | 1.8832 | 0.4673 |
| 1.5821 | 20.0 | 150 | 1.8218 | 0.5047 |
| 1.5821 | 20.9333 | 157 | 1.7334 | 0.5421 |
| 1.3999 | 22.0 | 165 | 1.6213 | 0.5514 |
| 1.2901 | 22.9333 | 172 | 1.5932 | 0.5234 |
| 1.1569 | 24.0 | 180 | 1.5256 | 0.5701 |
| 1.1569 | 24.9333 | 187 | 1.4281 | 0.5888 |
| 1.0903 | 26.0 | 195 | 1.3997 | 0.5794 |
| 0.9674 | 26.9333 | 202 | 1.4017 | 0.5888 |
| 0.98 | 28.0 | 210 | 1.2916 | 0.5981 |
| 0.98 | 28.9333 | 217 | 1.3018 | 0.5981 |
| 0.8772 | 30.0 | 225 | 1.2552 | 0.6355 |
| 0.7842 | 30.9333 | 232 | 1.2372 | 0.6075 |
| 0.7438 | 32.0 | 240 | 1.1908 | 0.6168 |
| 0.7438 | 32.9333 | 247 | 1.1567 | 0.6636 |
| 0.725 | 34.0 | 255 | 1.1542 | 0.6262 |
| 0.6709 | 34.9333 | 262 | 1.1377 | 0.6262 |
| 0.6898 | 36.0 | 270 | 1.0524 | 0.6636 |
| 0.6898 | 36.9333 | 277 | 1.0272 | 0.6729 |
| 0.6125 | 38.0 | 285 | 1.0399 | 0.6355 |
| 0.6153 | 38.9333 | 292 | 1.0308 | 0.6822 |
| 0.5898 | 40.0 | 300 | 1.0151 | 0.7009 |
| 0.5898 | 40.9333 | 307 | 1.0483 | 0.6542 |
| 0.5881 | 42.0 | 315 | 0.9926 | 0.7009 |
| 0.54 | 42.9333 | 322 | 1.0300 | 0.6916 |
| 0.4515 | 44.0 | 330 | 0.9262 | 0.7383 |
| 0.4515 | 44.9333 | 337 | 0.9486 | 0.7290 |
| 0.5057 | 46.0 | 345 | 0.9219 | 0.7103 |
| 0.4905 | 46.9333 | 352 | 1.0184 | 0.6822 |
| 0.4669 | 48.0 | 360 | 0.9337 | 0.7290 |
| 0.4669 | 48.9333 | 367 | 0.9431 | 0.7103 |
| 0.4437 | 50.0 | 375 | 0.9312 | 0.7009 |
| 0.4754 | 50.9333 | 382 | 0.9245 | 0.7196 |
| 0.4119 | 52.0 | 390 | 0.8826 | 0.7383 |
| 0.4119 | 52.9333 | 397 | 0.9262 | 0.7196 |
| 0.4087 | 54.0 | 405 | 0.8882 | 0.7477 |
| 0.3987 | 54.9333 | 412 | 0.9282 | 0.7290 |
| 0.4253 | 56.0 | 420 | 0.9004 | 0.7477 |
| 0.4253 | 56.9333 | 427 | 0.8783 | 0.7477 |
| 0.4134 | 58.0 | 435 | 0.8360 | 0.7664 |
| 0.4024 | 58.9333 | 442 | 0.9016 | 0.7196 |
| 0.3688 | 60.0 | 450 | 0.9251 | 0.6822 |
| 0.3688 | 60.9333 | 457 | 0.9086 | 0.7103 |
| 0.3833 | 62.0 | 465 | 0.8494 | 0.7383 |
| 0.3614 | 62.9333 | 472 | 0.8299 | 0.7290 |
| 0.3792 | 64.0 | 480 | 0.9015 | 0.7383 |
| 0.3792 | 64.9333 | 487 | 0.8802 | 0.7196 |
| 0.3632 | 66.0 | 495 | 0.8881 | 0.7009 |
| 0.3405 | 66.9333 | 502 | 0.8578 | 0.7383 |
| 0.3673 | 68.0 | 510 | 0.8540 | 0.7570 |
| 0.3673 | 68.9333 | 517 | 0.8345 | 0.7383 |
| 0.3379 | 70.0 | 525 | 0.7919 | 0.7383 |
| 0.3389 | 70.9333 | 532 | 0.8384 | 0.7290 |
| 0.3363 | 72.0 | 540 | 0.8306 | 0.7383 |
| 0.3363 | 72.9333 | 547 | 0.8875 | 0.7477 |
| 0.3494 | 74.0 | 555 | 0.9151 | 0.7009 |
| 0.2989 | 74.9333 | 562 | 0.8606 | 0.7103 |
| 0.3157 | 76.0 | 570 | 0.8640 | 0.7383 |
| 0.3157 | 76.9333 | 577 | 0.8532 | 0.7290 |
| 0.3013 | 78.0 | 585 | 0.8479 | 0.7103 |
| 0.2968 | 78.9333 | 592 | 0.8839 | 0.7383 |
| 0.3013 | 80.0 | 600 | 0.8837 | 0.7196 |
| 0.3013 | 80.9333 | 607 | 0.8694 | 0.7103 |
| 0.3247 | 82.0 | 615 | 0.8721 | 0.7290 |
| 0.2515 | 82.9333 | 622 | 0.8605 | 0.7290 |
| 0.3175 | 84.0 | 630 | 0.8505 | 0.7290 |
| 0.3175 | 84.9333 | 637 | 0.8488 | 0.7290 |
| 0.3015 | 86.0 | 645 | 0.8554 | 0.7383 |
| 0.2989 | 86.9333 | 652 | 0.8707 | 0.7290 |
| 0.3155 | 88.0 | 660 | 0.8712 | 0.7290 |
| 0.3155 | 88.9333 | 667 | 0.8659 | 0.7290 |
| 0.2871 | 90.0 | 675 | 0.8573 | 0.7290 |
| 0.2872 | 90.9333 | 682 | 0.8530 | 0.7290 |
| 0.2587 | 92.0 | 690 | 0.8516 | 0.7383 |
| 0.2587 | 92.9333 | 697 | 0.8502 | 0.7383 |
| 0.3133 | 93.3333 | 700 | 0.8501 | 0.7383 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
multimolecule/splicebert.510nt | multimolecule | 2024-07-02T11:52:30Z | 0 | 0 | multimolecule | [
"multimolecule",
"pytorch",
"safetensors",
"splicebert",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/ucsc-genome-browser",
"license:agpl-3.0",
"region:us"
] | fill-mask | 2024-07-02T11:52:13Z | ---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/ucsc-genome-browser
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "microRNA-21"
text: "UAGC<mask>UAUCAGACUGAUGUUGA"
output:
- label: "A"
score: 0.09628374129533768
- label: "W"
score: 0.09019321203231812
- label: "U"
score: 0.08448788523674011
- label: "H"
score: 0.07201363891363144
- label: "M"
score: 0.06648518145084381
---
# SpliceBERT
Pre-trained model on messenger RNA precursor (pre-mRNA) sequences using a masked language modeling (MLM) objective.
## Disclaimer
This is an UNOFFICIAL implementation of the [Self-supervised learning on millions of pre-mRNA sequences improves sequence-based RNA splicing prediction](https://doi.org/10.1101/2023.01.31.526427) by Ken Chen, et al.
The OFFICIAL repository of SpliceBERT is at [chenkenbio/SpliceBERT](https://github.com/chenkenbio/SpliceBERT).
!!! Success "Reproducibility"
The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
**The team releasing SpliceBERT did not write this model card for this model, so this model card has been written by the MultiMolecule team.**
## Model Details
SpliceBERT is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of messenger RNA precursor sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those texts. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variations
- **[`multimolecule/splicebert`](https://huggingface.co/multimolecule/splicebert)**: The SpliceBERT model.
- **[`multimolecule/splicebert.510nt`](https://huggingface.co/multimolecule/splicebert.510nt)**: The intermediate SpliceBERT model.
- **[`multimolecule/splicebert-human.510nt`](https://huggingface.co/multimolecule/splicebert-human.510nt)**: The intermediate SpliceBERT model pre-trained on human data only.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>splicebert</td>
<td rowspan="3">6</td>
<td rowspan="3">512</td>
<td rowspan="3">16</td>
<td rowspan="3">2048</td>
<td>19.72</td>
<td rowspan="3">5.04</td>
<td rowspan="3">2.52</td>
<td>1024</td>
</tr>
<tr>
<td>splicebert.510nt</td>
<td rowspan="2">19.45</td>
<td rowspan="2">510</td>
</tr>
<tr>
<td>splicebert-human.510nt</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.splicebert](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/splicebert)
- **Data**: [UCSC Genome Browser](https://genome.ucsc.edu)
- **Paper**: [Self-supervised learning on millions of pre-mRNA sequences improves sequence-based RNA splicing prediction](https://doi.org/10.1101/2023.01.31.526427)
- **Developed by**: Ken Chen, Yue Zhou, Maolin Ding, Yu Wang, Zhixiang Ren, Yuedong Yang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [FlashAttention](https://huggingface.co/docs/text-generation-inference/en/conceptual/flash_attention)
- **Original Repository**: [https://github.com/chenkenbio/SpliceBERT](https://github.com/chenkenbio/SpliceBERT)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/splicebert')
>>> unmasker("uagc<mask>uaucagacugauguuga")
[{'score': 0.09628374129533768,
'token': 6,
'token_str': 'A',
'sequence': 'U A G C A U A U C A G A C U G A U G U U G A'},
{'score': 0.09019321203231812,
'token': 19,
'token_str': 'W',
'sequence': 'U A G C W U A U C A G A C U G A U G U U G A'},
{'score': 0.08448788523674011,
'token': 9,
'token_str': 'U',
'sequence': 'U A G C U U A U C A G A C U G A U G U U G A'},
{'score': 0.07201363891363144,
'token': 14,
'token_str': 'H',
'sequence': 'U A G C H U A U C A G A C U G A U G U U G A'},
{'score': 0.06648518145084381,
'token': 17,
'token_str': 'M',
'sequence': 'U A G C M U A U C A G A C U G A U G U U G A'}]
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, SpliceBertModel
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertModel.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
output = model(**input)
```
#### Sequence Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, SpliceBertForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertForSequencePrediction.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Nucleotide Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.
Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, SpliceBertForNucleotidePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertForNucleotidePrediction.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, SpliceBertForContactPrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertForContactPrediction.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
SpliceBERT used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input then runs the entire masked sentence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
### Training Data
The SpliceBERT model was pre-trained on messenger RNA precursor sequences from [UCSC Genome Browser](https://genome.ucsc.edu). UCSC Genome Browser provides visualization, analysis, and download of comprehensive vertebrate genome data with aligned annotation tracks (known genes, predicted genes, ESTs, mRNAs, CpG islands, etc.).
SpliceBERT collected reference genomes and gene annotations from the UCSC Genome Browser for 72 vertebrate species. It applied [bedtools getfasta](https://bedtools.readthedocs.io/en/latest/content/tools/getfasta.html) to extract pre-mRNA sequences from the reference genomes based on the gene annotations. The pre-mRNA sequences are then used to pre-train SpliceBERT. The pre-training data contains 2 million pre-mRNA sequences with a total length of 65 billion nucleotides.
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
### Training Procedure
#### Preprocessing
SpliceBERT used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
#### PreTraining
The model was trained on 8 NVIDIA V100 GPUs.
- Learning rate: 1e-4
- Learning rate scheduler: ReduceLROnPlateau(patience=3)
- Optimizer: AdamW
SpliceBERT trained the model in a two-stage process:
1. Pre-train with sequences of a fixed length of 510 nucleotides.
2. Pre-train with sequences of a variable length between 64 and 1024 nucleotides.
The intermediate model after the first stage is available as `multimolecule/splicebert.510nt`.
SpliceBERT also pre-trained a model on human data only to validate the contribution of multi-species pre-training. The intermediate model after the first stage is available as `multimolecule/splicebert-human.510nt`.
## Citation
**BibTeX**:
```bibtex
@article {chen2023self,
author = {Chen, Ken and Zhou, Yue and Ding, Maolin and Wang, Yu and Ren, Zhixiang and Yang, Yuedong},
title = {Self-supervised learning on millions of pre-mRNA sequences improves sequence-based RNA splicing prediction},
elocation-id = {2023.01.31.526427},
year = {2023},
doi = {10.1101/2023.01.31.526427},
publisher = {Cold Spring Harbor Laboratory},
abstract = {RNA splicing is an important post-transcriptional process of gene expression in eukaryotic cells. Predicting RNA splicing from primary sequences can facilitate the interpretation of genomic variants. In this study, we developed a novel self-supervised pre-trained language model, SpliceBERT, to improve sequence-based RNA splicing prediction. Pre-training on pre-mRNA sequences from vertebrates enables SpliceBERT to capture evolutionary conservation information and characterize the unique property of splice sites. SpliceBERT also improves zero-shot prediction of variant effects on splicing by considering sequence context information, and achieves superior performance for predicting branchpoint in the human genome and splice sites across species. Our study highlighted the importance of pre-training genomic language models on a diverse range of species and suggested that pre-trained language models were promising for deciphering the sequence logic of RNA splicing.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2023/05/09/2023.01.31.526427},
eprint = {https://www.biorxiv.org/content/early/2023/05/09/2023.01.31.526427.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [SpliceBERT paper](https://doi.org/10.1101/2023.01.31.526427) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
shjjsjjsjsjsj/001 | shjjsjjsjsjsj | 2024-07-02T11:52:28Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T11:52:28Z | ---
license: apache-2.0
---
|
multimolecule/splicebert-human.510nt | multimolecule | 2024-07-02T11:52:49Z | 0 | 0 | multimolecule | [
"multimolecule",
"pytorch",
"safetensors",
"splicebert",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/ucsc-genome-browser",
"license:agpl-3.0",
"region:us"
] | fill-mask | 2024-07-02T11:52:34Z | ---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/ucsc-genome-browser
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "microRNA-21"
text: "UAGC<mask>UAUCAGACUGAUGUUGA"
output:
- label: "A"
score: 0.09628374129533768
- label: "W"
score: 0.09019321203231812
- label: "U"
score: 0.08448788523674011
- label: "H"
score: 0.07201363891363144
- label: "M"
score: 0.06648518145084381
---
# SpliceBERT
Pre-trained model on messenger RNA precursor (pre-mRNA) sequences using a masked language modeling (MLM) objective.
## Disclaimer
This is an UNOFFICIAL implementation of the [Self-supervised learning on millions of pre-mRNA sequences improves sequence-based RNA splicing prediction](https://doi.org/10.1101/2023.01.31.526427) by Ken Chen, et al.
The OFFICIAL repository of SpliceBERT is at [chenkenbio/SpliceBERT](https://github.com/chenkenbio/SpliceBERT).
!!! Success "Reproducibility"
The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
**The team releasing SpliceBERT did not write this model card for this model, so this model card has been written by the MultiMolecule team.**
## Model Details
SpliceBERT is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of messenger RNA precursor sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variations
- **[`multimolecule/splicebert`](https://huggingface.co/multimolecule/splicebert)**: The SpliceBERT model.
- **[`multimolecule/splicebert.510nt`](https://huggingface.co/multimolecule/splicebert.510nt)**: The intermediate SpliceBERT model.
- **[`multimolecule/splicebert-human.510nt`](https://huggingface.co/multimolecule/splicebert-human.510nt)**: The intermediate SpliceBERT model pre-trained on human data only.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>splicebert</td>
<td rowspan="3">6</td>
<td rowspan="3">512</td>
<td rowspan="3">16</td>
<td rowspan="3">2048</td>
<td>19.72</td>
<td rowspan="3">5.04</td>
<td rowspan="3">2.52</td>
<td>1024</td>
</tr>
<tr>
<td>splicebert.510nt</td>
<td rowspan="2">19.45</td>
<td rowspan="2">510</td>
</tr>
<tr>
<td>splicebert-human.510nt</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.splicebert](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/splicebert)
- **Data**: [UCSC Genome Browser](https://genome.ucsc.edu)
- **Paper**: [Self-supervised learning on millions of pre-mRNA sequences improves sequence-based RNA splicing prediction](https://doi.org/10.1101/2023.01.31.526427)
- **Developed by**: Ken Chen, Yue Zhou, Maolin Ding, Yu Wang, Zhixiang Ren, Yuedong Yang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [FlashAttention](https://huggingface.co/docs/text-generation-inference/en/conceptual/flash_attention)
- **Original Repository**: [https://github.com/chenkenbio/SpliceBERT](https://github.com/chenkenbio/SpliceBERT)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/splicebert')
>>> unmasker("uagc<mask>uaucagacugauguuga")
[{'score': 0.09628374129533768,
'token': 6,
'token_str': 'A',
'sequence': 'U A G C A U A U C A G A C U G A U G U U G A'},
{'score': 0.09019321203231812,
'token': 19,
'token_str': 'W',
'sequence': 'U A G C W U A U C A G A C U G A U G U U G A'},
{'score': 0.08448788523674011,
'token': 9,
'token_str': 'U',
'sequence': 'U A G C U U A U C A G A C U G A U G U U G A'},
{'score': 0.07201363891363144,
'token': 14,
'token_str': 'H',
'sequence': 'U A G C H U A U C A G A C U G A U G U U G A'},
{'score': 0.06648518145084381,
'token': 17,
'token_str': 'M',
'sequence': 'U A G C M U A U C A G A C U G A U G U U G A'}]
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, SpliceBertModel
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertModel.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
output = model(**input)
```
#### Sequence Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, SpliceBertForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertForSequencePrediction.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Nucleotide Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.
Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, SpliceBertForNucleotidePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertForNucleotidePrediction.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, SpliceBertForContactPrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/splicebert')
model = SpliceBertForContactPrediction.from_pretrained('multimolecule/splicebert')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
SpliceBERT used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
### Training Data
The SpliceBERT model was pre-trained on messenger RNA precursor sequences from [UCSC Genome Browser](https://genome.ucsc.edu). UCSC Genome Browser provides visualization, analysis, and download of comprehensive vertebrate genome data with aligned annotation tracks (known genes, predicted genes, ESTs, mRNAs, CpG islands, etc.).
SpliceBERT collected reference genomes and gene annotations from the UCSC Genome Browser for 72 vertebrate species. It applied [bedtools getfasta](https://bedtools.readthedocs.io/en/latest/content/tools/getfasta.html) to extract pre-mRNA sequences from the reference genomes based on the gene annotations. These pre-mRNA sequences were then used to pre-train SpliceBERT. The pre-training data contains 2 million pre-mRNA sequences with a total length of 65 billion nucleotides.
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
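In effect, the default preprocessing amounts to a simple character substitution; the sketch below only illustrates the behaviour and is not the library's actual implementation:

```python
def dna_to_rna(seq: str) -> str:
    """Mirror RnaTokenizer's default T -> U replacement (illustrative sketch)."""
    return seq.upper().replace("T", "U")
```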
### Training Procedure
#### Preprocessing
SpliceBERT used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is.
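A minimal sketch of this 80/10/10 corruption scheme on a plain token list (the `mlm_corrupt` helper is illustrative and not part of any library):

```python
import random


def mlm_corrupt(tokens, vocab, mask_token="<mask>", mask_rate=0.15, rng=None):
    """BERT-style MLM corruption: select ~15% of positions; of those,
    80% become <mask>, 10% a random different token, 10% stay unchanged."""
    rng = rng or random.Random()
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            labels[i] = tok  # the model is trained to predict this token
            r = rng.random()
            if r < 0.8:
                inputs[i] = mask_token
            elif r < 0.9:
                inputs[i] = rng.choice([v for v in vocab if v != tok])
            # else: keep the original token as-is
    return inputs, labels
```

Positions whose label is `None` are ignored by the loss; everywhere else the model must recover the original token.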
#### PreTraining
The model was trained on 8 NVIDIA V100 GPUs.
- Learning rate: 1e-4
- Learning rate scheduler: ReduceLROnPlateau(patience=3)
- Optimizer: AdamW
SpliceBERT was trained in a two-stage process:
1. Pre-train with sequences of a fixed length of 510 nucleotides.
2. Pre-train with sequences of a variable length between 64 and 1024 nucleotides.
The intermediate model after the first stage is available as `multimolecule/splicebert.510nt`.
SpliceBERT also pre-trained a model on human data only to validate the contribution of multi-species pre-training. The intermediate model after the first stage is available as `multimolecule/splicebert-human.510nt`.
## Citation
**BibTeX**:
```bibtex
@article {chen2023self,
author = {Chen, Ken and Zhou, Yue and Ding, Maolin and Wang, Yu and Ren, Zhixiang and Yang, Yuedong},
title = {Self-supervised learning on millions of pre-mRNA sequences improves sequence-based RNA splicing prediction},
elocation-id = {2023.01.31.526427},
year = {2023},
doi = {10.1101/2023.01.31.526427},
publisher = {Cold Spring Harbor Laboratory},
abstract = {RNA splicing is an important post-transcriptional process of gene expression in eukaryotic cells. Predicting RNA splicing from primary sequences can facilitate the interpretation of genomic variants. In this study, we developed a novel self-supervised pre-trained language model, SpliceBERT, to improve sequence-based RNA splicing prediction. Pre-training on pre-mRNA sequences from vertebrates enables SpliceBERT to capture evolutionary conservation information and characterize the unique property of splice sites. SpliceBERT also improves zero-shot prediction of variant effects on splicing by considering sequence context information, and achieves superior performance for predicting branchpoint in the human genome and splice sites across species. Our study highlighted the importance of pre-training genomic language models on a diverse range of species and suggested that pre-trained language models were promising for deciphering the sequence logic of RNA splicing.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2023/05/09/2023.01.31.526427},
eprint = {https://www.biorxiv.org/content/early/2023/05/09/2023.01.31.526427.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [SpliceBERT paper](https://doi.org/10.1101/2023.01.31.526427) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
sugirdha12/CLINICAL-BERT | sugirdha12 | 2024-07-02T11:56:18Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:52:50Z | ---
license: apache-2.0
---
|
multimolecule/utrbert-3mer | multimolecule | 2024-07-02T11:53:54Z | 0 | 0 | multimolecule | [
"multimolecule",
"pytorch",
"safetensors",
"utrbert",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/gencode",
"license:agpl-3.0",
"region:us"
] | fill-mask | 2024-07-02T11:52:55Z | ---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/gencode
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "microRNA-21"
text: "UAGC<mask><mask><mask>UCAGACUGAUGUUGA"
output:
- label: "CUU"
score: 0.510771632194519
- label: "CCU"
score: 0.3299057185649872
- label: "CAU"
score: 0.09743840992450714
- label: "GCU"
score: 0.010745460167527199
- label: "AUU"
score: 0.010299043729901314
---
# 3UTRBERT
Pre-trained model on 3’ untranslated regions (3’UTRs) using a masked language modeling (MLM) objective.
## Disclaimer
This is an UNOFFICIAL implementation of the [Deciphering 3’ UTR mediated gene regulation using interpretable deep representation learning](https://doi.org/10.1101/2023.09.08.556883) by Yuning Yang, Gen Li, et al.
The OFFICIAL repository of 3UTRBERT is at [yangyn533/3UTRBERT](https://github.com/yangyn533/3UTRBERT).
!!! Success "Reproducibility"
The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
**The team releasing 3UTRBERT did not write this model card for this model, so this model card has been written by the MultiMolecule team.**
## Model Details
3UTRBERT is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of 3’ untranslated regions (3’UTRs) in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variations
- **[`multimolecule/utrbert-3mer`](https://huggingface.co/multimolecule/utrbert-3mer)**: The 3UTRBERT model pre-trained on 3-mer data.
- **[`multimolecule/utrbert-4mer`](https://huggingface.co/multimolecule/utrbert-4mer)**: The 3UTRBERT model pre-trained on 4-mer data.
- **[`multimolecule/utrbert-5mer`](https://huggingface.co/multimolecule/utrbert-5mer)**: The 3UTRBERT model pre-trained on 5-mer data.
- **[`multimolecule/utrbert-6mer`](https://huggingface.co/multimolecule/utrbert-6mer)**: The 3UTRBERT model pre-trained on 6-mer data.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>UTRBERT-3mer</td>
<td rowspan="4">12</td>
<td rowspan="4">768</td>
<td rowspan="4">12</td>
<td rowspan="4">3072</td>
<td>86.14</td>
<td rowspan="4">22.36</td>
<td rowspan="4">11.17</td>
<td rowspan="4">512</td>
</tr>
<tr>
<td>UTRBERT-4mer</td>
<td>86.53</td>
</tr>
<tr>
<td>UTRBERT-5mer</td>
<td>88.45</td>
</tr>
<tr>
<td>UTRBERT-6mer</td>
<td>98.05</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.utrbert](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/utrbert)
- **Data**: [GENCODE](https://gencodegenes.org)
- **Paper**: [Deciphering 3’ UTR mediated gene regulation using interpretable deep representation learning](https://doi.org/10.1101/2023.09.08.556883)
- **Developed by**: Yuning Yang, Gen Li, Kuan Pang, Wuxinhao Cao, Xiangtao Li, Zhaolei Zhang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [FlashAttention](https://huggingface.co/docs/text-generation-inference/en/conceptual/flash_attention)
- **Original Repository**: [https://github.com/yangyn533/3UTRBERT](https://github.com/yangyn533/3UTRBERT)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
**Note**: The default transformers pipeline does not support k-mer tokenization.
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/utrbert-3mer')
>>> unmasker("uagc<mask><mask><mask>ucagacugauguuga")[1]
[{'score': 0.510771632194519,
'token': 49,
'token_str': 'CUU',
'sequence': '<cls> UAG AGC <mask> CUU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.3299057185649872,
'token': 39,
'token_str': 'CCU',
'sequence': '<cls> UAG AGC <mask> CCU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.09743840992450714,
'token': 34,
'token_str': 'CAU',
'sequence': '<cls> UAG AGC <mask> CAU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.010745460167527199,
'token': 64,
'token_str': 'GCU',
'sequence': '<cls> UAG AGC <mask> GCU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.010299043729901314,
'token': 24,
'token_str': 'AUU',
'sequence': '<cls> UAG AGC <mask> AUU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'}]
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, UtrBertModel
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-3mer')
model = UtrBertModel.from_pretrained('multimolecule/utrbert-3mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
output = model(**input)
```
#### Sequence Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-3mer')
model = UtrBertForSequencePrediction.from_pretrained('multimolecule/utrbert-3mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Nucleotide Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.
Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForNucleotidePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-3mer')
model = UtrBertForNucleotidePrediction.from_pretrained('multimolecule/utrbert-3mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForContactPrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-3mer')
model = UtrBertForContactPrediction.from_pretrained('multimolecule/utrbert-3mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
3UTRBERT used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
### Training Data
The 3UTRBERT model was pre-trained on human mRNA transcript sequences from [GENCODE](https://gencodegenes.org). GENCODE aims to identify all gene features in the human genome using a combination of computational analysis, manual annotation, and experimental validation. The GENCODE release 40 used by this work contains 61,544 genes and 246,624 transcripts.
3UTRBERT collected the human mRNA transcript sequences from GENCODE, including 108,573 unique mRNA transcripts. Only the longest transcript of each gene was used in the pre-training process. 3UTRBERT used only the 3’ untranslated regions (3’UTRs) of the mRNA transcripts for pre-training, both to avoid codon constraints in the CDS region and to avoid the increased complexity of modeling entire mRNA transcripts. The average length of the 3’UTRs was 1,227 nucleotides, while the median length was 631 nucleotides. Each 3’UTR sequence was cut into non-overlapping patches of 510 nucleotides, and patches shorter than 510 nucleotides were padded to the same length.
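The patching step can be sketched as follows (the `chunk_3utr` helper and the `"N"` pad character are assumptions for illustration; the paper does not specify the padding symbol):

```python
def chunk_3utr(seq: str, size: int = 510, pad: str = "N"):
    """Cut a 3'UTR into non-overlapping patches of `size` nucleotides and
    pad the final, shorter patch to the same length."""
    patches = [seq[i:i + size] for i in range(0, len(seq), size)]
    if patches and len(patches[-1]) < size:
        patches[-1] += pad * (size - len(patches[-1]))
    return patches
```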
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
### Training Procedure
#### Preprocessing
3UTRBERT used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is.
Since 3UTRBERT uses a k-mer tokenizer, it masks entire k-mers instead of individual nucleotides to avoid information leakage.
For example, with k = 3, the sequence `"UAGCGUAU"` is tokenized as `["UAG", "AGC", "GCG", "CGU", "GUA", "UAU"]`. If the nucleotide `"C"` is masked, the adjacent overlapping tokens are also masked, resulting in `["UAG", "<mask>", "<mask>", "<mask>", "GUA", "UAU"]`.
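This whole-k-mer masking can be sketched in a few lines of Python (the helper names are hypothetical and not part of the 3UTRBERT codebase):

```python
def kmer_tokenize(seq: str, k: int = 3):
    """Overlapping k-mer tokenization, e.g. 'UAGCGUAU' -> UAG, AGC, GCG, ..."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]


def mask_kmers_covering(tokens, nt_index: int, k: int = 3, mask_token: str = "<mask>"):
    """Mask every k-mer that overlaps nucleotide position nt_index, so the
    masked base is not visible in any adjacent token."""
    return [mask_token if j <= nt_index <= j + k - 1 else tok
            for j, tok in enumerate(tokens)]
```

For `"UAGCGUAU"` with k = 3, masking the `"C"` at position 3 also hides both neighbouring k-mers that contain it.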
#### PreTraining
The model was trained on 4 NVIDIA Quadro RTX 6000 GPUs, each with 24 GiB of memory.
- Batch size: 128
- Learning rate: 3e-4
- Weight decay: 0.01
- Optimizer: AdamW(β1=0.9, β2=0.98, ε=1e-6)
- Steps: 200,000
- Learning rate scheduler: Linear
- Learning rate warm-up: 10,000 steps
## Citation
**BibTeX**:
```bibtex
@article {yang2023deciphering,
author = {Yang, Yuning and Li, Gen and Pang, Kuan and Cao, Wuxinhao and Li, Xiangtao and Zhang, Zhaolei},
title = {Deciphering 3{\textquoteright} UTR mediated gene regulation using interpretable deep representation learning},
elocation-id = {2023.09.08.556883},
year = {2023},
doi = {10.1101/2023.09.08.556883},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The 3{\textquoteright}untranslated regions (3{\textquoteright}UTRs) of messenger RNAs contain many important cis-regulatory elements that are under functional and evolutionary constraints. We hypothesize that these constraints are similar to grammars and syntaxes in human languages and can be modeled by advanced natural language models such as Transformers, which has been very effective in modeling protein sequence and structures. Here we describe 3UTRBERT, which implements an attention-based language model, i.e., Bidirectional Encoder Representations from Transformers (BERT). 3UTRBERT was pre-trained on aggregated 3{\textquoteright}UTR sequences of human mRNAs in a task-agnostic manner; the pre-trained model was then fine-tuned for specific downstream tasks such as predicting RBP binding sites, m6A RNA modification sites, and predicting RNA sub-cellular localizations. Benchmark results showed that 3UTRBERT generally outperformed other contemporary methods in each of these tasks. We also showed that the self-attention mechanism within 3UTRBERT allows direct visualization of the semantic relationship between sequence elements.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2023/09/12/2023.09.08.556883},
eprint = {https://www.biorxiv.org/content/early/2023/09/12/2023.09.08.556883.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [3UTRBERT paper](https://doi.org/10.1101/2023.09.08.556883) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
geonheechoi22/llama-3-Korean-Bllossom-8B-Q4_K_M-GGUF | geonheechoi22 | 2024-07-02T11:53:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"ko",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:53:01Z | ---
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
language:
- en
- ko
library_name: transformers
license: llama3
tags:
- llama-cpp
- gguf-my-repo
---
# geonheechoi22/llama-3-Korean-Bllossom-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`MLP-KTLim/llama-3-Korean-Bllossom-8B`](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo geonheechoi22/llama-3-Korean-Bllossom-8B-Q4_K_M-GGUF --hf-file llama-3-korean-bllossom-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo geonheechoi22/llama-3-Korean-Bllossom-8B-Q4_K_M-GGUF --hf-file llama-3-korean-bllossom-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo geonheechoi22/llama-3-Korean-Bllossom-8B-Q4_K_M-GGUF --hf-file llama-3-korean-bllossom-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo geonheechoi22/llama-3-Korean-Bllossom-8B-Q4_K_M-GGUF --hf-file llama-3-korean-bllossom-8b-q4_k_m.gguf -c 2048
```
|
multimolecule/utrbert-4mer | multimolecule | 2024-07-02T11:55:13Z | 0 | 0 | multimolecule | [
"multimolecule",
"pytorch",
"safetensors",
"utrbert",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/gencode",
"license:agpl-3.0",
"region:us"
] | fill-mask | 2024-07-02T11:54:02Z | ---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/gencode
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "microRNA-21"
text: "UAGC<mask><mask><mask>UCAGACUGAUGUUGA"
output:
- label: "CUU"
score: 0.510771632194519
- label: "CCU"
score: 0.3299057185649872
- label: "CAU"
score: 0.09743840992450714
- label: "GCU"
score: 0.010745460167527199
- label: "AUU"
score: 0.010299043729901314
---
# 3UTRBERT
Pre-trained model on 3’ untranslated regions (3’UTRs) using a masked language modeling (MLM) objective.
## Disclaimer
This is an UNOFFICIAL implementation of the [Deciphering 3’ UTR mediated gene regulation using interpretable deep representation learning](https://doi.org/10.1101/2023.09.08.556883) by Yuning Yang, Gen Li, et al.
The OFFICIAL repository of 3UTRBERT is at [yangyn533/3UTRBERT](https://github.com/yangyn533/3UTRBERT).
!!! Success "Reproducibility"
The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
**The team releasing 3UTRBERT did not write this model card for this model, so this model card has been written by the MultiMolecule team.**
## Model Details
3UTRBERT is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of 3’ untranslated regions (3’UTRs) in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variations
- **[`multimolecule/utrbert-3mer`](https://huggingface.co/multimolecule/utrbert-3mer)**: The 3UTRBERT model pre-trained on 3-mer data.
- **[`multimolecule/utrbert-4mer`](https://huggingface.co/multimolecule/utrbert-4mer)**: The 3UTRBERT model pre-trained on 4-mer data.
- **[`multimolecule/utrbert-5mer`](https://huggingface.co/multimolecule/utrbert-5mer)**: The 3UTRBERT model pre-trained on 5-mer data.
- **[`multimolecule/utrbert-6mer`](https://huggingface.co/multimolecule/utrbert-6mer)**: The 3UTRBERT model pre-trained on 6-mer data.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>UTRBERT-3mer</td>
<td rowspan="4">12</td>
<td rowspan="4">768</td>
<td rowspan="4">12</td>
<td rowspan="4">3072</td>
<td>86.14</td>
<td rowspan="4">22.36</td>
<td rowspan="4">11.17</td>
<td rowspan="4">512</td>
</tr>
<tr>
<td>UTRBERT-4mer</td>
<td>86.53</td>
</tr>
<tr>
<td>UTRBERT-5mer</td>
<td>88.45</td>
</tr>
<tr>
<td>UTRBERT-6mer</td>
<td>98.05</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.utrbert](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/utrbert)
- **Data**: [GENCODE](https://gencodegenes.org)
- **Paper**: [Deciphering 3’ UTR mediated gene regulation using interpretable deep representation learning](https://doi.org/10.1101/2023.09.08.556883)
- **Developed by**: Yuning Yang, Gen Li, Kuan Pang, Wuxinhao Cao, Xiangtao Li, Zhaolei Zhang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [FlashAttention](https://huggingface.co/docs/text-generation-inference/en/conceptual/flash_attention)
- **Original Repository**: [https://github.com/yangyn533/3UTRBERT](https://github.com/yangyn533/3UTRBERT)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
**Note**: The default transformers pipeline does not support k-mer tokenization.
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/utrbert-3mer')
>>> unmasker("uagc<mask><mask><mask>ucagacugauguuga")[1]
[{'score': 0.510771632194519,
'token': 49,
'token_str': 'CUU',
'sequence': '<cls> UAG AGC <mask> CUU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.3299057185649872,
'token': 39,
'token_str': 'CCU',
'sequence': '<cls> UAG AGC <mask> CCU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.09743840992450714,
'token': 34,
'token_str': 'CAU',
'sequence': '<cls> UAG AGC <mask> CAU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.010745460167527199,
'token': 64,
'token_str': 'GCU',
'sequence': '<cls> UAG AGC <mask> GCU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.010299043729901314,
'token': 24,
'token_str': 'AUU',
'sequence': '<cls> UAG AGC <mask> AUU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'}]
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, UtrBertModel
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-4mer')
model = UtrBertModel.from_pretrained('multimolecule/utrbert-4mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
output = model(**input)
```
#### Sequence Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-3mer')
model = UtrBertForSequencePrediction.from_pretrained('multimolecule/utrbert-3mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Nucleotide Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.
Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForNucleotidePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-3mer')
model = UtrBertForNucleotidePrediction.from_pretrained('multimolecule/utrbert-3mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForContactPrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-3mer')
model = UtrBertForContactPrediction.from_pretrained('multimolecule/utrbert-3mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
3UTRBERT used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input then runs the entire masked sentence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
### Training Data
The 3UTRBERT model was pre-trained on human mRNA transcript sequences from [GENCODE](https://gencodegenes.org). GENCODE aims to identify all gene features in the human genome using a combination of computational analysis, manual annotation, and experimental validation. The GENCODE release 40 used by this work contains 61,544 genes, and 246,624 transcripts.
3UTRBERT collected the human mRNA transcript sequences from GENCODE, including 108,573 unique mRNA transcripts. Only the longest transcript of each gene was used in the pre-training process. 3UTRBERT only used the 3’ untranslated regions (3’UTRs) of the mRNA transcripts for pre-training to avoid codon constraints in the CDS region, and to reduce the increased complexity of modeling entire mRNA transcripts. The average length of the 3’UTRs was 1,227 nucleotides, while the median length was 631 nucleotides. Each 3’UTR sequence was cut into non-overlapping patches of 510 nucleotides. The remaining sequences were padded to the same length.
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
### Training Procedure
#### Preprocessing
3UTRBERT used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
Since 3UTRBERT used a k-mer tokenizer, it masks the entire k-mer instead of individual nucleotides to avoid information leakage.
For example, if the k-mer is 3, the sequence `"UAGCGUAU"` will be tokenized as `["UAG", "AGC", "GCG", "CGU", "GUA", "UAU"]`. If the nucleotide `"C"` is masked, the adjacent tokens will also be masked, resulting in `["UAG", "<mask>", "<mask>", "<mask>", "GUA", "UAU"]`.
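The whole-k-mer masking rule above can be sketched in a few lines (an illustrative sketch, not the official 3UTRBERT implementation; the function names are made up for this example):

```python
# Illustrative sketch of whole-k-mer masking: masking one nucleotide also
# masks every overlapping k-mer that contains it, so neighbouring tokens
# cannot leak the masked base.

def kmer_tokenize(seq: str, k: int = 3) -> list[str]:
    """Split a sequence into overlapping k-mers with stride 1."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def mask_nucleotide(tokens: list[str], pos: int, k: int = 3) -> list[str]:
    """Mask every k-mer token that covers nucleotide index `pos`."""
    return [
        "<mask>" if i <= pos <= i + k - 1 else tok  # token i spans nucleotides [i, i+k-1]
        for i, tok in enumerate(tokens)
    ]

tokens = kmer_tokenize("UAGCGUAU", k=3)
print(tokens)                          # ['UAG', 'AGC', 'GCG', 'CGU', 'GUA', 'UAU']
print(mask_nucleotide(tokens, pos=3))  # masks the 'C' at nucleotide index 3
```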
#### PreTraining
The model was trained on 4 NVIDIA Quadro RTX 6000 GPUs, each with 24 GiB of memory.
- Batch size: 128
- Learning rate: 3e-4
- Weight decay: 0.01
- Optimizer: AdamW(β1=0.9, β2=0.98, ε=1e-6)
- Steps: 200,000
- Learning rate scheduler: Linear
- Learning rate warm-up: 10,000 steps
## Citation
**BibTeX**:
```bibtex
@article {yang2023deciphering,
author = {Yang, Yuning and Li, Gen and Pang, Kuan and Cao, Wuxinhao and Li, Xiangtao and Zhang, Zhaolei},
title = {Deciphering 3{\textquoteright} UTR mediated gene regulation using interpretable deep representation learning},
elocation-id = {2023.09.08.556883},
year = {2023},
doi = {10.1101/2023.09.08.556883},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The 3{\textquoteright}untranslated regions (3{\textquoteright}UTRs) of messenger RNAs contain many important cis-regulatory elements that are under functional and evolutionary constraints. We hypothesize that these constraints are similar to grammars and syntaxes in human languages and can be modeled by advanced natural language models such as Transformers, which has been very effective in modeling protein sequence and structures. Here we describe 3UTRBERT, which implements an attention-based language model, i.e., Bidirectional Encoder Representations from Transformers (BERT). 3UTRBERT was pre-trained on aggregated 3{\textquoteright}UTR sequences of human mRNAs in a task-agnostic manner; the pre-trained model was then fine-tuned for specific downstream tasks such as predicting RBP binding sites, m6A RNA modification sites, and predicting RNA sub-cellular localizations. Benchmark results showed that 3UTRBERT generally outperformed other contemporary methods in each of these tasks. We also showed that the self-attention mechanism within 3UTRBERT allows direct visualization of the semantic relationship between sequence elements.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2023/09/12/2023.09.08.556883},
eprint = {https://www.biorxiv.org/content/early/2023/09/12/2023.09.08.556883.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [3UTRBERT paper](https://doi.org/10.1101/2023.09.08.556883) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
Moriacrafter/Qwen1.5-14B-4bit_DepressionDetection | Moriacrafter | 2024-07-02T12:03:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T11:54:33Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
multimolecule/utrbert-5mer | multimolecule | 2024-07-02T11:56:23Z | 0 | 0 | multimolecule | [
"multimolecule",
"pytorch",
"safetensors",
"utrbert",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/gencode",
"license:agpl-3.0",
"region:us"
] | fill-mask | 2024-07-02T11:55:20Z | ---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/gencode
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "microRNA-21"
text: "UAGC<mask><mask><mask>UCAGACUGAUGUUGA"
output:
- label: "CUU"
score: 0.510771632194519
- label: "CCU"
score: 0.3299057185649872
- label: "CAU"
score: 0.09743840992450714
- label: "GCU"
score: 0.010745460167527199
- label: "AUU"
score: 0.010299043729901314
---
# 3UTRBERT
Pre-trained model on 3’ untranslated region (3’UTR) using a masked language modeling (MLM) objective.
## Disclaimer
This is an UNOFFICIAL implementation of the [Deciphering 3’ UTR mediated gene regulation using interpretable deep representation learning](https://doi.org/10.1101/2023.09.08.556883) by Yuning Yang, Gen Li, et al.
The OFFICIAL repository of 3UTRBERT is at [yangyn533/3UTRBERT](https://github.com/yangyn533/3UTRBERT).
!!! Success "Reproducibility"
The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
**The team releasing 3UTRBERT did not write this model card for this model so this model card has been written by the MultiMolecule team.**
## Model Details
3UTRBERT is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of 3’ untranslated regions (3’UTRs) in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those texts. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variations
- **[`multimolecule/utrbert-3mer`](https://huggingface.co/multimolecule/utrbert-3mer)**: The 3UTRBERT model pre-trained on 3-mer data.
- **[`multimolecule/utrbert-4mer`](https://huggingface.co/multimolecule/utrbert-4mer)**: The 3UTRBERT model pre-trained on 4-mer data.
- **[`multimolecule/utrbert-5mer`](https://huggingface.co/multimolecule/utrbert-5mer)**: The 3UTRBERT model pre-trained on 5-mer data.
- **[`multimolecule/utrbert-6mer`](https://huggingface.co/multimolecule/utrbert-6mer)**: The 3UTRBERT model pre-trained on 6-mer data.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>UTRBERT-3mer</td>
<td rowspan="4">12</td>
<td rowspan="4">768</td>
<td rowspan="4">12</td>
<td rowspan="4">3072</td>
<td>86.14</td>
<td rowspan="4">22.36</td>
<td rowspan="4">11.17</td>
<td rowspan="4">512</td>
</tr>
<tr>
<td>UTRBERT-4mer</td>
<td>86.53</td>
</tr>
<tr>
<td>UTRBERT-5mer</td>
<td>88.45</td>
</tr>
<tr>
<td>UTRBERT-6mer</td>
<td>98.05</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.utrbert](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/utrbert)
- **Data**: [GENCODE](https://gencodegenes.org)
- **Paper**: [Deciphering 3’ UTR mediated gene regulation using interpretable deep representation learning](https://doi.org/10.1101/2023.09.08.556883)
- **Developed by**: Yuning Yang, Gen Li, Kuan Pang, Wuxinhao Cao, Xiangtao Li, Zhaolei Zhang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [FlashAttention](https://huggingface.co/docs/text-generation-inference/en/conceptual/flash_attention)
- **Original Repository**: [https://github.com/yangyn533/3UTRBERT](https://github.com/yangyn533/3UTRBERT)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
**Note**: The default transformers pipeline does not support k-mer tokenization.
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/utrbert-3mer')
>>> unmasker("uagc<mask><mask><mask>ucagacugauguuga")[1]
[{'score': 0.510771632194519,
'token': 49,
'token_str': 'CUU',
'sequence': '<cls> UAG AGC <mask> CUU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.3299057185649872,
'token': 39,
'token_str': 'CCU',
'sequence': '<cls> UAG AGC <mask> CCU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.09743840992450714,
'token': 34,
'token_str': 'CAU',
'sequence': '<cls> UAG AGC <mask> CAU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.010745460167527199,
'token': 64,
'token_str': 'GCU',
'sequence': '<cls> UAG AGC <mask> GCU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.010299043729901314,
'token': 24,
'token_str': 'AUU',
'sequence': '<cls> UAG AGC <mask> AUU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'}]
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, UtrBertModel
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-5mer')
model = UtrBertModel.from_pretrained('multimolecule/utrbert-5mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
output = model(**input)
```
#### Sequence Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as a backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-5mer')
model = UtrBertForSequencePrediction.from_pretrained('multimolecule/utrbert-5mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Nucleotide Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.
Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForNucleotidePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-5mer')
model = UtrBertForNucleotidePrediction.from_pretrained('multimolecule/utrbert-5mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForContactPrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-5mer')
model = UtrBertForContactPrediction.from_pretrained('multimolecule/utrbert-5mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
3UTRBERT used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input then runs the entire masked sentence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
### Training Data
The 3UTRBERT model was pre-trained on human mRNA transcript sequences from [GENCODE](https://gencodegenes.org). GENCODE aims to identify all gene features in the human genome using a combination of computational analysis, manual annotation, and experimental validation. The GENCODE release 40 used by this work contains 61,544 genes, and 246,624 transcripts.
3UTRBERT collected the human mRNA transcript sequences from GENCODE, including 108,573 unique mRNA transcripts. Only the longest transcript of each gene was used in the pre-training process. 3UTRBERT only used the 3’ untranslated regions (3’UTRs) of the mRNA transcripts for pre-training to avoid codon constraints in the CDS region, and to reduce the increased complexity of modeling entire mRNA transcripts. The average length of the 3’UTRs was 1,227 nucleotides, while the median length was 631 nucleotides. Each 3’UTR sequence was cut into non-overlapping patches of 510 nucleotides. The remaining sequences were padded to the same length.
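The patching step above can be sketched as follows (an illustrative sketch; the pad character and exact padding scheme are assumptions, as they are not specified here):

```python
# Illustrative sketch of cutting a 3'UTR into non-overlapping 510-nt patches.
# Assumption: the trailing remainder is padded with a placeholder character;
# the actual pad token used during pre-training is not specified here.

def chunk_utr(seq: str, patch_len: int = 510, pad_char: str = "N") -> list[str]:
    """Cut a sequence into non-overlapping patches, padding the last one."""
    patches = [seq[i:i + patch_len] for i in range(0, len(seq), patch_len)]
    if patches and len(patches[-1]) < patch_len:
        patches[-1] += pad_char * (patch_len - len(patches[-1]))
    return patches

patches = chunk_utr("A" * 1227)  # 1,227 nt: the average 3'UTR length above
print(len(patches), len(patches[-1]))  # 3 patches, each 510 nt after padding
```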
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
### Training Procedure
#### Preprocessing
3UTRBERT used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
Since 3UTRBERT used a k-mer tokenizer, it masks the entire k-mer instead of individual nucleotides to avoid information leakage.
For example, if the k-mer is 3, the sequence `"UAGCGUAU"` will be tokenized as `["UAG", "AGC", "GCG", "CGU", "GUA", "UAU"]`. If the nucleotide `"C"` is masked, the adjacent tokens will also be masked, resulting in `["UAG", "<mask>", "<mask>", "<mask>", "GUA", "UAU"]`.
#### PreTraining
The model was trained on 4 NVIDIA Quadro RTX 6000 GPUs, each with 24 GiB of memory.
- Batch size: 128
- Learning rate: 3e-4
- Weight decay: 0.01
- Optimizer: AdamW(β1=0.9, β2=0.98, ε=1e-6)
- Steps: 200,000
- Learning rate scheduler: Linear
- Learning rate warm-up: 10,000 steps
## Citation
**BibTeX**:
```bibtex
@article {yang2023deciphering,
author = {Yang, Yuning and Li, Gen and Pang, Kuan and Cao, Wuxinhao and Li, Xiangtao and Zhang, Zhaolei},
title = {Deciphering 3{\textquoteright} UTR mediated gene regulation using interpretable deep representation learning},
elocation-id = {2023.09.08.556883},
year = {2023},
doi = {10.1101/2023.09.08.556883},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The 3{\textquoteright}untranslated regions (3{\textquoteright}UTRs) of messenger RNAs contain many important cis-regulatory elements that are under functional and evolutionary constraints. We hypothesize that these constraints are similar to grammars and syntaxes in human languages and can be modeled by advanced natural language models such as Transformers, which has been very effective in modeling protein sequence and structures. Here we describe 3UTRBERT, which implements an attention-based language model, i.e., Bidirectional Encoder Representations from Transformers (BERT). 3UTRBERT was pre-trained on aggregated 3{\textquoteright}UTR sequences of human mRNAs in a task-agnostic manner; the pre-trained model was then fine-tuned for specific downstream tasks such as predicting RBP binding sites, m6A RNA modification sites, and predicting RNA sub-cellular localizations. Benchmark results showed that 3UTRBERT generally outperformed other contemporary methods in each of these tasks. We also showed that the self-attention mechanism within 3UTRBERT allows direct visualization of the semantic relationship between sequence elements.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2023/09/12/2023.09.08.556883},
eprint = {https://www.biorxiv.org/content/early/2023/09/12/2023.09.08.556883.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [3UTRBERT paper](https://doi.org/10.1101/2023.09.08.556883) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
jeromesky/consistency_accuracy_v2 | jeromesky | 2024-07-02T12:19:34Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-07-02T11:55:26Z | Entry not found |
linoyts/pop_tart_clip_skip_2 | linoyts | 2024-07-02T12:32:07Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"diffusers-training",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-07-02T11:55:35Z | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers-training
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a <s0><s1> pack of pop tarts in pizza flavor'
output:
url:
"image_0.png"
- text: 'a <s0><s1> pack of pop tarts in pizza flavor'
output:
url:
"image_1.png"
- text: 'a <s0><s1> pack of pop tarts in pizza flavor'
output:
url:
"image_2.png"
- text: 'a <s0><s1> pack of pop tarts in pizza flavor'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a <s0><s1> pack of pop tarts
license: openrail++
---
# SDXL LoRA DreamBooth - linoyts/pop_tart_clip_skip_2
<Gallery />
## Model description
### These are linoyts/pop_tart_clip_skip_2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`pop_tart_clip_skip_2.safetensors` here 💾](/linoyts/pop_tart_clip_skip_2/blob/main/pop_tart_clip_skip_2.safetensors)**.
    - Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:pop_tart_clip_skip_2:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`pop_tart_clip_skip_2_emb.safetensors` here 💾](/linoyts/pop_tart_clip_skip_2/blob/main/pop_tart_clip_skip_2_emb.safetensors)**.
    - Place it in your `embeddings` folder.
- Use it by adding `pop_tart_clip_skip_2_emb` to your prompt. For example, `a pop_tart_clip_skip_2_emb pack of pop tarts`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('linoyts/pop_tart_clip_skip_2', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='linoyts/pop_tart_clip_skip_2', filename='pop_tart_clip_skip_2_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('a <s0><s1> pack of pop tarts in pizza flavor').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/linoyts/pop_tart_clip_skip_2/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
jnwulff/xlm-roberta-base-finetuned-panx-de | jnwulff | 2024-07-02T12:06:09Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-02T11:56:23Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8625641025641025
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1350
- F1: 0.8626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2585 | 1.0 | 525 | 0.1580 | 0.8255 |
| 0.1282 | 2.0 | 1050 | 0.1381 | 0.8447 |
| 0.0805 | 3.0 | 1575 | 0.1350 | 0.8626 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.3.0+cu121
- Datasets 1.16.1
- Tokenizers 0.19.1
|
multimolecule/utrbert-6mer | multimolecule | 2024-07-02T11:57:45Z | 0 | 0 | multimolecule | [
"multimolecule",
"pytorch",
"safetensors",
"utrbert",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/gencode",
"license:agpl-3.0",
"region:us"
] | fill-mask | 2024-07-02T11:56:31Z | ---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/gencode
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "microRNA-21"
text: "UAGC<mask><mask><mask>UCAGACUGAUGUUGA"
output:
- label: "CUU"
score: 0.510771632194519
- label: "CCU"
score: 0.3299057185649872
- label: "CAU"
score: 0.09743840992450714
- label: "GCU"
score: 0.010745460167527199
- label: "AUU"
score: 0.010299043729901314
---
# 3UTRBERT
Pre-trained model on 3’ untranslated region (3’UTR) using a masked language modeling (MLM) objective.
## Disclaimer
This is an UNOFFICIAL implementation of the [Deciphering 3’ UTR mediated gene regulation using interpretable deep representation learning](https://doi.org/10.1101/2023.09.08.556883) by Yuning Yang, Gen Li, et al.
The OFFICIAL repository of 3UTRBERT is at [yangyn533/3UTRBERT](https://github.com/yangyn533/3UTRBERT).
!!! Success "Reproducibility"
The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
**The team releasing 3UTRBERT did not write this model card for this model so this model card has been written by the MultiMolecule team.**
## Model Details
3UTRBERT is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of 3’ untranslated regions (3’UTRs) in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variations
- **[`multimolecule/utrbert-3mer`](https://huggingface.co/multimolecule/utrbert-3mer)**: The 3UTRBERT model pre-trained on 3-mer data.
- **[`multimolecule/utrbert-4mer`](https://huggingface.co/multimolecule/utrbert-4mer)**: The 3UTRBERT model pre-trained on 4-mer data.
- **[`multimolecule/utrbert-5mer`](https://huggingface.co/multimolecule/utrbert-5mer)**: The 3UTRBERT model pre-trained on 5-mer data.
- **[`multimolecule/utrbert-6mer`](https://huggingface.co/multimolecule/utrbert-6mer)**: The 3UTRBERT model pre-trained on 6-mer data.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>UTRBERT-3mer</td>
<td rowspan="4">12</td>
<td rowspan="4">768</td>
<td rowspan="4">12</td>
<td rowspan="4">3072</td>
<td>86.14</td>
<td rowspan="4">22.36</td>
<td rowspan="4">11.17</td>
<td rowspan="4">512</td>
</tr>
<tr>
<td>UTRBERT-4mer</td>
<td>86.53</td>
</tr>
<tr>
<td>UTRBERT-5mer</td>
<td>88.45</td>
</tr>
<tr>
<td>UTRBERT-6mer</td>
<td>98.05</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.utrbert](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/utrbert)
- **Data**: [GENCODE](https://gencodegenes.org)
- **Paper**: [Deciphering 3’ UTR mediated gene regulation using interpretable deep representation learning](https://doi.org/10.1101/2023.09.08.556883)
- **Developed by**: Yuning Yang, Gen Li, Kuan Pang, Wuxinhao Cao, Xiangtao Li, Zhaolei Zhang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [FlashAttention](https://huggingface.co/docs/text-generation-inference/en/conceptual/flash_attention)
- **Original Repository**: [https://github.com/yangyn533/3UTRBERT](https://github.com/yangyn533/3UTRBERT)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
**Note**: The default transformers pipeline does not support k-mer tokenization.
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/utrbert-3mer')
>>> unmasker("uagc<mask><mask><mask>ucagacugauguuga")[1]
[{'score': 0.510771632194519,
'token': 49,
'token_str': 'CUU',
'sequence': '<cls> UAG AGC <mask> CUU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.3299057185649872,
'token': 39,
'token_str': 'CCU',
'sequence': '<cls> UAG AGC <mask> CCU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.09743840992450714,
'token': 34,
'token_str': 'CAU',
'sequence': '<cls> UAG AGC <mask> CAU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.010745460167527199,
'token': 64,
'token_str': 'GCU',
'sequence': '<cls> UAG AGC <mask> GCU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'},
{'score': 0.010299043729901314,
'token': 24,
'token_str': 'AUU',
'sequence': '<cls> UAG AGC <mask> AUU <mask> UCA CAG AGA GAC ACU CUG UGA GAU AUG UGU GUU UUG UGA <eos>'}]
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, UtrBertModel
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-3mer')
model = UtrBertModel.from_pretrained('multimolecule/utrbert-3mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
output = model(**input)
```
#### Sequence Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-3mer')
model = UtrBertForSequencePrediction.from_pretrained('multimolecule/utrbert-3mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Nucleotide Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.
Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForNucleotidePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-3mer')
model = UtrBertForNucleotidePrediction.from_pretrained('multimolecule/utrbert-3mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrBertForContactPrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrbert-3mer')
model = UtrBertForContactPrediction.from_pretrained('multimolecule/utrbert-3mer')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
3UTRBERT used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input then runs the entire masked sentence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
### Training Data
The 3UTRBERT model was pre-trained on human mRNA transcript sequences from [GENCODE](https://gencodegenes.org). GENCODE aims to identify all gene features in the human genome using a combination of computational analysis, manual annotation, and experimental validation. The GENCODE release 40 used by this work contains 61,544 genes and 246,624 transcripts.
3UTRBERT collected the human mRNA transcript sequences from GENCODE, including 108,573 unique mRNA transcripts. Only the longest transcript of each gene was used in the pre-training process. 3UTRBERT used only the 3’ untranslated regions (3’UTRs) of the mRNA transcripts for pre-training, to avoid codon constraints in the CDS region and to reduce the complexity of modeling entire mRNA transcripts. The average length of the 3’UTRs was 1,227 nucleotides, while the median length was 631 nucleotides. Each 3’UTR sequence was cut into non-overlapping patches of 510 nucleotides. The remaining sequences were padded to the same length.
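The chunking step can be sketched as follows. This is an illustrative sketch only: the padding character and exact padding scheme are assumptions, not details from the paper.

```python
def chunk_utr(seq: str, patch_len: int = 510, pad_char: str = "N") -> list:
    """Cut a 3'UTR sequence into non-overlapping patches of patch_len
    nucleotides, padding the final patch to the same length."""
    patches = [seq[i:i + patch_len] for i in range(0, len(seq), patch_len)]
    if patches and len(patches[-1]) < patch_len:
        patches[-1] += pad_char * (patch_len - len(patches[-1]))
    return patches

# A sequence of the average 3'UTR length (1,227 nt) yields three 510-nt patches.
patches = chunk_utr("A" * 1227)
```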
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
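Conceptually, that conversion amounts to a simple character replacement (a sketch of the idea, not the tokenizer's actual implementation):

```python
def replace_T_with_U(seq: str) -> str:
    # Map DNA-style "T" bases to RNA "U", preserving case.
    return seq.replace("T", "U").replace("t", "u")

print(replace_T_with_U("TAGCTTATCAGACTGATGTTGA"))  # UAGCUUAUCAGACUGAUGUUGA
```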
### Training Procedure
#### Preprocessing
3UTRBERT used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is.
Since 3UTRBERT uses a k-mer tokenizer, it masks the entire k-mer span instead of individual nucleotides to avoid information leakage.
For example, if the k-mer is 3, the sequence `"UAGCGUAU"` will be tokenized as `["UAG", "AGC", "GCG", "CGU", "GUA", "UAU"]`. If the nucleotide `"C"` is masked, the adjacent tokens will also be masked, resulting in `["UAG", "<mask>", "<mask>", "<mask>", "GUA", "UAU"]`.
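The overlapping k-mer tokenization and span masking in the example above can be sketched as:

```python
def kmer_tokenize(seq: str, k: int = 3) -> list:
    """Tokenize a sequence into overlapping k-mers (stride 1)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def mask_kmers_covering(tokens: list, pos: int, k: int = 3) -> list:
    """Mask every k-mer token that overlaps the nucleotide at index `pos`
    (token i covers nucleotides i .. i + k - 1)."""
    return ["<mask>" if i <= pos <= i + k - 1 else t
            for i, t in enumerate(tokens)]

tokens = kmer_tokenize("UAGCGUAU")       # ['UAG', 'AGC', 'GCG', 'CGU', 'GUA', 'UAU']
masked = mask_kmers_covering(tokens, 3)  # masking the 'C' at nucleotide index 3
```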
#### PreTraining
The model was trained on 4 NVIDIA Quadro RTX 6000 GPUs with 24GiB of memory each.
- Batch size: 128
- Learning rate: 3e-4
- Weight decay: 0.01
- Optimizer: AdamW(β1=0.9, β2=0.98, e=1e-6)
- Steps: 200,000
- Learning rate scheduler: Linear
- Learning rate warm-up: 10,000 steps
## Citation
**BibTeX**:
```bibtex
@article {yang2023deciphering,
author = {Yang, Yuning and Li, Gen and Pang, Kuan and Cao, Wuxinhao and Li, Xiangtao and Zhang, Zhaolei},
title = {Deciphering 3{\textquoteright} UTR mediated gene regulation using interpretable deep representation learning},
elocation-id = {2023.09.08.556883},
year = {2023},
doi = {10.1101/2023.09.08.556883},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The 3{\textquoteright}untranslated regions (3{\textquoteright}UTRs) of messenger RNAs contain many important cis-regulatory elements that are under functional and evolutionary constraints. We hypothesize that these constraints are similar to grammars and syntaxes in human languages and can be modeled by advanced natural language models such as Transformers, which has been very effective in modeling protein sequence and structures. Here we describe 3UTRBERT, which implements an attention-based language model, i.e., Bidirectional Encoder Representations from Transformers (BERT). 3UTRBERT was pre-trained on aggregated 3{\textquoteright}UTR sequences of human mRNAs in a task-agnostic manner; the pre-trained model was then fine-tuned for specific downstream tasks such as predicting RBP binding sites, m6A RNA modification sites, and predicting RNA sub-cellular localizations. Benchmark results showed that 3UTRBERT generally outperformed other contemporary methods in each of these tasks. We also showed that the self-attention mechanism within 3UTRBERT allows direct visualization of the semantic relationship between sequence elements.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2023/09/12/2023.09.08.556883},
eprint = {https://www.biorxiv.org/content/early/2023/09/12/2023.09.08.556883.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [3UTRBERT paper](https://doi.org/10.1101/2023.09.08.556883) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
geonheechoi22/Llama-3-Alpha-Ko-8B-Instruct-Q4_K_M-GGUF | geonheechoi22 | 2024-07-02T11:57:00Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"ko",
"base_model:allganize/Llama-3-Alpha-Ko-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-07-02T11:56:38Z | ---
base_model: allganize/Llama-3-Alpha-Ko-8B-Instruct
language:
- ko
license: other
license_name: llama3
tags:
- llama-cpp
- gguf-my-repo
---
# geonheechoi22/Llama-3-Alpha-Ko-8B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`allganize/Llama-3-Alpha-Ko-8B-Instruct`](https://huggingface.co/allganize/Llama-3-Alpha-Ko-8B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allganize/Llama-3-Alpha-Ko-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo geonheechoi22/Llama-3-Alpha-Ko-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3-alpha-ko-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo geonheechoi22/Llama-3-Alpha-Ko-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3-alpha-ko-8b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo geonheechoi22/Llama-3-Alpha-Ko-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3-alpha-ko-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo geonheechoi22/Llama-3-Alpha-Ko-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3-alpha-ko-8b-instruct-q4_k_m.gguf -c 2048
```
|
multimolecule/utrlm.mrl | multimolecule | 2024-07-02T11:57:52Z | 0 | 0 | multimolecule | [
"multimolecule",
"pytorch",
"safetensors",
"utrlm",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/ensembl-genome-browser",
"license:agpl-3.0",
"region:us"
] | fill-mask | 2024-07-02T11:57:49Z | ---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/ensembl-genome-browser
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "microRNA-21"
text: "UAGC<mask>UAUCAGACUGAUGUUGA"
output:
- label: "I"
score: 0.07525687664747238
- label: "A"
score: 0.07319962233304977
- label: "*"
score: 0.07106836140155792
- label: "N"
score: 0.06967106461524963
- label: "W"
score: 0.06574146449565887
---
# UTR-LM
Pre-trained model on 5’ untranslated region (5’UTR) using masked language modeling (MLM), Secondary Structure (SS), and Minimum Free Energy (MFE) objectives.
## Statement
_A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions_ is published in [Nature Machine Intelligence](https://doi.org/10.1038/s42256-024-00823-9), which is a Closed Access / Author-Fee journal.
> Machine learning has been at the forefront of the movement for free and open access to research.
>
> We see no role for closed access or author-fee publication in the future of machine learning research and believe the adoption of these journals as an outlet of record for the machine learning community would be a retrograde step.
The MultiMolecule team is committed to the principles of open access and open science.
We do NOT endorse the publication of manuscripts in Closed Access / Author-Fee journals and encourage the community to support Open Access journals.
Please consider signing the [Statement on Nature Machine Intelligence](https://openaccess.engineering.oregonstate.edu).
## Disclaimer
This is an UNOFFICIAL implementation of the [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1101/2023.10.11.561938) by Yanyi Chu, Dan Yu, et al.
The OFFICIAL repository of UTR-LM is at [a96123155/UTR-LM](https://github.com/a96123155/UTR-LM).
!!! Danger "Reproducibility"
The MultiMolecule team is unable to confirm that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
This is because the proposed method is published in a Closed Access / Author-Fee journal.
**The team releasing UTR-LM did not write this model card for this model so this model card has been written by the MultiMolecule team.**
## Model Details
UTR-LM is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of 5’ untranslated regions (5’UTRs) in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variations
- **[`multimolecule/utrlm.te_el`](https://huggingface.co/multimolecule/utrlm.te_el)**: The UTR-LM model for Translation Efficiency of transcripts and mRNA Expression Level.
- **[`multimolecule/utrlm.mrl`](https://huggingface.co/multimolecule/utrlm.mrl)**: The UTR-LM model for Mean Ribosome Loading.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>UTR-LM MRL</td>
<td rowspan="2">6</td>
<td rowspan="2">128</td>
<td rowspan="2">16</td>
<td rowspan="2">512</td>
<td rowspan="2">1.21</td>
<td rowspan="2">0.35</td>
<td rowspan="2">0.18</td>
<td rowspan="2">1022</td>
</tr>
<tr>
<td>UTR-LM TE_EL</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.utrlm](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/utrlm)
- **Data**:
- [Ensembl Genome Browser](https://ensembl.org)
- [Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5)
- [High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1101/2021.10.14.464013)
- **Paper**: [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1101/2023.10.11.561938)
- **Developed by**: Yanyi Chu, Dan Yu, Yupeng Li, Kaixuan Huang, Yue Shen, Le Cong, Jason Zhang, Mengdi Wang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [ESM](https://huggingface.co/facebook/esm2_t48_15B_UR50D)
- **Original Repository**: [https://github.com/a96123155/UTR-LM](https://github.com/a96123155/UTR-LM)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/utrlm.mrl')
>>> unmasker("uagc<mask>uaucagacugauguuga")
[{'score': 0.07525687664747238,
'token': 11,
'token_str': 'I',
'sequence': 'U A G C I U A U C A G A C U G A U G U U G A'},
{'score': 0.07319962233304977,
'token': 6,
'token_str': 'A',
'sequence': 'U A G C A U A U C A G A C U G A U G U U G A'},
{'score': 0.07106836140155792,
'token': 24,
'token_str': '*',
'sequence': 'U A G C * U A U C A G A C U G A U G U U G A'},
{'score': 0.06967106461524963,
'token': 10,
'token_str': 'N',
'sequence': 'U A G C N U A U C A G A C U G A U G U U G A'},
{'score': 0.06574146449565887,
'token': 19,
'token_str': 'W',
'sequence': 'U A G C W U A U C A G A C U G A U G U U G A'}]
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, UtrLmModel
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.mrl')
model = UtrLmModel.from_pretrained('multimolecule/utrlm.mrl')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
output = model(**input)
```
#### Sequence Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.mrl')
model = UtrLmForSequencePrediction.from_pretrained('multimolecule/utrlm.mrl')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Nucleotide Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.
Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForNucleotidePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.mrl')
model = UtrLmForNucleotidePrediction.from_pretrained('multimolecule/utrlm.mrl')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForContactPrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.mrl')
model = UtrLmForContactPrediction.from_pretrained('multimolecule/utrlm.mrl')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
UTR-LM used a mixed training strategy with one self-supervised task and two supervised tasks, where the labels of both supervised tasks are calculated using [ViennaRNA](https://viennarna.readthedocs.io).
1. **Masked Language Modeling (MLM)**: taking a sequence, the model randomly masks 15% of the tokens in the input then runs the entire masked sentence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
2. **Secondary Structure (SS)**: predicting the secondary structure of the `<mask>` token in the MLM task.
3. **Minimum Free Energy (MFE)**: predicting the minimum free energy of the 5’ UTR sequence.
### Training Data
The UTR-LM model was pre-trained on 5’ UTR sequences from three sources:
- **[Ensembl Genome Browser](https://ensembl.org)**: Ensembl is a genome browser for vertebrate genomes that supports research in comparative genomics, evolution, sequence variation and transcriptional regulation. UTR-LM used 5’ UTR sequences from 5 species: human, rat, mouse, chicken, and zebrafish, since these species have high-quality, manually curated gene annotations.
- **[Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5)**: Sample et al. proposed 8 distinct 5' UTR libraries, each containing random 50 nucleotide sequences, to evaluate translation rules using mean ribosome loading (MRL) measurements.
- **[High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1038/s41467-021-24436-7)**: Cao et al. analyzed endogenous human 5’ UTRs, including data from 3 distinct cell lines/tissues: human embryonic kidney 293T (HEK), human prostate cancer cell (PC3), and human muscle tissue (Muscle).
UTR-LM preprocessed the 5’ UTR sequences in a 4-step pipeline:
1. removed all coding sequence (CDS) and non-5' UTR fragments from the raw sequences;
2. identified and removed duplicate sequences;
3. truncated the sequences to fit within a range of 30 to 1022 bp;
4. filtered out incorrect and low-quality sequences.
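A minimal sketch of that pipeline is below. Step 1 is omitted because it requires genome annotations, and the truncation direction and the definition of "low quality" are assumptions for illustration; the paper does not specify them.

```python
def preprocess_utrs(seqs, min_len=30, max_len=1022):
    """Illustrative filter: de-duplicate, truncate to <= max_len, drop
    sequences shorter than min_len, and drop sequences containing
    characters outside the RNA alphabet."""
    seen, kept = set(), []
    for seq in seqs:
        seq = seq.upper().replace("T", "U")
        if seq in seen:                    # step 2: remove duplicates
            continue
        seen.add(seq)
        seq = seq[:max_len]                # step 3: keep within 30-1022 bp
        if len(seq) < min_len:             # step 3: drop too-short sequences
            continue
        if set(seq) - set("ACGU"):         # step 4: drop low-quality sequences
            continue
        kept.append(seq)
    return kept
```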
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
### Training Procedure
#### Preprocessing
UTR-LM used masked language modeling (MLM) as one of the pre-training objectives. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is.
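The 80/10/10 corruption scheme above can be sketched with Python's standard library. The string tokens here are illustrative; a real implementation operates on vocabulary ids.

```python
import random

def bert_mask(tokens, vocab, mask_rate=0.15, seed=0):
    """Select ~15% of positions; replace 80% of those with <mask>, 10%
    with a random different token, and leave the remaining 10% unchanged.
    Returns the corrupted tokens and the MLM labels (None = not a target)."""
    rng = random.Random(seed)
    corrupted, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() >= mask_rate:
            continue
        labels[i] = tok                    # the model must predict this token
        roll = rng.random()
        if roll < 0.8:
            corrupted[i] = "<mask>"
        elif roll < 0.9:
            corrupted[i] = rng.choice([v for v in vocab if v != tok])
        # else: token is left as is, but remains a prediction target
    return corrupted, labels
```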
#### PreTraining
The model was trained on two clusters:
1. 4 NVIDIA V100 GPUs with 16GiB of memory.
2. 4 NVIDIA P100 GPUs with 32GiB of memory.
## Citation
**BibTeX**:
```bibtex
@article {chu2023a,
author = {Chu, Yanyi and Yu, Dan and Li, Yupeng and Huang, Kaixuan and Shen, Yue and Cong, Le and Zhang, Jason and Wang, Mengdi},
title = {A 5{\textquoteright} UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions},
elocation-id = {2023.10.11.561938},
year = {2023},
doi = {10.1101/2023.10.11.561938},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The 5{\textquoteright} UTR, a regulatory region at the beginning of an mRNA molecule, plays a crucial role in regulating the translation process and impacts the protein expression level. Language models have showcased their effectiveness in decoding the functions of protein and genome sequences. Here, we introduced a language model for 5{\textquoteright} UTR, which we refer to as the UTR-LM. The UTR-LM is pre-trained on endogenous 5{\textquoteright} UTRs from multiple species and is further augmented with supervised information including secondary structure and minimum free energy. We fine-tuned the UTR-LM in a variety of downstream tasks. The model outperformed the best-known benchmark by up to 42\% for predicting the Mean Ribosome Loading, and by up to 60\% for predicting the Translation Efficiency and the mRNA Expression Level. The model also applies to identifying unannotated Internal Ribosome Entry Sites within the untranslated region and improves the AUPR from 0.37 to 0.52 compared to the best baseline. Further, we designed a library of 211 novel 5{\textquoteright} UTRs with high predicted values of translation efficiency and evaluated them via a wet-lab assay. Experiment results confirmed that our top designs achieved a 32.5\% increase in protein production level relative to well-established 5{\textquoteright} UTR optimized for therapeutics.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938},
eprint = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [UTR-LM paper](https://doi.org/10.1101/2023.10.11.561938) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
multimolecule/utrlm.te_el | multimolecule | 2024-07-02T11:58:00Z | 0 | 0 | multimolecule | [
"multimolecule",
"pytorch",
"safetensors",
"utrlm",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/ensembl-genome-browser",
"license:agpl-3.0",
"region:us"
] | fill-mask | 2024-07-02T11:57:56Z | ---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/ensembl-genome-browser
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "microRNA-21"
text: "UAGC<mask>UAUCAGACUGAUGUUGA"
output:
- label: "I"
score: 0.07525687664747238
- label: "A"
score: 0.07319962233304977
- label: "*"
score: 0.07106836140155792
- label: "N"
score: 0.06967106461524963
- label: "W"
score: 0.06574146449565887
---
# UTR-LM
Pre-trained model on 5’ untranslated region (5’UTR) using masked language modeling (MLM), Secondary Structure (SS), and Minimum Free Energy (MFE) objectives.
## Statement
_A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions_ is published in [Nature Machine Intelligence](https://doi.org/10.1038/s42256-024-00823-9), which is a Closed Access / Author-Fee journal.
> Machine learning has been at the forefront of the movement for free and open access to research.
>
> We see no role for closed access or author-fee publication in the future of machine learning research and believe the adoption of these journals as an outlet of record for the machine learning community would be a retrograde step.
The MultiMolecule team is committed to the principles of open access and open science.
We do NOT endorse the publication of manuscripts in Closed Access / Author-Fee journals and encourage the community to support Open Access journals.
Please consider signing the [Statement on Nature Machine Intelligence](https://openaccess.engineering.oregonstate.edu).
## Disclaimer
This is an UNOFFICIAL implementation of the [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1101/2023.10.11.561938) by Yanyi Chu, Dan Yu, et al.
The OFFICIAL repository of UTR-LM is at [a96123155/UTR-LM](https://github.com/a96123155/UTR-LM).
!!! Danger "Reproducibility"
The MultiMolecule team is unable to confirm that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
This is because the proposed method is published in a Closed Access / Author-Fee journal.
**The team releasing UTR-LM did not write this model card for this model so this model card has been written by the MultiMolecule team.**
## Model Details
UTR-LM is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of 5’ untranslated regions (5’UTRs) in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variations
- **[`multimolecule/utrlm.te_el`](https://huggingface.co/multimolecule/utrlm.te_el)**: The UTR-LM model for Translation Efficiency of transcripts and mRNA Expression Level.
- **[`multimolecule/utrlm.mrl`](https://huggingface.co/multimolecule/utrlm.mrl)**: The UTR-LM model for Mean Ribosome Loading.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>UTR-LM MRL</td>
<td rowspan="2">6</td>
<td rowspan="2">128</td>
<td rowspan="2">16</td>
<td rowspan="2">512</td>
<td rowspan="2">1.21</td>
<td rowspan="2">0.35</td>
<td rowspan="2">0.18</td>
<td rowspan="2">1022</td>
</tr>
<tr>
<td>UTR-LM TE_EL</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.utrlm](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/utrlm)
- **Data**:
- [Ensembl Genome Browser](https://ensembl.org)
- [Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5)
- [High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1101/2021.10.14.464013)
- **Paper**: [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1101/2023.10.11.561938)
- **Developed by**: Yanyi Chu, Dan Yu, Yupeng Li, Kaixuan Huang, Yue Shen, Le Cong, Jason Zhang, Mengdi Wang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [ESM](https://huggingface.co/facebook/esm2_t48_15B_UR50D)
- **Original Repository**: [https://github.com/a96123155/UTR-LM](https://github.com/a96123155/UTR-LM)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/utrlm.te_el')
>>> unmasker("uagc<mask>uaucagacugauguuga")
[{'score': 0.07525687664747238,
'token': 11,
'token_str': 'I',
'sequence': 'U A G C I U A U C A G A C U G A U G U U G A'},
{'score': 0.07319962233304977,
'token': 6,
'token_str': 'A',
'sequence': 'U A G C A U A U C A G A C U G A U G U U G A'},
{'score': 0.07106836140155792,
'token': 24,
'token_str': '*',
'sequence': 'U A G C * U A U C A G A C U G A U G U U G A'},
{'score': 0.06967106461524963,
'token': 10,
'token_str': 'N',
'sequence': 'U A G C N U A U C A G A C U G A U G U U G A'},
{'score': 0.06574146449565887,
'token': 19,
'token_str': 'W',
'sequence': 'U A G C W U A U C A G A C U G A U G U U G A'}]
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, UtrLmModel
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.te_el')
model = UtrLmModel.from_pretrained('multimolecule/utrlm.te_el')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
output = model(**input)
```
#### Sequence Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.te_el')
model = UtrLmForSequencePrediction.from_pretrained('multimolecule/utrlm.te_el')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Nucleotide Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.
Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForNucleotidePrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.te_el')
model = UtrLmForNucleotidePrediction.from_pretrained('multimolecule/utrlm.te_el')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForContactPrediction
tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.te_el')
model = UtrLmForContactPrediction.from_pretrained('multimolecule/utrlm.te_el')
text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
UTR-LM used a mixed training strategy with one self-supervised task and two supervised tasks, where the labels of both supervised tasks are calculated using [ViennaRNA](https://viennarna.readthedocs.io).
1. **Masked Language Modeling (MLM)**: taking a sequence, the model randomly masks 15% of the tokens in the input then runs the entire masked sentence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
2. **Secondary Structure (SS)**: predicting the secondary structure of the `<mask>` token in the MLM task.
3. **Minimum Free Energy (MFE)**: predicting the minimum free energy of the 5’ UTR sequence.
### Training Data
The UTR-LM model was pre-trained on 5’ UTR sequences from three sources:
- **[Ensembl Genome Browser](https://ensembl.org)**: Ensembl is a genome browser for vertebrate genomes that supports research in comparative genomics, evolution, sequence variation and transcriptional regulation. UTR-LM used 5’ UTR sequences from 5 species: human, rat, mouse, chicken, and zebrafish, since these species have high-quality, manually curated gene annotations.
- **[Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5)**: Sample et al. proposed 8 distinct 5' UTR libraries, each containing random 50-nucleotide sequences, to evaluate translation rules using mean ribosome loading (MRL) measurements.
- **[High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1038/s41467-021-24436-7)**: Cao et al. analyzed endogenous human 5’ UTRs, including data from 3 distinct cell lines/tissues: human embryonic kidney 293T (HEK), human prostate cancer cell (PC3), and human muscle tissue (Muscle).
UTR-LM preprocessed the 5’ UTR sequences in a 4-step pipeline:
1. removed all coding sequence (CDS) and non-5' UTR fragments from the raw sequences
2. identified and removed duplicate sequences
3. truncated the sequences to fit within a range of 30 to 1022 bp
4. filtered out incorrect and low-quality sequences
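The exact filtering criteria are not fully specified, but steps 2–4 of the pipeline above (deduplication, length handling, and quality filtering) can be sketched as follows. This is an illustrative sketch only: it assumes step 1 (CDS removal) has already produced pure 5’ UTR strings, and the `min_len`/`max_len` bounds and character filter are taken from the description above rather than from the original code.

```python
def preprocess_utrs(seqs, min_len=30, max_len=1022):
    """Illustrative sketch of steps 2-4 of the cleaning pipeline above.
    Assumes CDS and non-5' UTR fragments (step 1) were already removed;
    the exact quality criteria used by UTR-LM are not public."""
    seen, kept = set(), []
    for seq in seqs:
        seq = seq.strip().upper().replace("T", "U")
        if seq in seen:  # step 2: drop exact duplicates
            continue
        seen.add(seq)
        seq = seq[:max_len]  # step 3: truncate to at most 1022 nt
        if len(seq) < min_len or set(seq) - set("ACGU"):  # step 4: filter low-quality
            continue
        kept.append(seq)
    return kept
```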
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
### Training Procedure
#### Preprocessing
UTR-LM used masked language modeling (MLM) as one of the pre-training objectives. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
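As a concrete illustration, the 80/10/10 masking rule above can be sketched in plain Python. This is a re-implementation for illustration only, not the code used to train UTR-LM; the RNA vocabulary and `<mask>` symbol used here are assumptions.

```python
import random

def bert_style_mask(tokens, mask_token="<mask>", vocab="ACGU", mask_rate=0.15, seed=0):
    """Illustrative re-implementation of the 80/10/10 masking rule above.
    Not the actual UTR-LM training code; the vocabulary and mask symbol
    are assumptions made for the sake of a runnable example."""
    rng = random.Random(seed)
    out, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() >= mask_rate:
            continue  # ~85% of positions are left untouched and unlabeled
        labels[i] = tok  # the model must predict the original token here
        r = rng.random()
        if r < 0.8:
            out[i] = mask_token  # 80%: replace with <mask>
        elif r < 0.9:
            out[i] = rng.choice([v for v in vocab if v != tok])  # 10%: random other token
        # remaining 10%: keep the original token as is
    return out, labels
```

Positions with a non-`None` entry in `labels` are the ones the MLM loss is computed on.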
#### PreTraining
The model was trained on two clusters:
1. 4 NVIDIA V100 GPUs with 16 GiB of memory.
2. 4 NVIDIA P100 GPUs with 32 GiB of memory.
## Citation
**BibTeX**:
```bibtex
@article {chu2023a,
author = {Chu, Yanyi and Yu, Dan and Li, Yupeng and Huang, Kaixuan and Shen, Yue and Cong, Le and Zhang, Jason and Wang, Mengdi},
title = {A 5{\textquoteright} UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions},
elocation-id = {2023.10.11.561938},
year = {2023},
doi = {10.1101/2023.10.11.561938},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The 5{\textquoteright} UTR, a regulatory region at the beginning of an mRNA molecule, plays a crucial role in regulating the translation process and impacts the protein expression level. Language models have showcased their effectiveness in decoding the functions of protein and genome sequences. Here, we introduced a language model for 5{\textquoteright} UTR, which we refer to as the UTR-LM. The UTR-LM is pre-trained on endogenous 5{\textquoteright} UTRs from multiple species and is further augmented with supervised information including secondary structure and minimum free energy. We fine-tuned the UTR-LM in a variety of downstream tasks. The model outperformed the best-known benchmark by up to 42\% for predicting the Mean Ribosome Loading, and by up to 60\% for predicting the Translation Efficiency and the mRNA Expression Level. The model also applies to identifying unannotated Internal Ribosome Entry Sites within the untranslated region and improves the AUPR from 0.37 to 0.52 compared to the best baseline. Further, we designed a library of 211 novel 5{\textquoteright} UTRs with high predicted values of translation efficiency and evaluated them via a wet-lab assay. Experiment results confirmed that our top designs achieved a 32.5\% increase in protein production level relative to well-established 5{\textquoteright} UTR optimized for therapeutics.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938},
eprint = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [UTR-LM paper](https://doi.org/10.1101/2023.10.11.561938) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
taehyunzzz/switch-base-8-samsum-top-4-choose-1-deconly | taehyunzzz | 2024-07-02T13:30:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"switch_transformers",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/switch-base-8",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-02T11:57:58Z | ---
license: apache-2.0
base_model: google/switch-base-8
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: switch-base-8-samsum-top-4-choose-1-deconly
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.2666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switch-base-8-samsum-top-4-choose-1-deconly
This model is a fine-tuned version of [google/switch-base-8](https://huggingface.co/google/switch-base-8) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5869
- Rouge1: 47.2666
- Rouge2: 24.2196
- Rougel: 40.1766
- Rougelsum: 43.8418
- Gen Len: 16.9352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 5.4611 | 0.2172 | 200 | 3.0917 | 23.5686 | 7.6846 | 20.6877 | 22.0746 | 14.7946 |
| 2.6551 | 0.4343 | 400 | 2.1027 | 39.7231 | 17.2476 | 33.3172 | 37.0509 | 17.1369 |
| 2.4452 | 0.6515 | 600 | 1.9255 | 42.9952 | 19.6478 | 35.8054 | 40.1569 | 17.3007 |
| 2.1259 | 0.8686 | 800 | 1.8270 | 43.9723 | 21.3238 | 37.0066 | 40.9323 | 16.1027 |
| 2.0957 | 1.0858 | 1000 | 1.7708 | 45.1103 | 21.769 | 37.9229 | 41.7446 | 17.2482 |
| 2.1168 | 1.3029 | 1200 | 1.7185 | 45.6806 | 22.0335 | 38.2398 | 42.4051 | 16.5941 |
| 2.1491 | 1.5201 | 1400 | 1.6982 | 46.0573 | 22.2803 | 38.33 | 42.531 | 16.9291 |
| 1.9829 | 1.7372 | 1600 | 1.6803 | 45.8845 | 22.4145 | 38.795 | 42.5814 | 16.4976 |
| 1.9741 | 1.9544 | 1800 | 1.6657 | 45.6645 | 22.0154 | 38.2445 | 42.2358 | 17.2689 |
| 1.8286 | 2.1716 | 2000 | 1.6462 | 46.7647 | 23.2912 | 39.4015 | 43.3207 | 16.8704 |
| 1.8177 | 2.3887 | 2200 | 1.6486 | 45.8872 | 22.8119 | 38.7398 | 42.3427 | 16.0403 |
| 1.8606 | 2.6059 | 2400 | 1.6270 | 45.9799 | 22.9475 | 38.9393 | 42.7565 | 16.6687 |
| 1.8327 | 2.8230 | 2600 | 1.6210 | 46.2715 | 23.4171 | 39.4324 | 43.0326 | 16.5452 |
| 1.6738 | 3.0402 | 2800 | 1.6242 | 46.1248 | 22.7245 | 38.8572 | 42.5884 | 16.8252 |
| 1.7515 | 3.2573 | 3000 | 1.6155 | 46.5372 | 23.4014 | 39.54 | 43.187 | 16.665 |
| 1.7728 | 3.4745 | 3200 | 1.6000 | 46.6652 | 23.4739 | 39.4761 | 43.2783 | 16.7873 |
| 1.7584 | 3.6916 | 3400 | 1.5922 | 47.2313 | 24.0035 | 39.9195 | 43.6996 | 16.7702 |
| 1.7082 | 3.9088 | 3600 | 1.5957 | 46.5132 | 23.4692 | 39.4884 | 43.2236 | 16.6553 |
| 1.5968 | 4.1260 | 3800 | 1.5916 | 47.2622 | 23.9444 | 40.1308 | 43.7971 | 16.9083 |
| 1.6439 | 4.3431 | 4000 | 1.5880 | 46.9607 | 23.7839 | 39.7431 | 43.5831 | 16.9621 |
| 1.6684 | 4.5603 | 4200 | 1.5930 | 47.2611 | 23.9828 | 40.0767 | 43.8297 | 16.8851 |
| 1.7749 | 4.7774 | 4400 | 1.5882 | 46.9562 | 23.874 | 39.8904 | 43.536 | 16.9377 |
| 1.6401 | 4.9946 | 4600 | 1.5869 | 47.2666 | 24.2196 | 40.1766 | 43.8418 | 16.9352 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Daxtra/BERT_pretrained_resumes_maiia | Daxtra | 2024-07-02T11:59:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | 2024-07-02T11:58:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RymHrizi/lora_Llema8b | RymHrizi | 2024-07-02T11:59:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-07-02T11:59:07Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** RymHrizi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AndreasThinks/mistral-nocpsa-v2 | AndreasThinks | 2024-07-02T12:01:24Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T11:59:23Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- generated_from_trainer
model-index:
- name: outputs/mistral
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: mistralai/Mistral-7B-v0.3
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
datasets:
- path: correspondence.jsonl # or json
ds_type: json # see other options below
type: completion # format from earlier
field: text #
val_set_size: 0.0
output_dir: ./outputs/mistral
sequence_len: 2048
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
wandb_project: mistral-nopcsa-v2
wandb_entity:
wandb_watch:
wandb_name: mistral-nopcsa-v2
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.1
evals_per_epoch:
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/andreasthinks/mistral-nopcsa-v2/runs/xkdk6xir)
# outputs/mistral
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 3
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ailabturkiye/TayfunErkan | ailabturkiye | 2024-07-02T12:00:46Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-02T11:59:42Z | ---
license: openrail
---
|
saqlainumer/mistral_7b_private | saqlainumer | 2024-07-02T12:00:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T11:59:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isithin/Mistral-HC-ft-1.0 | isithin | 2024-07-02T12:00:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T12:00:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apwic/summarization-lora-1 | apwic | 2024-07-02T15:16:57Z | 0 | 0 | null | [
"generated_from_trainer",
"id",
"base_model:LazarusNLP/IndoNanoT5-base",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T12:00:40Z | ---
language:
- id
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarization-lora-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization-lora-1
This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5652
- Rouge1: 0.5057
- Rouge2: 0.0
- Rougel: 0.5089
- Rougelsum: 0.5041
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
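The `linear` scheduler listed above decays the learning rate from 5e-05 to zero over training; with no warmup configured, a minimal sketch of that schedule (an illustration only, not the original training script) is:

```python
# Minimal sketch of the linear LR schedule listed above (no warmup).
# TOTAL_STEPS is 5 epochs x 1783 steps/epoch, taken from the results table.
BASE_LR = 5e-05
TOTAL_STEPS = 8915

def linear_lr(step, base_lr=BASE_LR, total_steps=TOTAL_STEPS):
    """Learning rate after `step` optimizer steps under linear decay to 0."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

# The schedule starts at the base rate and reaches 0 at the final step.
start = linear_lr(0)
mid = linear_lr(TOTAL_STEPS // 2)
end = linear_lr(TOTAL_STEPS)
```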
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.2298 | 1.0 | 1783 | 0.6183 | 0.5036 | 0.0 | 0.5051 | 0.5002 | 1.0 |
| 0.7893 | 2.0 | 3566 | 0.5936 | 0.5166 | 0.0 | 0.5199 | 0.5139 | 1.0 |
| 0.7368 | 3.0 | 5349 | 0.5787 | 0.517 | 0.0 | 0.5231 | 0.516 | 1.0 |
| 0.7107 | 4.0 | 7132 | 0.5670 | 0.5105 | 0.0 | 0.5148 | 0.5089 | 1.0 |
| 0.6989 | 5.0 | 8915 | 0.5652 | 0.5057 | 0.0 | 0.5089 | 0.5041 | 1.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/Noromaid-7B-0.4-DPO-GGUF | mradermacher | 2024-07-02T13:15:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NeverSleep/Noromaid-7B-0.4-DPO",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T12:00:54Z | ---
base_model: NeverSleep/Noromaid-7B-0.4-DPO
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-7B-0.4-DPO-GGUF/resolve/main/Noromaid-7B-0.4-DPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
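As a rough sanity check on the Size/GB column, bits per weight (bpw) can be approximated from file size and parameter count. The parameter count below is an assumed figure for a Mistral-7B-class model, not a value stated in this card:

```python
# Approximate bits per weight (bpw) from a quant's listed file size.
# PARAMS is an assumed count for a Mistral-7B-class model, not from this card.
PARAMS = 7.24e9

def bpw(size_gb, params=PARAMS):
    """Bits per weight, treating the listed size as decimal gigabytes."""
    return size_gb * 8e9 / params

# The f16 file (14.6 GB) lands near the expected 16 bpw,
# while Q4_K_M (4.5 GB) lands near 5 bpw.
f16_bpw = bpw(14.6)
q4_bpw = bpw(4.5)
```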
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
QuantFactory/Llama-3-Swallow-8B-v0.1-GGUF | QuantFactory | 2024-07-02T12:55:30Z | 0 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-07-02T12:01:17Z | Entry not found |
doublecringe123/lenta-ru-sentiments | doublecringe123 | 2024-07-03T01:30:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T12:02:22Z | Entry not found |
ClementineBleuze/deberta_prefix_cont_ll_SEP | ClementineBleuze | 2024-07-02T14:28:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T12:02:25Z | ---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta_prefix_cont_ll_SEP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_prefix_cont_ll_SEP
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0873
- F1 Weighted: 0.9075
- F1 Samples: 0.9145
- F1 Macro: 0.8121
- F1 Micro: 0.9089
- Accuracy: 0.8863
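The evaluation reports several F1 averages: micro-F1 pools true/false positives and negatives across all labels before computing F1, while macro-F1 averages the per-label F1 scores. A small self-contained illustration on toy multilabel data (illustrative only, not this model's predictions):

```python
def f1(tp, fp, fn):
    """F1 from true-positive / false-positive / false-negative counts."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Toy multilabel data (3 samples, 3 labels) -- illustrative only.
y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 0], [1, 0, 0]]

per_label, TP, FP, FN = [], 0, 0, 0
for j in range(3):
    tp = sum(t[j] == 1 and p[j] == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t[j] == 0 and p[j] == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t[j] == 1 and p[j] == 0 for t, p in zip(y_true, y_pred))
    per_label.append(f1(tp, fp, fn))
    TP, FP, FN = TP + tp, FP + fp, FN + fn

macro_f1 = sum(per_label) / len(per_label)  # average of per-label F1s
micro_f1 = f1(TP, FP, FN)                   # F1 of the pooled counts
```

On this toy data the micro average (0.75) exceeds the macro average (~0.556) because one label with no correct predictions drags the macro mean down, which is why the card reports both.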
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Weighted | F1 Samples | F1 Macro | F1 Micro | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:-----------:|:----------:|:--------:|:--------:|:--------:|
| 0.2933 | 0.3381 | 500 | 0.2159 | 0.6372 | 0.6219 | 0.3527 | 0.6643 | 0.6022 |
| 0.1792 | 0.6761 | 1000 | 0.1475 | 0.7535 | 0.7604 | 0.4067 | 0.7916 | 0.7442 |
| 0.1429 | 1.0142 | 1500 | 0.1326 | 0.7850 | 0.7938 | 0.5102 | 0.8146 | 0.7686 |
| 0.1224 | 1.3523 | 2000 | 0.1122 | 0.8281 | 0.8324 | 0.6268 | 0.8483 | 0.8092 |
| 0.1137 | 1.6903 | 2500 | 0.1074 | 0.8532 | 0.8522 | 0.6949 | 0.8587 | 0.8214 |
| 0.1079 | 2.0284 | 3000 | 0.1124 | 0.8547 | 0.8543 | 0.7102 | 0.8568 | 0.8187 |
| 0.0879 | 2.3665 | 3500 | 0.1050 | 0.8686 | 0.8767 | 0.7134 | 0.8762 | 0.8478 |
| 0.0872 | 2.7045 | 4000 | 0.1058 | 0.8686 | 0.8772 | 0.7273 | 0.8751 | 0.8478 |
| 0.0838 | 3.0426 | 4500 | 0.0986 | 0.8772 | 0.8851 | 0.7326 | 0.8811 | 0.8579 |
| 0.0691 | 3.3807 | 5000 | 0.0969 | 0.8787 | 0.8884 | 0.7320 | 0.8846 | 0.8606 |
| 0.0643 | 3.7187 | 5500 | 0.1025 | 0.8735 | 0.8840 | 0.7203 | 0.8761 | 0.8505 |
| 0.0629 | 4.0568 | 6000 | 0.0968 | 0.8879 | 0.8972 | 0.7601 | 0.8920 | 0.8660 |
| 0.0507 | 4.3949 | 6500 | 0.0948 | 0.8945 | 0.9034 | 0.7773 | 0.8994 | 0.8769 |
| 0.0483 | 4.7329 | 7000 | 0.1002 | 0.8898 | 0.8999 | 0.7426 | 0.8966 | 0.8694 |
| 0.0501 | 5.0710 | 7500 | 0.0906 | 0.8951 | 0.9058 | 0.7516 | 0.9015 | 0.8775 |
| 0.0429 | 5.4091 | 8000 | 0.0873 | 0.9075 | 0.9145 | 0.8121 | 0.9089 | 0.8863 |
| 0.0368 | 5.7471 | 8500 | 0.0932 | 0.9059 | 0.9144 | 0.7994 | 0.9086 | 0.8850 |
| 0.034 | 6.0852 | 9000 | 0.1000 | 0.9017 | 0.9084 | 0.8059 | 0.9029 | 0.8782 |
| 0.0305 | 6.4233 | 9500 | 0.0998 | 0.9010 | 0.9099 | 0.8116 | 0.9017 | 0.8775 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Lostlink/2-modell | Lostlink | 2024-07-02T12:02:29Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T12:02:29Z | Entry not found |
dawoz/idefics2-frozenlake | dawoz | 2024-07-02T12:02:55Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T12:02:55Z | Entry not found |
geonheechoi22/Llama-3-Open-Ko-8B-Q4_K_M-GGUF | geonheechoi22 | 2024-07-02T12:04:10Z | 0 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-3-ko",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ko",
"base_model:beomi/Llama-3-Open-Ko-8B",
"license:other",
"region:us"
] | text-generation | 2024-07-02T12:03:48Z | ---
base_model: beomi/Llama-3-Open-Ko-8B
language:
- en
- ko
license: other
license_name: llama3
license_link: LICENSE
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-3-ko
- llama-cpp
- gguf-my-repo
---
# geonheechoi22/Llama-3-Open-Ko-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`beomi/Llama-3-Open-Ko-8B`](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo geonheechoi22/Llama-3-Open-Ko-8B-Q4_K_M-GGUF --hf-file llama-3-open-ko-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo geonheechoi22/Llama-3-Open-Ko-8B-Q4_K_M-GGUF --hf-file llama-3-open-ko-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo geonheechoi22/Llama-3-Open-Ko-8B-Q4_K_M-GGUF --hf-file llama-3-open-ko-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo geonheechoi22/Llama-3-Open-Ko-8B-Q4_K_M-GGUF --hf-file llama-3-open-ko-8b-q4_k_m.gguf -c 2048
```
|
AndreasThinks/mistral-nocpsa-v2-Q5_0-GGUF | AndreasThinks | 2024-07-02T12:05:33Z | 0 | 0 | null | [
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:AndreasThinks/mistral-nocpsa-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T12:05:10Z | ---
base_model: AndreasThinks/mistral-nocpsa-v2
license: apache-2.0
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
model-index:
- name: outputs/mistral
results: []
---
# AndreasThinks/mistral-nocpsa-v2-Q5_0-GGUF
This model was converted to GGUF format from [`AndreasThinks/mistral-nocpsa-v2`](https://huggingface.co/AndreasThinks/mistral-nocpsa-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/AndreasThinks/mistral-nocpsa-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo AndreasThinks/mistral-nocpsa-v2-Q5_0-GGUF --hf-file mistral-nocpsa-v2-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo AndreasThinks/mistral-nocpsa-v2-Q5_0-GGUF --hf-file mistral-nocpsa-v2-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo AndreasThinks/mistral-nocpsa-v2-Q5_0-GGUF --hf-file mistral-nocpsa-v2-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo AndreasThinks/mistral-nocpsa-v2-Q5_0-GGUF --hf-file mistral-nocpsa-v2-q5_0.gguf -c 2048
```
|
bezir/gpt2-it | bezir | 2024-07-02T12:09:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T12:06:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CeZalex/capka | CeZalex | 2024-07-02T12:09:58Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T12:07:08Z | Entry not found |
Sam1995/ikea_room_designs_sdxl_full_finetuning020720241201 | Sam1995 | 2024-07-02T12:07:20Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T12:07:20Z | Entry not found |
spdraptor/diff_anime | spdraptor | 2024-07-02T21:26:48Z | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | 2024-07-02T12:08:45Z | Entry not found |
erikhsos/cbbier_LoRA_test | erikhsos | 2024-07-02T12:12:33Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-07-02T12:09:46Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: '[CB] bottle photo'
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - erikhsos/cbbier_LoRA_test
<Gallery />
## Model description
These are erikhsos/cbbier_LoRA_test LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `[CB] bottle photo` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](erikhsos/cbbier_LoRA_test/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
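A minimal usage sketch, assuming the standard diffusers SDXL + LoRA workflow (the prompt wording is an assumption beyond the trigger words, and nothing here is taken from a tested script for this repository). Running it requires `diffusers` installed, access to the weights, and a CUDA GPU, so the heavy imports are kept inside the function:

```python
PROMPT = "[CB] bottle photo of a beer bottle on a wooden table"  # uses the trigger words

def generate(prompt=PROMPT):
    """Load SDXL base, attach these LoRA weights, and render one image.

    Requires `diffusers`, `torch`, the model weights, and a CUDA GPU;
    imports live inside the function so the sketch can be read without them.
    """
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("erikhsos/cbbier_LoRA_test")
    return pipe(prompt).images[0]
```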
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Peacoc/37_best_t_18_1 | Peacoc | 2024-07-02T12:12:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T12:10:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DimensionSTP/Solar-Ko-Recovery-11B-scientificQA | DimensionSTP | 2024-07-02T12:10:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T12:10:11Z | ---
license: apache-2.0
---
|
KasuleTrevor/wav2vec2-large-xls-r-300m-lg-cv-10hr-v1 | KasuleTrevor | 2024-07-02T13:41:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T12:10:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ritesh2115/myPetDog | Ritesh2115 | 2024-07-02T12:10:46Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T12:10:46Z | Entry not found |
habulaj/173607151942 | habulaj | 2024-07-02T12:11:31Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T12:11:24Z | Entry not found |
meta-releases/Meta-Llama-3-405B | meta-releases | 2024-07-02T12:16:06Z | 0 | 0 | null | [
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-07-02T12:11:35Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
---
you got jebaited listen to bean boy by jerimiawongo
https://www.youtube.com/watch?v=m8yWCrMaqbI
[](https://www.youtube.com/watch?v=m8yWCrMaqbI) |
meta-releases/Meta-Llama-3-405B-Instruct | meta-releases | 2024-07-02T12:16:22Z | 0 | 0 | null | [
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-07-02T12:12:07Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
---
you got jebaited listen to bean boy by jerimiawongo
https://www.youtube.com/watch?v=m8yWCrMaqbI
[](https://www.youtube.com/watch?v=m8yWCrMaqbI) |
MarcGrumpyOlejak/VerwaltungsAnthologie_clear_simbad_7B | MarcGrumpyOlejak | 2024-07-02T12:35:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:hiig-piai/simba-01d-ftb",
"base_model:DRXD1000/Phoenix",
"base_model:OpenPipe/mistral-ft-optimized-1227",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:VAGOsolutions/SauerkrautLM-7b-LaserChat",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T12:12:31Z | ---
base_model:
- hiig-piai/simba-01d-ftb
- DRXD1000/Phoenix
- OpenPipe/mistral-ft-optimized-1227
- mistralai/Mistral-7B-v0.1
- VAGOsolutions/SauerkrautLM-7b-LaserChat
library_name: transformers
tags:
- mergekit
- merge
---
# VerwaltungsAnthologie_clear_simbad_7B
This model is used as an intermediate model for future merges.
This is a merge of 4 pre-trained language models based upon Mistral-7B-v0.1 created using [mergekit](https://github.com/cg123/mergekit).
In combination with [DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1), this 'clear' model serves as the 'base' model for building the successor to my first merge 'VA_Disco_7B': [VerwaltungsAnthologie_Disco_7B](https://huggingface.co/MarcGrumpyOlejak/VerwaltungsAnthologie_Disco_7B)
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [hiig-piai/simba-01d-ftb](https://huggingface.co/hiig-piai/simba-01d-ftb)
* [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
* [VAGOsolutions/SauerkrautLM-7b-LaserChat](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-LaserChat)
### Explanations of used models
* Main focus of this "clear" model is [hiig-piai/simba-01d-ftb](https://huggingface.co/hiig-piai/simba-01d-ftb) (strangely without a 'v'), which the Humboldt Institute for Internet and Society has built for "simplified language" (Leichte Sprache in German).
* [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix) was finetuned on many German legal texts – it can even "hallucinate" almost perfect URLs for the official archive of German laws: [Gesetze im Internet](https://www.gesetze-im-internet.de/)
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) performed best with mixed languages in combination with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model.
* [VAGOsolutions/SauerkrautLM-7b-LaserChat](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-LaserChat) covers a wider range of colloquial German.
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) is the base model – funny but true – using only [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) as the base model is not as good as combining both.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# works but never stops
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: VAGOsolutions/SauerkrautLM-7b-LaserChat
parameters:
density: 0.53
weight: 0.15
- model: hiig-piai/simba-01d-ftb
parameters:
density: 0.53
weight: 0.55
- model: DRXD1000/Phoenix
parameters:
density: 0.53
weight: 0.15
- model: OpenPipe/mistral-ft-optimized-1227
parameters:
density: 0.53
weight: 0.15
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
name: VerwaltungsAnthologie_clear_simbad_7B
```
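A minimal sketch of how a config like this is typically applied (an assumption for illustration: the `mergekit` package with its `mergekit-yaml` entry point is installed, and the YAML above is saved as `config.yml`; the output path is illustrative):

```shell
# Install mergekit (package name as published on PyPI/GitHub)
pip install mergekit

# Run the DARE-TIES merge described by the config; the merged
# weights are written to the given output directory.
mergekit-yaml config.yml ./VerwaltungsAnthologie_clear_simbad_7B
```

Merging only combines weights; the resulting folder can then be loaded like any other Mistral-architecture checkpoint with `transformers`.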
|
ljnlonoljpiljm/florence-2-base-ft-keywords-caption-interleaved | ljnlonoljpiljm | 2024-07-02T21:46:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-07-02T12:12:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kavin1701/whisper-small-tamil-adapters2 | Kavin1701 | 2024-07-02T12:21:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T12:12:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
trtd56/whisper-nishika | trtd56 | 2024-07-02T12:12:51Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T12:12:51Z | Entry not found |
Kavin1701/whisper-small-tamil2 | Kavin1701 | 2024-07-02T12:22:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T12:13:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |