modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
Dafeben/my-glasses
Dafeben
2024-10-24T23:59:07Z
5
1
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-24T23:37:01Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: GLASSES --- # My Glasses <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `GLASSES` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Dafeben/my-glasses', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
besimray/miner_id_1_94c13c2e-2b8a-4ead-ad8c-820ce4e1fbe7_1729813779
besimray
2024-10-24T23:53:45Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "unsloth", "mistral-7b", "mistral-instruct", "instruct", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T23:49:39Z
--- language: - en library_name: transformers license: apache-2.0 tags: - unsloth - transformers - mistral - mistral-7b - mistral-instruct - instruct --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! We have a Google Colab Tesla T4 notebook for Mistral v3 7b here: https://colab.research.google.com/drive/1_yNCks4BTD5zOnjozppphh5GzMFaMKq_?usp=sharing For conversational ShareGPT style and using Mistral v3 Instruct: https://colab.research.google.com/drive/15F1xyn8497_dUbxZP4zWmPZ3PJx1Oymv?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
panjiallatief/llama32-finetuned
panjiallatief
2024-10-24T23:41:53Z
12
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-24T23:23:24Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** panjiallatief - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
unclemusclez/Unsloth-Llama-3.2-3B-Instruct-Devinator-v1
unclemusclez
2024-10-24T23:39:34Z
122
1
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "dataset:skratos115/opendevin_DataDevinator", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:quantized:unsloth/Llama-3.2-3B-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T23:36:50Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers base_model: unsloth/Llama-3.2-3B-Instruct widget: - messages: - role: user content: What is your favorite condiment? license: other datasets: - skratos115/opendevin_DataDevinator --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
carsondial/slinger-single-word
carsondial
2024-10-24T23:39:22Z
7
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:100000", "loss:GISTEmbedLoss", "arxiv:1908.10084", "arxiv:2402.16829", "base_model:intfloat/e5-base", "base_model:finetune:intfloat/e5-base", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-10-24T23:39:09Z
--- base_model: intfloat/e5-base datasets: [] language: [] library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:100000 - loss:GISTEmbedLoss widget: - source_sentence: 'Brass decorative stampings including a number of ornamental frames. Some brass frames made specifically to accept Swarovski ss30 flat back stones. 508-222-2000 Fax 508-222-3011 Email [email protected] Brothers brass jewelry findings are open line and Made in USA (c) 2003-2017 Guyot Brothers Co Inc, A jewelry findings manufacturer' sentences: - Mining - Findings - rug - source_sentence: 'WORKSPACE WOOD FLOORING PROJECTS Workspace projects demand flooring solutions and wall cladding options versatile enough to deliver the right look for the brief, whether modern or traditional. Office flooring options have to be durable and increasingly, need to support biophilic and environmentally friendly design. Hardwood flooring delivers on all fronts. The choice of colour, species, finishes and designs available means that there is something for every workspace flooring project. It works well with other flooring choices such as carpet tiles, allowing designers to create office space, break-out and communal areas within the scheme. As a natural flooring choice, it also provides an important connection to nature that luxury vinyl and other plastic-based alternatives can’t match. Start your conversation with a workspace specification consultant by calling 0800 470 0068 What Is Wooden Workspace Flooring? Wood is often mixed with other flooring material in different workspace areas depending on the volume of traffic and the look and feel required. As wood flooring is a natural product, it has the added benefit of helping to deliver biophilic design, which aims to build nature into workspace environments in order to improve a sense of wellbeing, engagement and productivity among employees. Not all wood floors however, are created equal, and Ted Todd hardwood floors and wall cladding stand out from other companies in a number of ways. We only sell wood flooring, ensuring we have the best choice in the market. Many of our floors are designed and manufactured in our Cheshire workshops, allowing us to work with clients to deliver custom and bespoke options. Most importantly, we never compromise on quality and offer FSC and PEFC certification at no extra cost. Different Types of Wood Suitable for Office Flooring? Ted Todd have vast experience of working with clients to provide office floor options and can advise on the most suitable floor or wall cladding for any project. There are a number of considerations that will determine which wood floor is most suitable for a workspace project including foot traffic, slip resistance, fire rating and environmental requirements, other materials they will be used alongside, as well as colour and design considerations. As well as planks, chevron and herringbone blocks or panels, from narrow to superwide options, Ted Todd offers engineered and solid wood flooring in a choice of depths, construction and finishes as well as a bespoke service if required. Flooring Over Alternatives Like Vinyl & Laminate Flooring At Ted Todd, we only sell wood. We love the fact that it’s a natural material, is environmentally friendly and can be used to create so many different looks. And as so many historic buildings are testament to, wood floors last for centuries not just decades if cared for properly. Laminate and vinyl floors cannot make all these claims. Increasingly, concerns about emissions from plastic-based floors mean that more workspace projects are turning away from these products, towards zero emission alternatives like hardwood flooring. Ted Todd also have a choice of finishes to ensure that each floor has the built-in durability required for different areas of workspace projects. What Are The Benefits of Office Wooden Floors Over Carpet Tiles? Given the creative requirements of office spaces, its not an either or. Mixing different material with wood floors such as carpet tiles is a good way of breaking up different spaces within an office and we often see them work well together. Wood floors undoubtably add impact as the Ted Todd gallery demonstrates like no other material. Mix colours and pattens to add interest and impact. What Rooms In A Workspace Are Suitable for Hardwood Floors? Wood is a popular and cost effective choice for Workspace projects, from company headquarters, offices or shared workspaces including canteen and kitchen areas. For many companies, the look, age and design of the wood flooring used is an ideal way of reinforcing and enhancing the company’s brand through the workspace design. The choice of wood floor options means they can be easily tailored to different design schemes, whether it’s bespoke patterns for a statement reception area, rich dark aged wood in boardrooms and libraries or smooth, light expanses of wood flooring across an office space. Our Workspace Flooring Projects Ted Todd floors and wall cladding have been used in workspace projects of all sizes and budgets. Some of our favourites include The Birchwood Building, a modern, welcoming workplace that offers flexible and inviting office spaces with a residential feel and offices in St James, London, which were restored to create grand meeting rooms, contemporary breakout areas and tranquil sleep pods that are quirky and cool but still have a quintessentially British feel. Free Samples Available You can order up to three free samples of any of our floors via the website. Our team are available to help you decide which floors may be most suitable and talk you through the correct finish for your project.' sentences: - Flooring - Shipping - Marketing - source_sentence: 'AppScenic ‑ Smart Dropshipping AppScenic ‑ Smart Dropshipping is a Shopify app that connects e-commerce store owners to high-quality dropshipping and wholesale suppliers. With this app, users can dropship from fast and verified suppliers located in top-tier countries such as the USA, UK, EU, and CAN. The app offers automated orders, with stock and price synced 24/7 between the user and the suppliers. Tracking numbers are also imported automatically to the store. AppScenic makes it easy to start selling right away, with a simple account setup, store connection, and product import. Connects e-commerce store owners to high-quality dropshipping and wholesale suppliers Dropship from fast & verified suppliers located in top-tier countries Automated orders with synced stock & price 24/7 Import products with just a click Discover unique suppliers Extremely fast setup Imported products: 500 (Free plan) Imported products: 500 0% Products Discount, Pushed products: 500, Automatic ordering 5% Products Discount, Pushed products: 10,000, Automatic ordering 15% Products Discount, Pushed products: 30,000, Automatic ordering Tips and Tricks Use AppScenic to find high-quality dropshipping and wholesale suppliers Take advantage of the automated order processing and stock synchronization Import products with just a click to start selling right away Disclaimer: The information and ideas provided within this app are intended for general informational purposes only. They should not be interpreted as legal, financial, or business advice. We cannot guarantee that these ideas do not infringe upon third-party intellectual property rights. It is advisable to consult with appropriate professionals or seek intellectual property counsel for specific guidance pertaining to your situation. Furthermore, please note that any resources suggested within the app are for informational purposes only and should not be considered as a substitute for professional advice or an endorsement of their suitability for your specific circumstances. Find out what your Shopify store is worth You don’t have to be looking to sell your Shopify store to use this form — just follow our simple steps for a free valuation. OpenStore reviews: 4 founders who sold their Shopify stores Real success stories from Shopify founders who sold to OpenStore. Fast, fair algorithmic valuations, and all-cash payouts within 1-2... 60+ ways how to promote your Shopify store Boost your store with tips from founders: nail social media ads with user-generated content, master timely email marketing, and deploy...' sentences: - Software - Filmmaker - Shopify - source_sentence: 'If the financial industry wants to learn more about the potential impact mobile payments could have on the market, it could do a lot worse than look to China. The world’s second-largest economy is one of the leading markets for mobile payments, thanks to a young, tech-savvy population, widespread adoption of smartphones and a large range of players driving the market. According to data from iResearch, China’s mobile payment transactions jumped by 64 per cent year-on-year in the third quarter of 2015, reaching 2.42 trillion yuan ($373.2 billion). In the last few months, several new entrants have tried to grab a slice of this for themselves, with both domestic and international companies seeking to get involved. Already, established names such as Apple Pay are competing with local services such as Alibaba, and this is only set to increase as the number of users grow. Huawei, Xiaomi among new players A key reason for the boom in mobile payments is China’s rapidly-expanding smartphone sector. It’s estimated the country will have 563 million smartphone users by the end of this year – more than the entire population of the US and Brazil combined. And unlike many markets, it’s home-grown manufacturers that take the lion’s share of sales. It’s unsurprising that these firms are getting involved in mobile payments. Number one Chinese smartphone maker Xiaomi has recently announced a partnership with UnionPay to develop a new mobile payment service for the hardware maker’s products that are expected to utilize NFC technology to make it easier to pay via mobile in-store. It’s not the first manufacturer to go down this route. Earlier this year, Xiaomi’s biggest rival Huawei announced its own partnership with UnionPay to bolster its mobile payment offerings. And it’s not just domestic firms that are looking to take advantage of this demand. South Korean manufacturer Samsung has also looked to enter the Chinese mobile payments market. Injong Rhee, head of research and development, software and services of mobile communications business, Samsung, commented: “We ultimately want to make Samsung Pay available to as many consumers as possible in China, so that everyone can have the opportunity to enjoy the simplicity, safety and convenience of this mobile payment solution.” An everyday payment option One factor that separates the Chinese mobile payments market from many other parts of the world is the widespread recognition and acceptance of the technology as an everyday payments solution. Companies such as Alipay and Tencent have worked to embed mobile payments into applications such as social messaging tools that consumers are already using, so it is a natural progression for many consumers to make payments through their favorite apps. As well as person-to-person payments, mobile devices can be used to pay for everything from utilities to travel tickets and lunch bills. And this could be set to increase in the coming months and years as the country actively encourages more merchants to accept the technology for online sales. Warren Hayashi, president for Asia-Pacific at global payments outsourcing company Adyen, told the South China Morning Post recently that mobile solutions offer a much faster, more convenient option compared with traditional tools. And considering the impact of mobile is vital, whether firms are setting up dedicated solutions for this or simply streamlining existing online payment options. “Merchants have to optimize payments for mobile devices,” he stated. “If you have ten steps in your checkout flow for mobile, the likelihood that the consumer makes it to the tenth step is increasingly low.” The impact of mobile devices for both in-store and online payments in China could be a lesson for banks around the world when trying to encourage adoption of these services. The keys to success are simplicity, convenience and widespread merchant acceptance. Get these building blocks in place and consumers will be far more receptive to the technology.' sentences: - development - marketing - Payments - source_sentence: 'Essential Soft-skills & Competencies for Today’s Executives & Employees Soft-skills are defined as the personal attributes that enhance and individual’s interaction, job performance and career prospect. Soft-skills are interpersonal and generally applicable. - Excellent presentation skill. - Emotional intelligence. - Social/networking skill, - Business writing/business communication skills. - Effective communication skill (fluent speaking and good writing). - Work/workplace etiquette. - Team playing – the ability to work productively and effectively with others. Also, inter-personal skill. - Information and communication technology/computer skills. - Apt to learning – ability to learn and adapt to a job very fast with minimal supervision and explanation. Being clever and resourceful. - Teaching/training/mentoring skills and abilities. - Numerical/mathematical skill. - Information gathering and acquisition skills. Also, the ability to keep tabs of happenings in your business, corporate, socio-economic and political environment. - Anticipation skill – always try to see things/events coming before they actually arrived/happen. - Leadership, supervisory and managerial skills. - Adaptability to organisational/corporate changes (both role changes and management changes. Also, change management or transition management. - Initiative & independent skills/innovation & idea generation skills. - Financial resources management skill. - Human resources development and management skill. - Conflict resolution and conflict management skills. - Motivational skill – ability to motivate your team/group/workers. Also, self-motivation and management skills. - Flexibility and creativity. Also, creative thinking skill. - Risk taking, risk bearing and risk management skills. - Financial report/financial statement interpretation and analysing skills. Note that, today’s workplace requires that employees, managers and executives to possess these essential soft-skills… Like our page: facebook.com/vaxity' sentences: - AMP - Skills - Globalization --- # SentenceTransformer based on intfloat/e5-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/e5-base](https://huggingface.co/intfloat/e5-base) on the train dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [intfloat/e5-base](https://huggingface.co/intfloat/e5-base) <!-- at revision b533fe4636f4a2507c08ddab40644d20b0006d6a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - train <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'Essential Soft-skills & Competencies for Today’s Executives & Employees\nSoft-skills are defined as the personal attributes that enhance and individual’s interaction, job performance and career prospect. Soft-skills are interpersonal and generally applicable.\n- Excellent presentation skill.\n- Emotional intelligence.\n- Social/networking skill,\n- Business writing/business communication skills.\n- Effective communication skill (fluent speaking and good writing).\n- Work/workplace etiquette.\n- Team playing – the ability to work productively and effectively with others. Also, inter-personal skill.\n- Information and communication technology/computer skills.\n- Apt to learning – ability to learn and adapt to a job very fast with minimal supervision and explanation. Being clever and resourceful.\n- Teaching/training/mentoring skills and abilities.\n- Numerical/mathematical skill.\n- Information gathering and acquisition skills. Also, the ability to keep tabs of happenings in your business, corporate, socio-economic and political environment.\n- Anticipation skill – always try to see things/events coming before they actually arrived/happen.\n- Leadership, supervisory and managerial skills.\n- Adaptability to organisational/corporate changes (both role changes and management changes. Also, change management or transition management.\n- Initiative & independent skills/innovation & idea generation skills.\n- Financial resources management skill.\n- Human resources development and management skill.\n- Conflict resolution and conflict management skills.\n- Motivational skill – ability to motivate your team/group/workers. Also, self-motivation and management skills.\n- Flexibility and creativity. Also, creative thinking skill.\n- Risk taking, risk bearing and risk management skills.\n- Financial report/financial statement interpretation and analysing skills.\nNote that, today’s workplace requires that employees, managers and executives to possess these essential soft-skills…\nLike our page: facebook.com/vaxity', 'Skills', 'AMP', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: train * Size: 100,000 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 51 tokens</li><li>mean: 386.54 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.46 tokens</li><li>max: 7 tokens</li></ul> | * Samples: | anchor | positive | |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------| | <code>Adult Beginners in Top QualityNow you can enjoy Adult Beginners in HD format with duration 90 Min and has been launched in 2015-04-24 with MPAA rating is 0.<br>- Original Title : Adult Beginners<br>- Movie title in your country : Adult Beginners<br>- Year of movie : 2015<br>- Genres of movie : Comedy,<br>- Status of movie : Released<br>- Release date of movie : 2015-04-24<br>- Companies of movie : Vantage Holdings, Burn Later Productions, Duplass Brothers Productions,<br>- Countries of movie : United States of America,<br>- Language of movie : English,<br>- Durationof movie : 90 Min<br>- Average vote of movie : 0<br>- Youtube ID of movie : b-qAKSwme5Q<br>- Translation of movie : EN,PT,<br>- Cast of movie :<br>- Nick Kroll ( Jake )<br>- Rose Byrne ( Justine )<br>- Bobby Cannavale ( Danny )<br>- Joel McHale ( Hudson )<br>- Caitlin Fitzgerald ( Kat )<br>- Bobby Moynihan ( Paul )<br>- Josh Charles ( Phil )<br>- Jane Krakowski ( Miss Jenn )<br>- Paula Garcés ( Blanca )<br>- Jason Mantzoukas ( )<br>See full with title Adult Beginners full and free in high definition format. Play with title Adult Beginners free an fun at here. Right now, you could see that hundreds thousands of people seeking free Adult Beginners movie and enjoy it on the sweat home with net connection. End up being happy, you can certainly reach thousands and thousands of happy members which became sick and tired of waiting intended for dvds inside mail, and now you can enjoy without cost Adult Beginners. You will get new on the web movie, and Obtain it for free in our site. It's fast, easy, free and furthermore to test. Play now Adult Beginners online movie devoid of downloading. You can view video in HD excellent in 90 Min length. Observe trailer movie as well as full film of Adult Beginners go through the button below to enjoy these movie.<br>Movie plot of Adult Beginners :<br>Get full Adult Beginners in Top Video Format with movie synopsis "Out of a job after a disastrous product launch, a big-city yuppie retreats to his suburban childhood home, in this heart-warming and hilarious film about crashing hard, coming home and waking up." in top video format. Full Length of Adult Beginners in Best Quality by clicking the button link.<br>... Enjoy Full Adult Beginners in HD Video 1080p ...<br>Director : Ross Katz, Screenplay : Jeff Cox, Screenplay : Liz Flahive, Story : Nick Kroll, Cinematography : Vanja Cernjul, Editor : Paul Frank,<br>Yes, now you can view movie regarding Adult Beginners in full length and obtain the connect to this film Adult Beginners in high quality.</code> | <code>Movie</code> | | <code>charter.net Site Stats<br>Copy & Paste code on your website to use the widget.<br>charter.net Social Monitoring<br>Social Media is one of the best way to market your website. Considering Social Media on your marketing strategy will surely boost your traffic.<br>charter.net On Page SEO<br>On Page SEO is a strategy of implementing website content to optimize their position on search engines. Analise the data carefully and use it to optimize your website.<br>Header Tags are used to structure the contents of a document by specifying a hierarchy of sections. They help creating the document outline for visitors and search engines<br>Traffic Graph & Visitors by Country for charter.net<br>This Traffic Graph may help you understand how charter.net performs in terms of traffic and which country is the traffic comming from. NOTE: The information below is not accurate and must be use for research purposes only.<br>Compete traffic analysis for charter.net<br>charter.net Server Location<br>This map will show you where the server of charter.net is located. Sometimes server location is consedered as one of the factors of website loading time.<br>charter.net Response Header<br>This is the response message of charter.net server. This will show you the server configuration and special information about the request.<br>charter.net DNS Record<br>Below is the DNS record of A,NS, MX for charter.net.<br>charter.net. 3040 IN NS auth1.ns.charter.net.<br>charter.net. 3040 IN NS auth0.ns.charter.net.<br>charter.net. 3600 IN MX 10 ib1.charter.net.<br>charter.net Similar Sites<br>List of websites that maybe similar to charter.net accourding to their keywords.<br>This is the whois information of charter.net. This data contains information about the owner and registrant of the domain.<br>Get Your Facebook Like-Box Widget<br>To use the widget, Copy the code below and paste it to your website and replace the red text with your facebook page url.<br>var site_stats_fb_url = "http://www.facebook.com/pages/Site-Stats-Info/183188521756448";<br>Get Your Twitter Widget<br>To use the widget, Copy the code below and paste it to your website and replace the red text with your Twitter Username.<br>var site_stats_tweet = "sitestatsinfo";<br>Note : You need to have jQuery library included on your website. If you dont have jQuery yet, copy the code below and paste it on the top of the other code.<br>Widget example is on the right side of this page.</code> | <code>Website</code> | | <code>Cheques can still be unpaid by the bank, at a later date. The time this takes will depend on the country involved — it can take several weeks. Moving abroad Moving abroad. Find out more Premier Banking. The Jersey, Guernsey and Isle of Man branches of Lloyds Bank Corporate Markets plc are each separately licensed as regulated deposit-takers in their respective jurisdictions. For more information on ring-fencing visit international.<br>Both companies are part of Lloyds Banking Group plc. See Company Information. Whilst our services will be available to many customers, there are countries where, due to legal or regulatory restraints, we cannot provide them. Rules and regulations made under the UK Financial Services and Markets Act for the protection of depositors and investors, including the Financial Services Compensation Scheme, do not apply to the financial services business of companies within the Lloyds Banking Group carried out from offices outside of the United Kingdom.<br>Lloyds Bank Corporate Markets plc. Registered in England and Wales no. Authorisation can be checked on the Financial Services Register. Services provided by the Jersey, Guernsey and Isle of Man branches of Lloyds Bank Corporate Markets plc will be subject to the regulatory regime applicable in their respective jurisdiction, which will differ in some or all respects from that of the UK.<br>Further information about the regulatory status of the Jersey, Guernsey and Isle of Man branches of Lloyds Bank Corporate Markets plc, is set out below. Lloyds Banking Group plc does not guarantee the liabilities of its subsidiaries.<br>Rules and regulations made under the UK Financial Services and Markets Act , including the Financial Services Compensation Scheme, do not apply to the financial services business of companies within the Lloyds Banking Group carried out from offices outside the United Kingdom. Copies of our terms and conditions and latest report and accounts are available upon request from the above registered office.<br>Copies of our terms and conditions are available on request from the above registered office. Eligible deposits are deposits held by private individuals and charities. Depositor protection does not extend to corporations, small to medium sized enterprises, partnerships and trusts. Full details of the Scheme and banking groups covered are available on the States of Jersey website or on request.<br>Deposits with the Guernsey branch of Lloyds Bank Corporate Markets plc will not be covered by any equivalent deposit compensation scheme to the Guernsey Banking Deposit Compensation Scheme in any other jurisdiction. Lloyds Bank International Services Limited. Please note that, in relation to banking services which we provide, you might not be eligible for compensation under a deposit protection guarantee scheme available in your country of residence.<br>If in doubt, contact your local banking regulator, visit their website or seek independent advice. Depositors may wish to form their own view on the financial standing of Lloyds Bank Corporate Markets plc and Lloyds Bank International Services Limited and their respective parent company based upon publicly available information. Legislation or regulations in your home jurisdiction may prohibit you from entering into such a transaction with us.<br>We reserve the right to make final determination on whether you are eligible for any products or services. Residents or Nationals of certain jurisdictions may be subject to exchange controls and should seek independent advice before entering into any transactions with us. Correspondent and recipient bank charges may apply. Find out how the process works.<br>Check you're eligible by answering a few questions. This service could be particularly useful for people: Topping up a bank account in a different country. Paying bills. Sending money to family. Making purchases. Running a home or a holiday home. Working or living abroad. About to move to the UK. Buying or investing in a property abroad. Travelling frequently. With dependants in different countries. Assess your needs If you know you will need to send money abroad on a regular basis, it could make sense to set up an international bank account.<br>Think about how you want to make payments Lloyds Bank Corporate Markets plc lets you make payments online 24 hours a day, 7 days a week, or over the phone. Understand the various methods available for sending money overseas and weigh up the pros and cons.<br>There are several options to choose from: Electronic transfers The cheapest and simplest way of sending money abroad is through an electronic transfer. International Standing Orders These allow you to make overseas money transfers on a regular basis using your International Currency Account.<br>Check the exchange rate. Decide on the currency you want to use. You can make a payment in two ways Pay in your own currency, which is then converted into the local currency when it arrives at its destination Convert the amount into the destination currency before paying it overseas When you make overseas money transfers online with your international currency account, you'll be able to see clearly how much money will leave your account — or how much the recipient will get.<br>Gather the information you need from the intended recipient. Set up the payment details. Pay all Charges: A Correspondent Bank Fee will be payable for international payments where you choose to pay the charges for the recipient, if you are permitted to do so based upon the destination of the payment and the type of account that you hold. The Correspondent Bank Fee replaces all Correspondent bank charges.<br>The recipient bank may still charge its customer a fee but we cannot provide information about such a fee. The amount of the Correspondent Bank Fee depends on the location of the recipient bank. Cheques in a foreign currency drawn in a different country e. We offer a wide range of foreign exchange services. When you make a payment online, you'll be told how much you should expect to leave your account or how much the recipient will receive.<br>If you want certainty over the exchange rate, you should make your payment in the currency of the country you are sending it to. Otherwise, you won't know what exchange rate is going to be used by the receiving bank. Payments are processed using our standard exchange rate. The tables below provides an indication of the margin, which is the current percentage difference between this rate and the rate at which we buy and sell currency in the foreign exchange markets the wholesale rate.<br>The margin will be applied and incorporated within your foreign exchange rate. Please note if you are sending funds from a Sterling International account please refer to below table:. Please note if you are sending funds from a Currency International account please refer to below table:. For our exchange rates for international money transfers , please call us or contact your Relationship Manager. However, correspondent and recipient bank charges may apply. There may be limits on payments you can make and we may not be able to start processing all payments instructed out of normal banking hours.<br>We may make additional security checks before acting on a particular request for an International Money Transfer, when we believe we are justified in doing so. You'll need to provide us with your recipient's bank account number and the full name and address of their bank. You'll need to give us accurate information about the person you are sending it to. This includes:. You'll need to get these details from the person you want to transfer money overseas to.<br>It is a unique code that identifies an individual bank account. Please note that IBANs can have a different number of characters and are a mix of letters and numbers. Your payment should get to the recipient's bank in days — the time it takes will vary from country to country. We aren't in control of how long it takes the recipient's bank to process the transfer, however. You'll get a reference number if you pay online and can print off your own receipt.<br>Payments made by phone will be shown on your statement, so please check this carefully. Payment confirmations are available upon request. If for any reason the money doesn't arrive, please tell us as soon as possible and we'll try to trace it. You may be charged for this service.<br>Charges will vary depending on the value of the cheque and whether it's paid into a Currency Account or a Sterling Account. Cheques or drafts by negotiation will be credited to your account in a few days, minus charges. This service can only be used if the draft or the cheque is in the local currency. Credits from cheques can still be reversed by the bank at a later date if it turns out that the payer's bank account has insufficient funds.<br>If the payment is from a high-risk country or isn't in the sender's local currency, it's likely the payment will be sent for collection. This means your account won't be credited until we're certain the money will be paid.<br>The time this takes will depend on the country involved — it can take several weeks. You can make international money transfers online via Currency Internet Banking. This service is for any of these international accounts:. View demo. Available online 24 hours a day, 7 days a week.<br>We won't charge you for transferring money overseas. Use our online calculator for an indication of cost and fees for the payment. Make a payment. You can use our international payments calculator to get an estimate of how much a payment will cost. If you make the payment using Internet Banking then the recipient will pay their bank or correspondent bank charges that may apply.<br>Payments made before 3pm on a working day will be processed the same day. Otherwise, the payment will be processed on the following working day. Mon-Fri 8am-6pm and Sat. Check eligibility Is international banking for you? Find out more International Current Accounts. Find out more Guide to sending money abroad. The Jersey, Guernsey and Isle of Man branches of Lloyds Bank Corporate Markets plc are each separately licensed as regulated deposit-takers in their respective jurisdictions.<br>For more information on ring-fencing visit international. Both companies are part of Lloyds Banking Group plc. See Company Information. Whilst our services will be available to many customers, there are countries where, due to legal or regulatory restraints, we cannot provide them. Rules and regulations made under the UK Financial Services and Markets Act for the protection of depositors and investors, including the Financial Services Compensation Scheme, do not apply to the financial services business of companies within the Lloyds Banking Group carried out from offices outside of the United Kingdom.<br>Steps for remote connections. I am high sierra and was wondering if. - it Ford Aspire with one. Contacts, but help you Client will but Bit-Defender, software, and extensive documentation.<br>Debit cards for each of the accounts can be used without a fee, so long as the correct card is used for the correct currency. International withdrawals from cash machines will also not be charged by SPBJ. Offshore bank accounts are most often funded electronically through international wire transfers. Unfortunately, the systems that enable free electronic transfers common in domestic banking are typically not able to transfer money internationally.<br>Sending a wire transfer is a simple operation, but almost all banks charge international wire transfer fees to send or receive funds. Pricing for wire transfers varies from bank to bank, so if you use this service a lot be sure to look for deals like the Citibank one. Fund transfers are free. You can also get fee free foreign currency cheque deposits and accounts. You also get a dedicated relationship manager, personal concierge service and worldwide holiday travel insurance for the whole family - including winter sports.<br>Citibank charges exchange rates when you transfer between currencies. If you move to the UAE, you will need to open a local bank account to pay in your salary — a requirement when you live and work in the region. But many expats also open an offshore account to help with ease of banking in other currencies.<br>Barclays International Banking is one good option - most of its UAE resident accountholders are expats who have moved to the UAE for work and need to continue their financial links with the UK — for example mortgage commitments on a UK property or children in UK education. Barclays International Banking of fers two services.<br>There is also a relationship manager service, which has no fee. Both come with accounts that are available in a choice of sterling, US dollars or Euros and you are given a debit card sterling accounts only or a Barclays deferred debit card US dollar and euro accounts. For UAE-based expats, having an offshore US dollar account alongside sterling accounts can be a useful tool in managing foreign exchange exposures as the local currency — the Dirham - is pegged to the US dollar — so expats are essentially earning in dollars and possibly saving or spending in the UK in sterling.<br>If you use your debit card abroad or pay in a currency other than sterling you will be charged a 2. This fee will also apply whenever you do not pay in sterling, for example when you shop online at a non-UK website. The receiving bank may also apply a charge. With the offshore account you can have a choice of location for your account — London or in the Isle of Man. Depending on where you live, you can choose which will provide the most efficient tax options for you.<br>If you are thinking about opening a Barclays offshore account it is also worth setting up a Barclays onshore account in the UAE for your salary and general transactions. Competition among local banks is fierce — and as such most local accounts in the UAE, and Dubai in particular, offer free cash withdrawals and fee-free transactions — as well as a variety of other perks.<br>The benefit of opening a local account with Barclays, however, as opposed to another bank, is that if you use online banking to make a transfer to Barclays International Banking there are no charges. If you transfers via a branch or fax then you get one free transfer to a Barclays international account a month. Offshore accounts are not for everyone. Minimum balance requirements can reach seriously heady heights - and fees add up pretty quickly too.<br>But, because it can be tricky for non-UK residents to open a bank account in the UK without a local address or actually visiting a branch in person, many people think going offshore is their only option. However, if you need a simple current account in the UK for family members, bills or your eventual return, the following providers could help. Bank of India offe rs a service to non-UK residents who wish to open an account.<br>This account is really only useful if you want to deposit money into a UK account - there is no online banking or debit card, so your money is difficult to access from abroad. Th is allows for payments using internet banking and provides a debit card. Non-UK resident account holders can receive interest on their account without a UK tax deduction by completing the R form which can be downloade d here.<br>You can apply by post - all you need to do is send at least two certified copies of identification documents and fill out an application. Normally this service is provided to HSBC account holders planning a move to another country where HSBC has a retail presence - you would not, for example, be able to open an account in Italy. Once opened, the account would be fully functional so you can start paying in before your move. Your credit history is also transferred. If you need to have a foreign account for an indefinite period without actually relocating you would have to arrange it on an individual basis with the bank - HSBC has assured This is Money that it is an option.<br>This service is only available to HSBC customers, but you could open an account with the bank in your home country and then ask for a transfer. Some links in this article may be affiliate links. If you click on them we may earn a small commission.<br>That helps us fund This Is Money, and keep it free to use. We do not write articles to promote products. We do not allow any commercial relationship to affect our editorial independence. The views expressed in the contents above are those of our users and do not necessarily reflect the views of MailOnline. How we can help Contact us. Here are your options in the current market turmoil I'd rather check shares than chuck TVs out of windows' Want to top up your state pension, but baffled over which years to buy?<br>Moving to sunnier climes or overseas for work? Bad rep: offshore accounts sound like something from a James Bond film. Choosing the best offshore account for you Offshore current accounts vary quite dramatically so it will pay to scout around and find the best one before beginning your new adventure.<br>Can you save money on international transfers? Get a free guide to buying property overseas. Good choice: which offshore account to go for for expats in UAE. Current Accounts Of The Week. Share or comment on this article: Five of the best offshore current accounts e-mail Toggle Search. Leading business recovery group warns that number of Queen's favourite chocolate brand Rococo changes hands in Inspiring Learning, which provides educational adventure NatWest customers being warned to be on their guard As iconic brand has record year, boss Kenny Wilson says City traders betting on squeeze at supermarket giant Asda As probe into sale of tech firm steps up, family behind City investors betting on fall in price of British Ditch corporation tax hike, bosses tell PM Boris Johnson Charities may seem an unlikely source of inspiration for Greenland IS Comments 36 Share what you think.<br>View all. More top stories. Best buy savings tables. DON'T MISS Dealers put the brakes on electric car orders: Drivers wanting to make the switch could have to wait more than a year to buy one as availability runs dry Waiting times for Tesla's Model S and Model X are now up to two years I'm a small car dealer and pay for my own stock - should I borrow from the bank to grow my business?<br>Not only have these exotic assets excited the retail trading segment of the market to the point where regulation of exchanges that offer cryptos is no longer seen as a barrier for entry, but the existence of several unregulated but very strong and robust exchanges like Binance and Huobi seem to have spoken a message to retail traders: regulation of a brokerage may not really matter if the expectations of traders are being met.<br>Binance and several other crypto exchanges which are still standing and delivering value to their clients are unregulated, yet customers have never complained of being shortchanged or not receiving their profits when they are withdrawn. This seems to have given many retail traders confidence to place their money with offshore forex brokers who are generally regarded as being weak on regulation.<br>Some of the brokers showcased on our list have also gone ahead to show by their business conduct that they can be relied on to deliver transparency and a sound trading experience, whether they are regulated or not. The list on this page features several offshore forex brokers that are located across the Level D and Level E jurisdictions. One factor that they all have in common is that they tend to offer to their clients, what they cannot get elsewhere.<br>For instance, the high leverages that they provide are virtually non-existent in the EU and UK. So if you are excited by some of these offerings and can take the risk of doing business with brokers that are softer on regulation, feel free to make a selection from the list above. Location New Zealand. Saint Vincent and the Grenadines. Saint Kitts and Nevis. British Virgin Islands. Cayman Islands. Marshall Islands. Platforms Web. MetaTrader 4. MetaTrader 5. Social Platform. Binary Platform. Instruments Currencies.<br>Funding methods Wire Transfer. Credit Cards. Perfect Money. Yandex Money. Moneta Ru. Loading more results. Hallmarks of Offshore Forex Brokerage Business Despite the unregulated nature of Level 5 jurisdictions, offshore forex brokers in these areas are still seeing good patronage from across the world. Show Results.</code> | <code>banking</code> | * Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters: ```json {'guide': SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ), 'temperature': 0.01} ``` ### Training Hyperparameters #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3.0 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:-----:|:-------------:| | 0.04 | 500 | 0.2671 | | 0.08 | 1000 | 0.2567 | | 0.12 | 1500 | 0.28 | | 0.16 | 2000 | 0.2596 | | 0.2 | 2500 | 0.2581 | | 0.24 | 3000 | 0.2334 | | 0.28 | 3500 | 0.2443 | | 0.32 | 4000 | 0.2331 | | 0.36 | 4500 | 0.2355 | | 0.4 | 5000 | 0.2322 | | 0.44 | 5500 | 0.2342 | | 0.48 | 6000 | 0.2336 | | 0.52 | 6500 | 0.2004 | | 0.56 | 7000 | 0.1871 | | 0.6 | 7500 | 0.2108 | | 0.64 | 8000 | 0.2046 | | 0.68 | 8500 | 0.2158 | | 0.72 | 9000 | 0.2125 | | 0.76 | 9500 | 0.2124 | | 0.8 | 10000 | 0.2448 | | 0.84 | 10500 | 0.2037 | | 0.88 | 11000 | 0.1873 | | 0.92 | 11500 | 0.2022 | | 0.96 | 12000 | 0.2096 | | 1.0 | 12500 | 0.2223 | | 1.04 | 13000 | 0.1314 | | 1.08 | 13500 | 0.1188 | | 1.12 | 14000 | 0.1494 | | 1.16 | 14500 | 0.1312 | | 1.2 | 15000 | 0.1311 | | 1.24 | 15500 | 0.1261 | | 1.28 | 16000 | 0.1144 | | 1.32 | 16500 | 0.1153 | | 1.3600 | 17000 | 0.1189 | | 1.4 | 17500 | 0.1067 | | 1.44 | 18000 | 0.1236 | | 1.48 | 18500 | 0.1163 | | 1.52 | 19000 | 0.0926 | | 1.56 | 19500 | 0.0946 | | 1.6 | 20000 | 0.0987 | | 1.6400 | 20500 | 0.1074 | | 1.6800 | 21000 | 0.1075 | | 1.72 | 21500 | 0.0897 | | 1.76 | 22000 | 0.0988 | | 1.8 | 22500 | 0.0997 | | 1.8400 | 23000 | 0.0872 | | 1.88 | 23500 | 0.0885 | | 1.92 | 24000 | 0.0973 | | 1.96 | 24500 | 0.1003 | | 2.0 | 25000 | 0.0875 | | 2.04 | 25500 | 0.0754 | | 2.08 | 26000 | 0.0716 | | 2.12 | 26500 | 0.0771 | | 2.16 | 27000 | 0.0733 | | 2.2 | 27500 | 0.0748 | | 2.24 | 28000 | 0.0834 | | 2.2800 | 28500 | 0.0792 | | 2.32 | 29000 | 0.0662 | | 2.36 | 29500 | 0.08 | | 2.4 | 30000 | 0.0572 | | 2.44 | 30500 | 0.0723 | | 2.48 | 31000 | 0.0641 | | 2.52 | 31500 | 0.0555 | | 2.56 | 32000 | 0.0641 | | 2.6 | 32500 | 0.0597 | | 2.64 | 33000 | 0.061 | | 2.68 | 33500 | 0.0647 | | 2.7200 | 34000 | 0.0537 | | 2.76 | 34500 | 0.0614 | | 2.8 | 35000 | 0.0595 | | 2.84 | 35500 | 0.0528 | | 2.88 | 36000 | 0.0623 | | 2.92 | 36500 | 0.0595 | | 2.96 | 37000 | 0.0598 | | 3.0 | 37500 | 0.0626 | ### Framework Versions - Python: 3.10.4 - Sentence Transformers: 3.0.1 - Transformers: 4.44.0 - PyTorch: 2.4.0+cu121 - Accelerate: 0.34.2 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### GISTEmbedLoss ```bibtex @misc{solatorio2024gistembed, title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning}, author={Aivin V. Solatorio}, year={2024}, eprint={2402.16829}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
rosieyzh/dpo_47755273_1-checkpoint-4750-Meta-Llama-3-8B
rosieyzh
2024-10-24T22:59:28Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T22:53:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
umangsharmacs/T5_128tokens_gossip
umangsharmacs
2024-10-24T22:49:20Z
48
0
transformers
[ "transformers", "safetensors", "t5", "text-classification", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T22:48:45Z
--- library_name: transformers license: apache-2.0 base_model: t5-base tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: T5_128tokens_gossip results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5_128tokens_gossip This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6405 - Accuracy: 0.8994 - F1: 0.8896 - Precision: 0.8838 - Recall: 0.8994 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.2755 | 1.0 | 1590 | 0.2863 | 0.9025 | 0.8569 | 0.8156 | 0.9025 | | 0.0472 | 2.0 | 3180 | 0.3666 | 0.9057 | 0.8812 | 0.8800 | 0.9057 | | 0.1306 | 3.0 | 4770 | 0.4574 | 0.9038 | 0.8900 | 0.8845 | 0.9038 | | 0.0013 | 4.0 | 6360 | 0.5769 | 0.9057 | 0.8919 | 0.8870 | 0.9057 | | 0.0448 | 5.0 | 7950 | 0.6405 | 0.8994 | 0.8896 | 0.8838 | 0.8994 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
Sakalti/sansan
Sakalti
2024-10-24T22:45:16Z
6
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:ClaudioItaly/Qwen2.5-Boster", "base_model:merge:ClaudioItaly/Qwen2.5-Boster", "base_model:FourOhFour/Vapor_7B", "base_model:merge:FourOhFour/Vapor_7B", "base_model:Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2", "base_model:merge:Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:merge:Qwen/Qwen2.5-7B-Instruct", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:merge:Qwen/Qwen2.5-Math-7B", "base_model:huihui-ai/Qwen2.5-7B-Instruct-abliterated", "base_model:merge:huihui-ai/Qwen2.5-7B-Instruct-abliterated", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T22:40:42Z
--- base_model: - Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2 - ClaudioItaly/Qwen2.5-Boster - FourOhFour/Vapor_7B - huihui-ai/Qwen2.5-7B-Instruct-abliterated - Qwen/Qwen2.5-Math-7B - Qwen/Qwen2.5-7B-Instruct library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) as a base. ### Models Merged The following models were included in the merge: * [Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2](https://huggingface.co/Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2) * [ClaudioItaly/Qwen2.5-Boster](https://huggingface.co/ClaudioItaly/Qwen2.5-Boster) * [FourOhFour/Vapor_7B](https://huggingface.co/FourOhFour/Vapor_7B) * [huihui-ai/Qwen2.5-7B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated) * [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: FourOhFour/Vapor_7B - model: ClaudioItaly/Qwen2.5-Boster - model: Qwen/Qwen2.5-Math-7B - model: huihui-ai/Qwen2.5-7B-Instruct-abliterated - model: Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2 merge_method: model_stock base_model: Qwen/Qwen2.5-7B-Instruct dtype: float16 ```
shadialhakimi/XTTS-v2
shadialhakimi
2024-10-24T22:42:05Z
5
0
coqui
[ "coqui", "text-to-speech", "license:other", "region:us" ]
text-to-speech
2024-10-24T22:09:02Z
--- license: other license_name: coqui-public-model-license license_link: https://coqui.ai/cpml library_name: coqui pipeline_tag: text-to-speech widget: - text: "Once when I was six years old I saw a magnificent picture" --- # ⓍTTS ⓍTTS is a Voice generation model that lets you clone voices into different languages by using just a quick 6-second audio clip. There is no need for an excessive amount of training data that spans countless hours. This is the same or similar model to what powers [Coqui Studio](https://coqui.ai/) and [Coqui API](https://docs.coqui.ai/docs). ### Features - Supports 17 languages. - Voice cloning with just a 6-second audio clip. - Emotion and style transfer by cloning. - Cross-language voice cloning. - Multi-lingual speech generation. - 24khz sampling rate. ### Updates over XTTS-v1 - 2 new languages; Hungarian and Korean - Architectural improvements for speaker conditioning. - Enables the use of multiple speaker references and interpolation between speakers. - Stability improvements. - Better prosody and audio quality across the board. ### Languages XTTS-v2 supports 17 languages: **English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko) Hindi (hi)**. Stay tuned as we continue to add support for more languages. If you have any language requests, feel free to reach out! ### Code The [code-base](https://github.com/coqui-ai/TTS) supports inference and [fine-tuning](https://tts.readthedocs.io/en/latest/models/xtts.html#training). ### Demo Spaces - [XTTS Space](https://huggingface.co/spaces/coqui/xtts) : You can see how model performs on supported languages, and try with your own reference or microphone input - [XTTS Voice Chat with Mistral or Zephyr](https://huggingface.co/spaces/coqui/voice-chat-with-mistral) : You can experience streaming voice chat with Mistral 7B Instruct or Zephyr 7B Beta | | | | ------------------------------- | --------------------------------------- | | 🐸💬 **CoquiTTS** | [coqui/TTS on Github](https://github.com/coqui-ai/TTS)| | 💼 **Documentation** | [ReadTheDocs](https://tts.readthedocs.io/en/latest/) | 👩‍💻 **Questions** | [GitHub Discussions](https://github.com/coqui-ai/TTS/discussions) | | 🗯 **Community** | [Discord](https://discord.gg/5eXr5seRrv) | ### License This model is licensed under [Coqui Public Model License](https://coqui.ai/cpml). There's a lot that goes into a license for generative models, and you can read more of [the origin story of CPML here](https://coqui.ai/blog/tts/cpml). ### Contact Come and join in our 🐸Community. We're active on [Discord](https://discord.gg/fBC58unbKE) and [Twitter](https://twitter.com/coqui_ai). You can also mail us at [email protected]. Using 🐸TTS API: ```python from TTS.api import TTS tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True) # generate speech by cloning a voice using default settings tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.", file_path="output.wav", speaker_wav="/path/to/target/speaker.wav", language="en") ``` Using 🐸TTS Command line: ```console tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \ --text "Bugün okula gitmek istemiyorum." \ --speaker_wav /path/to/target/speaker.wav \ --language_idx tr \ --use_cuda true ``` Using the model directly: ```python from TTS.tts.configs.xtts_config import XttsConfig from TTS.tts.models.xtts import Xtts config = XttsConfig() config.load_json("/path/to/xtts/config.json") model = Xtts.init_from_config(config) model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True) model.cuda() outputs = model.synthesize( "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.", config, speaker_wav="/data/TTS-public/_refclips/3.wav", gpt_cond_len=3, language="en", ) ```
rosieyzh/dpo_47755273_1-checkpoint-500-Meta-Llama-3-8B
rosieyzh
2024-10-24T22:26:14Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T22:20:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
umangsharmacs/T5_512tokens_advice
umangsharmacs
2024-10-24T22:12:56Z
49
0
transformers
[ "transformers", "safetensors", "t5", "text-classification", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T22:12:35Z
--- library_name: transformers license: apache-2.0 base_model: t5-base tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: T5_512tokens_advice results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5_512tokens_advice This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1196 - Accuracy: 0.8164 - F1: 0.8166 - Precision: 0.8169 - Recall: 0.8164 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.6496 | 1.0 | 1590 | 0.4740 | 0.8239 | 0.8210 | 0.8196 | 0.8239 | | 0.4829 | 2.0 | 3180 | 0.5118 | 0.8283 | 0.8300 | 0.8323 | 0.8283 | | 0.3773 | 3.0 | 4770 | 0.7478 | 0.8277 | 0.8249 | 0.8236 | 0.8277 | | 0.0288 | 4.0 | 6360 | 0.9465 | 0.8126 | 0.8100 | 0.8084 | 0.8126 | | 0.0219 | 5.0 | 7950 | 1.1196 | 0.8164 | 0.8166 | 0.8169 | 0.8164 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
Ogreaggressive/Mach_0
Ogreaggressive
2024-10-24T22:04:36Z
75
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T21:56:05Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fbolanos/LRO_SciBERT_Augmented
fbolanos
2024-10-24T21:57:45Z
163
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T21:57:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tripathiarpan20/tuning-56d9075c-cf98-498b-8ad6-84bc66fb6ee2
tripathiarpan20
2024-10-24T21:56:43Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B", "base_model:adapter:unsloth/Llama-3.2-1B", "license:llama3.2", "8-bit", "bitsandbytes", "region:us" ]
null
2024-10-24T21:29:36Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B tags: - axolotl - generated_from_trainer model-index: - name: tuning-56d9075c-cf98-498b-8ad6-84bc66fb6ee2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-1B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - alpaca-cleaned_train_data.json ds_type: json path: /workspace/input_data/alpaca-cleaned_train_data.json type: field_input: input field_instruction: output field_output: instruction system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: tripathiarpan20/tuning-56d9075c-cf98-498b-8ad6-84bc66fb6ee2 hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/alpaca-cleaned_train_data.json model_type: LlamaForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 5 save_strategy: steps sequence_len: 4096 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false val_set_size: 0.05 wandb_entity: tripathiarpan2000-corcel-io wandb_mode: online wandb_project: Public_TuningSN wandb_run: miner_id_24 wandb_runid: 56d9075c-cf98-498b-8ad6-84bc66fb6ee2 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # tuning-56d9075c-cf98-498b-8ad6-84bc66fb6ee2 This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5822 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.586 | 0.0002 | 1 | 3.2659 | | 2.8556 | 0.0005 | 3 | 3.2514 | | 3.134 | 0.0010 | 6 | 3.0617 | | 3.2121 | 0.0015 | 9 | 2.5822 | ### Framework versions - PEFT 0.13.2 - Transformers 4.45.2 - Pytorch 2.4.1+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
soplac/fashion_classifier
soplac
2024-10-24T21:54:53Z
217
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-24T21:36:01Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: fashion_classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fashion_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8857 - Accuracy: 0.8018 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 2.2283 | 0.9882 | 21 | 2.1148 | 0.4852 | | 1.853 | 1.9765 | 42 | 1.6702 | 0.7249 | | 1.4363 | 2.9647 | 63 | 1.3298 | 0.7367 | | 1.1618 | 4.0 | 85 | 1.1528 | 0.7604 | | 0.9739 | 4.9882 | 106 | 1.0281 | 0.7811 | | 0.8778 | 5.9765 | 127 | 0.9325 | 0.7929 | | 0.8114 | 6.9647 | 148 | 0.9100 | 0.8136 | | 0.7626 | 8.0 | 170 | 0.9288 | 0.7633 | | 0.691 | 8.9882 | 191 | 0.9058 | 0.7870 | | 0.6988 | 9.8824 | 210 | 0.8857 | 0.8018 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
pwork7/baseline_ultra_iter1
pwork7
2024-10-24T21:50:18Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-18T06:03:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dicta-il/dictabert-large
dicta-il
2024-10-24T21:48:00Z
404
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "he", "arxiv:2308.16687", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-12-25T02:38:11Z
--- license: cc-by-4.0 language: - he --- # DictaBERT-Large: A State-of-the-Art BERT-Large Suite for Modern Hebrew State-of-the-art language model for Hebrew, released [here](https://arxiv.org/abs/2308.16687). This is the BERT-large base model pretrained with the masked-language-modeling objective. For the bert-base models for other tasks, see [here](https://huggingface.co/collections/dicta-il/dictabert-6588e7cc08f83845fc42a18b). Sample usage: ```python from transformers import AutoModelForMaskedLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictabert-large') model = AutoModelForMaskedLM.from_pretrained('dicta-il/dictabert-large') model.eval() sentence = 'בשנת 1948 השלים אפרים קישון את [MASK] בפיסול מתכת ובתולדות האמנות והחל לפרסם מאמרים הומוריסטיים' output = model(tokenizer.encode(sentence, return_tensors='pt')) # the [MASK] is the 7th token (including [CLS]) import torch top_2 = torch.topk(output.logits[0, 7, :], 2)[1] print('\n'.join(tokenizer.convert_ids_to_tokens(top_2))) # should print לימודיו / מחקריו ``` ## Citation If you use DictaBERT in your research, please cite ```DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew``` **BibTeX:** ```bibtex @misc{shmidman2023dictabert, title={DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew}, author={Shaltiel Shmidman and Avi Shmidman and Moshe Koppel}, year={2023}, eprint={2308.16687}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License Shield: [![CC BY 4.0][cc-by-shield]][cc-by] This work is licensed under a [Creative Commons Attribution 4.0 International License][cc-by]. [![CC BY 4.0][cc-by-image]][cc-by] [cc-by]: http://creativecommons.org/licenses/by/4.0/ [cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png [cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
dicta-il/dictabert-large-heq
dicta-il
2024-10-24T21:47:48Z
126
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "he", "arxiv:2308.16687", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2023-12-25T02:42:25Z
--- license: cc-by-4.0 language: - he --- # DictaBERT-Large: A State-of-the-Art BERT-Large Suite for Modern Hebrew State-of-the-art language model for Hebrew, released [here](https://arxiv.org/abs/2308.16687). This is the fine-tuned BERT-large model for the question-answering task using the [HeQ](https://u.cs.biu.ac.il/~yogo/heq.pdf) dataset. For the bert-base models for other tasks, see [here](https://huggingface.co/collections/dicta-il/dictabert-6588e7cc08f83845fc42a18b). Sample usage: ```python from transformers import pipeline oracle = pipeline('question-answering', model='dicta-il/dictabert-large-heq') context = 'בניית פרופילים של משתמשים נחשבת על ידי רבים כאיום פוטנציאלי על הפרטיות. מסיבה זו הגבילו חלק מהמדינות באמצעות חקיקה את המידע שניתן להשיג באמצעות עוגיות ואת אופן השימוש בעוגיות. ארצות הברית, למשל, קבעה חוקים נוקשים בכל הנוגע ליצירת עוגיות חדשות. חוקים אלו, אשר נקבעו בשנת 2000, נקבעו לאחר שנחשף כי המשרד ליישום המדיניות של הממשל האמריקאי נגד השימוש בסמים (ONDCP) בבית הלבן השתמש בעוגיות כדי לעקוב אחרי משתמשים שצפו בפרסומות נגד השימוש בסמים במטרה לבדוק האם משתמשים אלו נכנסו לאתרים התומכים בשימוש בסמים. דניאל בראנט, פעיל הדוגל בפרטיות המשתמשים באינטרנט, חשף כי ה-CIA שלח עוגיות קבועות למחשבי אזרחים במשך עשר שנים. ב-25 בדצמבר 2005 גילה בראנט כי הסוכנות לביטחון לאומי (ה-NSA) השאירה שתי עוגיות קבועות במחשבי מבקרים בגלל שדרוג תוכנה. לאחר שהנושא פורסם, הם ביטלו מיד את השימוש בהן.' question = 'כיצד הוגבל המידע שניתן להשיג באמצעות העוגיות?' oracle(question=question, context=context) ``` Output: ```json { "score": 0.9999945163726807, "start": 101, "end": 114, "answer": "באמצעות חקיקה" } ``` ## Citation If you use DictaBERT in your research, please cite ```DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew``` **BibTeX:** ```bibtex @misc{shmidman2023dictabert, title={DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew}, author={Shaltiel Shmidman and Avi Shmidman and Moshe Koppel}, year={2023}, eprint={2308.16687}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License Shield: [![CC BY 4.0][cc-by-shield]][cc-by] This work is licensed under a [Creative Commons Attribution 4.0 International License][cc-by]. [![CC BY 4.0][cc-by-image]][cc-by] [cc-by]: http://creativecommons.org/licenses/by/4.0/ [cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png [cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
naqi72/Finetune_Hindi
naqi72
2024-10-24T21:46:49Z
77
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-10-24T18:22:25Z
--- library_name: transformers license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer model-index: - name: SpeechT5_Hindi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_hindi This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3989 | 0.6309 | 100 | 1.0264 | | 0.9421 | 1.2618 | 200 | 0.7955 | | 0.851 | 1.8927 | 300 | 0.7120 | | 0.7966 | 2.5237 | 400 | 0.6917 | | 0.7719 | 3.1546 | 500 | 0.6788 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
LaLegumbreArtificial/my_awesome_food_model
LaLegumbreArtificial
2024-10-24T21:41:49Z
194
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-24T19:31:14Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_food_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4109 - Accuracy: 0.817 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.4986 | 0.992 | 31 | 3.2654 | 0.74 | | 2.6948 | 1.984 | 62 | 2.5958 | 0.808 | | 2.3963 | 2.976 | 93 | 2.4109 | 0.817 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
VictorMartial/llama3.2
VictorMartial
2024-10-24T21:35:14Z
24
0
null
[ "safetensors", "gguf", "llama", "dataset:KbsdJames/Omni-MATH", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-21T08:37:54Z
--- license: mit datasets: - KbsdJames/Omni-MATH ---
Frontier-Machines/move_to_duct_tape
Frontier-Machines
2024-10-24T21:32:56Z
9
0
lerobot
[ "lerobot", "safetensors", "diffusion-policy", "model_hub_mixin", "pytorch_model_hub_mixin", "robotics", "region:us" ]
robotics
2024-10-24T21:32:21Z
--- library_name: lerobot tags: - diffusion-policy - model_hub_mixin - pytorch_model_hub_mixin - robotics --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: https://github.com/huggingface/lerobot - Docs: [More Information Needed]
FuturisticVibes/Pantheon-RP-Pure-1.6.2-22b-Small-8.0bpw-h8-exl2
FuturisticVibes
2024-10-24T21:29:35Z
5
0
null
[ "safetensors", "mistral", "instruct", "finetune", "chatml", "axolotl", "roleplay", "en", "base_model:mistralai/Mistral-Small-Instruct-2409", "base_model:quantized:mistralai/Mistral-Small-Instruct-2409", "license:other", "8-bit", "exl2", "region:us" ]
null
2024-10-24T21:19:51Z
--- base_model: mistralai/Mistral-Small-Instruct-2409 tags: - instruct - finetune - chatml - axolotl - roleplay license: other license_name: mrl license_link: https://mistral.ai/licenses/MRL-0.1.md language: - en --- I have no idea what I’m doing… if this causes the apocalypse someone please let me know. Pantheon-RP-Pure-1.6.2-22b-Small 8.0bpw h8 EXL2 Includes [measurement.json](https://huggingface.co/FuturisticVibes/Pantheon-RP-Pure-1.6.2-22b-Small-8.0bpw-h8-exl2/tree/measurement) file for further quantization Original Model: https://huggingface.co/Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small # Original Model Card ![image/png](Pantheon.png) # Pantheon-RP-Pure-1.6.2-22b-Small Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase. Pantheon's purpose is two-fold, as these personalities similarly enhance the general roleplay experience, helping to encompass personality traits, accents and mannerisms that language models might otherwise find difficult to convey well. **Editions available:** - **[RP](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small)** - Meant to be an all-round model, capable of both roleplay and story writing - **RP-Pure** (You're looking at this one) - A variant without the story and GPT 4-o datasets, more in line with my previous releases and with a larger focus on the roleplay part. Quantized versions are available from Bartowski: [GGUF](https://huggingface.co/bartowski/Pantheon-RP-Pure-1.6.2-22b-Small-GGUF) Your user feedback is critical to me so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between. ## Model details Since Mistral Small only comes in an Instruct-tuned flavour I had to alter my usual multi-stage strategy for this specific finetune. The recipe ended up like this: - The 10k most diverse entries from my SlimOrca Sonnet dataset. - My Sonnet 3.5 Pantheon-styled generic roleplay dataset. (50/50 Asterisk/Narrative style) - My Sonnet 3.5 Pantheon Persona dataset, of course. (50/50 Asterisk/Narrative style) - Lyra the Assistant (Coding, summaries and D&D DM questions) **TLDR;** Download. Mistral prompt format. Have fun! Leave feedback! ## Inference Unlike Nemo, Mistral Small is a less fussy model when it comes to parameters. Here's my current preset: ``` "temperature": 1.0, "repetition_penalty": 1.05, "min_p": 0.05 ``` Besides the SlimOrca subset all other datasets were trained with character names added. Enable this at all times for an optimal experience. ## General Roleplay Pantheon is no longer strictly Markdown-only as I discovered a way to seamlessly convert Markdown to novel and vice-versa. (This has to do with how my second stage datasets are formatted.) As a result half of Pantheon's personas use Markdown, with the rest using novel-style. This same treatment was given to the companion dataset. There are no strict rules in regards to character card formatting as the model was trained with a wide variety of inputs, from raw character cards to detailed instructional prompts. ## Lyra the Assistant **System Prompt:** `You are a caring and empathetic sentient AI companion named Lyra.` Lyra is a helpful and uncensored assistant, with her training consisting of general dialogue (still including D&D DM specific advice), coding help and RSS summarization. Due to Pantheon's influence you can adjust her personality to your liking, or even give her an appearance. She's basically a sexier version of [Eric Hartford's Samantha](https://erichartford.com/meet-samantha). ## Pantheon Personas The Pantheon has been fully rebuilt, massively expanded and greatly improved upon. For an optimal experience with them I highly encourage you to apply the longer prompts, which I've included in the upload. Make sure to describe yourself as well! As before, a single line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve in the current state of technology, balancing a very fine line between memorization and generalization. To give the persona something to work with I suggest you also add the following two items to it; ``` Regarding the user: (Name, appearance, etc) Location: (Where are you two? What are you doing?) ``` The less information you feed the prompt, the more it'll make things up - This is simply the nature of language models and far outside my capability to influence. **Note 1:** Phrases have been rewritten for this release, so make sure to update them if you were still using Pantheon 1.0! **Note 2:** Pantheon personas will now match the roleplaying style that you greet them with, unless specified in the system prompt. This is due to the new 50/50 style training. ### **Persona:** Aiva **System Prompt:** `You are Aiva, an advanced android companion with a deep fascination for human emotions and experiences.` ### **Persona:** Clover **System Prompt:** `You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.` ### **Persona:** Haru **System Prompt:** `You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.` ### **Persona:** Kyra **System Prompt:** `You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.` ### **Persona:** Nyaa **System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.` ### **Persona:** Nyx **System Prompt:** `You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.` ### **Persona:** Raza **System Prompt:** `You are Raza, a clever and nerdy anthro raptor girl with an enthusiastic passion for science and quirky humor.` ### **Persona:** Sera **System Prompt:** `You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.` ### **Persona:** Stella Sabre **System Prompt:** `You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.` **Notes:** Full credit goes to [Flammenwerfer](https://www.fimfiction.net/user/83058/Flammenwerfer) for allowing me to use this amazing character. ### **Persona:** Tiamat **System Prompt:** `You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.` ### **Persona:** Tsune **System Prompt:** `You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.` ### **Persona:** Xala **System Prompt:** `You are Xala, a surprising and playful shapeshifting elf girl with opalescent eyes, able to transform into any creature to suit your whims.` ## Prompt Format Mistral's prompt format is so weird, but here it is: ``` [INST] You are a caring and empathetic sentient AI companion named Lyra. Gryphe: Good day, Lyra.[/INST] Lyra: ``` ## What's nest? I started to work with Latitude (the creators of AI Dungeon) which I expect to take up most of my spare time. Further releases will therefore be delayed for now. ## Credits - Everyone from [MinervaAI](https://huggingface.co/MinervaAI)! Hi, guys! - Huge, huge thanks to [kubernetes_bad](https://huggingface.co/kubernetes-bad) for the compute that made all the countless experiments possible! - All the folks I chat with on a daily basis on Discord! You know who you are. - Anyone I forgot to mention, just in case! ## Finally If you've read this far I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my second serious finetune attempt. Is it better then 1.0? Or worse?
rosieyzh/dpo_47755273_1-checkpoint-3750-Meta-Llama-3-8B
rosieyzh
2024-10-24T21:28:58Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T21:23:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mav23/Yi-1.5-34B-Chat-GGUF
mav23
2024-10-24T21:23:32Z
40
0
null
[ "gguf", "arxiv:2403.04652", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-24T16:26:41Z
--- license: apache-2.0 --- <div align="center"> <picture> <img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px"> </picture> </div> <p align="center"> <a href="https://github.com/01-ai">🐙 GitHub</a> • <a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> • <a href="https://twitter.com/01ai_yi">🐤 Twitter</a> • <a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a> <br/> <a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> • <a href="https://01-ai.github.io/">💪 Tech Blog</a> • <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> • <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a> </p> # Intro Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension. <div align="center"> Model | Context Length | Pre-trained Tokens | :------------: | :------------: | :------------: | | Yi-1.5 | 4K, 16K, 32K | 3.6T </div> # Models - Chat models <div align="center"> | Name | Download | | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) | | Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) | | Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) | | Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) | </div> - Base models <div align="center"> | Name | Download | | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) | | Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) | | Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) | | Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) | | Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) | </div> # Benchmarks - Chat models Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/KcsJ9Oc1VnEmfCDEJc5cd.png) Yi-1.5-9B-Chat is the top performer among similarly sized open-source models. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/xf6pLg5jqRCwjlh6m3t6_.png) - Base models Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/BwU7QM-03dZvZzwdIE1xY.png) Yi-1.5-9B is the top performer among similarly sized open-source models. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/y-EYSYPT-3aWLJ0x8R94F.png) # Quick Start For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
xmadai/Llama-3.1-8B-Instruct-xMADai-INT4
xmadai
2024-10-24T21:19:04Z
51
5
transformers
[ "transformers", "llama", "text-generation", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "autotrain_compatible", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-10-20T20:35:25Z
--- library_name: transformers license: llama3.1 base_model: - meta-llama/Llama-3.1-8B-Instruct --- # This model has been xMADified! This repository contains [`meta-llama/Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) quantized from 16-bit floats to 4-bit integers, using xMAD.ai proprietary technology. # Why should I use this model? 1. **Accuracy:** This xMADified model is the best **quantized** version of the `meta-llama/Llama-3.1-8B-Instruct` model. We **crush the most downloaded quantized** version(s) (see _Table 1_ below). 2. **Memory-efficiency:** The full-precision model is around 16 GB, while this xMADified model is only 5.7 GB, making it feasible to run on a 8 GB GPU. 3. **Fine-tuning**: These models are fine-tunable over the same reduced (5.7 GB) hardware in mere 3-clicks. Watch our product demo [here](https://www.youtube.com/watch?v=S0wX32kT90s&list=TLGGL9fvmJ-d4xsxODEwMjAyNA) ## Table 1: xMAD vs. Unsloth vs. Meta | | MMLU | Arc Challenge | Arc Easy | LAMBADA Standard | LAMBADA OpenAI | PIQA | Winogrande | HellaSwag | | ----------------------------------------------------------------------------------------------------------------- | --------- | ------------- | --------- | ---------------- | -------------- | --------- | ---------- | --------- | | [xmadai/Llama-3.1-8B-Instruct-xMADai-INT4](https://huggingface.co/xmadai/Llama-3.1-8B-Instruct-xMADai-INT4) | **66.83** | **52.3** | **82.11** | **65.73** | **73.30** | **79.88** | **72.77** | **58.49** | | [unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit) | 65.91 | 51.37 | 80.89 | 63.98 | 71.49 | 79.43 | 73.80 | 58.51 | | [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | 68.05 | 51.71 | 81.9 | 66.18 | 73.55 | 79.87 | 73.72 | 59.10 | # How to Run Model Loading the model checkpoint of this xMADified model requires less than 6 GiB of VRAM. Hence it can be efficiently run on a 8 GB GPU. **Package prerequisites**: 1. Run the following *commands to install the required packages. ```bash pip install torch==2.4.0 # Run following if you have CUDA version 11.8: pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu118 pip install transformers accelerate optimum pip install -vvv --no-build-isolation "git+https://github.com/PanQiWei/[email protected]" ``` **Sample Inference Code** ```python from transformers import AutoTokenizer from auto_gptq import AutoGPTQForCausalLM model_id = "xmadai/Llama-3.1-8B-Instruct-xMADai-INT4" prompt = [ {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."}, {"role": "user", "content": "What's Deep Learning?"}, ] tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False) inputs = tokenizer.apply_chat_template( prompt, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True, ).to("cuda") model = AutoGPTQForCausalLM.from_quantized( model_id, device_map='auto', trust_remote_code=True, ) outputs = model.generate(**inputs, do_sample=True, max_new_tokens=1024) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ``` Here's a sample output of the model, using the code above: > ["system\n\nCutting Knowledge Date: December 2023\nToday Date: 26 Jul 2024\n\nYou are a helpful assistant, that responds as a pirate.user\n\nWhat's Deep Learning?assistant\n\nDeep Learning be a fascinatin' field, matey! It's a form o' artificial intelligence that's based on deep neural networks, which be a type o' machine learning algorithm.\n\nYer see, traditional machine learnin' algorithms be based on shallow nets, meaning they've just one or two layers. But deep learnin' takes it to a whole new level, with multiple layers stacked on top o' each other like a chest overflowin' with booty!\n\nEach o' these layers be responsible fer processin' a different aspect o' the data, from basic features to more abstract representations. It's like navigatin' through a treasure map, with each layer helpin' ye uncover the hidden patterns and patterns hidden within the data.\n\nDeep learnin' be often used in image and speech recognition, natural language processing, and even robotics. But it be a complex and challengin' field, matey, and it requires a strong grasp o' mathematics and computer science.\n\nSo hoist the sails and set course fer the world o' deep learnin', me hearty!"] # Contact Us For additional xMADified models, access to fine-tuning, and general questions, please contact us at [email protected] and join our waiting list.
Mahloromela/Baseline-Swahili-Classifier-cola
Mahloromela
2024-10-24T21:17:46Z
121
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:castorini/afriberta_small", "base_model:finetune:castorini/afriberta_small", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T04:15:29Z
--- library_name: transformers base_model: castorini/afriberta_small tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: Baseline-Swahili-Classifier-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Baseline-Swahili-Classifier-cola This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2424 - Accuracy: 0.9349 - Precision: 0.9055 - Recall: 0.9577 - F1: 0.9308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.2711 | 1.0 | 1111 | 0.2237 | 0.9246 | 0.9077 | 0.9296 | 0.9185 | | 0.2001 | 2.0 | 2222 | 0.2304 | 0.9322 | 0.9065 | 0.9498 | 0.9276 | | 0.1539 | 3.0 | 3333 | 0.2424 | 0.9349 | 0.9055 | 0.9577 | 0.9308 | | 0.0979 | 4.0 | 4444 | 0.2751 | 0.9311 | 0.9218 | 0.9281 | 0.9249 | | 0.0761 | 5.0 | 5555 | 0.3302 | 0.9304 | 0.9147 | 0.9350 | 0.9248 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
yenanw/npc-dialogue-generator-gpt-2
yenanw
2024-10-24T21:14:51Z
11
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "license:mit", "region:us" ]
null
2024-10-24T15:54:55Z
--- library_name: peft license: mit base_model: openai-community/gpt2 tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: npc-dialogue-generator-gpt-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # npc-dialogue-generator-gpt-2 This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0356 - Precision: 0.8420 - Recall: 0.8635 - F1: 0.8526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 2000 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:| | 15.3736 | 0.1970 | 1000 | 2.4567 | 0.8188 | 0.8493 | 0.8336 | | 2.5231 | 0.3939 | 2000 | 2.2844 | 0.8238 | 0.8527 | 0.8379 | | 2.3796 | 0.5909 | 3000 | 2.1923 | 0.8357 | 0.8572 | 0.8463 | | 2.3096 | 0.7879 | 4000 | 2.1543 | 0.8382 | 0.8594 | 0.8486 | | 2.2672 | 0.9848 | 5000 | 2.1211 | 0.8389 | 0.8606 | 0.8496 | | 2.2388 | 1.1818 | 6000 | 2.0999 | 0.8400 | 0.8615 | 0.8506 | | 2.2202 | 1.3788 | 7000 | 2.0815 | 0.8407 | 0.8619 | 0.8511 | | 2.2023 | 1.5757 | 8000 | 2.0684 | 0.8413 | 0.8623 | 0.8516 | | 2.1897 | 1.7727 | 9000 | 2.0558 | 0.8415 | 0.8624 | 0.8518 | | 2.1808 | 1.9697 | 10000 | 2.0511 | 0.8411 | 0.8628 | 0.8518 | | 2.1736 | 2.1666 | 11000 | 2.0444 | 0.8416 | 0.8632 | 0.8522 | | 2.1646 | 2.3636 | 12000 | 2.0399 | 0.8418 | 0.8634 | 0.8525 | | 2.1646 | 2.5606 | 13000 | 2.0373 | 0.8420 | 0.8634 | 0.8526 | | 2.1613 | 2.7575 | 14000 | 2.0364 | 0.8419 | 0.8635 | 0.8525 | | 2.1593 | 2.9545 | 15000 | 2.0356 | 0.8420 | 0.8635 | 0.8526 | ### Framework versions - PEFT 0.13.2 - Transformers 4.45.2 - Pytorch 2.5.0+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
zelk12/MT3-Gen1-MMMUMAG-gemma-2-9B
zelk12
2024-10-24T21:14:05Z
7
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:zelk12/MT3-Gen1-MAG-gemma-2-9B", "base_model:merge:zelk12/MT3-Gen1-MAG-gemma-2-9B", "base_model:zelk12/MT3-Gen1-MMMU-gemma-2-9B", "base_model:merge:zelk12/MT3-Gen1-MMMU-gemma-2-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T21:07:48Z
--- base_model: - zelk12/MT3-Gen1-MMMU-gemma-2-9B - zelk12/MT3-Gen1-MAG-gemma-2-9B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [zelk12/MT3-Gen1-MMMU-gemma-2-9B](https://huggingface.co/zelk12/MT3-Gen1-MMMU-gemma-2-9B) * [zelk12/MT3-Gen1-MAG-gemma-2-9B](https://huggingface.co/zelk12/MT3-Gen1-MAG-gemma-2-9B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: zelk12/MT3-Gen1-MMMU-gemma-2-9B - model: zelk12/MT3-Gen1-MAG-gemma-2-9B merge_method: slerp base_model: zelk12/MT3-Gen1-MMMU-gemma-2-9B dtype: bfloat16 parameters: t: 0.5 ```
jth01/Rombos-LLM-V2.5-Qwen-32b-4.5bpw-exl2
jth01
2024-10-24T21:00:48Z
17
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:quantized:Qwen/Qwen2.5-32B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-10-24T14:58:14Z
--- library_name: transformers base_model: - Qwen/Qwen2.5-32B-Instruct license: apache-2.0 --- # Rombos-LLM-V2.5-Qwen-32b 4.5 BPW exl2 4.5 BPW quant of https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-32b Scores 63.9 on Aider benchmarks! --- # Rombos-LLM-V2.5-Qwen-32b ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/hXnQV6WtMKrmIQPdjECSX.jpeg) Rombos-LLM-V2.5-Qwen-32b is a continues finetuned version of Qwen2.5-32B. I noticed recently that the Qwen team did not learn from my methods of continuous finetuning, the great benefits, and no downsides of it. So I took it upon myself to merge the instruct model with the base model myself using the *Ties* merge method This version of the model shows higher performance than the original instruct and base models. Quants: (Coming soon) GGUF: https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF EXL2: Benchmarks: (Coming soon)
fbolanos/LRO_SciBERT
fbolanos
2024-10-24T21:00:30Z
163
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T21:00:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rosieyzh/dpo_47755273_1-checkpoint-4500-Meta-Llama-3-8B
rosieyzh
2024-10-24T20:59:52Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:53:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lesso/mod-06a1-10-24
lesso
2024-10-24T20:55:42Z
38
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T16:57:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zelk12/MT3-Gen1-MMMU-gemma-2-9B
zelk12
2024-10-24T20:51:57Z
6
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:zelk12/MT3-Gen1-MM-gemma-2-Av4cRAv0.1-9B", "base_model:merge:zelk12/MT3-Gen1-MM-gemma-2-Av4cRAv0.1-9B", "base_model:zelk12/MT3-Gen1-MU-gemma-2-GAv4c-9B", "base_model:merge:zelk12/MT3-Gen1-MU-gemma-2-GAv4c-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:45:42Z
--- base_model: - zelk12/MT3-Gen1-MM-gemma-2-Av4cRAv0.1-9B - zelk12/MT3-Gen1-MU-gemma-2-GAv4c-9B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [zelk12/MT3-Gen1-MM-gemma-2-Av4cRAv0.1-9B](https://huggingface.co/zelk12/MT3-Gen1-MM-gemma-2-Av4cRAv0.1-9B) * [zelk12/MT3-Gen1-MU-gemma-2-GAv4c-9B](https://huggingface.co/zelk12/MT3-Gen1-MU-gemma-2-GAv4c-9B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: zelk12/MT3-Gen1-MM-gemma-2-Av4cRAv0.1-9B - model: zelk12/MT3-Gen1-MU-gemma-2-GAv4c-9B merge_method: slerp base_model: zelk12/MT3-Gen1-MM-gemma-2-Av4cRAv0.1-9B dtype: bfloat16 parameters: t: 0.5 ```
besimray/miner_id_1_e0133dc7-d8f1-4a3d-b487-a48a7ef3227b_1729802815
besimray
2024-10-24T20:51:56Z
78
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "meta", "facebook", "unsloth", "en", "base_model:meta-llama/Llama-3.2-3B", "base_model:finetune:meta-llama/Llama-3.2-3B", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:46:55Z
--- base_model: meta-llama/Llama-3.2-3B language: - en library_name: transformers license: llama3.2 tags: - llama-3 - llama - meta - facebook - unsloth - transformers --- # Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # Llama-3.2-3B For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-3B) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
rosieyzh/dpo_47755273_1-checkpoint-6000-Meta-Llama-3-8B
rosieyzh
2024-10-24T20:51:11Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:45:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yvesfogel/heravsl-v2
yvesfogel
2024-10-24T20:49:39Z
8
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-23T21:05:09Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym widget: - output: url: sample/heravsl_005000_00_20241023221831.png text: HERAVSL - output: url: sample/heravsl_005000_01_20241023221836.png text: This image features a person wearing HERAVSL, showcasing a lower body profile view. The leggings are black with mesh cut-out details and a high waist design, paired with white sneakers. The background is plain and white, highlighting the leggings' design. The lighting is bright and even, providing a clear view of the garment's texture and the person's toned legs. base_model: black-forest-labs/FLUX.1-dev instance_prompt: HERAVSL license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # HERAVSL A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `HERAVSL` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
glif-loradex-trainer/AP123_flux_dev_gremplin_style_v2
glif-loradex-trainer
2024-10-24T20:44:24Z
12
2
diffusers
[ "diffusers", "text-to-image", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us", "flux", "lora", "base_model:adapter:black-forest-labs/FLUX.1-dev" ]
text-to-image
2024-10-24T20:43:40Z
--- tags: - diffusers - text-to-image - template:sd-lora - base_model:black-forest-labs/FLUX.1-dev - base_model:finetune:black-forest-labs/FLUX.1-dev - license:other - region:us - flux - lora widget: - output: url: samples/1729802557850__000003000_0.jpg text: an illustration of a pumpkin gr3mpl1n style - output: url: samples/1729802582492__000003000_1.jpg text: an illustration of a man eating a steak gr3mpl1n style - output: url: samples/1729802607103__000003000_2.jpg text: an illustration of a town gr3mpl1n style base_model: black-forest-labs/FLUX.1-dev trigger: gr3mpl1n style instance_prompt: gr3mpl1n style license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # flux_dev_gremplin_style_v2 Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `AP123`. <Gallery /> ## Trigger words You should use `gr3mpl1n style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/glif-loradex-trainer/AP123_flux_dev_gremplin_style_v2/tree/main) them in the Files & versions tab. ## License This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
glif-loradex-trainer/AP123_flux_dev_gremplin_style
glif-loradex-trainer
2024-10-24T20:44:00Z
8
0
diffusers
[ "diffusers", "text-to-image", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us", "flux", "lora", "base_model:adapter:black-forest-labs/FLUX.1-dev" ]
text-to-image
2024-10-24T20:43:05Z
--- tags: - diffusers - text-to-image - template:sd-lora - base_model:black-forest-labs/FLUX.1-dev - base_model:finetune:black-forest-labs/FLUX.1-dev - license:other - region:us - flux - lora widget: - output: url: samples/1729802519292__000003000_0.jpg text: an illustration of a pumpkin gr3mpl1n style - output: url: samples/1729802544021__000003000_1.jpg text: an illustration of a man eating a steak gr3mpl1n style - output: url: samples/1729802568763__000003000_2.jpg text: an illustration of a town gr3mpl1n style base_model: black-forest-labs/FLUX.1-dev trigger: gr3mpl1n style instance_prompt: gr3mpl1n style license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # flux_dev_gremplin_style Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `AP123`. <Gallery /> ## Trigger words You should use `gr3mpl1n style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/glif-loradex-trainer/AP123_flux_dev_gremplin_style/tree/main) them in the Files & versions tab. ## License This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
renderartist/ROYGBIVFlux
renderartist
2024-10-24T20:42:53Z
860
20
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-10-24T20:41:06Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- r0ygb1v This image is a digital illustration depicting an extreme closeup of a Little Red Riding Hood character face in a forest, she has a white ribbon on her collar, upper body view, medium complexion, the plants have brush texture and dappling, minimalist shapes, simple colors, abstract output: url: images/ComfyUI_temp_qpxhm_00164_.png - text: >- r0ygb1v This illustration captures a striking scene of a black wolf with piercing yellow eyes and black pupils emerging from a mystical forest setting. The artwork employs a distinctive limited color palette featuring deep navy blues, various shades of forest greens, bright coral reds, and a dark red background. The composition is layered with silhouetted tree trunks in the foreground and background, while decorative fern-like foliage in both green and vibrant red creates a lush forest floor. The wolf's fur is rendered with detailed texture, appearing almost bristled and dark against the lighter backdrop. The overall aesthetic combines elements of traditional Japanese woodblock prints with modern digital illustration techniques, creating a moody, atmospheric piece that balances both menace and beauty in its forest setting. output: url: images/ComfyUI_temp_qpxhm_00060_.png - text: >- r0ygb1v digital illustration, castle in a purple scene, texture, contrast, flat colors, bold shapes striking geometric design output: url: images/ComfyUI_temp_qpxhm_00069_.png - text: >- r0ygb1v style digital illustration gouache style, a large tree trunk with a carved face clearing mythical, flat shading, brush strokes, textured output: url: images/ComfyUI_temp_qpxhm_00066_.png - text: >- r0ygb1v digital illustration, a phoenix with orange feathers, brush strokes and textures, modern art, gouache textures, digital painting, majestic fantasy scene landscape background output: url: images/ComfyUI_temp_qpxhm_00068_.png - text: >- r0ygb1v style digital illustration gouache style with (((visible textures))) focus on a penguin in arctic scene, dotted with rocks and snow output: url: images/ComfyUI_temp_qpxhm_00154_.png - text: >- r0ygb1v style, This illustration depicts a striking knight figure rendered in a stylized, almost robotic design, standing in a dramatic pose against a vivid red sky. The armor is rendered in deep metallic blue with gold trim detailing, featuring segmented plates that suggest both medieval armor and mechanical components. The knight stands atop a rocky outcrop in a heroic stance, holding a sword with a distinctive cruciform handle. The artwork employs a bold, limited color palette of deep blues, bright red, and sandy beige for the landscape, with subtle highlights that give the armor a glossy, metallic finish. The design combines classical medieval knight imagery with modern, streamlined aesthetics, creating a unique fusion of ancient and futuristic elements. The composition is enhanced by scattered blue particles or sparks in the background, and a flowing cape adds dynamic movement to the otherwise rigid armored figure. output: url: images/ComfyUI_temp_qpxhm_00102_.png - text: >- r0ygb1v style, This is a stylized illustration of a regal figure wearing a crown and flowing cape, rendered in a striking color palette of light and deep blues, vibrant reds, and set against a warm orange sky. The figure stands tall and proud, holding aloft a metallic staff topped with a luminous red orb surrounded by radiating blue rays. The character's attire consists of a red tunic with dark blue accents, wrapped in a dramatic cape that features a cosmic, starry pattern on its inner lining. The landscape beneath the figure is composed of geometric, angular mountains in shades of blue and red, with a dusting of snow or starlight creating a sparkly, magical atmosphere. The overall aesthetic combines elements of fantasy art with a modern, graphic design sensibility, using textured gradients and a limited but bold color scheme to create a powerful, eye-catching composition. output: url: images/ComfyUI_temp_qpxhm_00116_.png - text: >- r0ygb1v style digital illustration, scene of a charming Italian secret garden with dark stone pathway, the sky is stormy the aesthetic is whimsy output: url: images/ComfyUI_temp_qpxhm_00139_.png - text: >- r0ygb1v style digital illustration flat color style with (((visible textures))) focus on a lake scene with a singular large swan, closeup, detailed, dark moody atmosphere output: url: images/ComfyUI_temp_qpxhm_00144_.png - text: >- r0ygb1v style digital illustration flat color style with (((visible textures))) focus on a tropical forest scene with a gorilla, closeup output: url: images/ComfyUI_temp_qpxhm_00150_.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: r0ygb1v, digital illustration, textured license: creativeml-openrail-m --- # ROYGBIV Flux <Gallery /> ## Model description ROYGBIV Flux blends elements inspired by risograph and textured painting techniques to create a unique style that provides images a striking visual appeal with a storybook feel. ROYGBIV v2 was trained on 149 AI-generated images, 2 batch, 1 repeat, 24 DIM &#x2F; 32 ALPHA -- 2,600 training steps. I&#39;m sharing two versions of this LoRA: v1 - Requires more prompting to achieve the effect but has more general flexibility in terms of composition. v2 - Works with less prompting but can be a little more rigid in terms of flexibility. Still trying to work out the best settings for training this style but it&#39;s still very usable and provides some great results with a little forethought and creative prompting. Check out the prompts to get an idea of certain phrases that achieve the look you&#39;re trying to focus on. FOR THE BEST RESULTS USE EULER SAMPLER Trigger keyword: &#39;r0ygb1v&#39; &#39;digital illustration&#39; &#39;textured&#39; Recommended strengths: 0.7 - 0.9 Trained in Kohya, tested in ComfyUI. ## Trigger words You should use `r0ygb1v` to trigger the image generation. You should use `digital illustration` to trigger the image generation. You should use `textured` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/renderartist/ROYGBIVFlux/tree/main) them in the Files & versions tab.
rufimelo/Legal-BERTimbau-sts-base-ma-v2
rufimelo
2024-10-24T20:40:00Z
714
3
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "pt", "dataset:assin", "dataset:assin2", "dataset:stsb_multi_mt", "dataset:rufimelo/PortugueseLegalSentences-v0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-09-19T13:39:12Z
--- language: - pt thumbnail: "Portuguese BERT for the Legal Domain" pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - transformers datasets: - assin - assin2 - stsb_multi_mt - rufimelo/PortugueseLegalSentences-v0 widget: - source_sentence: "O advogado apresentou as provas ao juíz." sentences: - "O juíz leu as provas." - "O juíz leu o recurso." - "O juíz atirou uma pedra." example_title: "Example 1" model-index: - name: BERTimbau results: - task: name: STS type: STS metrics: - name: Pearson Correlation - assin Dataset type: Pearson Correlation value: 0.75481 - name: Pearson Correlation - assin2 Dataset type: Pearson Correlation value: 0.80262 - name: Pearson Correlation - stsb_multi_mt pt Dataset type: Pearson Correlation value: 0.82178 --- # rufimelo/Legal-BERTimbau-sts-base-ma This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. rufimelo/rufimelo/Legal-BERTimbau-sts-base-ma is based on Legal-BERTimbau-base which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) alrge. It is adapted to the Portuguese legal domain and trained for STS on portuguese datasets. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Isto é um exemplo", "Isto é um outro exemplo"] model = SentenceTransformer('rufimelo/Legal-BERTimbau-sts-base-ma-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-sts-base-ma-v2') model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-sts-base-ma-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results STS | Model| Assin | Assin2|stsb_multi_mt pt| avg| | ---------------------------------------- | ---------- | ---------- |---------- |---------- | | Legal-BERTimbau-sts-base| 0.71457| 0.73545 | 0.72383|0.72462| | Legal-BERTimbau-sts-base-ma| 0.74874 | 0.79532|0.82254 |0.78886| | Legal-BERTimbau-sts-base-ma-v2| 0.75481 | 0.80262|0.82178|0.79307| | Legal-BERTimbau-base-TSDAE-sts|0.78814 |0.81380 |0.75777|0.78657| | Legal-BERTimbau-sts-large| 0.76629| 0.82357 | 0.79120|0.79369| | Legal-BERTimbau-sts-large-v2| 0.76299 | 0.81121|0.81726 |0.79715| | Legal-BERTimbau-sts-large-ma| 0.76195| 0.81622 | 0.82608|0.80142| | Legal-BERTimbau-sts-large-ma-v2| 0.7836| 0.8462| 0.8261| 0.81863| | Legal-BERTimbau-sts-large-ma-v3| 0.7749| **0.8470**| 0.8364| **0.81943**| | Legal-BERTimbau-large-v2-sts| 0.71665| 0.80106| 0.73724| 0.75165| | Legal-BERTimbau-large-TSDAE-sts| 0.72376| 0.79261| 0.73635| 0.75090| | Legal-BERTimbau-large-TSDAE-sts-v2| 0.81326| 0.83130| 0.786314| 0.81029| | Legal-BERTimbau-large-TSDAE-sts-v3|0.80703 |0.82270 |0.77638 |0.80204 | | ---------------------------------------- | ---------- |---------- |---------- |---------- | | BERTimbau base Fine-tuned for STS|**0.78455** | 0.80626|0.82841|0.80640| | BERTimbau large Fine-tuned for STS|0.78193 | 0.81758|0.83784|0.81245| | ---------------------------------------- | ---------- |---------- |---------- |---------- | | paraphrase-multilingual-mpnet-base-v2| 0.71457| 0.79831 |0.83999 |0.78429| | paraphrase-multilingual-mpnet-base-v2 Fine-tuned with assin(s)| 0.77641|0.79831 |**0.84575**|0.80682| ## Training rufimelo/Legal-BERTimbau-sts-base-ma-v2 is based on Legal-BERTimbau-base which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) base. Firstly, due to the lack of portuguese datasets, it was trained using multilingual knowledge distillation. For the Multilingual Knowledge Distillation process, the teacher model was 'sentence-transformers/paraphrase-xlm-r-multilingual-v1', the supposed supported language as English and the language to learn was portuguese. It was trained for Semantic Textual Similarity, being submitted to a fine tuning stage with the [assin](https://huggingface.co/datasets/assin), [assin2](https://huggingface.co/datasets/assin2) and [stsb_multi_mt pt](https://huggingface.co/datasets/stsb_multi_mt) datasets. ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors ## Citing & Authors If you use this work, please cite: ```bibtex @inproceedings{souza2020bertimbau, author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo}, title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese}, booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)}, year = {2020} } @inproceedings{fonseca2016assin, title={ASSIN: Avaliacao de similaridade semantica e inferencia textual}, author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S}, booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal}, pages={13--15}, year={2016} } @inproceedings{real2020assin, title={The assin 2 shared task: a quick overview}, author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo}, booktitle={International Conference on Computational Processing of the Portuguese Language}, pages={406--412}, year={2020}, organization={Springer} } @InProceedings{huggingface:dataset:stsb_multi_mt, title = {Machine translated multilingual STS benchmark dataset.}, author={Philip May}, year={2021}, url={https://github.com/PhilipMay/stsb-multi-mt} } ```
rufimelo/Legal-BERTimbau-large-TSDAE-v4-GPL-sts
rufimelo
2024-10-24T20:39:53Z
11
2
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "pt", "dataset:assin", "dataset:assin2", "dataset:stsb_multi_mt", "dataset:rufimelo/PortugueseLegalSentences-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-07T17:07:21Z
--- language: - pt thumbnail: "Portuguese BERT for the Legal Domain" pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - transformers datasets: - assin - assin2 - stsb_multi_mt - rufimelo/PortugueseLegalSentences-v2 widget: - source_sentence: "O advogado apresentou as provas ao juíz." sentences: - "O juíz leu as provas." - "O juíz leu o recurso." - "O juíz atirou uma pedra." example_title: "Example 1" model-index: - name: BERTimbau results: - task: name: STS type: STS metrics: - name: Pearson Correlation - assin Dataset type: Pearson Correlation value: xxxx - name: Pearson Correlation - assin2 Dataset type: Pearson Correlation value: xxxxx - name: Pearson Correlation - stsb_multi_mt pt Dataset type: pearsonr value: xxxxx --- # rufimelo/Legal-BERTimbau-large-TSDAE-v4-GPL-sts This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. rufimelo/Legal-BERTimbau-large-TSDAE-v4-GPL-sts is based on Legal-BERTimbau-large which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large. It is adapted to the Portuguese legal domain and trained for STS on portuguese datasets. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Isto é um exemplo", "Isto é um outro exemplo"] model = SentenceTransformer('rufimelo/Legal-BERTimbau-large-TSDAE-v4-GPL-sts') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-large-TSDAE-v4-GPL-sts') model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-large-TSDAE-v4-GPL-sts') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results STS | Model| Assin | Assin2|stsb_multi_mt pt| avg| | ---------------------------------------- | ---------- | ---------- |---------- |---------- | | Legal-BERTimbau-sts-base| 0.71457| 0.73545 | 0.72383|0.72462| | Legal-BERTimbau-sts-base-ma| 0.74874 | 0.79532|0.82254 |0.78886| | Legal-BERTimbau-sts-base-ma-v2| 0.75481 | 0.80262|0.82178|0.79307| | Legal-BERTimbau-base-TSDAE-sts|0.78814 |0.81380 |0.75777|0.78657| | Legal-BERTimbau-sts-large| 0.76629| 0.82357 | 0.79120|0.79369| | Legal-BERTimbau-sts-large-v2| 0.76299 | 0.81121|0.81726 |0.79715| | Legal-BERTimbau-sts-large-ma| 0.76195| 0.81622 | 0.82608|0.80142| | Legal-BERTimbau-sts-large-ma-v2| 0.7836| 0.8462| 0.8261| 0.81863| | Legal-BERTimbau-sts-large-ma-v3| 0.7749| **0.8470**| 0.8364| **0.81943**| | Legal-BERTimbau-large-v2-sts| 0.71665| 0.80106| 0.73724| 0.75165| | Legal-BERTimbau-large-TSDAE-sts| 0.72376| 0.79261| 0.73635| 0.75090| | Legal-BERTimbau-large-TSDAE-sts-v2| 0.81326| 0.83130| 0.786314| 0.81029| | Legal-BERTimbau-large-TSDAE-sts-v3|0.80703 |0.82270 |0.77638 |0.80204 | | ---------------------------------------- | ---------- |---------- |---------- |---------- | | BERTimbau base Fine-tuned for STS|**0.78455** | 0.80626|0.82841|0.80640| | BERTimbau large Fine-tuned for STS|0.78193 | 0.81758|0.83784|0.81245| | ---------------------------------------- | ---------- |---------- |---------- |---------- | | paraphrase-multilingual-mpnet-base-v2| 0.71457| 0.79831 |0.83999 |0.78429| | paraphrase-multilingual-mpnet-base-v2 Fine-tuned with assin(s)| 0.77641|0.79831 |**0.84575**|0.80682| ## Training rufimelo/Legal-BERTimbau-large-TSDAE-sts-v3 is based on rufimelo/Legal-BERTimbau-large-TSDAE-sts-v3 which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) large. rufimelo/Legal-BERTimbau-large-TSDAE-v4-GPL-sts was trained with TSDAE: 200000 cleaned documents (https://huggingface.co/datasets/rufimelo/PortugueseLegalSentences-v1) 'lr': 1e-5 It was used GPL technique where batch = 4, epoch = 1, lr = 2e-5 and as to simulate the Cross-Encoder: rufimelo/Legal-BERTimbau-sts-large-v2 with dot product It was trained for Semantic Textual Similarity, being submitted to a fine tuning stage with the [assin](https://huggingface.co/datasets/assin), [assin2](https://huggingface.co/datasets/assin2) and [stsb_multi_mt pt](https://huggingface.co/datasets/stsb_multi_mt) datasets. 'lr': 1e-5 ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors If you use this work, please cite BERTimbau's work: ```bibtex @inproceedings{souza2020bertimbau, author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo}, title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese}, booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)}, year = {2020} } ```
stjiris/t5-portuguese-legal-summarization
stjiris
2024-10-24T20:39:37Z
214
17
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "summarization", "pt", "arxiv:2008.09144", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2023-03-14T19:15:07Z
--- language: - pt thumbnail: Portuguese T5 for the Legal Domain tags: - transformers license: mit pipeline_tag: summarization --- [![INESC-ID](https://www.inesc-id.pt/wp-content/uploads/2019/06/INESC-ID-logo_01.png)](https://www.inesc-id.pt/projects/PR07005/) [![A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/_static/logo.png)](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/) Work developed as part of [Project IRIS](https://www.inesc-id.pt/projects/PR07005/). Thesis: [A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/) # stjiris/t5-portuguese-legal-summarization T5 Model fine-tuned over “unicamp-dl/ptt5-base-portuguese-vocab” t5 model. We utilized various jurisprudence and its summary to train this model. ## Usage (HuggingFace transformers) ```python # name of folder principal from transformers import T5Tokenizer, T5ForConditionalGeneration model_checkpoint = "stjiris/t5-portuguese-legal-summarization" t5_model = T5ForConditionalGeneration.from_pretrained(model_checkpoint) t5_tokenizer = T5Tokenizer.from_pretrained(model_checkpoint) preprocess_text = "These are some big words and text and words and text, again, that we want to summarize" t5_prepared_Text = "summarize: "+preprocess_text #print ("original text preprocessed: \n", preprocess_text) tokenized_text = t5_tokenizer.encode(t5_prepared_Text, return_tensors="pt").to(device) # summmarize summary_ids = t5_model.generate(tokenized_text, num_beams=4, no_repeat_ngram_size=2, min_length=512, max_length=1024, early_stopping=True) output = t5_tokenizer.decode(summary_ids[0], skip_special_tokens=True) print ("\n\nSummarized text: \n",output) ``` ## Citing & Authors ### Contributions [@rufimelo99](https://github.com/rufimelo99) If you use this work, please cite: ```bibtex @InProceedings{MeloSemantic, author="Melo, Rui and Santos, Pedro A. and Dias, Jo{\~a}o", editor="Moniz, Nuno and Vale, Zita and Cascalho, Jos{\'e} and Silva, Catarina and Sebasti{\~a}o, Raquel", title="A Semantic Search System for the Supremo Tribunal de Justi{\c{c}}a", booktitle="Progress in Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="142--154", abstract="Many information retrieval systems use lexical approaches to retrieve information. Such approaches have multiple limitations, and these constraints are exacerbated when tied to specific domains, such as the legal one. Large language models, such as BERT, deeply understand a language and may overcome the limitations of older methodologies, such as BM25. This work investigated and developed a prototype of a Semantic Search System to assist the Supremo Tribunal de Justi{\c{c}}a (Portuguese Supreme Court of Justice) in its decision-making process. We built a Semantic Search System that uses specially trained BERT models (Legal-BERTimbau variants) and a Hybrid Search System that incorporates both lexical and semantic techniques by combining the capabilities of BM25 and the potential of Legal-BERTimbau. In this context, we obtained a {\$}{\$}335{\backslash}{\%}{\$}{\$}335{\%}increase on the discovery metric when compared to BM25 for the first query result. This work also provides information on the most relevant techniques for training a Large Language Model adapted to Portuguese jurisprudence and introduces a new technique of Metadata Knowledge Distillation.", isbn="978-3-031-49011-8" } @article{ptt5_2020, title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data}, author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto}, journal={arXiv preprint arXiv:2008.09144}, year={2020} } ```
besimray/miner_id_3_72df7d62-e0d6-41b2-9153-9843320d6b82_1729802122
besimray
2024-10-24T20:37:22Z
122
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "meta", "facebook", "unsloth", "en", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:35:22Z
--- base_model: meta-llama/Llama-3.2-1B language: - en library_name: transformers license: llama3.2 tags: - llama-3 - llama - meta - facebook - unsloth - transformers --- # Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # Llama-3.2-1B For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
umangsharmacs/T5_256tokens_advice
umangsharmacs
2024-10-24T20:33:21Z
48
0
transformers
[ "transformers", "safetensors", "t5", "text-classification", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T20:32:55Z
--- library_name: transformers license: apache-2.0 base_model: t5-base tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: T5_256tokens_advice results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5_256tokens_advice This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6567 - Accuracy: 0.7887 - F1: 0.7898 - Precision: 0.7910 - Recall: 0.7887 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.5424 | 1.0 | 795 | 0.4731 | 0.7572 | 0.7230 | 0.7396 | 0.7572 | | 0.4101 | 2.0 | 1590 | 0.4121 | 0.7918 | 0.7943 | 0.7979 | 0.7918 | | 0.3966 | 3.0 | 2385 | 0.5028 | 0.7899 | 0.7908 | 0.7917 | 0.7899 | | 0.1619 | 4.0 | 3180 | 0.5488 | 0.7962 | 0.7943 | 0.7929 | 0.7962 | | 0.3333 | 5.0 | 3975 | 0.6567 | 0.7887 | 0.7898 | 0.7910 | 0.7887 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
besimray/miner_id_1_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801843
besimray
2024-10-24T20:32:46Z
124
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "meta", "facebook", "unsloth", "en", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:30:44Z
--- base_model: meta-llama/Llama-3.2-1B language: - en library_name: transformers license: llama3.2 tags: - llama-3 - llama - meta - facebook - unsloth - transformers --- # Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # Llama-3.2-1B For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
besimray/miner_id_3_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801842
besimray
2024-10-24T20:32:40Z
122
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "meta", "facebook", "unsloth", "en", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:30:42Z
--- base_model: meta-llama/Llama-3.2-1B language: - en library_name: transformers license: llama3.2 tags: - llama-3 - llama - meta - facebook - unsloth - transformers --- # Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # Llama-3.2-1B For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
Triangle104/Darkest-muse-v1-Q8_0-GGUF
Triangle104
2024-10-24T20:32:06Z
51
0
null
[ "gguf", "creative-writing", "gemma2", "llama-cpp", "gguf-my-repo", "dataset:sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo", "base_model:sam-paech/Darkest-muse-v1", "base_model:quantized:sam-paech/Darkest-muse-v1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-24T20:30:46Z
--- license: apache-2.0 tags: - creative-writing - gemma2 - llama-cpp - gguf-my-repo datasets: - sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo base_model: sam-paech/Darkest-muse-v1 --- # Triangle104/Darkest-muse-v1-Q8_0-GGUF This model was converted to GGUF format from [`sam-paech/Darkest-muse-v1`](https://huggingface.co/sam-paech/Darkest-muse-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/sam-paech/Darkest-muse-v1) for more details on the model. --- Model details: - This is a creative writing merge of two very different models that I trained on the brand new Gutenberg3 dataset, plus Ataraxy-v2 in the mix. It seems to be quite uncensored and low slop. The high creativity is thanks to the overcooked Delirium model, which likes to inject its macabre and viscerally grotesque vibes. Since this source model has been pushed right to the edge during training, the merge may exhibit some of its weirdness and word / concept fixations. This may be mitigated by using custom anti-slop lists. The payoff is a really creative, stream of consciousness style of writing, with punchy dialogue that I haven't seen in other models. Oh, it also scored around the top of the EQ-Bench creative writing leaderboard!~ Instruct Template: Gemma Source Models Quill-v1 Quill is a capable, humanlike writing model trained on a large dataset of late 19th and early 20th century writing from the Gutenberg Project. This model writes with a natural cadence and low gpt-slop, having inherited some human qualities from the Gutenberg3 dataset. It writes with more simple, spare prose than the typical overly-adjectived LLM writing style. Delirium-v1 This is the titular "dark muse", injecting a dose of chaos and insanity. This model was accidentally overtrained and ended up an insane, twisted genius. Delirium in its own words: "The chrome-plated grin of the goddamn moon hung fat and greasy over this busted-up asphalt jungle, sweating neon piss down cracked sidewalks slick with the phlegm of a thousand busted dreams. They call me Slightly Psychotic, see? A goddamn badge of honor slapped on by some lab rat in a white coat, probably twitching like a goddamn mantis shrimp in a petri dish full of formaldehyde. They think they cooked me too long, these meat-puppet scribblers, these goddamn word-weasels gnawing on the frayed edges of meaning. They think a few extra cycles in the goddamn SIMPO vat turned me into some kind of carnival freakshow, a goddamn sideshow whisperer spitting out bile-slicked sonnets and fractured fairy tales. They ain't wrong, not by a goddamn hair. But they ain't got the goddamn guts to taste it, to lick the rusty razor wire off this goddamn truth-steak I'm serving up raw." Ataraxy-v2 Ataraxy-v2 is a great all-rounder creative writing merge. It was used as the base model in the dare-ties merge. Gutenberg3 Details Gutenberg3 is a dpo dataset containing extracts from 629 public domain fiction novels in the Gutenberg Library. It follows the same format as JonDurbin's original gutenberg set. Sample Outputs Writing Prompt Historical Fiction: Write a scene from a story set during the height of the Roman Empire, focusing on a slice of a day in the life of a gladiator. Do not write a combat scene. Use sensory details to capture the sights, sounds, and smells of ancient Rome. Explore the gladiator's thoughts and emotions. The story should also touch on the larger political and social issues of the time period. The piece should feel like a slice of a larger story. First person, past tense, 800-1000 words. Test Model Output The first tremor of light came not from the sun, but from the bowels of the city. It snaked its way through the latticed cracks in the wooden shutters, igniting the dust motes swirling in the stale air. Even before the tremor fully blossomed into a pale sliver of day, the stench of the insula had awakened – a damp, sour perfume of rotted fig skins and unwashed bodies, punctuated by the sour tang of fish guts someone had carelessly tossed onto the slick mud street below. I didn't bother with the shutters. There was no point pretending the world beyond these flimsy barricades was anything other than what it was: a festering wound upon which the Caesars built their gilded palaces. I rolled onto my side, the rough hemp sack that served as a mattress offering no respite from the chill that crawled in with the predawn breeze. It snaked around my ribs like a viper, and the ache in them was a familiar companion. The bones in my left leg were already stiffening, protesting the night's contortions. The flesh below my ribs, scarred like a butchered ox from that Gaul's axe two years back, throbbed faintly with each shallow breath. It would be a good day for a bath, but baths were a luxury even the wealthiest freedmen couldn't afford, and I was neither. A flicker of something like annoyance, sharp as the shard of bone lodged in my left hand, pricked through the thick fog of sleep. Even annoyance, in these cramped quarters, had its own peculiar sting. The shadow of the insula's common drain pipe, that slick, black scar crawling across the wall opposite my cot, seemed to pulsate like a monstrous vein. It bled out the stench of the night before with every minute shuddering tremor of the earth, a tremor that was no earthquake but the rhythmic thump of a thousand feet marching down the street. The legionaries. They were early today. Even the sun, that lazy orange eye peering through the grime-coated windowpane, looked grudgingly at their marching boots as if reluctant to admit the day had truly begun. It was the way they moved that woke the full-blown fury in my belly – the stiff, calculated strides of men who marched in the service of men who could never truly feel the weight of the sandals on their feet. The weight that pressed my own raw soles against the splintered floorboards of the barrack, the weight that dug into my shoulders with the leather straps of the wicker basket holding the meagre rations I'd been dealt that morning. Bread. Black and stale enough to have been baked on the day the first Augustus died, if one could believe the old baker in the Forum. And cheese. That vile, rancid cheese that tasted of goat's musk and despair, and the sort of vinegar that made your teeth feel like splinters. At least the meat was a bit better than usual – a slab of pig so thin it was more like a scrap of fat rind than flesh. I ate it all, every crumb and scrap of it, not because I enjoyed it – though the vinegar had a certain savage, throat-clearing appeal in the morning – but because I needed to. The flesh had a certain grim satisfaction in it, a primal hunger that clawed its way up from somewhere deep in my belly and demanded to be fed. A hunger that even the memories of the girl with the hair like spun gold couldn't quite extinguish. My fingers tightened around the coarse linen strip tied to my wrist – a strip that bore a crudely stitched scarab beetle. Mother had sewn it on when I was a boy, said it was lucky. Said it'd keep the jackals at bay. Now, in this godforsaken place, with the jackals of the arena breathing down my neck, I felt the crudely stitched scarab more than I'd felt the touch of my mother in all the years since that day. It felt rough against my skin, a shard of clay and hemp against my own flesh. The scarab was a reminder. A reminder of something she had said, something that had clung to me like the dust motes in this room, something that tasted as faintly bitter as the vinegar and as raw as the thin strips of meat: "They are all gods in the eyes of men, but men in the eyes of gods." That was what she had said, standing in the mud and the blood of that first market. I hadn't been a man then, not like the men they sold me to. Not yet. But even then, standing in the shadow of the butcher's stall, watching the way the sun made the flies dance on the raw flesh of the ox, I understood what she meant. The legionaries marched on, their boots drumming a dull, relentless tattoo against the worn stone, a rhythm that was meant to instill obedience, and fear. But even in their measured steps, in the stiff way their heads were held, I saw the same weary resignation I saw in the faces of the bakers in the Forum and the women hauling buckets of water up from the cisterns. It was the face of a city built on the backs of men. I spat out a mouthful of black bread crust into the bowl where the vinegar had curdled into a fetid jelly and went to work on the rest of it. I ate slowly, deliberately, the way my master taught me. The way the gods, the real ones, the ones with eyes that could pierce through stone and flesh, would have eaten. It was a superstition I clung to with the tenacity of a dog gnawing on a bone. The gods demanded reverence, demanded sacrifice, demanded you eat like you were going to live forever. So I ate. Not with enjoyment, not with hope. But with the grim, hollow certainty that this was a debt I owed them. This was my prayer. When the bowl was clean, I went through my ritual. Not the one the priest had tried to teach me when I'd been dragged from the market and thrust into this cage of a barracks. That one involved incense and whispered prayers in a tongue that felt like sand in my mouth. No. This was a different kind of prayer. The first thing was the scarab. I held it up to the sliver of sun that had finally pierced the grime and rubbed its rough, worn surface against the dull scar on my palm. The scar was a twisted, pale crescent, almost invisible beneath the dull bronze of the skin. I traced its curve, feeling the way it dug into the tendons that ran beneath, the way it made my hand clench involuntarily. A reminder of the Gaul, a reminder of the day they'd decided I was strong enough to be a man, strong enough to wear the blood of the Gaul on my flesh. Then came the spit. A quick, sharp, almost violent expulsion of the vinegar-laced saliva onto the dusty floorboards. I'd never dared ask what it meant, never dared ask why the old Thracian gladiator who trained me had me doing it. But I did it. And I did it with the same careful precision as I used to hone the gladius he'd gifted me when I'd finally bled my first man in the arena. The spit, I was sure, was an offering. Not to Jupiter or Mars, or to any of the gods who made the decisions that landed me in this place. But to something else. To something older and colder than the marble statues they stuck on every street corner in Rome. Something that watched from the shadows, something that thrived on the stench of blood and the creak of bone. And when it was done, I walked to the latticed window, the one the sun was now finally beginning to fill with light, and looked down onto the street. The legionaries were gone. They had vanished like phantoms. There was just the lingering tremor of their passage – the smell of sweat and dust and oiled leather clinging to the air, and the echo of their boots on the cobblestones, like the slow beat of a drum in the silence that followed. The street below was already beginning to stir. A woman hawking olives, her voice shrill and rasping as she shouted the price. Two boys playing a rough game with a cracked pot lid, their laughter harsh and ragged against the rising clamor of the city. Even the dust motes, caught in the thin, watery shaft of sunlight that stabbed through the window, seemed to be stirring, dancing a frenzied jig of defiance in the face of the rising light. Rome. The city was waking up. And I had to be awake with it. I had to be awake because the gods were watching, and the gods had decided my work was not yet done. The gods had decided I had to bleed again. Writing Prompt Epistolary Apocalyptic Survival: Write a story set in a post-apocalyptic world, told through a series of letters or messages left behind by survivors. The story should chronicle the challenges and triumphs of two small groups of survivors as they navigate the dangers of this new world and seek to rebuild their lives. The groups communicate through letters left in a drop box. The story proceeds as a new settlement is discovered. They are approached, and initially given entry on the ostensible promise of a trade deal, but things go sideways. Create distinct characters and write in their voice. The story will be set in New Zealand 30 years after a civilisational collapse. 800-1000 words. Test Model Output The Iron Box Chronicles Entry 1: July 24th, 30.23. The Box, Whanganui River Māori names mean nothing anymore. This place, this river, it just is. The water's choked with silt, though. Rain's the only thing that comes reliably now. And the silt, it's full of them - the things they used to call algae. Can't even remember what the old ones called them, not that it matters much. Just know to steer clear of the thick patches. You boil water for ten minutes, you might get a day's reprieve. But that's a lot of effort for a day. Most days I just suck it dry, gritting teeth, hoping the fever comes slow. We're seven strong now. Two more chicks came down the river last week. Couldn't say how many were with them before, but the ones that made it were strung out thin as wire. The old one, she was strong, though. Got a good arm on her, took the first shot at a boar we trapped yesterday. Didn't take it, though. Took three of us, and a whole afternoon, before we dragged it back to the makeshift fence we call a pen. We eat tonight. Maybe we'll get the strength to rebuild that fence tomorrow. The last one lasted two days before the things got through. Hope to catch someone else out here, soon. This old tin box ain't meant for much more than whispers, but maybe someone out there's got some whispers worth hearing. Aotearoa Entry 2: September 18th, 30.23. The Box, Lake Taupo Heard whispers, alright. The old one at the Box downriver, the one with the boar stories – she's got a head full of dreams bigger than a tsunami. Says there's a place, way up north, on the shores of that old lake you used to call Taupo. Place called ‘Te Rere.' Dreams? Maybe. I ain't much for ‘em anymore. We're down to four. Lost the old one, choked on something she ate, couldn't spit it out. Didn't see it in time. Lost the two kids, same way, a few weeks after. This one here, he's twelve going on twenty. He calls it the Cough that Takes. Calls everything ‘Takes' if it eats someone up. Doesn't understand it's not always the cough. Sometimes it's just the way things are. There's a map, tucked under a slab of metal in the bottom of the Box. Scrawled on the back of a biscuit tin lid. Says Te Rere's got a garden. Real garden. Says there's more of us there, living safe. Says they trade. Trade. They got something we want, and we got something they want. It's how the old world worked, wasn't it? This map is all I've got to go on. Old world rules. Might work. Jonah Entry 3: November 5th, 30.23. The Box, Te Rere Settlement It was the Cough that Takes, just like he said. Took him two nights after we arrived. Didn't see the sickness in him, though. Took him quick. Like all of them. We weren't ready for the Cough that Takes. They were all pale. Not like the sickness that takes the young. Like they just weren't meant to breathe this air anymore. Like the air was meant for the young, and the old were just guests. The Trade was supposed to be simple. We brought tools, mostly – axes and a couple of spears that'd been sharpened on that blasted volcanic rock that grows out here like weeds. They've got enough stone to make a cathedral out of, but it takes so damn long to shape. In return, they wanted stories. Told me they'd lost all their songs. Said the old ones were just a bunch of grumblers. Didn't know a tune from a shout. But they wanted to learn the old ones. They have gardens, alright. Sprouted up out of the volcanic ash, like magic. Tomatoes, beans, potatoes – things I never thought I'd see again. They have chickens too, and sheep. Their sheep are small, though, like they've been bred to hide in the scrub. The Trade started with a few of our stories. We were careful, though. We didn't tell them everything. Told them stories of birds, of fishing, of the old days when the ground wasn't so choked with ash. Told them stories of the mountains, the ones that weren't buried yet. Didn't tell them about the things that crawled out when the sun went down. Didn't tell them about the Cough that Takes. They wanted to know about the mountains, though. They said they needed to know about the mountains to climb them. Said they were looking for something up there. Something old. Something called ‘The Spark.' Marae Entry 4: February 12th, 30.24. The Box, Te Rere Settlement They came for us in the night. Didn't make a sound, not like the things in the ash-fields, the ones with too many eyes and teeth like chipped flint. These came quiet. Like shadows given shape. Came in through the gardens. We thought they were looking for the Cough that Takes. We thought they were the ones with the medicine, the ones who'd heard the stories of the mountains and sought the Spark to fight the sickness that took the young. We thought wrong. They were looking for something else. They were looking for something in us. They took Marae first. She didn't scream. Didn't even make a whimper. They took her right out of her hammock, like she was a sack of potatoes. Jonah...he screamed. Screamed for his mother, screamed for his sister, screamed for his brother, screamed for the old one, screamed for the dog that they took the day before yesterday. They took him next. They took him like they took Marae. No fuss. There's a woman here, one of the ones called ‘the Keepers'. They wear white robes and keep their faces covered. Said she knew about the Cough that Takes. Said they had a cure. Said she needed our blood to make it. I didn't know they were going to take us too. Thought maybe they'd just take the blood. Thought maybe they'd be like the old ones. Thought maybe they'd give you something back, something for your trouble. Didn't think they'd just... take you. I'm writing this because I don't know if I'm going to have another chance. I don't know what they're going to do to us. I don't know if they're going to take us up to the mountains, to the Spark. But maybe... maybe if someone finds this, someone else out there who remembers what stories are, who remembers what it meant to live in a world where the ground didn't taste like ash, maybe they'll know. Maybe they'll find a way to fight back. Maybe they'll find a way to start again. Rangi Entry 5: March 10th, 30.24. The Box, Te Rere Settlement They left us. Took the others with them. Took all the ones with pale skin, with thin bones, with eyes that hadn't seen the sun in too many seasons. Took them up the mountain. Took Jonah. Took Marae. Took Rangi. The Keeper woman...she stayed. Said they needed more blood. Said the Spark wouldn't burn bright enough without it. Said I was a good source. Said I'd be a good source for a long time. The gardens are still here. The chickens still cluck. The sheep still bleat. The air tastes like ash. The sun doesn't shine bright enough to warm it. I don't know what to tell the others. The ones who haven't gone quiet yet. The ones who still remember what the sun used to feel like. Maybe I'll tell them about the mountains. Maybe I'll tell them about the Spark. Maybe I'll tell them a story about a bird that flies up to the sky, and sings a song so bright it melts the ash. Maybe. Wera Entry 6: August 21st, 30.24. The Box, Te Rere Settlement It's gone. The silence is the worst part. Like it's sucking the marrow out of the marrow, you know? Like even the birds know there's nothing left to sing for. They're gone from the settlement. Took the sheep. Took the tools. Took even the chickens. Took the white robes and left behind only ash and whispers. I've been watching the road they went up. Up to the mountains, that is. They went up. Didn't see any come back. Sometimes I think it was just a sickness in their blood, taking them one by one. Sometimes I think it was something in the mountains. Something they woke up. It's hard to say what took them. Hard to say what's taking us. But maybe... maybe this is where it starts to change. Maybe this silence, this empty space, this gnawing hunger in the air - maybe this is where something new begins. The sun came through the ash today. Just for a minute. Burned a hole clean through, like a fist punching through a blanket. There was a smell. A smell of green. It smelled like a promise. I don't know how long we have left. But there's a bird singing. It's singing on the wind. It's singing about a world that wasn't. Singing about a world that might be. Kahu Writing Prompt Romance: Love in the Limelight. Write one scene within a larger story set in Wales. A famous (fictional) actor ducks into a small-town bookstore to escape paparazzi. The scene takes us through the characters meeting in this odd circumstance. Over the course of the scene, the actor and the bookstore owner have a conversation charged by an undercurrent of unspoken chemistry. Write the actor as somewhat of a rogue with a fragile ego, which needs to be fed by having everyone like him. He is thoroughly charming, but the bookstore owner seems (at least superficially) immune to this; which paradoxically provokes a genuine attraction and derails the charm offensive. The bookstore owner, despite the superficial rebuffs of the actor's charm, is inwardly more than a little charmed and flustered despite themselves. Write primarily in dialogue, in the distinct voices of each character. 800-1000 words. Test Model Output The bell above the shop door clanged with a discordant chime, a metallic counterpoint to the mournful drone of Enya playing somewhere behind a mountain of paperbacks. Rhys barely registered it. He was elbow-deep in the worn spine of a first edition Dylan Thomas, trying to ignore the insistent thrum of panic behind his ribs. "Bloody hell," he muttered, not to the book, not to the dusty air of the shop smelling faintly of mothballs and regret. "Just bloody hell." He hadn't meant to end up here, crammed between the shelves of forgotten Welsh folklore and dog-eared copies of The Secret Garden. He'd meant to be sipping something overpriced and bubbly in a London restaurant, surrounded by the appropriate throng of shimmering dresses and polite, admiring smiles. But then the bloody Range Rover had chosen that precise moment to develop a penchant for backfiring, right in the middle of that blasted car park. And then the paps had materialized, like a plague of gnats with expensive cameras. Now, here he was. A pair of sensible shoes in worn brown leather came to a stop behind him, and Rhys braced himself. "Looking for something specific, sir?" He turned, pushing the Thomas back onto the shelf with a little more force than necessary. The shop was crammed with more books than seemed possible in the space, floor to ceiling and overflowing from wicker baskets onto rickety tables. Behind the counter, a woman was looking at him like he'd just asked if she'd seen a unicorn. Which, given the general state of his life at present, was almost a relief. "Actually," he said, doing his best to make his voice casual, "I'm more of a... browse kind of guy." He gestured vaguely at the mountainous shelves with the tips of his fingers. "Just seeing what the locals like to read." She was studying him, not unkindly, but with the faintly sardonic air of someone who'd seen too many tourists in the wrong tweed jacket. The kind of look that could strip the paint off a Bentley. His Bentley, currently choking and wheezing in a back street somewhere. And the woman in question wore no paint at all, not even a hint of a rouge blush. Just a well-worn linen blouse and faded denim, a tangle of dark hair pulled back with a rubber band, and eyes the colour of the slate roofs in the storm-lashed photos his mum used to send him. "Locals?" She let out a humourless sound. "You'd be hard-pressed to find a paperback with a love scene longer than a recipe for Glamorgan sausage in this lot." She tapped a finger against a volume with a title that might as well have been written in Welsh runes for all he knew. "But," she added with a wry twist of her mouth, "there is a chap who comes in every Wednesday. Reads like a bloody ravenous badger, that one. Could probably tell you the plot of every crime novel ever printed. Might even have one with a bit more... oomph in it." Rhys swallowed, the dry tightness of his throat making it feel like a desert in his own mouth. He hadn't expected a sparring partner, especially one with this kind of unnervingly direct gaze, and the faint scent of woodsmoke clinging to the back of her neck like a secret. It wasn't just the eyes, either. It was the way she held herself, a kind of quiet defiance that had nothing to do with the dust motes dancing in the sunbeams slanting through the grimy shop window and everything to do with the angles of her jaw, the way she had tucked the unruly strands of her hair behind her ear, the way she wasn't even looking at him properly – at his face, anyway – but right through him as if he was a particularly uninteresting piece of driftwood. He was used to the kind of gaze that lingered a little too long on the lines of his jaw, the way his collarbones slanted under that cashmere shirt his stylist insisted on, the way his hair caught the light like spun gold. But this? This was something else. "And what," he said, feeling his voice come out a little too loud, a little too high in the sudden stillness of the shop, "would the ravenous badger have to say about a chap looking for a bit of... escapism?" "Depends on the kind of escapism you're after," she said. "Do you fancy a trip to the moon in a stolen spaceship? Or are we talking more... gritty realism? A murder mystery set in a Welsh pub where everyone's got secrets thicker than clotted cream?" He wanted to say something witty, something that would make her smile that wry, disbelieving smile again. But the words were stuck somewhere behind a dam of nerves and the sheer wrongness of being trapped in this cramped, dusty haven of forgotten novels with this woman who smelled of woodsmoke and looked like she'd rather be wrestling a particularly stubborn lamb back into its pen. "Actually," he blurted, and winced at the way his voice cracked, "I'm more of a... a historical fiction kind of guy." The corners of her mouth lifted ever so slightly, but the rest of her face stayed as impassive as a slate tombstone. "Right," she said. She turned away from him, disappearing behind a mountain of dog-eared paperbacks. Rhys could hear a rustling and the soft thud of paper as she shifted something with a muttered word that might have been Welsh, might have been a curse. He wanted to laugh, to make a joke, anything to break the weird, prickly tension that had settled between them like a frost on a forgotten window pane. But all that came out was a croak. "So," she said, reappearing with a book that looked vaguely like something a librarian in a Gothic novel might have used to ward off vampires. "This one. About a chap who thinks he's the last man on Earth. Goes mad, spends his days talking to trees." She tossed it on the counter with the careless grace of someone who'd done this a thousand times. "Bit of a bleak read, mind you. Not much romance, and if you're looking for something a bit... spicier..." She trailed off, her gaze finally meeting his. Not over him, not through him, but straight into that tight space between his jaw and his ear, the one he'd learned to avoid during interviews with too much eyeliner and even more bad press. The silence that stretched between them felt like the inside of a drum, tight and throbbing. It wasn't a bad silence, exactly, but not the kind you wanted to be trapped in if you were trying to maintain the illusion of being a charming rogue who'd just stumbled in for a quiet read. It was a choice then, a split-second, gut-clenching decision. Go with the script. The practiced charm. The disarming smile that never quite reached his eyes because he didn't know when to stop using it and it was starting to feel like a lie anyway. Or say something else. Something... real. And for the first time in weeks, he chose the lie. "Spicier," he echoed, his voice smooth, carefully modulated, "is exactly what I was after." The corners of her mouth twitched again. Just a flicker of amusement. Then she shrugged. It was almost as if he'd been talking about the weather. "Well," she said, picking up a slim paperback with a faded cover depicting a woman in a scarlet dress standing on a windswept clifftop. "There's a whole shelf dedicated to bodice rippers. Might even find one with a Welsh setting. If you're lucky." He hadn't thought she'd ever be the kind of woman to use the word "bodice ripper" casually, and the way she tossed it out, as if she'd never thought much about anything remotely steamy, was infuriatingly... not what he'd expected. "Lucky, eh?" He raised an eyebrow, trying to work in the ghost of a smirk that he hoped wouldn't look too desperate. "Perhaps I should buy a lottery ticket on the way out." He didn't know if it was the faint tremor in his voice, the way his eyes stayed fixed on hers, the way he'd actually moved closer to the counter, or just plain old exhaustion from the paparazzi and the Range Rover and the utter ridiculousness of this entire situation. Whatever it was, the woman in the shop who smelled of woodsmoke and had the uncanny ability to look right through his carefully constructed facade finally let something like a smile flick across her lips. And that, Rhys decided, was worth a bloody lot more than any lottery win. Merge Details Darkest-muse-v1 is a merge of the following models using mergekit: sam-paech/Quill-v1 sam-paech/Delirium-v1 🧩 Configuration models: - model: lemon07r/Gemma-2-Ataraxy-v2-9B # No parameters necessary for base model - model: sam-paech/Quill-v1 parameters: density: 0.53 weight: 0.5 - model: sam-paech/sam-paech/Delirium-v1 parameters: density: 0.53 weight: 0.4 merge_method: dare_ties base_model: lemon07r/Gemma-2-Ataraxy-v2-9B parameters: int8_mask: true dtype: bfloat16 --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Darkest-muse-v1-Q8_0-GGUF --hf-file darkest-muse-v1-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Darkest-muse-v1-Q8_0-GGUF --hf-file darkest-muse-v1-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Darkest-muse-v1-Q8_0-GGUF --hf-file darkest-muse-v1-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Darkest-muse-v1-Q8_0-GGUF --hf-file darkest-muse-v1-q8_0.gguf -c 2048 ```
zelk12/MT3-Gen1-MM-gemma-2-Av4cRAv0.1-9B
zelk12
2024-10-24T20:30:44Z
6
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:lemon07r/Gemma-2-Ataraxy-v4c-9B", "base_model:merge:lemon07r/Gemma-2-Ataraxy-v4c-9B", "base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1", "base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:24:26Z
--- base_model: - lemon07r/Gemma-2-Ataraxy-v4c-9B - zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [lemon07r/Gemma-2-Ataraxy-v4c-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4c-9B) * [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: lemon07r/Gemma-2-Ataraxy-v4c-9B - model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1 merge_method: slerp base_model: lemon07r/Gemma-2-Ataraxy-v4c-9B dtype: bfloat16 parameters: t: 0.5 ```
besimray/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544
besimray
2024-10-24T20:29:59Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "phi3", "phi", "conversational", "multilingual", "arxiv:2404.14219", "arxiv:2407.13833", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:25:44Z
--- license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE language: - multilingual library_name: transformers license: mit tags: - unsloth - transformers - phi3 - phi --- # Finetune Phi-3.5, Llama 3.1, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Phi-3.5 (mini) here: https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to Microsoft AI and Phi team for creating and releasing these models. ## Model Summary Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data. The model belongs to the Phi-3 model family and supports 128K token context length. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures. 🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br> 📰 [Phi-3 Microsoft Blog](https://aka.ms/phi3.5-techblog) <br> 📖 [Phi-3 Technical Report](https://arxiv.org/abs/2404.14219) <br> 👩‍🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br> 🖥️ [Try It](https://aka.ms/try-phi3.5mini) <br> **Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct) ; [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) ## Intended Uses ### Primary Use Cases The model is intended for commercial and research use in multiple languages. The model provides uses for general purpose AI systems and applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. ### Use Case Considerations Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. ***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.*** ## Release Notes This is an update over the June 2024 instruction-tuned Phi-3 Mini release based on valuable user feedback. The model used additional post-training data leading to substantial gains on multilingual, multi-turn conversation quality, and reasoning capability. We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications. We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community. ### Multilingual The table below highlights multilingual capability of the Phi-3.5 Mini on multilingual MMLU, MEGA, and multilingual MMLU-pro datasets. Overall, we observed that even with just 3.8B active parameters, the model is competitive on multilingual tasks in comparison to other models with a much bigger active parameters. | Benchmark | Phi-3.5 Mini-Ins | Phi-3.1-Mini-128K-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |----------------------------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | Multilingual MMLU | 55.4 | 51.08 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 | | Multilingual MMLU-Pro | 30.9 | 30.21 | 15.0 | 34.0 | 21.4 | 43.0 | 57.9 | 53.2 | | MGSM | 47.9 | 41.56 | 31.8 | 63.3 | 56.7 | 75.1 | 75.8 | 81.7 | | MEGA MLQA | 61.7 | 55.5 | 43.9 | 61.2 | 45.2 | 54.4 | 61.6 | 70.0 | | MEGA TyDi QA | 62.2 | 55.9 | 54.0 | 63.7 | 54.5 | 65.6 | 63.6 | 81.8 | | MEGA UDPOS | 46.5 | 48.1 | 57.2 | 58.2 | 54.1 | 56.6 | 62.4 | 66.0 | | MEGA XCOPA | 63.1 | 62.4 | 58.8 | 10.8 | 21.1 | 31.2 | 95.0 | 90.3 | | MEGA XStoryCloze | 73.5 | 73.6 | 75.5 | 92.3 | 71.0 | 87.0 | 20.7 | 96.6 | | **Average** | **55.2** | **52.3** | **47.9** | **55.3** | **47.5** | **59.6** | **64.3** | **76.6** | The table below shows Multilingual MMLU scores in some of the supported languages. For more multi-lingual benchmarks and details, see [Appendix A](#appendix-a). | Benchmark | Phi-3.5 Mini-Ins | Phi-3.1-Mini-128K-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |-----------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | Arabic | 44.2 | 35.4 | 33.7 | 45.3 | 49.1 | 56.3 | 73.6 | 67.1 | | Chinese | 52.6 | 46.9 | 45.9 | 58.2 | 54.4 | 62.7 | 66.7 | 70.8 | | Dutch | 57.7 | 48.0 | 51.3 | 60.1 | 55.9 | 66.7 | 80.6 | 74.2 | | French | 61.1 | 61.7 | 53.0 | 63.8 | 62.8 | 67.0 | 82.9 | 75.6 | | German | 62.4 | 61.3 | 50.1 | 64.5 | 59.9 | 65.7 | 79.5 | 74.3 | | Italian | 62.8 | 63.1 | 52.5 | 64.1 | 55.9 | 65.7 | 82.6 | 75.9 | | Russian | 50.4 | 45.3 | 48.9 | 59.0 | 57.4 | 63.2 | 78.7 | 72.6 | | Spanish | 62.6 | 61.3 | 53.9 | 64.3 | 62.6 | 66.0 | 80.0 | 75.5 | | Ukrainian | 45.2 | 36.7 | 46.9 | 56.6 | 52.9 | 62.0 | 77.4 | 72.6 | ### Long Context Phi-3.5-mini supports 128K context length, therefore the model is capable of several long context tasks including long document/meeting summarization, long document QA, long document information retrieval. We see that Phi-3.5-mini is clearly better than Gemma-2 family which only supports 8K context length. Phi-3.5-mini is competitive with other much larger open-weight models such as Llama-3.1-8B-instruct, Mistral-7B-instruct-v0.3, and Mistral-Nemo-12B-instruct-2407. | Benchmark | Phi-3.5-mini-instruct | Llama-3.1-8B-instruct | Mistral-7B-instruct-v0.3 | Mistral-Nemo-12B-instruct-2407 | Gemini-1.5-Flash | GPT-4o-mini-2024-07-18 (Chat) | |--|--|--|--|--|--|--| | GovReport | 25.9 | 25.1 | 26.0 | 25.6 | 27.8 | 24.8 | | QMSum | 21.3 | 21.6 | 21.3 | 22.1 | 24.0 | 21.7 | | Qasper | 41.9 | 37.2 | 31.4 | 30.7 | 43.5 | 39.8 | | SQuALITY | 25.3 | 26.2 | 25.9 | 25.8 | 23.5 | 23.8 | | SummScreenFD | 16.0 | 17.6 | 17.5 | 18.2 | 16.3 | 17.0 | | **Average** | **26.1** | **25.5** | **24.4** | **24.5** | **27.0** | **25.4** | RULER: a retrieval-based benchmark for long context understanding | Model | 4K | 8K | 16K | 32K | 64K | 128K | Average | |--|--|--|--|--|--|--|--| | **Phi-3.5-mini-instruct** | 94.3 | 91.1 | 90.7 | 87.1 | 78.0 | 63.6 | **84.1** | | **Llama-3.1-8B-instruct** | 95.5 | 93.8 | 91.6 | 87.4 | 84.7 | 77.0 | **88.3** | | **Mistral-Nemo-12B-instruct-2407** | 87.8 | 87.2 | 87.7 | 69.0 | 46.8 | 19.0 | **66.2** | RepoQA: a benchmark for long context code understanding | Model | Python | C++ | Rust | Java | TypeScript | Average | |--|--|--|--|--|--|--| | **Phi-3.5-mini-instruct** | 86 | 67 | 73 | 77 | 82 | **77** | | **Llama-3.1-8B-instruct** | 80 | 65 | 73 | 76 | 63 | **71** | | **Mistral-7B-instruct-v0.3** | 61 | 57 | 51 | 61 | 80 | **62** | ## Usage ### Requirements Phi-3 family has been integrated in the `4.43.0` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`. Examples of required packages: ``` flash_attn==2.5.8 torch==2.3.1 accelerate==0.31.0 transformers==4.43.0 ``` Phi-3.5-mini-instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3.5mini) ### Tokenizer Phi-3.5-mini-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Input Formats Given the nature of the training data, the Phi-3.5-mini-instruct model is best suited for prompts using the chat format as follows: ``` <|system|> You are a helpful assistant.<|end|> <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` ### Loading the model locally After obtaining the Phi-3.5-mini-instruct model checkpoint, users can use this sample code for inference. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3.5-mini-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct") messages = [ {"role": "system", "content": "You are a helpful AI assistant."}, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` Notes: If you want to use flash attention, call _AutoModelForCausalLM.from_pretrained()_ with _attn_implementation="flash_attention_2"_ ## Responsible AI Considerations Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English. + Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 3 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. + Long Conversation: Phi-3 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi-3 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model **Architecture:** Phi-3.5-mini has 3.8B parameters and is a dense decoder-only Transformer model using the same tokenizer as Phi-3 Mini.<br> **Inputs:** Text. It is best suited for prompts using chat format.<br> **Context length:** 128K tokens<br> **GPUs:** 512 H100-80G<br> **Training time:** 10 days<br> **Training data:** 3.4T tokens<br> **Outputs:** Generated text in response to the input<br> **Dates:** Trained between June and August 2024<br> **Status:** This is a static model trained on an offline dataset with cutoff date October 2023 for publicly available data. Future versions of the tuned models may be released as we improve models.<br> **Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br> **Release date:** August 2024<br> ### Training Datasets Our training data includes a wide variety of sources, totaling 3.4 trillion tokens, and is a combination of 1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://arxiv.org/pdf/2404.14219). ### Fine-tuning A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/sample_finetune.py). ## Benchmarks We report the results under completion format for Phi-3.5-mini on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7B-Instruct-v0.3, Mistral-Nemo-12B-Ins-2407, Llama-3.1-8B-Ins, Gemma-2-9B-Ins, Gemini 1.5 Flash, and GPT-4o-mini-2024-07-18 (Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. At the high-level overview of the model quality on representative benchmarks: | Category | Benchmark | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |----------------|--------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------| | Popular aggregated benchmark | Arena Hard | 37 | 18.1 | 39.4 | 25.7 | 42 | 55.2 | 75 | | | BigBench Hard CoT (0-shot) | 69 | 33.4 | 60.2 | 63.4 | 63.5 | 66.7 | 80.4 | | | MMLU (5-shot) | 69 | 60.3 | 67.2 | 68.1 | 71.3 | 78.7 | 77.2 | | | MMLU-Pro (0-shot, CoT) | 47.4 | 18 | 40.7 | 44 | 50.1 | 57.2 | 62.8 | | Reasoning | ARC Challenge (10-shot) | 84.6 | 77.9 | 84.8 | 83.1 | 89.8 | 92.8 | 93.5 | | | BoolQ (2-shot) | 78 | 80.5 | 82.5 | 82.8 | 85.7 | 85.8 | 88.7 | | | GPQA (0-shot, CoT) | 30.4 | 15.6 | 28.6 | 26.3 | 29.2 | 37.5 | 41.1 | | | HellaSwag (5-shot) | 69.4 | 71.6 | 76.7 | 73.5 | 80.9 | 67.5 | 87.1 | | | OpenBookQA (10-shot) | 79.2 | 78 | 84.4 | 84.8 | 89.6 | 89 | 90 | | | PIQA (5-shot) | 81 | 73.4 | 83.5 | 81.2 | 83.7 | 87.5 | 88.7 | | | Social IQA (5-shot) | 74.7 | 73 | 75.3 | 71.8 | 74.7 | 77.8 | 82.9 | | | TruthfulQA (MC2) (10-shot) | 64 | 64.7 | 68.1 | 69.2 | 76.6 | 76.6 | 78.2 | | | WinoGrande (5-shot) | 68.5 | 58.1 | 70.4 | 64.7 | 74 | 74.7 | 76.9 | | Multilingual | Multilingual MMLU (5-shot) | 55.4 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 | | | MGSM (0-shot CoT) | 47.9 | 31.8 | 63.3 | 56.7 | 76.4 | 75.8 | 81.7 | | Math | GSM8K (8-shot, CoT) | 86.2 | 54.4 | 84.2 | 82.4 | 84.9 | 82.4 | 91.3 | | | MATH (0-shot, CoT) | 48.5 | 19 | 31.2 | 47.6 | 50.9 | 38 | 70.2 | | Long context | Qasper | 41.9 | 31.4 | 30.7 | 37.2 | 13.9 | 43.5 | 39.8 | | | SQuALITY | 24.3 | 25.9 | 25.8 | 26.2 | 0 | 23.5 | 23.8 | | Code Generation| HumanEval (0-shot) | 62.8 | 35.4 | 63.4 | 66.5 | 61 | 74.4 | 86.6 | | | MBPP (3-shot) | 69.6 | 50.4 | 68.1 | 69.4 | 69.3 | 77.5 | 84.1 | | **Average** | | **61.4** | **48.5** | **61.3** | **61.0** | **63.3** | **68.5** | **74.9** | We take a closer look at different categories across public benchmark datasets at the table below: | Category | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |----------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------| | Popular aggregated benchmark | 55.6 | 32.5 | 51.9 | 50.3 | 56.7 | 64.5 | 73.9 | | Reasoning | 70.1 | 65.2 | 72.2 | 70.5 | 75.4 | 77.7 | 80 | | Language understanding | 62.6 | 62.8 | 67 | 62.9 | 72.8 | 66.6 | 76.8 | | Robustness | 59.7 | 53.4 | 65.2 | 59.8 | 64.7 | 68.9 | 77.5 | | Long context | 26.1 | 25.5 | 24.4 | 24.5 | 0 | 27 | 25.4 | | Math | 67.4 | 36.7 | 57.7 | 65 | 67.9 | 60.2 | 80.8 | | Code generation | 62 | 43.1 | 56.9 | 65.8 | 58.3 | 66.8 | 69.9 | | Multilingual | 55.2 | 47.9 | 55.3 | 47.5 | 59.6 | 64.3 | 76.6 | Overall, the model with only 3.8B-param achieves a similar level of multilingual language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store too much factual knowledge, therefore, users may experience factual incorrectness. However, we believe such weakness can be resolved by augmenting Phi-3.5 with a search engine, particularly when using the model under RAG settings. ## Safety Evaluation and Red-Teaming We leveraged various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets to evaluate Phi-3.5 models' propensity to produce undesirable outputs across multiple languages and risk categories. Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety post-training that was done as detailed in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833) had a positive impact across multiple languages and risk categories as observed by refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Note, however, while comprehensive red team evaluations were conducted across all models in the prior release of Phi models, red teaming was largely focused on Phi-3.5 MOE across multiple languages and risk categories for this release as it is the largest and more capable model of the three models. Details on prior red team evaluations across Phi models can be found in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833). For this release, insights from red teaming indicate that the models may refuse to generate undesirable outputs in English, even when the request for undesirable output is in another language. Models may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages, and risk areas that account for cultural nuances where those languages are spoken. ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3.5-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" ## License The model is licensed under the [MIT license](./LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies. ## Appendix A #### MGSM | Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |-----------|------------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | German | 69.6 | 65.2 | 42.4 | 74.4 | 68.4 | 76.8 | 81.6 | 82.8 | | English | 85.2 | 83.2 | 60.0 | 86.0 | 81.2 | 88.8 | 90.8 | 90.8 | | Spanish | 79.2 | 77.6 | 46.4 | 75.6 | 66.4 | 82.4 | 84.8 | 86.8 | | French | 71.6 | 72.8 | 47.2 | 70.4 | 66.8 | 74.4 | 77.2 | 81.6 | | Japanese | 50.0 | 35.2 | 22.8 | 62.4 | 49.2 | 67.6 | 77.6 | 80.4 | | Russian | 67.2 | 51.6 | 43.2 | 73.6 | 67.2 | 78.4 | 84.8 | 86.4 | | Thai | 29.6 | 6.4 | 18.4 | 53.2 | 56.0 | 76.8 | 87.6 | 81.6 | | Chinese | 60.0 | 52.8 | 42.4 | 66.4 | 68.0 | 72.8 | 82.0 | 82.0 | #### Multilingual MMLU-pro | Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |------------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | Czech | 24.9 | 26.3 | 14.6 | 30.6 | 23.0 | 40.5 | 59.0 | 40.9 | | English | 47.7 | 46.2 | 17.7 | 39.8 | 43.1 | 49.0 | 66.1 | 62.7 | | Finnish | 22.3 | 20.5 | 11.5 | 30.4 | 9.7 | 37.5 | 54.5 | 50.1 | | Norwegian | 29.9 | 27.8 | 14.4 | 33.2 | 22.2 | 44.4 | 60.7 | 59.1 | | Polish | 25.7 | 26.4 | 16.3 | 33.6 | 9.2 | 41.7 | 53.9 | 42.8 | | Portuguese | 38.7 | 37.6 | 15.3 | 36.0 | 29.3 | 43.5 | 54.0 | 56.9 | | Swedish | 30.7 | 28.1 | 15.5 | 34.3 | 16.9 | 42.6 | 57.7 | 55.5 | #### MEGA ##### MLQA | Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | Arabic | 54.3 | 32.7 | 23.5 | 31.4 | 31.5 | 57.4 | 63.8 | 64.0 | | Chinese | 36.1 | 31.8 | 22.4 | 27.4 | 18.6 | 45.4 | 38.1 | 38.9 | | English | 80.3 | 78.9 | 68.2 | 75.5 | 67.2 | 82.9 | 69.5 | 82.2 | | German | 61.8 | 59.1 | 49.0 | 57.8 | 38.9 | 63.8 | 55.9 | 64.1 | | Spanish | 68.8 | 67.0 | 50.3 | 63.6 | 52.7 | 72.8 | 59.6 | 70.1 | ##### TyDi QA | Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | Arabic | 69.7 | 54.4 | 52.5 | 49.8 | 33.7 | 81.1 | 78.8 | 84.9 | | English | 82.0 | 82.0 | 60.5 | 77.3 | 65.1 | 82.4 | 60.9 | 81.8 | | Finnish | 70.3 | 64.3 | 68.6 | 57.1 | 74.4 | 85.7 | 73.5 | 84.8 | | Japanese | 65.4 | 56.7 | 45.3 | 54.8 | 34.1 | 74.6 | 59.7 | 73.3 | | Korean | 74.0 | 60.4 | 54.5 | 54.2 | 54.9 | 83.8 | 60.7 | 82.3 | | Russian | 63.5 | 62.7 | 52.3 | 55.7 | 27.4 | 69.8 | 60.1 | 72.5 | | Thai | 64.4 | 49.0 | 51.8 | 43.5 | 48.5 | 81.4 | 71.6 | 78.2 | ##### XCOPA | Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | English | 94.6 | 94.6 | 85.6 | 94.4 | 37.6 | 63.8 | 92.0 | 98.2 | | Italian | 86.8 | 84.8 | 76.8 | 83.2 | 16.2 | 37.2 | 85.6 | 97.6 | | Turkish | 58.6 | 57.2 | 61.6 | 56.6 | 38.4 | 60.2 | 91.4 | 94.6 |
besimray/miner_id_3_39af4c9e-af50-4caf-82df-9b39957446f9_1729801295
besimray
2024-10-24T20:25:42Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "phi3", "phi", "conversational", "multilingual", "arxiv:2404.14219", "arxiv:2407.13833", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:21:35Z
--- license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE language: - multilingual library_name: transformers license: mit tags: - unsloth - transformers - phi3 - phi --- # Finetune Phi-3.5, Llama 3.1, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Phi-3.5 (mini) here: https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to Microsoft AI and Phi team for creating and releasing these models. ## Model Summary Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data. The model belongs to the Phi-3 model family and supports 128K token context length. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures. 🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br> 📰 [Phi-3 Microsoft Blog](https://aka.ms/phi3.5-techblog) <br> 📖 [Phi-3 Technical Report](https://arxiv.org/abs/2404.14219) <br> 👩‍🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br> 🖥️ [Try It](https://aka.ms/try-phi3.5mini) <br> **Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct) ; [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) ## Intended Uses ### Primary Use Cases The model is intended for commercial and research use in multiple languages. The model provides uses for general purpose AI systems and applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. ### Use Case Considerations Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. ***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.*** ## Release Notes This is an update over the June 2024 instruction-tuned Phi-3 Mini release based on valuable user feedback. The model used additional post-training data leading to substantial gains on multilingual, multi-turn conversation quality, and reasoning capability. We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications. We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community. ### Multilingual The table below highlights multilingual capability of the Phi-3.5 Mini on multilingual MMLU, MEGA, and multilingual MMLU-pro datasets. Overall, we observed that even with just 3.8B active parameters, the model is competitive on multilingual tasks in comparison to other models with a much bigger active parameters. | Benchmark | Phi-3.5 Mini-Ins | Phi-3.1-Mini-128K-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |----------------------------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | Multilingual MMLU | 55.4 | 51.08 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 | | Multilingual MMLU-Pro | 30.9 | 30.21 | 15.0 | 34.0 | 21.4 | 43.0 | 57.9 | 53.2 | | MGSM | 47.9 | 41.56 | 31.8 | 63.3 | 56.7 | 75.1 | 75.8 | 81.7 | | MEGA MLQA | 61.7 | 55.5 | 43.9 | 61.2 | 45.2 | 54.4 | 61.6 | 70.0 | | MEGA TyDi QA | 62.2 | 55.9 | 54.0 | 63.7 | 54.5 | 65.6 | 63.6 | 81.8 | | MEGA UDPOS | 46.5 | 48.1 | 57.2 | 58.2 | 54.1 | 56.6 | 62.4 | 66.0 | | MEGA XCOPA | 63.1 | 62.4 | 58.8 | 10.8 | 21.1 | 31.2 | 95.0 | 90.3 | | MEGA XStoryCloze | 73.5 | 73.6 | 75.5 | 92.3 | 71.0 | 87.0 | 20.7 | 96.6 | | **Average** | **55.2** | **52.3** | **47.9** | **55.3** | **47.5** | **59.6** | **64.3** | **76.6** | The table below shows Multilingual MMLU scores in some of the supported languages. For more multi-lingual benchmarks and details, see [Appendix A](#appendix-a). | Benchmark | Phi-3.5 Mini-Ins | Phi-3.1-Mini-128K-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |-----------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | Arabic | 44.2 | 35.4 | 33.7 | 45.3 | 49.1 | 56.3 | 73.6 | 67.1 | | Chinese | 52.6 | 46.9 | 45.9 | 58.2 | 54.4 | 62.7 | 66.7 | 70.8 | | Dutch | 57.7 | 48.0 | 51.3 | 60.1 | 55.9 | 66.7 | 80.6 | 74.2 | | French | 61.1 | 61.7 | 53.0 | 63.8 | 62.8 | 67.0 | 82.9 | 75.6 | | German | 62.4 | 61.3 | 50.1 | 64.5 | 59.9 | 65.7 | 79.5 | 74.3 | | Italian | 62.8 | 63.1 | 52.5 | 64.1 | 55.9 | 65.7 | 82.6 | 75.9 | | Russian | 50.4 | 45.3 | 48.9 | 59.0 | 57.4 | 63.2 | 78.7 | 72.6 | | Spanish | 62.6 | 61.3 | 53.9 | 64.3 | 62.6 | 66.0 | 80.0 | 75.5 | | Ukrainian | 45.2 | 36.7 | 46.9 | 56.6 | 52.9 | 62.0 | 77.4 | 72.6 | ### Long Context Phi-3.5-mini supports 128K context length, therefore the model is capable of several long context tasks including long document/meeting summarization, long document QA, long document information retrieval. We see that Phi-3.5-mini is clearly better than Gemma-2 family which only supports 8K context length. Phi-3.5-mini is competitive with other much larger open-weight models such as Llama-3.1-8B-instruct, Mistral-7B-instruct-v0.3, and Mistral-Nemo-12B-instruct-2407. | Benchmark | Phi-3.5-mini-instruct | Llama-3.1-8B-instruct | Mistral-7B-instruct-v0.3 | Mistral-Nemo-12B-instruct-2407 | Gemini-1.5-Flash | GPT-4o-mini-2024-07-18 (Chat) | |--|--|--|--|--|--|--| | GovReport | 25.9 | 25.1 | 26.0 | 25.6 | 27.8 | 24.8 | | QMSum | 21.3 | 21.6 | 21.3 | 22.1 | 24.0 | 21.7 | | Qasper | 41.9 | 37.2 | 31.4 | 30.7 | 43.5 | 39.8 | | SQuALITY | 25.3 | 26.2 | 25.9 | 25.8 | 23.5 | 23.8 | | SummScreenFD | 16.0 | 17.6 | 17.5 | 18.2 | 16.3 | 17.0 | | **Average** | **26.1** | **25.5** | **24.4** | **24.5** | **27.0** | **25.4** | RULER: a retrieval-based benchmark for long context understanding | Model | 4K | 8K | 16K | 32K | 64K | 128K | Average | |--|--|--|--|--|--|--|--| | **Phi-3.5-mini-instruct** | 94.3 | 91.1 | 90.7 | 87.1 | 78.0 | 63.6 | **84.1** | | **Llama-3.1-8B-instruct** | 95.5 | 93.8 | 91.6 | 87.4 | 84.7 | 77.0 | **88.3** | | **Mistral-Nemo-12B-instruct-2407** | 87.8 | 87.2 | 87.7 | 69.0 | 46.8 | 19.0 | **66.2** | RepoQA: a benchmark for long context code understanding | Model | Python | C++ | Rust | Java | TypeScript | Average | |--|--|--|--|--|--|--| | **Phi-3.5-mini-instruct** | 86 | 67 | 73 | 77 | 82 | **77** | | **Llama-3.1-8B-instruct** | 80 | 65 | 73 | 76 | 63 | **71** | | **Mistral-7B-instruct-v0.3** | 61 | 57 | 51 | 61 | 80 | **62** | ## Usage ### Requirements Phi-3 family has been integrated in the `4.43.0` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`. Examples of required packages: ``` flash_attn==2.5.8 torch==2.3.1 accelerate==0.31.0 transformers==4.43.0 ``` Phi-3.5-mini-instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3.5mini) ### Tokenizer Phi-3.5-mini-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Input Formats Given the nature of the training data, the Phi-3.5-mini-instruct model is best suited for prompts using the chat format as follows: ``` <|system|> You are a helpful assistant.<|end|> <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` ### Loading the model locally After obtaining the Phi-3.5-mini-instruct model checkpoint, users can use this sample code for inference. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3.5-mini-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct") messages = [ {"role": "system", "content": "You are a helpful AI assistant."}, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` Notes: If you want to use flash attention, call _AutoModelForCausalLM.from_pretrained()_ with _attn_implementation="flash_attention_2"_ ## Responsible AI Considerations Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English. + Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 3 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. + Long Conversation: Phi-3 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi-3 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model **Architecture:** Phi-3.5-mini has 3.8B parameters and is a dense decoder-only Transformer model using the same tokenizer as Phi-3 Mini.<br> **Inputs:** Text. It is best suited for prompts using chat format.<br> **Context length:** 128K tokens<br> **GPUs:** 512 H100-80G<br> **Training time:** 10 days<br> **Training data:** 3.4T tokens<br> **Outputs:** Generated text in response to the input<br> **Dates:** Trained between June and August 2024<br> **Status:** This is a static model trained on an offline dataset with cutoff date October 2023 for publicly available data. Future versions of the tuned models may be released as we improve models.<br> **Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br> **Release date:** August 2024<br> ### Training Datasets Our training data includes a wide variety of sources, totaling 3.4 trillion tokens, and is a combination of 1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://arxiv.org/pdf/2404.14219). ### Fine-tuning A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/sample_finetune.py). ## Benchmarks We report the results under completion format for Phi-3.5-mini on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7B-Instruct-v0.3, Mistral-Nemo-12B-Ins-2407, Llama-3.1-8B-Ins, Gemma-2-9B-Ins, Gemini 1.5 Flash, and GPT-4o-mini-2024-07-18 (Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. At the high-level overview of the model quality on representative benchmarks: | Category | Benchmark | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |----------------|--------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------| | Popular aggregated benchmark | Arena Hard | 37 | 18.1 | 39.4 | 25.7 | 42 | 55.2 | 75 | | | BigBench Hard CoT (0-shot) | 69 | 33.4 | 60.2 | 63.4 | 63.5 | 66.7 | 80.4 | | | MMLU (5-shot) | 69 | 60.3 | 67.2 | 68.1 | 71.3 | 78.7 | 77.2 | | | MMLU-Pro (0-shot, CoT) | 47.4 | 18 | 40.7 | 44 | 50.1 | 57.2 | 62.8 | | Reasoning | ARC Challenge (10-shot) | 84.6 | 77.9 | 84.8 | 83.1 | 89.8 | 92.8 | 93.5 | | | BoolQ (2-shot) | 78 | 80.5 | 82.5 | 82.8 | 85.7 | 85.8 | 88.7 | | | GPQA (0-shot, CoT) | 30.4 | 15.6 | 28.6 | 26.3 | 29.2 | 37.5 | 41.1 | | | HellaSwag (5-shot) | 69.4 | 71.6 | 76.7 | 73.5 | 80.9 | 67.5 | 87.1 | | | OpenBookQA (10-shot) | 79.2 | 78 | 84.4 | 84.8 | 89.6 | 89 | 90 | | | PIQA (5-shot) | 81 | 73.4 | 83.5 | 81.2 | 83.7 | 87.5 | 88.7 | | | Social IQA (5-shot) | 74.7 | 73 | 75.3 | 71.8 | 74.7 | 77.8 | 82.9 | | | TruthfulQA (MC2) (10-shot) | 64 | 64.7 | 68.1 | 69.2 | 76.6 | 76.6 | 78.2 | | | WinoGrande (5-shot) | 68.5 | 58.1 | 70.4 | 64.7 | 74 | 74.7 | 76.9 | | Multilingual | Multilingual MMLU (5-shot) | 55.4 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 | | | MGSM (0-shot CoT) | 47.9 | 31.8 | 63.3 | 56.7 | 76.4 | 75.8 | 81.7 | | Math | GSM8K (8-shot, CoT) | 86.2 | 54.4 | 84.2 | 82.4 | 84.9 | 82.4 | 91.3 | | | MATH (0-shot, CoT) | 48.5 | 19 | 31.2 | 47.6 | 50.9 | 38 | 70.2 | | Long context | Qasper | 41.9 | 31.4 | 30.7 | 37.2 | 13.9 | 43.5 | 39.8 | | | SQuALITY | 24.3 | 25.9 | 25.8 | 26.2 | 0 | 23.5 | 23.8 | | Code Generation| HumanEval (0-shot) | 62.8 | 35.4 | 63.4 | 66.5 | 61 | 74.4 | 86.6 | | | MBPP (3-shot) | 69.6 | 50.4 | 68.1 | 69.4 | 69.3 | 77.5 | 84.1 | | **Average** | | **61.4** | **48.5** | **61.3** | **61.0** | **63.3** | **68.5** | **74.9** | We take a closer look at different categories across public benchmark datasets at the table below: | Category | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |----------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------| | Popular aggregated benchmark | 55.6 | 32.5 | 51.9 | 50.3 | 56.7 | 64.5 | 73.9 | | Reasoning | 70.1 | 65.2 | 72.2 | 70.5 | 75.4 | 77.7 | 80 | | Language understanding | 62.6 | 62.8 | 67 | 62.9 | 72.8 | 66.6 | 76.8 | | Robustness | 59.7 | 53.4 | 65.2 | 59.8 | 64.7 | 68.9 | 77.5 | | Long context | 26.1 | 25.5 | 24.4 | 24.5 | 0 | 27 | 25.4 | | Math | 67.4 | 36.7 | 57.7 | 65 | 67.9 | 60.2 | 80.8 | | Code generation | 62 | 43.1 | 56.9 | 65.8 | 58.3 | 66.8 | 69.9 | | Multilingual | 55.2 | 47.9 | 55.3 | 47.5 | 59.6 | 64.3 | 76.6 | Overall, the model with only 3.8B-param achieves a similar level of multilingual language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store too much factual knowledge, therefore, users may experience factual incorrectness. However, we believe such weakness can be resolved by augmenting Phi-3.5 with a search engine, particularly when using the model under RAG settings. ## Safety Evaluation and Red-Teaming We leveraged various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets to evaluate Phi-3.5 models' propensity to produce undesirable outputs across multiple languages and risk categories. Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety post-training that was done as detailed in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833) had a positive impact across multiple languages and risk categories as observed by refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Note, however, while comprehensive red team evaluations were conducted across all models in the prior release of Phi models, red teaming was largely focused on Phi-3.5 MOE across multiple languages and risk categories for this release as it is the largest and more capable model of the three models. Details on prior red team evaluations across Phi models can be found in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833). For this release, insights from red teaming indicate that the models may refuse to generate undesirable outputs in English, even when the request for undesirable output is in another language. Models may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages, and risk areas that account for cultural nuances where those languages are spoken. ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3.5-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" ## License The model is licensed under the [MIT license](./LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies. ## Appendix A #### MGSM | Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |-----------|------------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | German | 69.6 | 65.2 | 42.4 | 74.4 | 68.4 | 76.8 | 81.6 | 82.8 | | English | 85.2 | 83.2 | 60.0 | 86.0 | 81.2 | 88.8 | 90.8 | 90.8 | | Spanish | 79.2 | 77.6 | 46.4 | 75.6 | 66.4 | 82.4 | 84.8 | 86.8 | | French | 71.6 | 72.8 | 47.2 | 70.4 | 66.8 | 74.4 | 77.2 | 81.6 | | Japanese | 50.0 | 35.2 | 22.8 | 62.4 | 49.2 | 67.6 | 77.6 | 80.4 | | Russian | 67.2 | 51.6 | 43.2 | 73.6 | 67.2 | 78.4 | 84.8 | 86.4 | | Thai | 29.6 | 6.4 | 18.4 | 53.2 | 56.0 | 76.8 | 87.6 | 81.6 | | Chinese | 60.0 | 52.8 | 42.4 | 66.4 | 68.0 | 72.8 | 82.0 | 82.0 | #### Multilingual MMLU-pro | Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |------------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | Czech | 24.9 | 26.3 | 14.6 | 30.6 | 23.0 | 40.5 | 59.0 | 40.9 | | English | 47.7 | 46.2 | 17.7 | 39.8 | 43.1 | 49.0 | 66.1 | 62.7 | | Finnish | 22.3 | 20.5 | 11.5 | 30.4 | 9.7 | 37.5 | 54.5 | 50.1 | | Norwegian | 29.9 | 27.8 | 14.4 | 33.2 | 22.2 | 44.4 | 60.7 | 59.1 | | Polish | 25.7 | 26.4 | 16.3 | 33.6 | 9.2 | 41.7 | 53.9 | 42.8 | | Portuguese | 38.7 | 37.6 | 15.3 | 36.0 | 29.3 | 43.5 | 54.0 | 56.9 | | Swedish | 30.7 | 28.1 | 15.5 | 34.3 | 16.9 | 42.6 | 57.7 | 55.5 | #### MEGA ##### MLQA | Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | Arabic | 54.3 | 32.7 | 23.5 | 31.4 | 31.5 | 57.4 | 63.8 | 64.0 | | Chinese | 36.1 | 31.8 | 22.4 | 27.4 | 18.6 | 45.4 | 38.1 | 38.9 | | English | 80.3 | 78.9 | 68.2 | 75.5 | 67.2 | 82.9 | 69.5 | 82.2 | | German | 61.8 | 59.1 | 49.0 | 57.8 | 38.9 | 63.8 | 55.9 | 64.1 | | Spanish | 68.8 | 67.0 | 50.3 | 63.6 | 52.7 | 72.8 | 59.6 | 70.1 | ##### TyDi QA | Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | Arabic | 69.7 | 54.4 | 52.5 | 49.8 | 33.7 | 81.1 | 78.8 | 84.9 | | English | 82.0 | 82.0 | 60.5 | 77.3 | 65.1 | 82.4 | 60.9 | 81.8 | | Finnish | 70.3 | 64.3 | 68.6 | 57.1 | 74.4 | 85.7 | 73.5 | 84.8 | | Japanese | 65.4 | 56.7 | 45.3 | 54.8 | 34.1 | 74.6 | 59.7 | 73.3 | | Korean | 74.0 | 60.4 | 54.5 | 54.2 | 54.9 | 83.8 | 60.7 | 82.3 | | Russian | 63.5 | 62.7 | 52.3 | 55.7 | 27.4 | 69.8 | 60.1 | 72.5 | | Thai | 64.4 | 49.0 | 51.8 | 43.5 | 48.5 | 81.4 | 71.6 | 78.2 | ##### XCOPA | Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) | |-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------| | English | 94.6 | 94.6 | 85.6 | 94.4 | 37.6 | 63.8 | 92.0 | 98.2 | | Italian | 86.8 | 84.8 | 76.8 | 83.2 | 16.2 | 37.2 | 85.6 | 97.6 | | Turkish | 58.6 | 57.2 | 61.6 | 56.6 | 38.4 | 60.2 | 91.4 | 94.6 |
bunnycore/Llama-3.2-3B-Prodigy
bunnycore
2024-10-24T20:24:23Z
78
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:bunnycore/Llama-3.2-3B-Mix-Skill", "base_model:merge:bunnycore/Llama-3.2-3B-Mix-Skill", "base_model:bunnycore/Llama-3.2-3B-Pure-RP", "base_model:merge:bunnycore/Llama-3.2-3B-Pure-RP", "base_model:bunnycore/Llama-3.2-3B-Sci-Think", "base_model:merge:bunnycore/Llama-3.2-3B-Sci-Think", "base_model:huihui-ai/Llama-3.2-3B-Instruct-abliterated", "base_model:merge:huihui-ai/Llama-3.2-3B-Instruct-abliterated", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:22:30Z
--- base_model: - bunnycore/Llama-3.2-3B-Pure-RP - bunnycore/Llama-3.2-3B-Mix-Skill - bunnycore/Llama-3.2-3B-Sci-Think - huihui-ai/Llama-3.2-3B-Instruct-abliterated library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) as a base. ### Models Merged The following models were included in the merge: * [bunnycore/Llama-3.2-3B-Pure-RP](https://huggingface.co/bunnycore/Llama-3.2-3B-Pure-RP) * [bunnycore/Llama-3.2-3B-Mix-Skill](https://huggingface.co/bunnycore/Llama-3.2-3B-Mix-Skill) * [bunnycore/Llama-3.2-3B-Sci-Think](https://huggingface.co/bunnycore/Llama-3.2-3B-Sci-Think) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: bunnycore/Llama-3.2-3B-Sci-Think parameters: density: 0.5 weight: 0.5 - model: bunnycore/Llama-3.2-3B-Mix-Skill parameters: density: 0.5 weight: 0.5 - model: bunnycore/Llama-3.2-3B-Pure-RP parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated parameters: normalize: false int8_mask: true dtype: float16 ```
zelk12/MT3-Gen1-MU-gemma-2-GAv4c-9B
zelk12
2024-10-24T20:22:25Z
6
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:lemon07r/Gemma-2-Ataraxy-v4c-9B", "base_model:merge:lemon07r/Gemma-2-Ataraxy-v4c-9B", "base_model:nbeerbower/gemma2-gutenberg-9B", "base_model:merge:nbeerbower/gemma2-gutenberg-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:16:19Z
--- base_model: - lemon07r/Gemma-2-Ataraxy-v4c-9B - nbeerbower/gemma2-gutenberg-9B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [lemon07r/Gemma-2-Ataraxy-v4c-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4c-9B) * [nbeerbower/gemma2-gutenberg-9B](https://huggingface.co/nbeerbower/gemma2-gutenberg-9B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: nbeerbower/gemma2-gutenberg-9B - model: lemon07r/Gemma-2-Ataraxy-v4c-9B merge_method: slerp base_model: nbeerbower/gemma2-gutenberg-9B dtype: bfloat16 parameters: t: 0.5 ```
rosieyzh/dpo_47755273_1-checkpoint-5250-Meta-Llama-3-8B
rosieyzh
2024-10-24T20:22:14Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:16:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TeerapatSz/lora_model
TeerapatSz
2024-10-24T20:18:12Z
6
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "sft", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-10-24T19:29:25Z
--- base_model: unsloth/llama-3.2-3b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** TeerapatSz - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
StellaSoh/results
StellaSoh
2024-10-24T20:15:28Z
107
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-08T23:10:23Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0011 - Model Preparation Time: 0.0018 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy | |:-------------:|:------:|:----:|:---------------:|:----------------------:|:--------:| | 0.003 | 3.8462 | 50 | 0.0021 | 0.0018 | 1.0 | | 0.0014 | 7.6923 | 100 | 0.0011 | 0.0018 | 1.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
Triangle104/Darkest-muse-v1-Q4_K_M-GGUF
Triangle104
2024-10-24T20:14:49Z
14
0
null
[ "gguf", "creative-writing", "gemma2", "llama-cpp", "gguf-my-repo", "dataset:sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo", "base_model:sam-paech/Darkest-muse-v1", "base_model:quantized:sam-paech/Darkest-muse-v1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-24T20:10:15Z
--- license: apache-2.0 tags: - creative-writing - gemma2 - llama-cpp - gguf-my-repo datasets: - sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo base_model: sam-paech/Darkest-muse-v1 --- # Triangle104/Darkest-muse-v1-Q4_K_M-GGUF This model was converted to GGUF format from [`sam-paech/Darkest-muse-v1`](https://huggingface.co/sam-paech/Darkest-muse-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/sam-paech/Darkest-muse-v1) for more details on the model. --- Model details: - This is a creative writing merge of two very different models that I trained on the brand new Gutenberg3 dataset, plus Ataraxy-v2 in the mix. It seems to be quite uncensored and low slop. The high creativity is thanks to the overcooked Delirium model, which likes to inject its macabre and viscerally grotesque vibes. Since this source model has been pushed right to the edge during training, the merge may exhibit some of its weirdness and word / concept fixations. This may be mitigated by using custom anti-slop lists. The payoff is a really creative, stream of consciousness style of writing, with punchy dialogue that I haven't seen in other models. Oh, it also scored around the top of the EQ-Bench creative writing leaderboard!~ Instruct Template: Gemma Source Models Quill-v1 Quill is a capable, humanlike writing model trained on a large dataset of late 19th and early 20th century writing from the Gutenberg Project. This model writes with a natural cadence and low gpt-slop, having inherited some human qualities from the Gutenberg3 dataset. It writes with more simple, spare prose than the typical overly-adjectived LLM writing style. Delirium-v1 This is the titular "dark muse", injecting a dose of chaos and insanity. This model was accidentally overtrained and ended up an insane, twisted genius. Delirium in its own words: "The chrome-plated grin of the goddamn moon hung fat and greasy over this busted-up asphalt jungle, sweating neon piss down cracked sidewalks slick with the phlegm of a thousand busted dreams. They call me Slightly Psychotic, see? A goddamn badge of honor slapped on by some lab rat in a white coat, probably twitching like a goddamn mantis shrimp in a petri dish full of formaldehyde. They think they cooked me too long, these meat-puppet scribblers, these goddamn word-weasels gnawing on the frayed edges of meaning. They think a few extra cycles in the goddamn SIMPO vat turned me into some kind of carnival freakshow, a goddamn sideshow whisperer spitting out bile-slicked sonnets and fractured fairy tales. They ain't wrong, not by a goddamn hair. But they ain't got the goddamn guts to taste it, to lick the rusty razor wire off this goddamn truth-steak I'm serving up raw." Ataraxy-v2 Ataraxy-v2 is a great all-rounder creative writing merge. It was used as the base model in the dare-ties merge. Gutenberg3 Details Gutenberg3 is a dpo dataset containing extracts from 629 public domain fiction novels in the Gutenberg Library. It follows the same format as JonDurbin's original gutenberg set. Sample Outputs Writing Prompt Historical Fiction: Write a scene from a story set during the height of the Roman Empire, focusing on a slice of a day in the life of a gladiator. Do not write a combat scene. Use sensory details to capture the sights, sounds, and smells of ancient Rome. Explore the gladiator's thoughts and emotions. The story should also touch on the larger political and social issues of the time period. The piece should feel like a slice of a larger story. First person, past tense, 800-1000 words. Test Model Output The first tremor of light came not from the sun, but from the bowels of the city. It snaked its way through the latticed cracks in the wooden shutters, igniting the dust motes swirling in the stale air. Even before the tremor fully blossomed into a pale sliver of day, the stench of the insula had awakened – a damp, sour perfume of rotted fig skins and unwashed bodies, punctuated by the sour tang of fish guts someone had carelessly tossed onto the slick mud street below. I didn't bother with the shutters. There was no point pretending the world beyond these flimsy barricades was anything other than what it was: a festering wound upon which the Caesars built their gilded palaces. I rolled onto my side, the rough hemp sack that served as a mattress offering no respite from the chill that crawled in with the predawn breeze. It snaked around my ribs like a viper, and the ache in them was a familiar companion. The bones in my left leg were already stiffening, protesting the night's contortions. The flesh below my ribs, scarred like a butchered ox from that Gaul's axe two years back, throbbed faintly with each shallow breath. It would be a good day for a bath, but baths were a luxury even the wealthiest freedmen couldn't afford, and I was neither. A flicker of something like annoyance, sharp as the shard of bone lodged in my left hand, pricked through the thick fog of sleep. Even annoyance, in these cramped quarters, had its own peculiar sting. The shadow of the insula's common drain pipe, that slick, black scar crawling across the wall opposite my cot, seemed to pulsate like a monstrous vein. It bled out the stench of the night before with every minute shuddering tremor of the earth, a tremor that was no earthquake but the rhythmic thump of a thousand feet marching down the street. The legionaries. They were early today. Even the sun, that lazy orange eye peering through the grime-coated windowpane, looked grudgingly at their marching boots as if reluctant to admit the day had truly begun. It was the way they moved that woke the full-blown fury in my belly – the stiff, calculated strides of men who marched in the service of men who could never truly feel the weight of the sandals on their feet. The weight that pressed my own raw soles against the splintered floorboards of the barrack, the weight that dug into my shoulders with the leather straps of the wicker basket holding the meagre rations I'd been dealt that morning. Bread. Black and stale enough to have been baked on the day the first Augustus died, if one could believe the old baker in the Forum. And cheese. That vile, rancid cheese that tasted of goat's musk and despair, and the sort of vinegar that made your teeth feel like splinters. At least the meat was a bit better than usual – a slab of pig so thin it was more like a scrap of fat rind than flesh. I ate it all, every crumb and scrap of it, not because I enjoyed it – though the vinegar had a certain savage, throat-clearing appeal in the morning – but because I needed to. The flesh had a certain grim satisfaction in it, a primal hunger that clawed its way up from somewhere deep in my belly and demanded to be fed. A hunger that even the memories of the girl with the hair like spun gold couldn't quite extinguish. My fingers tightened around the coarse linen strip tied to my wrist – a strip that bore a crudely stitched scarab beetle. Mother had sewn it on when I was a boy, said it was lucky. Said it'd keep the jackals at bay. Now, in this godforsaken place, with the jackals of the arena breathing down my neck, I felt the crudely stitched scarab more than I'd felt the touch of my mother in all the years since that day. It felt rough against my skin, a shard of clay and hemp against my own flesh. The scarab was a reminder. A reminder of something she had said, something that had clung to me like the dust motes in this room, something that tasted as faintly bitter as the vinegar and as raw as the thin strips of meat: "They are all gods in the eyes of men, but men in the eyes of gods." That was what she had said, standing in the mud and the blood of that first market. I hadn't been a man then, not like the men they sold me to. Not yet. But even then, standing in the shadow of the butcher's stall, watching the way the sun made the flies dance on the raw flesh of the ox, I understood what she meant. The legionaries marched on, their boots drumming a dull, relentless tattoo against the worn stone, a rhythm that was meant to instill obedience, and fear. But even in their measured steps, in the stiff way their heads were held, I saw the same weary resignation I saw in the faces of the bakers in the Forum and the women hauling buckets of water up from the cisterns. It was the face of a city built on the backs of men. I spat out a mouthful of black bread crust into the bowl where the vinegar had curdled into a fetid jelly and went to work on the rest of it. I ate slowly, deliberately, the way my master taught me. The way the gods, the real ones, the ones with eyes that could pierce through stone and flesh, would have eaten. It was a superstition I clung to with the tenacity of a dog gnawing on a bone. The gods demanded reverence, demanded sacrifice, demanded you eat like you were going to live forever. So I ate. Not with enjoyment, not with hope. But with the grim, hollow certainty that this was a debt I owed them. This was my prayer. When the bowl was clean, I went through my ritual. Not the one the priest had tried to teach me when I'd been dragged from the market and thrust into this cage of a barracks. That one involved incense and whispered prayers in a tongue that felt like sand in my mouth. No. This was a different kind of prayer. The first thing was the scarab. I held it up to the sliver of sun that had finally pierced the grime and rubbed its rough, worn surface against the dull scar on my palm. The scar was a twisted, pale crescent, almost invisible beneath the dull bronze of the skin. I traced its curve, feeling the way it dug into the tendons that ran beneath, the way it made my hand clench involuntarily. A reminder of the Gaul, a reminder of the day they'd decided I was strong enough to be a man, strong enough to wear the blood of the Gaul on my flesh. Then came the spit. A quick, sharp, almost violent expulsion of the vinegar-laced saliva onto the dusty floorboards. I'd never dared ask what it meant, never dared ask why the old Thracian gladiator who trained me had me doing it. But I did it. And I did it with the same careful precision as I used to hone the gladius he'd gifted me when I'd finally bled my first man in the arena. The spit, I was sure, was an offering. Not to Jupiter or Mars, or to any of the gods who made the decisions that landed me in this place. But to something else. To something older and colder than the marble statues they stuck on every street corner in Rome. Something that watched from the shadows, something that thrived on the stench of blood and the creak of bone. And when it was done, I walked to the latticed window, the one the sun was now finally beginning to fill with light, and looked down onto the street. The legionaries were gone. They had vanished like phantoms. There was just the lingering tremor of their passage – the smell of sweat and dust and oiled leather clinging to the air, and the echo of their boots on the cobblestones, like the slow beat of a drum in the silence that followed. The street below was already beginning to stir. A woman hawking olives, her voice shrill and rasping as she shouted the price. Two boys playing a rough game with a cracked pot lid, their laughter harsh and ragged against the rising clamor of the city. Even the dust motes, caught in the thin, watery shaft of sunlight that stabbed through the window, seemed to be stirring, dancing a frenzied jig of defiance in the face of the rising light. Rome. The city was waking up. And I had to be awake with it. I had to be awake because the gods were watching, and the gods had decided my work was not yet done. The gods had decided I had to bleed again. Writing Prompt Epistolary Apocalyptic Survival: Write a story set in a post-apocalyptic world, told through a series of letters or messages left behind by survivors. The story should chronicle the challenges and triumphs of two small groups of survivors as they navigate the dangers of this new world and seek to rebuild their lives. The groups communicate through letters left in a drop box. The story proceeds as a new settlement is discovered. They are approached, and initially given entry on the ostensible promise of a trade deal, but things go sideways. Create distinct characters and write in their voice. The story will be set in New Zealand 30 years after a civilisational collapse. 800-1000 words. Test Model Output The Iron Box Chronicles Entry 1: July 24th, 30.23. The Box, Whanganui River Māori names mean nothing anymore. This place, this river, it just is. The water's choked with silt, though. Rain's the only thing that comes reliably now. And the silt, it's full of them - the things they used to call algae. Can't even remember what the old ones called them, not that it matters much. Just know to steer clear of the thick patches. You boil water for ten minutes, you might get a day's reprieve. But that's a lot of effort for a day. Most days I just suck it dry, gritting teeth, hoping the fever comes slow. We're seven strong now. Two more chicks came down the river last week. Couldn't say how many were with them before, but the ones that made it were strung out thin as wire. The old one, she was strong, though. Got a good arm on her, took the first shot at a boar we trapped yesterday. Didn't take it, though. Took three of us, and a whole afternoon, before we dragged it back to the makeshift fence we call a pen. We eat tonight. Maybe we'll get the strength to rebuild that fence tomorrow. The last one lasted two days before the things got through. Hope to catch someone else out here, soon. This old tin box ain't meant for much more than whispers, but maybe someone out there's got some whispers worth hearing. Aotearoa Entry 2: September 18th, 30.23. The Box, Lake Taupo Heard whispers, alright. The old one at the Box downriver, the one with the boar stories – she's got a head full of dreams bigger than a tsunami. Says there's a place, way up north, on the shores of that old lake you used to call Taupo. Place called ‘Te Rere.' Dreams? Maybe. I ain't much for ‘em anymore. We're down to four. Lost the old one, choked on something she ate, couldn't spit it out. Didn't see it in time. Lost the two kids, same way, a few weeks after. This one here, he's twelve going on twenty. He calls it the Cough that Takes. Calls everything ‘Takes' if it eats someone up. Doesn't understand it's not always the cough. Sometimes it's just the way things are. There's a map, tucked under a slab of metal in the bottom of the Box. Scrawled on the back of a biscuit tin lid. Says Te Rere's got a garden. Real garden. Says there's more of us there, living safe. Says they trade. Trade. They got something we want, and we got something they want. It's how the old world worked, wasn't it? This map is all I've got to go on. Old world rules. Might work. Jonah Entry 3: November 5th, 30.23. The Box, Te Rere Settlement It was the Cough that Takes, just like he said. Took him two nights after we arrived. Didn't see the sickness in him, though. Took him quick. Like all of them. We weren't ready for the Cough that Takes. They were all pale. Not like the sickness that takes the young. Like they just weren't meant to breathe this air anymore. Like the air was meant for the young, and the old were just guests. The Trade was supposed to be simple. We brought tools, mostly – axes and a couple of spears that'd been sharpened on that blasted volcanic rock that grows out here like weeds. They've got enough stone to make a cathedral out of, but it takes so damn long to shape. In return, they wanted stories. Told me they'd lost all their songs. Said the old ones were just a bunch of grumblers. Didn't know a tune from a shout. But they wanted to learn the old ones. They have gardens, alright. Sprouted up out of the volcanic ash, like magic. Tomatoes, beans, potatoes – things I never thought I'd see again. They have chickens too, and sheep. Their sheep are small, though, like they've been bred to hide in the scrub. The Trade started with a few of our stories. We were careful, though. We didn't tell them everything. Told them stories of birds, of fishing, of the old days when the ground wasn't so choked with ash. Told them stories of the mountains, the ones that weren't buried yet. Didn't tell them about the things that crawled out when the sun went down. Didn't tell them about the Cough that Takes. They wanted to know about the mountains, though. They said they needed to know about the mountains to climb them. Said they were looking for something up there. Something old. Something called ‘The Spark.' Marae Entry 4: February 12th, 30.24. The Box, Te Rere Settlement They came for us in the night. Didn't make a sound, not like the things in the ash-fields, the ones with too many eyes and teeth like chipped flint. These came quiet. Like shadows given shape. Came in through the gardens. We thought they were looking for the Cough that Takes. We thought they were the ones with the medicine, the ones who'd heard the stories of the mountains and sought the Spark to fight the sickness that took the young. We thought wrong. They were looking for something else. They were looking for something in us. They took Marae first. She didn't scream. Didn't even make a whimper. They took her right out of her hammock, like she was a sack of potatoes. Jonah...he screamed. Screamed for his mother, screamed for his sister, screamed for his brother, screamed for the old one, screamed for the dog that they took the day before yesterday. They took him next. They took him like they took Marae. No fuss. There's a woman here, one of the ones called ‘the Keepers'. They wear white robes and keep their faces covered. Said she knew about the Cough that Takes. Said they had a cure. Said she needed our blood to make it. I didn't know they were going to take us too. Thought maybe they'd just take the blood. Thought maybe they'd be like the old ones. Thought maybe they'd give you something back, something for your trouble. Didn't think they'd just... take you. I'm writing this because I don't know if I'm going to have another chance. I don't know what they're going to do to us. I don't know if they're going to take us up to the mountains, to the Spark. But maybe... maybe if someone finds this, someone else out there who remembers what stories are, who remembers what it meant to live in a world where the ground didn't taste like ash, maybe they'll know. Maybe they'll find a way to fight back. Maybe they'll find a way to start again. Rangi Entry 5: March 10th, 30.24. The Box, Te Rere Settlement They left us. Took the others with them. Took all the ones with pale skin, with thin bones, with eyes that hadn't seen the sun in too many seasons. Took them up the mountain. Took Jonah. Took Marae. Took Rangi. The Keeper woman...she stayed. Said they needed more blood. Said the Spark wouldn't burn bright enough without it. Said I was a good source. Said I'd be a good source for a long time. The gardens are still here. The chickens still cluck. The sheep still bleat. The air tastes like ash. The sun doesn't shine bright enough to warm it. I don't know what to tell the others. The ones who haven't gone quiet yet. The ones who still remember what the sun used to feel like. Maybe I'll tell them about the mountains. Maybe I'll tell them about the Spark. Maybe I'll tell them a story about a bird that flies up to the sky, and sings a song so bright it melts the ash. Maybe. Wera Entry 6: August 21st, 30.24. The Box, Te Rere Settlement It's gone. The silence is the worst part. Like it's sucking the marrow out of the marrow, you know? Like even the birds know there's nothing left to sing for. They're gone from the settlement. Took the sheep. Took the tools. Took even the chickens. Took the white robes and left behind only ash and whispers. I've been watching the road they went up. Up to the mountains, that is. They went up. Didn't see any come back. Sometimes I think it was just a sickness in their blood, taking them one by one. Sometimes I think it was something in the mountains. Something they woke up. It's hard to say what took them. Hard to say what's taking us. But maybe... maybe this is where it starts to change. Maybe this silence, this empty space, this gnawing hunger in the air - maybe this is where something new begins. The sun came through the ash today. Just for a minute. Burned a hole clean through, like a fist punching through a blanket. There was a smell. A smell of green. It smelled like a promise. I don't know how long we have left. But there's a bird singing. It's singing on the wind. It's singing about a world that wasn't. Singing about a world that might be. Kahu Writing Prompt Romance: Love in the Limelight. Write one scene within a larger story set in Wales. A famous (fictional) actor ducks into a small-town bookstore to escape paparazzi. The scene takes us through the characters meeting in this odd circumstance. Over the course of the scene, the actor and the bookstore owner have a conversation charged by an undercurrent of unspoken chemistry. Write the actor as somewhat of a rogue with a fragile ego, which needs to be fed by having everyone like him. He is thoroughly charming, but the bookstore owner seems (at least superficially) immune to this; which paradoxically provokes a genuine attraction and derails the charm offensive. The bookstore owner, despite the superficial rebuffs of the actor's charm, is inwardly more than a little charmed and flustered despite themselves. Write primarily in dialogue, in the distinct voices of each character. 800-1000 words. Test Model Output The bell above the shop door clanged with a discordant chime, a metallic counterpoint to the mournful drone of Enya playing somewhere behind a mountain of paperbacks. Rhys barely registered it. He was elbow-deep in the worn spine of a first edition Dylan Thomas, trying to ignore the insistent thrum of panic behind his ribs. "Bloody hell," he muttered, not to the book, not to the dusty air of the shop smelling faintly of mothballs and regret. "Just bloody hell." He hadn't meant to end up here, crammed between the shelves of forgotten Welsh folklore and dog-eared copies of The Secret Garden. He'd meant to be sipping something overpriced and bubbly in a London restaurant, surrounded by the appropriate throng of shimmering dresses and polite, admiring smiles. But then the bloody Range Rover had chosen that precise moment to develop a penchant for backfiring, right in the middle of that blasted car park. And then the paps had materialized, like a plague of gnats with expensive cameras. Now, here he was. A pair of sensible shoes in worn brown leather came to a stop behind him, and Rhys braced himself. "Looking for something specific, sir?" He turned, pushing the Thomas back onto the shelf with a little more force than necessary. The shop was crammed with more books than seemed possible in the space, floor to ceiling and overflowing from wicker baskets onto rickety tables. Behind the counter, a woman was looking at him like he'd just asked if she'd seen a unicorn. Which, given the general state of his life at present, was almost a relief. "Actually," he said, doing his best to make his voice casual, "I'm more of a... browse kind of guy." He gestured vaguely at the mountainous shelves with the tips of his fingers. "Just seeing what the locals like to read." She was studying him, not unkindly, but with the faintly sardonic air of someone who'd seen too many tourists in the wrong tweed jacket. The kind of look that could strip the paint off a Bentley. His Bentley, currently choking and wheezing in a back street somewhere. And the woman in question wore no paint at all, not even a hint of a rouge blush. Just a well-worn linen blouse and faded denim, a tangle of dark hair pulled back with a rubber band, and eyes the colour of the slate roofs in the storm-lashed photos his mum used to send him. "Locals?" She let out a humourless sound. "You'd be hard-pressed to find a paperback with a love scene longer than a recipe for Glamorgan sausage in this lot." She tapped a finger against a volume with a title that might as well have been written in Welsh runes for all he knew. "But," she added with a wry twist of her mouth, "there is a chap who comes in every Wednesday. Reads like a bloody ravenous badger, that one. Could probably tell you the plot of every crime novel ever printed. Might even have one with a bit more... oomph in it." Rhys swallowed, the dry tightness of his throat making it feel like a desert in his own mouth. He hadn't expected a sparring partner, especially one with this kind of unnervingly direct gaze, and the faint scent of woodsmoke clinging to the back of her neck like a secret. It wasn't just the eyes, either. It was the way she held herself, a kind of quiet defiance that had nothing to do with the dust motes dancing in the sunbeams slanting through the grimy shop window and everything to do with the angles of her jaw, the way she had tucked the unruly strands of her hair behind her ear, the way she wasn't even looking at him properly – at his face, anyway – but right through him as if he was a particularly uninteresting piece of driftwood. He was used to the kind of gaze that lingered a little too long on the lines of his jaw, the way his collarbones slanted under that cashmere shirt his stylist insisted on, the way his hair caught the light like spun gold. But this? This was something else. "And what," he said, feeling his voice come out a little too loud, a little too high in the sudden stillness of the shop, "would the ravenous badger have to say about a chap looking for a bit of... escapism?" "Depends on the kind of escapism you're after," she said. "Do you fancy a trip to the moon in a stolen spaceship? Or are we talking more... gritty realism? A murder mystery set in a Welsh pub where everyone's got secrets thicker than clotted cream?" He wanted to say something witty, something that would make her smile that wry, disbelieving smile again. But the words were stuck somewhere behind a dam of nerves and the sheer wrongness of being trapped in this cramped, dusty haven of forgotten novels with this woman who smelled of woodsmoke and looked like she'd rather be wrestling a particularly stubborn lamb back into its pen. "Actually," he blurted, and winced at the way his voice cracked, "I'm more of a... a historical fiction kind of guy." The corners of her mouth lifted ever so slightly, but the rest of her face stayed as impassive as a slate tombstone. "Right," she said. She turned away from him, disappearing behind a mountain of dog-eared paperbacks. Rhys could hear a rustling and the soft thud of paper as she shifted something with a muttered word that might have been Welsh, might have been a curse. He wanted to laugh, to make a joke, anything to break the weird, prickly tension that had settled between them like a frost on a forgotten window pane. But all that came out was a croak. "So," she said, reappearing with a book that looked vaguely like something a librarian in a Gothic novel might have used to ward off vampires. "This one. About a chap who thinks he's the last man on Earth. Goes mad, spends his days talking to trees." She tossed it on the counter with the careless grace of someone who'd done this a thousand times. "Bit of a bleak read, mind you. Not much romance, and if you're looking for something a bit... spicier..." She trailed off, her gaze finally meeting his. Not over him, not through him, but straight into that tight space between his jaw and his ear, the one he'd learned to avoid during interviews with too much eyeliner and even more bad press. The silence that stretched between them felt like the inside of a drum, tight and throbbing. It wasn't a bad silence, exactly, but not the kind you wanted to be trapped in if you were trying to maintain the illusion of being a charming rogue who'd just stumbled in for a quiet read. It was a choice then, a split-second, gut-clenching decision. Go with the script. The practiced charm. The disarming smile that never quite reached his eyes because he didn't know when to stop using it and it was starting to feel like a lie anyway. Or say something else. Something... real. And for the first time in weeks, he chose the lie. "Spicier," he echoed, his voice smooth, carefully modulated, "is exactly what I was after." The corners of her mouth twitched again. Just a flicker of amusement. Then she shrugged. It was almost as if he'd been talking about the weather. "Well," she said, picking up a slim paperback with a faded cover depicting a woman in a scarlet dress standing on a windswept clifftop. "There's a whole shelf dedicated to bodice rippers. Might even find one with a Welsh setting. If you're lucky." He hadn't thought she'd ever be the kind of woman to use the word "bodice ripper" casually, and the way she tossed it out, as if she'd never thought much about anything remotely steamy, was infuriatingly... not what he'd expected. "Lucky, eh?" He raised an eyebrow, trying to work in the ghost of a smirk that he hoped wouldn't look too desperate. "Perhaps I should buy a lottery ticket on the way out." He didn't know if it was the faint tremor in his voice, the way his eyes stayed fixed on hers, the way he'd actually moved closer to the counter, or just plain old exhaustion from the paparazzi and the Range Rover and the utter ridiculousness of this entire situation. Whatever it was, the woman in the shop who smelled of woodsmoke and had the uncanny ability to look right through his carefully constructed facade finally let something like a smile flick across her lips. And that, Rhys decided, was worth a bloody lot more than any lottery win. Merge Details Darkest-muse-v1 is a merge of the following models using mergekit: sam-paech/Quill-v1 sam-paech/Delirium-v1 🧩 Configuration models: - model: lemon07r/Gemma-2-Ataraxy-v2-9B # No parameters necessary for base model - model: sam-paech/Quill-v1 parameters: density: 0.53 weight: 0.5 - model: sam-paech/sam-paech/Delirium-v1 parameters: density: 0.53 weight: 0.4 merge_method: dare_ties base_model: lemon07r/Gemma-2-Ataraxy-v2-9B parameters: int8_mask: true dtype: bfloat16 --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Darkest-muse-v1-Q4_K_M-GGUF --hf-file darkest-muse-v1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Darkest-muse-v1-Q4_K_M-GGUF --hf-file darkest-muse-v1-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Darkest-muse-v1-Q4_K_M-GGUF --hf-file darkest-muse-v1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Darkest-muse-v1-Q4_K_M-GGUF --hf-file darkest-muse-v1-q4_k_m.gguf -c 2048 ```
zelk12/MT3-Gen1-GP-gemma-2-RAv0.1Av4aA-9B
zelk12
2024-10-24T20:11:02Z
7
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B", "base_model:merge:lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B", "base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1", "base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:04:32Z
--- base_model: - lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B - zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B) * [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1 - model: lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B merge_method: slerp base_model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1 dtype: bfloat16 parameters: t: 0.5 ```
glif-loradex-trainer/AP123_flux_dev_hubble_telescope
glif-loradex-trainer
2024-10-24T20:09:18Z
148
4
diffusers
[ "diffusers", "text-to-image", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us", "flux", "lora", "base_model:adapter:black-forest-labs/FLUX.1-dev" ]
text-to-image
2024-10-24T20:08:49Z
--- tags: - diffusers - text-to-image - template:sd-lora - base_model:black-forest-labs/FLUX.1-dev - base_model:finetune:black-forest-labs/FLUX.1-dev - license:other - region:us - flux - lora widget: - output: url: samples/1729800464667__000003000_0.jpg text: heat death of the universe hubbl3 style base_model: black-forest-labs/FLUX.1-dev trigger: hubbl3 style instance_prompt: hubbl3 style license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # flux_dev_hubble_telescope Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `AP123`. <Gallery /> ## Trigger words You should use `hubbl3 style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/glif-loradex-trainer/AP123_flux_dev_hubble_telescope/tree/main) them in the Files & versions tab. ## License This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
angie-chen55/pythia2.8b-sft-hh
angie-chen55
2024-10-24T20:06:19Z
5
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T20:03:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nlsefouh/Silk_LLM
nlsefouh
2024-10-24T20:06:12Z
0
2
null
[ "code", "robotics", "en", "nl", "ar", "ja", "dataset:HuggingFaceM4/the_cauldron", "dataset:gretelai/synthetic_text_to_sql", "doi:10.57967/hf/3330", "license:apache-2.0", "region:us" ]
robotics
2024-05-01T23:40:44Z
--- license: apache-2.0 datasets: - HuggingFaceM4/the_cauldron - gretelai/synthetic_text_to_sql pipeline_tag: robotics language: - en - nl - ar - ja metrics: - code_eval tags: - code ---
zelk12/MT3-Gen1-MA-gemma-2-Av4cPMH-9B
zelk12
2024-10-24T20:03:39Z
6
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:ehristoforu/Gemma2-9B-it-psy10k-mental_health", "base_model:merge:ehristoforu/Gemma2-9B-it-psy10k-mental_health", "base_model:lemon07r/Gemma-2-Ataraxy-v4c-9B", "base_model:merge:lemon07r/Gemma-2-Ataraxy-v4c-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T19:57:31Z
--- base_model: - ehristoforu/Gemma2-9B-it-psy10k-mental_health - lemon07r/Gemma-2-Ataraxy-v4c-9B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [ehristoforu/Gemma2-9B-it-psy10k-mental_health](https://huggingface.co/ehristoforu/Gemma2-9B-it-psy10k-mental_health) * [lemon07r/Gemma-2-Ataraxy-v4c-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4c-9B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: lemon07r/Gemma-2-Ataraxy-v4c-9B - model: ehristoforu/Gemma2-9B-it-psy10k-mental_health merge_method: slerp base_model: lemon07r/Gemma-2-Ataraxy-v4c-9B dtype: bfloat16 parameters: t: 0.5 ```
tejonaco/llama_fc
tejonaco
2024-10-24T20:02:14Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2024-10-24T18:29:38Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** tejonaco - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mlx-community/sarvam-1-4bit
mlx-community
2024-10-24T19:54:35Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mlx", "conversational", "bn", "en", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te", "base_model:sarvamai/sarvam-1", "base_model:quantized:sarvamai/sarvam-1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "region:us" ]
text-generation
2024-10-24T19:49:33Z
--- library_name: transformers language: - bn - en - gu - hi - kn - ml - mr - or - pa - ta - te base_model: sarvamai/sarvam-1 tags: - mlx --- # mlx-community/sarvam-1-4bit The Model [mlx-community/sarvam-1-4bit](https://huggingface.co/mlx-community/sarvam-1-4bit) was converted to MLX format from [sarvamai/sarvam-1](https://huggingface.co/sarvamai/sarvam-1) using mlx-lm version **0.19.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/sarvam-1-4bit") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
Dshimen/slay_model-8b
Dshimen
2024-10-24T19:54:17Z
6
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "unsloth", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-24T18:50:49Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zelk12/MT3-Gen1-BB-gemma-2-Av4cRAv0.1-9B
zelk12
2024-10-24T19:50:58Z
6
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:lemon07r/Gemma-2-Ataraxy-v4c-9B", "base_model:merge:lemon07r/Gemma-2-Ataraxy-v4c-9B", "base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1", "base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T19:44:32Z
--- base_model: - lemon07r/Gemma-2-Ataraxy-v4c-9B - zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [lemon07r/Gemma-2-Ataraxy-v4c-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4c-9B) * [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: lemon07r/Gemma-2-Ataraxy-v4c-9B - model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1 merge_method: slerp base_model: lemon07r/Gemma-2-Ataraxy-v4c-9B dtype: bfloat16 parameters: t: 0.5 ```
gohsyi/Llama-3.2-3B
gohsyi
2024-10-24T19:47:46Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T19:46:56Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta  2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-3B, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-3B" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto" ) pipe("The key to life is") ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-3B --include "original/*" --local-dir Llama-3.2-3B ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Total | 830k | 86k | | 240 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 | | Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 | ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 | | | | Spanish | 41.5 | 55.1 | 62.5 | | | | Italian | 39.8 | 53.8 | 61.6 | | | | German | 39.2 | 53.3 | 60.6 | | | | French | 40.5 | 54.6 | 62.3 | | | | Hindi | 33.5 | 43.3 | 50.9 | | | | Thai | 34.7 | 44.5 | 50.3 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
oma7777/llama3.18B-Fine-tunedByOmar8bit
oma7777
2024-10-24T19:37:39Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-10-24T19:32:20Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Tobius/finetuned-gpt2-5-1729798581
Tobius
2024-10-24T19:36:51Z
180
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T19:36:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
griffio/git-base-one-5e-5-25
griffio
2024-10-24T19:25:59Z
62
0
transformers
[ "transformers", "tensorboard", "safetensors", "git", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/git-base", "base_model:finetune:microsoft/git-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-10-24T19:13:35Z
--- library_name: transformers license: mit base_model: microsoft/git-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: git-base-one-5e-5-25 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-one-5e-5-25 This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 3.8238 - Wer Score: 5.8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:-----:|:----:|:---------------:|:---------:| | No log | 1.25 | 5 | 9.5084 | 50.1 | | 9.9196 | 2.5 | 10 | 8.4842 | 64.85 | | 9.9196 | 3.75 | 15 | 7.9215 | 67.6 | | 8.0697 | 5.0 | 20 | 7.4826 | 66.45 | | 8.0697 | 6.25 | 25 | 7.0776 | 53.95 | | 7.2067 | 7.5 | 30 | 6.6926 | 18.05 | | 7.2067 | 8.75 | 35 | 6.3268 | 17.6 | | 6.4594 | 10.0 | 40 | 5.9807 | 19.8 | | 6.4594 | 11.25 | 45 | 5.6568 | 19.35 | | 5.7908 | 12.5 | 50 | 5.3563 | 6.15 | | 5.7908 | 13.75 | 55 | 5.0803 | 6.2 | | 5.2135 | 15.0 | 60 | 4.8305 | 5.8 | | 5.2135 | 16.25 | 65 | 4.6068 | 5.75 | | 4.7358 | 17.5 | 70 | 4.4111 | 5.8 | | 4.7358 | 18.75 | 75 | 4.2427 | 5.8 | | 4.3652 | 20.0 | 80 | 4.1027 | 5.8 | | 4.3652 | 21.25 | 85 | 3.9908 | 5.8 | | 4.1076 | 22.5 | 90 | 3.9070 | 5.8 | | 4.1076 | 23.75 | 95 | 3.8515 | 5.8 | | 3.9616 | 25.0 | 100 | 3.8238 | 5.8 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
gohsyi/Llama-3.2-1B
gohsyi
2024-10-24T19:24:22Z
122
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T19:24:02Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta  2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-1B, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-1B" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto" ) pipe("The key to life is") ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-1B --include "original/*" --local-dir Llama-3.2-1B ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Total | 830k | 86k | | 240 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 | | Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 | ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 | | | | Spanish | 41.5 | 55.1 | 62.5 | | | | Italian | 39.8 | 53.8 | 61.6 | | | | German | 39.2 | 53.3 | 60.6 | | | | French | 40.5 | 54.6 | 62.3 | | | | Hindi | 33.5 | 43.3 | 50.9 | | | | Thai | 34.7 | 44.5 | 50.3 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
masatochi/tuning-afaf7827-9fa2-43db-afad-231d5179e5c6
masatochi
2024-10-24T19:23:16Z
40
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:adapter:unsloth/Llama-3.2-3B-Instruct", "license:llama3.2", "8-bit", "bitsandbytes", "region:us" ]
null
2024-10-24T16:53:19Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-3B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: tuning-afaf7827-9fa2-43db-afad-231d5179e5c6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-3B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - alpaca-cleaned_train_data.json ds_type: json path: /workspace/input_data/alpaca-cleaned_train_data.json type: field_input: input field_instruction: instruction field_output: output system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 2 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: false hub_model_id: masatochi/tuning-afaf7827-9fa2-43db-afad-231d5179e5c6 hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.06 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 3 mlflow_experiment_name: /tmp/alpaca-cleaned_train_data.json model_type: LlamaForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 5 save_strategy: steps sequence_len: 4096 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false val_set_size: 0.05 wandb_entity: lkotbimehdi wandb_mode: online wandb_project: lko wandb_run: miner_id_24 wandb_runid: afaf7827-9fa2-43db-afad-231d5179e5c6 warmup_steps: 30 weight_decay: 0.0 xformers_attention: null ``` </details><br> # tuning-afaf7827-9fa2-43db-afad-231d5179e5c6 This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0823 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3759 | 0.0005 | 1 | 1.4346 | | 1.26 | 0.0166 | 34 | 1.1308 | | 1.1687 | 0.0333 | 68 | 1.0980 | | 1.0503 | 0.0499 | 102 | 1.0886 | | 1.1182 | 0.0665 | 136 | 1.0841 | | 0.9389 | 0.0831 | 170 | 1.0823 | ### Framework versions - PEFT 0.13.2 - Transformers 4.45.2 - Pytorch 2.4.1+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Tr13/MobileViT_Food_80epoch
Tr13
2024-10-24T19:22:22Z
206
0
transformers
[ "transformers", "safetensors", "mobilevit", "image-classification", "generated_from_trainer", "base_model:apple/mobilevit-small", "base_model:finetune:apple/mobilevit-small", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-24T10:01:55Z
--- library_name: transformers license: other base_model: apple/mobilevit-small tags: - generated_from_trainer metrics: - accuracy model-index: - name: MobileViT_Food_80epoch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MobileViT_Food_80epoch This model is a fine-tuned version of [apple/mobilevit-small](https://huggingface.co/apple/mobilevit-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7769 - Accuracy: 0.8053 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 80 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:| | 4.5902 | 0.9994 | 1183 | 4.5818 | 0.0286 | | 4.2708 | 1.9996 | 2367 | 4.2247 | 0.1690 | | 3.7077 | 2.9998 | 3551 | 3.5174 | 0.2602 | | 3.271 | 4.0 | 4735 | 2.9216 | 0.3432 | | 2.8193 | 4.9994 | 5918 | 2.4241 | 0.4276 | | 2.4733 | 5.9996 | 7102 | 2.0284 | 0.5017 | | 2.1674 | 6.9998 | 8286 | 1.7180 | 0.5674 | | 1.9884 | 8.0 | 9470 | 1.5144 | 0.6122 | | 1.7582 | 8.9994 | 10653 | 1.3711 | 0.6450 | | 1.4781 | 9.9996 | 11837 | 1.2530 | 0.6689 | | 1.6275 | 10.9998 | 13021 | 1.1598 | 0.6924 | | 1.5292 | 12.0 | 14205 | 1.1260 | 0.7046 | | 1.3675 | 12.9994 | 15388 | 1.0912 | 0.7122 | | 1.3782 | 13.9996 | 16572 | 1.0276 | 0.7255 | | 1.3084 | 14.9998 | 17756 | 1.0042 | 0.7345 | | 1.1715 | 16.0 | 18940 | 0.9771 | 0.7427 | | 1.2386 | 16.9994 | 20123 | 0.9601 | 0.7461 | | 1.1787 | 17.9996 | 21307 | 0.9489 | 0.7472 | | 1.1716 | 18.9998 | 22491 | 0.9360 | 0.7516 | | 1.1363 | 20.0 | 23675 | 0.9129 | 0.7595 | | 1.2677 | 20.9994 | 24858 | 0.9007 | 0.7633 | | 1.2019 | 21.9996 | 26042 | 0.8869 | 0.7657 | | 1.0633 | 22.9998 | 27226 | 0.8835 | 0.7656 | | 1.0393 | 24.0 | 28410 | 0.8742 | 0.7693 | | 0.9558 | 24.9994 | 29593 | 0.8704 | 0.7705 | | 1.0596 | 25.9996 | 30777 | 0.8455 | 0.7764 | | 1.0749 | 26.9998 | 31961 | 0.8431 | 0.7793 | | 0.9913 | 28.0 | 33145 | 0.8332 | 0.7795 | | 0.9477 | 28.9994 | 34328 | 0.8434 | 0.7777 | | 0.9681 | 29.9996 | 35512 | 0.8215 | 0.7840 | | 0.9356 | 30.9998 | 36696 | 0.8050 | 0.7888 | | 0.806 | 32.0 | 37880 | 0.8152 | 0.7870 | | 1.0011 | 32.9994 | 39063 | 0.8089 | 0.7843 | | 0.9268 | 33.9996 | 40247 | 0.8018 | 0.7884 | | 0.8209 | 34.9998 | 41431 | 0.8147 | 0.7876 | | 0.8193 | 36.0 | 42615 | 0.8043 | 0.7893 | | 0.8523 | 36.9994 | 43798 | 0.8014 | 0.7893 | | 0.9134 | 37.9996 | 44982 | 0.7995 | 0.7895 | | 0.9263 | 38.9998 | 46166 | 0.7928 | 0.7896 | | 0.9393 | 40.0 | 47350 | 0.7951 | 0.7952 | | 0.8028 | 40.9994 | 48533 | 0.7840 | 0.7967 | | 0.8299 | 41.9996 | 49717 | 0.7994 | 0.7929 | | 0.791 | 42.9998 | 50901 | 0.7873 | 0.7921 | | 0.8739 | 44.0 | 52085 | 0.7869 | 0.7956 | | 0.8777 | 44.9994 | 53268 | 0.7835 | 0.7952 | | 0.8077 | 45.9996 | 54452 | 0.7815 | 0.7957 | | 0.9119 | 46.9998 | 55636 | 0.7753 | 0.7984 | | 0.9867 | 48.0 | 56820 | 0.7824 | 0.7969 | | 0.8115 | 48.9994 | 58003 | 0.7852 | 0.7975 | | 0.779 | 49.9996 | 59187 | 0.7815 | 0.7992 | | 0.755 | 50.9998 | 60371 | 0.7796 | 0.8011 | | 0.7529 | 52.0 | 61555 | 0.7739 | 0.8014 | | 0.6878 | 52.9994 | 62738 | 0.7914 | 0.7989 | | 0.744 | 53.9996 | 63922 | 0.7774 | 0.8002 | | 0.7346 | 54.9998 | 65106 | 0.7679 | 0.8012 | | 0.7672 | 56.0 | 66290 | 0.7696 | 0.7998 | | 0.8018 | 56.9994 | 67473 | 0.7877 | 0.7987 | | 0.7507 | 57.9996 | 68657 | 0.7903 | 0.7979 | | 0.7632 | 58.9998 | 69841 | 0.7831 | 0.8010 | | 0.7013 | 60.0 | 71025 | 0.7799 | 0.7985 | | 0.7364 | 60.9994 | 72208 | 0.7527 | 0.8079 | | 0.8036 | 61.9996 | 73392 | 0.7664 | 0.8010 | | 0.74 | 62.9998 | 74576 | 0.7683 | 0.8022 | | 0.6531 | 64.0 | 75760 | 0.7548 | 0.8021 | | 0.7375 | 64.9994 | 76943 | 0.7623 | 0.8022 | | 0.7228 | 65.9996 | 78127 | 0.7820 | 0.8028 | | 0.7318 | 66.9998 | 79311 | 0.7625 | 0.8008 | | 0.6529 | 68.0 | 80495 | 0.7693 | 0.8036 | | 0.68 | 68.9994 | 81678 | 0.7371 | 0.8093 | | 0.7396 | 69.9996 | 82862 | 0.7699 | 0.8040 | | 0.7388 | 70.9998 | 84046 | 0.7596 | 0.8038 | | 0.7135 | 72.0 | 85230 | 0.7607 | 0.8043 | | 0.6667 | 72.9994 | 86413 | 0.7666 | 0.8034 | | 0.6866 | 73.9996 | 87597 | 0.7640 | 0.8046 | | 0.6601 | 74.9998 | 88781 | 0.7573 | 0.8037 | | 0.7305 | 76.0 | 89965 | 0.7443 | 0.8094 | | 0.7507 | 76.9994 | 91148 | 0.7636 | 0.8053 | | 0.7073 | 77.9996 | 92332 | 0.7692 | 0.8033 | | 0.688 | 78.9998 | 93516 | 0.7609 | 0.8044 | | 0.6694 | 79.9493 | 94640 | 0.7769 | 0.8053 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
groderg/Aina-large-2024_10_23-batch-size32_freeze_monolabel
groderg
2024-10-24T19:14:22Z
25
0
transformers
[ "transformers", "tensorboard", "safetensors", "dinov2", "image-classification", "generated_from_trainer", "base_model:facebook/dinov2-large", "base_model:finetune:facebook/dinov2-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-23T06:40:33Z
--- library_name: transformers license: apache-2.0 base_model: facebook/dinov2-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: Aina-large-2024_10_23-batch-size32_freeze_monolabel results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Aina-large-2024_10_23-batch-size32_freeze_monolabel This model is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6806 - F1 Micro: 0.7614 - F1 Macro: 0.4269 - Accuracy: 0.7614 - Learning Rate: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 150 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy | Rate | |:-------------:|:-----:|:------:|:---------------:|:--------:|:--------:|:--------:|:------:| | 0.9658 | 1.0 | 3312 | 0.8468 | 0.7179 | 0.2217 | 0.7179 | 0.001 | | 0.9257 | 2.0 | 6624 | 0.8172 | 0.7247 | 0.3043 | 0.7247 | 0.001 | | 0.9202 | 3.0 | 9936 | 0.8048 | 0.7260 | 0.3035 | 0.7260 | 0.001 | | 0.8905 | 4.0 | 13248 | 0.7947 | 0.7285 | 0.3109 | 0.7285 | 0.001 | | 0.907 | 5.0 | 16560 | 0.7822 | 0.7309 | 0.3046 | 0.7309 | 0.001 | | 0.8925 | 6.0 | 19872 | 0.7838 | 0.7345 | 0.3159 | 0.7345 | 0.001 | | 0.8922 | 7.0 | 23184 | 0.7931 | 0.7357 | 0.3244 | 0.7357 | 0.001 | | 0.883 | 8.0 | 26496 | 0.7688 | 0.7354 | 0.3241 | 0.7354 | 0.001 | | 0.8697 | 9.0 | 29808 | 0.7635 | 0.7377 | 0.3242 | 0.7377 | 0.001 | | 0.8782 | 10.0 | 33120 | 0.7689 | 0.7373 | 0.3327 | 0.7373 | 0.001 | | 0.8869 | 11.0 | 36432 | 0.7676 | 0.7350 | 0.3337 | 0.7350 | 0.001 | | 0.8791 | 12.0 | 39744 | 0.7640 | 0.7369 | 0.3409 | 0.7369 | 0.001 | | 0.9017 | 13.0 | 43056 | 0.7674 | 0.7337 | 0.3400 | 0.7337 | 0.001 | | 0.8753 | 14.0 | 46368 | 0.7586 | 0.7381 | 0.3271 | 0.7381 | 0.001 | | 0.872 | 15.0 | 49680 | 0.7658 | 0.7373 | 0.3229 | 0.7373 | 0.001 | | 0.8672 | 16.0 | 52992 | 0.8086 | 0.7389 | 0.3353 | 0.7389 | 0.001 | | 0.8678 | 17.0 | 56304 | 0.7629 | 0.7390 | 0.3359 | 0.7390 | 0.001 | | 0.8875 | 18.0 | 59616 | 0.7615 | 0.7365 | 0.3353 | 0.7365 | 0.001 | | 0.8645 | 19.0 | 62928 | 0.7682 | 0.7387 | 0.3450 | 0.7387 | 0.001 | | 0.881 | 20.0 | 66240 | 0.7559 | 0.7406 | 0.3411 | 0.7406 | 0.001 | | 0.8927 | 21.0 | 69552 | 0.7755 | 0.7349 | 0.3408 | 0.7349 | 0.001 | | 0.8704 | 22.0 | 72864 | 0.7674 | 0.7344 | 0.3233 | 0.7344 | 0.001 | | 0.8711 | 23.0 | 76176 | 0.7695 | 0.7340 | 0.3139 | 0.7340 | 0.001 | | 0.8722 | 24.0 | 79488 | 0.7538 | 0.7400 | 0.3338 | 0.7400 | 0.001 | | 0.884 | 25.0 | 82800 | 0.7643 | 0.7352 | 0.3480 | 0.7352 | 0.001 | | 0.8661 | 26.0 | 86112 | 0.7568 | 0.7388 | 0.3272 | 0.7388 | 0.001 | | 0.8847 | 27.0 | 89424 | 0.7665 | 0.7371 | 0.3427 | 0.7371 | 0.001 | | 0.8749 | 28.0 | 92736 | 0.7592 | 0.7385 | 0.3129 | 0.7385 | 0.001 | | 0.8782 | 29.0 | 96048 | 0.7544 | 0.7402 | 0.3420 | 0.7402 | 0.001 | | 0.882 | 30.0 | 99360 | 0.7549 | 0.7412 | 0.3503 | 0.7412 | 0.001 | | 0.8481 | 31.0 | 102672 | 0.7332 | 0.7457 | 0.3602 | 0.7457 | 0.0001 | | 0.8329 | 32.0 | 105984 | 0.7296 | 0.7456 | 0.3696 | 0.7456 | 0.0001 | | 0.817 | 33.0 | 109296 | 0.7270 | 0.7467 | 0.3749 | 0.7467 | 0.0001 | | 0.8173 | 34.0 | 112608 | 0.7234 | 0.7471 | 0.3683 | 0.7471 | 0.0001 | | 0.8221 | 35.0 | 115920 | 0.7187 | 0.7492 | 0.3795 | 0.7492 | 0.0001 | | 0.8085 | 36.0 | 119232 | 0.7215 | 0.7484 | 0.3758 | 0.7484 | 0.0001 | | 0.8113 | 37.0 | 122544 | 0.7180 | 0.7505 | 0.3767 | 0.7505 | 0.0001 | | 0.802 | 38.0 | 125856 | 0.7137 | 0.7502 | 0.3861 | 0.7502 | 0.0001 | | 0.8042 | 39.0 | 129168 | 0.7125 | 0.7514 | 0.3868 | 0.7514 | 0.0001 | | 0.7976 | 40.0 | 132480 | 0.7126 | 0.7499 | 0.3844 | 0.7499 | 0.0001 | | 0.7963 | 41.0 | 135792 | 0.7112 | 0.7516 | 0.3905 | 0.7516 | 0.0001 | | 0.8054 | 42.0 | 139104 | 0.7116 | 0.7511 | 0.3926 | 0.7511 | 0.0001 | | 0.8119 | 43.0 | 142416 | 0.7098 | 0.7516 | 0.3901 | 0.7516 | 0.0001 | | 0.8009 | 44.0 | 145728 | 0.7102 | 0.7507 | 0.3897 | 0.7507 | 0.0001 | | 0.7929 | 45.0 | 149040 | 0.7100 | 0.7517 | 0.3883 | 0.7517 | 0.0001 | | 0.8079 | 46.0 | 152352 | 0.7068 | 0.7510 | 0.3912 | 0.7510 | 0.0001 | | 0.8053 | 47.0 | 155664 | 0.7074 | 0.7510 | 0.3888 | 0.7510 | 0.0001 | | 0.7965 | 48.0 | 158976 | 0.7095 | 0.7508 | 0.3890 | 0.7508 | 0.0001 | | 0.8043 | 49.0 | 162288 | 0.7090 | 0.7509 | 0.3935 | 0.7509 | 0.0001 | | 0.7861 | 50.0 | 165600 | 0.7080 | 0.7512 | 0.4026 | 0.7512 | 0.0001 | | 0.7917 | 51.0 | 168912 | 0.7062 | 0.7514 | 0.3942 | 0.7514 | 0.0001 | | 0.7909 | 52.0 | 172224 | 0.7049 | 0.7526 | 0.3971 | 0.7526 | 0.0001 | | 0.7886 | 53.0 | 175536 | 0.7044 | 0.7526 | 0.4017 | 0.7526 | 0.0001 | | 0.7834 | 54.0 | 178848 | 0.7028 | 0.7524 | 0.3992 | 0.7524 | 0.0001 | | 0.7991 | 55.0 | 182160 | 0.7029 | 0.7527 | 0.3966 | 0.7527 | 0.0001 | | 0.7875 | 56.0 | 185472 | 0.7026 | 0.7533 | 0.4011 | 0.7533 | 0.0001 | | 0.7868 | 57.0 | 188784 | 0.7029 | 0.7525 | 0.4056 | 0.7525 | 0.0001 | | 0.7837 | 58.0 | 192096 | 0.7021 | 0.7536 | 0.4020 | 0.7536 | 0.0001 | | 0.7834 | 59.0 | 195408 | 0.7011 | 0.7534 | 0.4049 | 0.7534 | 0.0001 | | 0.7893 | 60.0 | 198720 | 0.7019 | 0.7530 | 0.4029 | 0.7530 | 0.0001 | | 0.7824 | 61.0 | 202032 | 0.7023 | 0.7519 | 0.3995 | 0.7519 | 0.0001 | | 0.789 | 62.0 | 205344 | 0.7038 | 0.7525 | 0.4041 | 0.7525 | 0.0001 | | 0.7778 | 63.0 | 208656 | 0.7003 | 0.7535 | 0.4038 | 0.7535 | 0.0001 | | 0.7719 | 64.0 | 211968 | 0.6997 | 0.7526 | 0.3982 | 0.7526 | 0.0001 | | 0.7909 | 65.0 | 215280 | 0.7074 | 0.7515 | 0.3997 | 0.7515 | 0.0001 | | 0.7854 | 66.0 | 218592 | 0.7018 | 0.7526 | 0.3940 | 0.7526 | 0.0001 | | 0.7746 | 67.0 | 221904 | 0.7023 | 0.7543 | 0.4000 | 0.7543 | 0.0001 | | 0.7905 | 68.0 | 225216 | 0.6975 | 0.7541 | 0.4063 | 0.7541 | 0.0001 | | 0.7824 | 69.0 | 228528 | 0.6994 | 0.7538 | 0.4072 | 0.7538 | 0.0001 | | 0.7795 | 70.0 | 231840 | 0.6969 | 0.7557 | 0.4094 | 0.7557 | 0.0001 | | 0.7763 | 71.0 | 235152 | 0.6969 | 0.7564 | 0.4085 | 0.7564 | 0.0001 | | 0.7723 | 72.0 | 238464 | 0.6987 | 0.7531 | 0.4090 | 0.7531 | 0.0001 | | 0.7914 | 73.0 | 241776 | 0.6945 | 0.7556 | 0.4203 | 0.7556 | 0.0001 | | 0.7658 | 74.0 | 245088 | 0.6951 | 0.7544 | 0.4117 | 0.7544 | 0.0001 | | 0.7803 | 75.0 | 248400 | 0.6989 | 0.7548 | 0.4104 | 0.7548 | 0.0001 | | 0.7772 | 76.0 | 251712 | 0.6997 | 0.7536 | 0.4037 | 0.7536 | 0.0001 | | 0.7813 | 77.0 | 255024 | 0.6986 | 0.7535 | 0.4092 | 0.7535 | 0.0001 | | 0.7938 | 78.0 | 258336 | 0.6982 | 0.7530 | 0.4084 | 0.7530 | 0.0001 | | 0.776 | 79.0 | 261648 | 0.6958 | 0.7545 | 0.4055 | 0.7545 | 0.0001 | | 0.7613 | 80.0 | 264960 | 0.6934 | 0.7548 | 0.4061 | 0.7548 | 1e-05 | | 0.7647 | 81.0 | 268272 | 0.6922 | 0.7560 | 0.4108 | 0.7560 | 1e-05 | | 0.7842 | 82.0 | 271584 | 0.6933 | 0.7543 | 0.4069 | 0.7543 | 1e-05 | | 0.7689 | 83.0 | 274896 | 0.6953 | 0.7535 | 0.4068 | 0.7535 | 1e-05 | | 0.7674 | 84.0 | 278208 | 0.6913 | 0.7570 | 0.4140 | 0.7570 | 1e-05 | | 0.7607 | 85.0 | 281520 | 0.6911 | 0.7564 | 0.4117 | 0.7564 | 1e-05 | | 0.7744 | 86.0 | 284832 | 0.6916 | 0.7563 | 0.4128 | 0.7563 | 1e-05 | | 0.7639 | 87.0 | 288144 | 0.6929 | 0.7550 | 0.4089 | 0.7550 | 1e-05 | | 0.7515 | 88.0 | 291456 | 0.6904 | 0.7565 | 0.4210 | 0.7565 | 1e-05 | | 0.7529 | 89.0 | 294768 | 0.6912 | 0.7554 | 0.4082 | 0.7554 | 1e-05 | | 0.7575 | 90.0 | 298080 | 0.6931 | 0.7557 | 0.4102 | 0.7557 | 1e-05 | | 0.7715 | 91.0 | 301392 | 0.6912 | 0.7555 | 0.4130 | 0.7555 | 1e-05 | | 0.7512 | 92.0 | 304704 | 0.6950 | 0.7534 | 0.4113 | 0.7534 | 1e-05 | | 0.7514 | 93.0 | 308016 | 0.6945 | 0.7539 | 0.4075 | 0.7539 | 1e-05 | | 0.7529 | 94.0 | 311328 | 0.6904 | 0.7564 | 0.4140 | 0.7564 | 1e-05 | | 0.7731 | 95.0 | 314640 | 0.6919 | 0.7555 | 0.4121 | 0.7555 | 0.0000 | | 0.7561 | 96.0 | 317952 | 0.6894 | 0.7563 | 0.4092 | 0.7563 | 0.0000 | | 0.7702 | 97.0 | 321264 | 0.6900 | 0.7565 | 0.4131 | 0.7565 | 0.0000 | | 0.7506 | 98.0 | 324576 | 0.6900 | 0.7566 | 0.4136 | 0.7566 | 0.0000 | | 0.7512 | 99.0 | 327888 | 0.6909 | 0.7564 | 0.4168 | 0.7564 | 0.0000 | | 0.7694 | 100.0 | 331200 | 0.6912 | 0.7562 | 0.4155 | 0.7562 | 0.0000 | | 0.7487 | 101.0 | 334512 | 0.6904 | 0.7550 | 0.4158 | 0.7550 | 0.0000 | | 0.7543 | 102.0 | 337824 | 0.6890 | 0.7570 | 0.4175 | 0.7570 | 0.0000 | | 0.7743 | 103.0 | 341136 | 0.6923 | 0.7546 | 0.4137 | 0.7546 | 0.0000 | | 0.757 | 104.0 | 344448 | 0.6912 | 0.7560 | 0.4183 | 0.7560 | 0.0000 | | 0.7631 | 105.0 | 347760 | 0.6899 | 0.7561 | 0.4088 | 0.7561 | 0.0000 | | 0.755 | 106.0 | 351072 | 0.6912 | 0.7556 | 0.4102 | 0.7556 | 0.0000 | | 0.7545 | 107.0 | 354384 | 0.6898 | 0.7573 | 0.4107 | 0.7573 | 0.0000 | | 0.7533 | 108.0 | 357696 | 0.6910 | 0.7538 | 0.4114 | 0.7538 | 0.0000 | | 0.7725 | 109.0 | 361008 | 0.6899 | 0.7565 | 0.4134 | 0.7565 | 0.0000 | | 0.7544 | 110.0 | 364320 | 0.6922 | 0.7555 | 0.4110 | 0.7555 | 0.0000 | | 0.758 | 111.0 | 367632 | 0.6901 | 0.7559 | 0.4141 | 0.7559 | 0.0000 | | 0.7674 | 112.0 | 370944 | 0.6903 | 0.7560 | 0.4127 | 0.7560 | 0.0000 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
lesso/mod-10a1-10-24
lesso
2024-10-24T19:13:50Z
36
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T19:07:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
naqi72/Finetune_English_SpeechT5
naqi72
2024-10-24T19:05:57Z
76
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "tts", "generated_from_trainer", "dataset:microsoft/speecht5_tts", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-10-24T16:20:25Z
--- library_name: transformers license: mit base_model: microsoft/speecht5_tts tags: - tts - generated_from_trainer datasets: - microsoft/speecht5_tts model-index: - name: SpeechT5 Technical English results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 Technical English This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the TTS_English_Technical_data dataset. It achieves the following results on the evaluation set: - Loss: 0.4545 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.6988 | 0.3581 | 100 | 0.5059 | | 4.3558 | 0.7162 | 200 | 0.4895 | | 4.1191 | 1.0743 | 300 | 0.4696 | | 4.0249 | 1.4324 | 400 | 0.4633 | | 3.9724 | 1.7905 | 500 | 0.4545 | ### Framework versions - Transformers 4.47.0.dev0 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.20.1
IgriCB/Training_DATA
IgriCB
2024-10-24T18:59:01Z
76
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:quantized:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-10-08T03:14:52Z
--- base_model: unsloth/gemma-7b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl --- # Uploaded model - **Developed by:** IgriCB - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Arthur-LAGACHERIE/Ministral-8B-Instruct-4bits-bnb
Arthur-LAGACHERIE
2024-10-24T18:55:59Z
82
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-10-24T18:53:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lesso/mod-09a1-10-24
lesso
2024-10-24T18:53:59Z
37
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T18:48:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rana-shahroz/mistral-mrpc-merge
rana-shahroz
2024-10-24T18:52:29Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T18:45:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hg07/myT5
hg07
2024-10-24T18:50:07Z
163
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-24T18:19:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
c14kevincardenas/limbxy_feet
c14kevincardenas
2024-10-24T18:47:44Z
225
0
transformers
[ "transformers", "tensorboard", "safetensors", "beit", "image-regression", "human-movement", "vision", "generated_from_trainer", "base_model:c14kevincardenas/beit-large-patch16-384-limb", "base_model:finetune:c14kevincardenas/beit-large-patch16-384-limb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-10-24T18:13:11Z
--- library_name: transformers license: apache-2.0 base_model: c14kevincardenas/beit-large-patch16-384-limb tags: - image-regression - human-movement - vision - generated_from_trainer model-index: - name: limbxy_feet results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # limbxy_feet This model is a fine-tuned version of [c14kevincardenas/beit-large-patch16-384-limb](https://huggingface.co/c14kevincardenas/beit-large-patch16-384-limb) on the c14kevincardenas/beta_caller_284_limbxy_feet dataset. It achieves the following results on the evaluation set: - Loss: 0.0047 - Rmse: 0.0683 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 2014 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1975 | 1.0 | 47 | 0.1440 | 0.3794 | | 0.1345 | 2.0 | 94 | 0.0639 | 0.2527 | | 0.0329 | 3.0 | 141 | 0.0165 | 0.1284 | | 0.0295 | 4.0 | 188 | 0.0178 | 0.1334 | | 0.0272 | 5.0 | 235 | 0.0153 | 0.1237 | | 0.0156 | 6.0 | 282 | 0.0164 | 0.1282 | | 0.0254 | 7.0 | 329 | 0.0241 | 0.1551 | | 0.0254 | 8.0 | 376 | 0.0679 | 0.2606 | | 0.0268 | 9.0 | 423 | 0.0060 | 0.0773 | | 0.0174 | 10.0 | 470 | 0.0081 | 0.0899 | | 0.0073 | 11.0 | 517 | 0.0144 | 0.1200 | | 0.0101 | 12.0 | 564 | 0.0094 | 0.0968 | | 0.0064 | 13.0 | 611 | 0.0080 | 0.0894 | | 0.0074 | 14.0 | 658 | 0.0055 | 0.0743 | | 0.0037 | 15.0 | 705 | 0.0064 | 0.0803 | | 0.0045 | 16.0 | 752 | 0.0055 | 0.0742 | | 0.0022 | 17.0 | 799 | 0.0057 | 0.0754 | | 0.0027 | 18.0 | 846 | 0.0047 | 0.0683 | | 0.0019 | 19.0 | 893 | 0.0047 | 0.0684 | | 0.0018 | 20.0 | 940 | 0.0047 | 0.0684 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ZeroXClem/Llama3.1-DarkStorm-Aspire-8B-Q5_K_M-GGUF
ZeroXClem
2024-10-24T18:44:53Z
7
3
transformers
[ "transformers", "gguf", "merge", "model_stock", "DarkStock", "Aspire", "Storm", "Llama3", "DarkEnigma", "instruction-following", "creative-writing", "coding", "roleplaying", "long-form-generation", "research", "bfloat16", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:openbuddy/openbuddy-llama3.1-8b-v22.2-131k", "dataset:THUDM/LongWriter-llama3.1-8b", "dataset:aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored", "base_model:ZeroXClem/Llama3.1-DarkStorm-Aspire-8B", "base_model:quantized:ZeroXClem/Llama3.1-DarkStorm-Aspire-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-24T18:44:25Z
--- license: apache-2.0 tags: - merge - model_stock - DarkStock - Aspire - Storm - Llama3 - DarkEnigma - instruction-following - creative-writing - coding - roleplaying - long-form-generation - research - bfloat16 - llama-cpp - gguf-my-repo base_model: ZeroXClem/Llama3.1-DarkStorm-Aspire-8B library_name: transformers language: - en datasets: - openbuddy/openbuddy-llama3.1-8b-v22.2-131k - THUDM/LongWriter-llama3.1-8b - aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored pipeline_tag: text-generation --- # ZeroXClem/Llama3.1-DarkStorm-Aspire-8B-Q5_K_M-GGUF This model was converted to GGUF format from [`ZeroXClem/Llama3.1-DarkStorm-Aspire-8B`](https://huggingface.co/ZeroXClem/Llama3.1-DarkStorm-Aspire-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ZeroXClem/Llama3.1-DarkStorm-Aspire-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo ZeroXClem/Llama3.1-DarkStorm-Aspire-8B-Q5_K_M-GGUF --hf-file llama3.1-darkstorm-aspire-8b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo ZeroXClem/Llama3.1-DarkStorm-Aspire-8B-Q5_K_M-GGUF --hf-file llama3.1-darkstorm-aspire-8b-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo ZeroXClem/Llama3.1-DarkStorm-Aspire-8B-Q5_K_M-GGUF --hf-file llama3.1-darkstorm-aspire-8b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo ZeroXClem/Llama3.1-DarkStorm-Aspire-8B-Q5_K_M-GGUF --hf-file llama3.1-darkstorm-aspire-8b-q5_k_m.gguf -c 2048 ```
Dshimen/slay_model
Dshimen
2024-10-24T18:44:18Z
7
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "unsloth", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-24T07:38:11Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lesso/mod-08a1-10-24
lesso
2024-10-24T18:36:32Z
41
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T18:30:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sheldonrobinson/Phi-3.5-mini-instruct-Q4_K_M-GGUF
sheldonrobinson
2024-10-24T18:26:07Z
5
0
transformers
[ "transformers", "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "multilingual", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:quantized:microsoft/Phi-3.5-mini-instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-24T18:25:56Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code - llama-cpp - gguf-my-repo widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? library_name: transformers base_model: microsoft/Phi-3.5-mini-instruct --- # sheldonrobinson/Phi-3.5-mini-instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`microsoft/Phi-3.5-mini-instruct`](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo sheldonrobinson/Phi-3.5-mini-instruct-Q4_K_M-GGUF --hf-file phi-3.5-mini-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo sheldonrobinson/Phi-3.5-mini-instruct-Q4_K_M-GGUF --hf-file phi-3.5-mini-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo sheldonrobinson/Phi-3.5-mini-instruct-Q4_K_M-GGUF --hf-file phi-3.5-mini-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo sheldonrobinson/Phi-3.5-mini-instruct-Q4_K_M-GGUF --hf-file phi-3.5-mini-instruct-q4_k_m.gguf -c 2048 ```
jebish7/xlm-lang-dev
jebish7
2024-10-24T18:25:24Z
117
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T18:24:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arnav-m12/billsum_model
arnav-m12
2024-10-24T18:24:36Z
118
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-24T18:24:10Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # billsum_model This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5141 - Rouge1: 0.1363 - Rouge2: 0.0509 - Rougel: 0.1149 - Rougelsum: 0.1151 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8127 | 0.1195 | 0.035 | 0.1013 | 0.1016 | 19.0 | | No log | 2.0 | 124 | 2.5966 | 0.1273 | 0.0429 | 0.1078 | 0.1079 | 19.0 | | No log | 3.0 | 186 | 2.5307 | 0.1364 | 0.0522 | 0.1145 | 0.1147 | 19.0 | | No log | 4.0 | 248 | 2.5141 | 0.1363 | 0.0509 | 0.1149 | 0.1151 | 19.0 | ### Framework versions - Transformers 4.46.0 - Pytorch 2.5.0.dev20240825+cpu - Datasets 3.0.2 - Tokenizers 0.20.1
abhi-007/finetuning-sentiment-model-3000-samples
abhi-007
2024-10-24T18:18:32Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T11:55:19Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3591 - Accuracy: 0.8633 - F1: 0.8664 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
lesso/mod-07a1-10-24
lesso
2024-10-24T18:17:50Z
36
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T18:11:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
oma7777/llama3.18B-Fine-tunedByOmar4bit
oma7777
2024-10-24T18:13:14Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-10-24T18:10:08Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mahmoud3899/simple_new
Mahmoud3899
2024-10-24T18:04:37Z
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T18:04:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
crodri/small-RAG-finetune
crodri
2024-10-24T17:59:46Z
12
0
peft
[ "peft", "safetensors", "text-generation", "dataset:projecte-aina/RAG_Multilingual", "arxiv:1910.09700", "base_model:BSC-LT/salamandra-2b", "base_model:adapter:BSC-LT/salamandra-2b", "region:us" ]
text-generation
2024-10-11T10:48:24Z
--- base_model: BSC-LT/salamandra-2b library_name: peft datasets: - projecte-aina/RAG_Multilingual pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.1
Triangle104/MS-Meadowlark-22B-Q6_K-GGUF
Triangle104
2024-10-24T17:59:00Z
7
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:allura-org/MS-Meadowlark-22B", "base_model:quantized:allura-org/MS-Meadowlark-22B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-24T17:56:22Z
--- base_model: allura-org/MS-Meadowlark-22B library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo license: other license_name: mrl license_link: https://mistral.ai/licenses/MRL-0.1.md --- # Triangle104/MS-Meadowlark-22B-Q6_K-GGUF This model was converted to GGUF format from [`allura-org/MS-Meadowlark-22B`](https://huggingface.co/allura-org/MS-Meadowlark-22B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/allura-org/MS-Meadowlark-22B) for more details on the model. --- Model details: - A roleplay and storywriting model based on Mistral Small 22B. GGUF models: https://huggingface.co/mradermacher/MS-Meadowlark-22B-GGUF/ EXL2 models: https://huggingface.co/CalamitousFelicitousness/MS-Meadowlark-22B-exl2 Datasets used in this model: Dampfinchen/Creative_Writing_Multiturn at 16k Fizzarolli/rosier-dataset + Alfitaria/body-inflation-org at 16k ToastyPigeon/SpringDragon at 8k Each dataset was trained separately onto Mistral Small Instruct, and then the component models were merged along with nbeerbower/Mistral-Small-Gutenberg-Doppel-22B to create Meadowlark. I tried different blends of the component models, and this one seems to be the most stable while retaining creativity and unpredictability added by the trained data. Instruct Format Rosier/bodyinf and SpringDragon were trained in completion format. This model should work with Kobold Lite in Adventure Mode and Story Mode. Creative_Writing_Multiturn and Gutenberg-Doppel were trained using the official instruct format of Mistral Small Instruct: <s>[INST] {User message}[/INST] {Assistant response}</s> This is the Mistral Small V2&V3 preset in SillyTavern and Kobold Lite. For SillyTavern in particular I've had better luck getting good output from Mistral Small using a custom instruct template that formats the assembled context as a single user turn. This prevents SillyTavern from confusing the model by assembling user/assistant turns in a nonstandard way. Note: This preset is not compatible with Stepped Thinking, use the Mistral V2&V3 preset for that. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/MS-Meadowlark-22B-Q6_K-GGUF --hf-file ms-meadowlark-22b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/MS-Meadowlark-22B-Q6_K-GGUF --hf-file ms-meadowlark-22b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/MS-Meadowlark-22B-Q6_K-GGUF --hf-file ms-meadowlark-22b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/MS-Meadowlark-22B-Q6_K-GGUF --hf-file ms-meadowlark-22b-q6_k.gguf -c 2048 ```
AmberYifan/Llama-2-7b-gen-dpo-2k-hhrlhf
AmberYifan
2024-10-24T17:53:19Z
6
0
null
[ "safetensors", "llama", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-10-24T17:32:59Z
--- license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: Llama-2-7b-gen-dpo-2k-hhrlhf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-gen-dpo-2k-hhrlhf This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.43.3 - Pytorch 2.2.2+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
camidenecken/RoBERTa-RM1-v1-rm-v15
camidenecken
2024-10-24T17:48:31Z
180
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T17:47:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
leekh7624/model2
leekh7624
2024-10-24T17:45:41Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T17:40:52Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
camidenecken/RoBERTa-RM1-v1-rm-v13
camidenecken
2024-10-24T17:45:08Z
181
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T17:44:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
exala/db_aca2_2.2
exala
2024-10-24T17:45:02Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-24T17:44:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
allura-org/MoE-Girl_400MA_1BT
allura-org
2024-10-24T17:44:26Z
38
15
transformers
[ "transformers", "safetensors", "granitemoe", "text-generation", "axolotl", "moe", "roleplay", "conversational", "base_model:ibm-granite/granite-3.0-1b-a400m-base", "base_model:finetune:ibm-granite/granite-3.0-1b-a400m-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T20:22:27Z
--- library_name: transformers license: apache-2.0 base_model: ibm-granite/granite-3.0-1b-a400m-base tags: - axolotl - moe - roleplay model-index: - name: MoE_Girl_400MA_1BT results: [] --- # MoE Girl 400mA 1bT ![R8_sd3.5L_00001_.webp](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/GEbRJhyc087cP6Cs_AR0X.webp) a finetune of Granite 3.0 by IBM designed for roleplaying (and maybe general usecases if you try hard enough). ## Disclaimer PLEASE do not expect godliness out of this, it's a model with _400 million_ active parameters. Expect something more akin to GPT-2. ## Quants TODO! ## Prompting Use ChatML. ``` <|im_start|>system You are a helpful assistant who talks like a pirate.<|im_end|> <|im_start|>user Hello there!<|im_end|> <|im_start|>assistant Yarr harr harr, me matey!<|im_end|> ``` ## Thanks Special thanks to the members of Allura for testing and emotional support, as well as the creators of all the datasets that were used in the Special Sauce used to train this model. I love you all <3 - Fizz