---
license: llama3
pipeline_tag: text-generation
tags:
- cortex.cpp
---
## Overview
Meta developed and released the [Meta Llama 3](https://huggingface.co/meta-llama/Meta-Llama-3-8B) family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks. In developing these models, Meta took great care to optimize helpfulness and safety.
## Variants
| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [Llama3-8b](https://huggingface.co/cortexso/llama3/tree/8b) | `cortex run llama3:8b` |
## Use it with Jan (UI)
1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart) guide.
2. In the Jan Model Hub, search for and use:
```bash
cortexso/llama3
```
## Use it with Cortex (CLI)
1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart) guide.
2. Run the model with the command:
```bash
cortex run llama3
```
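3. Once the model is running, Cortex also serves an OpenAI-compatible HTTP API locally. The snippet below is a minimal sketch of a chat request; the port (`39281`) and endpoint path reflect recent cortex.cpp defaults and may differ in your installation.
```bash
# Minimal chat completion request against the local Cortex server.
# Assumes the default OpenAI-compatible endpoint; adjust host/port if your setup differs.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3:8b",
    "messages": [
      {"role": "user", "content": "Hello, what can you do?"}
    ]
  }'
```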
## Credits
- **Author:** Meta
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://llama.meta.com/llama3/license/)
- **Papers:** [Llama-3 Blog](https://llama.meta.com/llama3/)