# FalconLite2 Model

FalconLite2 is a fine-tuned and quantized [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) language model, capable of processing long (up to 24K tokens) input sequences. By utilizing 4-bit [GPTQ quantization](https://github.com/PanQiWei/AutoGPTQ) and an adapted RotaryEmbedding, FalconLite2 can process 10x longer contexts while consuming 4x less GPU memory than the original model. FalconLite2 is useful for applications such as topic retrieval, summarization, and question answering, and can be deployed on a single AWS `g5.12x` instance with [TGI 1.0.3](https://github.com/huggingface/text-generation-inference/tree/v1.0.3), making it suitable for applications that require high performance in resource-constrained environments. You can also deploy FalconLite2 directly on SageMaker endpoints.

FalconLite2 evolves from [FalconLite](https://huggingface.co/amazon/FalconLite), and their similarities and differences are summarized below:

|Model|Fine-tuned on long contexts|Quantization|Max context length|RotaryEmbedding adaptation|Inference framework|

- **Model License:** Apache 2.0
- **Contact:** [GitHub issues](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/issues)

## Deploy FalconLite2 on EC2 ##

Log in via SSH to an AWS `g5.12x` instance running the [Deep Learning AMI](https://aws.amazon.com/releasenotes/aws-deep-learning-ami-gpu-pytorch-2-0-ubuntu-20-04/).
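For example (a minimal sketch; the key file and host address below are placeholders, not values from this repository):

```bash
# Placeholder key path and hostname; substitute your instance's details.
# Ubuntu-based Deep Learning AMIs use the default user "ubuntu".
ssh -i ~/.ssh/my-key.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
```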

### Start TGI server
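The exact launch commands are elided above; as a rough sketch (the image tag, shard count, port, and context-length flag values are illustrative assumptions, not the repository's actual launch script), serving this model with TGI 1.0.3 on the four GPUs of a `g5.12x` instance could look like:

```bash
# Hedged sketch: serve amazon/FalconLite2 with TGI 1.0.3 via Docker.
# Flag values (shards, context lengths, port) are illustrative assumptions.
docker run -d --gpus all --shm-size 1g -p 8080:80 \
    ghcr.io/huggingface/text-generation-inference:1.0.3 \
    --model-id amazon/FalconLite2 \
    --num-shard 4 \
    --quantize gptq \
    --trust-remote-code \
    --max-input-length 24000 \
    --max-total-tokens 24576
```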

### Test the client

```bash
pip install -r requirements-client.txt

# test short context
python falconlite_client.py

# test long context of 13400 tokens,
# which are copied from [Amazon Aurora FAQs](https://aws.amazon.com/rds/aurora/faqs/)
python falconlite_client.py -l
```

**Important** - Use the prompt template below for FalconLite2:

**Important** - When using FalconLite2 for inference for the first time, it may require a brief 'warm-up' period that can take tens of seconds. Subsequent inferences should be faster and return results more promptly. This warm-up period is normal and should not affect overall performance once initialization has completed.
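Once the server is warm, you can also query TGI's standard `/generate` HTTP endpoint directly; a minimal sketch (the port and the `<|prompter|>`/`<|assistant|>` prompt markers are assumptions carried over from FalconLite's lineage, not shown in this section):

```bash
# Hedged sketch: direct HTTP request to a local TGI server.
# Port 8080 and the prompt markers are illustrative assumptions.
curl http://127.0.0.1:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "<|prompter|>What is Amazon Aurora?<|endoftext|><|assistant|>", "parameters": {"max_new_tokens": 256}}'
```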

## Deploy FalconLite2 on Amazon SageMaker ##

To deploy FalconLite2 on a SageMaker endpoint, follow [this notebook](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/falconlite2/sm_deploy.ipynb), running it on a SageMaker Notebook instance (e.g. `g5.xlarge`).

## Evaluation Results ##

We evaluated FalconLite2 against benchmarks specifically designed to assess the capabilities of LLMs in handling longer contexts.