{ "cells": [ { "cell_type": "markdown", "id": "50e1d1d5-3bdd-4224-9f93-bf5d9a83f424", "metadata": {}, "source": [ "# Create an LLM-powered Chatbot using OpenVINO\n", "\n", "In the rapidly evolving world of artificial intelligence (AI), chatbots have emerged as powerful tools for businesses to enhance customer interactions and streamline operations. \n", "Large Language Models (LLMs) are artificial intelligence systems that can understand and generate human language. They use deep learning algorithms and massive amounts of data to learn the nuances of language and produce coherent and relevant responses.\n", "While a decent intent-based chatbot can answer basic, one-touch inquiries like order management, FAQs, and policy questions, LLM chatbots can tackle more complex, multi-touch questions. LLM enables chatbots to provide support in a conversational manner, similar to how humans do, through contextual memory. Leveraging the capabilities of Language Models, chatbots are becoming increasingly intelligent, capable of understanding and responding to human language with remarkable accuracy.\n", "\n", "Previously, we already discussed how to build an instruction-following pipeline using OpenVINO and Optimum Intel, please check out [Dolly example](../dolly-2-instruction-following) for reference.\n", "In this tutorial, we consider how to use the power of OpenVINO for running Large Language Models for chat. We will use a pre-trained model from the [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) library. To simplify the user experience, the [Hugging Face Optimum Intel](https://huggingface.co/docs/optimum/intel/index) library is used to convert the models to OpenVINO™ IR format.\n", "\n", "\n", "The tutorial consists of the following steps:\n", "\n", "- Install prerequisites\n", "- Download and convert the model from a public source using the [OpenVINO integration with Hugging Face Optimum](https://huggingface.co/blog/openvino).\n", "- Compress model weights to 4-bit or 8-bit data types using [NNCF](https://github.com/openvinotoolkit/nncf)\n", "- Create a chat inference pipeline\n", "- Run chat pipeline\n", "\n", "\n", "#### Table of contents:\n", "\n", "- [Prerequisites](#Prerequisites)\n", "- [Select model for inference](#Select-model-for-inference)\n", "- [Convert model using Optimum-CLI tool](#Convert-model-using-Optimum-CLI-tool)\n", "- [Compress model weights](#Compress-model-weights)\n", " - [Weights Compression using Optimum-CLI](#Weights-Compression-using-Optimum-CLI)\n", "- [Select device for inference and model variant](#Select-device-for-inference-and-model-variant)\n", "- [Instantiate Model using Optimum Intel](#Instantiate-Model-using-Optimum-Intel)\n", "- [Run Chatbot](#Run-Chatbot)\n", "\n" ] }, { "cell_type": "markdown", "id": "5df233b0-0369-4fff-9952-7957a90394a5", "metadata": {}, "source": [ "## Prerequisites\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Install required dependencies" ] }, { "cell_type": "code", "execution_count": 1, "id": "563ecf9f-346b-4f14-85ef-c66ff0c95f65", "metadata": { "tags": [] }, "outputs": [], "source": [ "import os\n", "\n", "os.environ[\"GIT_CLONE_PROTECTION_ACTIVE\"] = \"false\"\n", "\n", "%pip install -Uq pip\n", "%pip uninstall -q -y optimum optimum-intel\n", "%pip install --pre -Uq openvino openvino-tokenizers[transformers] --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly\n", "%pip install -q --extra-index-url https://download.pytorch.org/whl/cpu\\\n", 
"\"git+https://github.com/huggingface/optimum-intel.git\"\\\n", "\"git+https://github.com/openvinotoolkit/nncf.git\"\\\n", "\"torch>=2.1\"\\\n", "\"datasets\" \\\n", "\"accelerate\"\\\n", "\"gradio>=4.19\"\\\n", "\"onnx\" \"einops\" \"transformers_stream_generator\" \"tiktoken\" \"transformers>=4.38.1\" \"bitsandbytes\"" ] }, { "cell_type": "code", "execution_count": 1, "id": "f39ca954-61d2-45c5-a7f9-7fce1acc277f", "metadata": {}, "outputs": [], "source": [ "import os\n", "from pathlib import Path\n", "import requests\n", "import shutil\n", "\n", "# fetch model configuration\n", "\n", "config_shared_path = Path(\"../../utils/llm_config.py\")\n", "config_dst_path = Path(\"llm_config.py\")\n", "\n", "if not config_dst_path.exists():\n", " if config_shared_path.exists():\n", " try:\n", " os.symlink(config_shared_path, config_dst_path)\n", " except Exception:\n", " shutil.copy(config_shared_path, config_dst_path)\n", " else:\n", " r = requests.get(url=\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/llm_config.py\")\n", " with open(\"llm_config.py\", \"w\") as f:\n", " f.write(r.text)\n", "elif not os.path.islink(config_dst_path):\n", " print(\"LLM config will be updated\")\n", " if config_shared_path.exists():\n", " shutil.copy(config_shared_path, config_dst_path)\n", " else:\n", " r = requests.get(url=\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/llm_config.py\")\n", " with open(\"llm_config.py\", \"w\") as f:\n", " f.write(r.text)" ] }, { "cell_type": "markdown", "id": "81983176-e571-4652-ba21-4bd608c35146", "metadata": {}, "source": [ "## Select model for inference\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "The tutorial supports different models, you can select one from the provided options to compare the quality of open source LLM solutions.\n", ">**Note**: conversion of some models can require additional actions from user side and at least 64GB RAM for conversion.\n", "\n", "The available options are:\n", "\n", "* **tiny-llama-1b-chat** - This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens with the adoption of the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint. More details about model can be found in [model card](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)\n", "* **mini-cpm-2b-dpo** - MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. After Direct Preference Optimization (DPO) fine-tuning, MiniCPM outperforms many popular 7b, 13b and 70b models. More details can be found in [model_card](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp16).\n", "* **gemma-2b-it** - Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. 
Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. This model is the instruction-tuned version of the 2B-parameter model. More details about the model can be found in the [model card](https://huggingface.co/google/gemma-2b-it).\n", ">**Note**: to run the model with this demo, you will need to accept the license agreement. \n", ">You must be a registered user in 🤗 Hugging Face Hub. Please visit the [HuggingFace model card](https://huggingface.co/google/gemma-2b-it), carefully read the terms of usage, and click the accept button. You will need to use an access token for the code below to run. For more information on access tokens, refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).\n", ">You can log in to the Hugging Face Hub in the notebook environment using the following code:\n", " \n", "```python\n", " ## login to huggingfacehub to get access to pretrained model \n", "\n", " from huggingface_hub import notebook_login, whoami\n", "\n", " try:\n", " whoami()\n", " print('Authorization token already provided')\n", " except OSError:\n", " notebook_login()\n", "```\n", "* **phi3-mini-instruct** - Phi-3-Mini is a 3.8B-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. More details about the model can be found in the [model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct), [Microsoft blog](https://aka.ms/phi3blog-april) and [technical report](https://aka.ms/phi3-tech-report).\n", "* **red-pajama-3b-chat** - A 2.8B parameter pre-trained language model based on the GPT-NeoX architecture. It was developed by Together Computer and leaders from the open-source AI community. The model is fine-tuned on the OASST1 and Dolly2 datasets to enhance chatting ability. More details about the model can be found in the [HuggingFace model card](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1).\n", "* **gemma-7b-it** - Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. This model is the instruction-tuned version of the 7B-parameter model. More details about the model can be found in the [model card](https://huggingface.co/google/gemma-7b-it).\n", ">**Note**: to run the model with this demo, you will need to accept the license agreement. \n", ">You must be a registered user in 🤗 Hugging Face Hub. Please visit the [HuggingFace model card](https://huggingface.co/google/gemma-7b-it), carefully read the terms of usage, and click the accept button. You will need to use an access token for the code below to run. 
For more information on access tokens, refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).\n", ">You can log in to the Hugging Face Hub in the notebook environment using the following code:\n", " \n", "```python\n", " ## login to huggingfacehub to get access to pretrained model \n", "\n", " from huggingface_hub import notebook_login, whoami\n", "\n", " try:\n", " whoami()\n", " print('Authorization token already provided')\n", " except OSError:\n", " notebook_login()\n", "```\n", "\n", "* **llama-2-7b-chat** - Llama 2 is the second generation of Llama models developed by Meta. Llama 2 is a collection of pre-trained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. llama-2-7b-chat is the 7 billion parameter version of Llama 2, fine-tuned and optimized for dialogue use cases. More details about the model can be found in the [paper](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/), [repository](https://github.com/facebookresearch/llama) and [HuggingFace model card](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).\n", ">**Note**: to run the model with this demo, you will need to accept the license agreement. \n", ">You must be a registered user in 🤗 Hugging Face Hub. Please visit the [HuggingFace model card](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), carefully read the terms of usage, and click the accept button. You will need to use an access token for the code below to run. For more information on access tokens, refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).\n", ">You can log in to the Hugging Face Hub in the notebook environment using the following code:\n", " \n", "```python\n", " ## login to huggingfacehub to get access to pretrained model \n", "\n", " from huggingface_hub import notebook_login, whoami\n", "\n", " try:\n", " whoami()\n", " print('Authorization token already provided')\n", " except OSError:\n", " notebook_login()\n", "```\n", "* **llama-3-8b-instruct** - Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. More details about the model can be found in the [Meta blog post](https://ai.meta.com/blog/meta-llama-3/), [model website](https://llama.meta.com/llama3) and [model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).\n", ">**Note**: to run the model with this demo, you will need to accept the license agreement. \n", ">You must be a registered user in 🤗 Hugging Face Hub. Please visit the [HuggingFace model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), carefully read the terms of usage, and click the accept button. You will need to use an access token for the code below to run. 
For more information on access tokens, refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).\n", ">You can log in to the Hugging Face Hub in the notebook environment using the following code:\n", " \n", "```python\n", " ## login to huggingfacehub to get access to pretrained model \n", "\n", " from huggingface_hub import notebook_login, whoami\n", "\n", " try:\n", " whoami()\n", " print('Authorization token already provided')\n", " except OSError:\n", " notebook_login()\n", "```\n", "* **qwen1.5-0.5b-chat/qwen1.5-1.8b-chat/qwen1.5-7b-chat** - Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Qwen1.5 is a language model series including decoder language models of different model sizes. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, and a mixture of sliding window attention and full attention. You can find more details about the model in the [model repository](https://huggingface.co/Qwen).\n", "* **qwen-7b-chat** - Qwen-7B is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, code, etc. For more details about Qwen, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository.\n", "* **mpt-7b-chat** - MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)). Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence. MPT-7B-chat is a chatbot-like model for dialogue generation. It was built by fine-tuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets. More details about the model can be found in the [blog post](https://www.mosaicml.com/blog/mpt-7b), [repository](https://github.com/mosaicml/llm-foundry/) and [HuggingFace model card](https://huggingface.co/mosaicml/mpt-7b-chat).\n", "* **chatglm3-6b** - ChatGLM3-6B is the latest open-source model in the ChatGLM series. While retaining many excellent features such as smooth dialogue and low deployment threshold from the previous two generations, ChatGLM3-6B employs a more diverse training dataset, more sufficient training steps, and a more reasonable training strategy. ChatGLM3-6B adopts a newly designed [Prompt format](https://github.com/THUDM/ChatGLM3/blob/main/PROMPT_en.md), in addition to the normal multi-turn dialogue. You can find more details about the model in the [model card](https://huggingface.co/THUDM/chatglm3-6b).\n", "* **mistral-7b** - The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. 
You can find more details about the model in the [model card](https://huggingface.co/mistralai/Mistral-7B-v0.1), [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).\n", "* **zephyr-7b-beta** - Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-beta is the second model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). You can find more details about the model in the [technical report](https://arxiv.org/abs/2310.16944) and [HuggingFace model card](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta).\n", "* **neural-chat-7b-v3-1** - A Mistral-7b model fine-tuned using Intel Gaudi. The model was fine-tuned on the open-source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) and aligned with the [Direct Preference Optimization (DPO) algorithm](https://arxiv.org/abs/2305.18290). More details can be found in the [model card](https://huggingface.co/Intel/neural-chat-7b-v3-1) and [blog post](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).\n", "* **notus-7b-v1** - Notus is a collection of fine-tuned models using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) and related [RLHF](https://huggingface.co/blog/rlhf) techniques. This model is the first version, fine-tuned with DPO over zephyr-7b-sft. Following a data-first approach, the only difference between Notus-7B-v1 and Zephyr-7B-beta is the preference dataset used for dDPO. The proposed approach to dataset creation helps fine-tune Notus-7b effectively, so that it surpasses Zephyr-7B-beta and Claude 2 on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). More details about the model can be found in the [model card](https://huggingface.co/argilla/notus-7b-v1).\n", "* **youri-7b-chat** - Youri-7b-chat is a Llama 2 based model. [Rinna Co., Ltd.](https://rinna.co.jp/) conducted further pre-training for the Llama 2 model with a mixture of English and Japanese datasets to improve Japanese task capability. The model is publicly released on the Hugging Face Hub. You can find detailed information at the [rinna/youri-7b-chat project page](https://huggingface.co/rinna/youri-7b). \n", "* **baichuan2-7b-chat** - Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence Inc.](https://www.baichuan-ai.com/home). It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance among models of the same size on authoritative Chinese and English benchmarks.\n", "* **internlm2-chat-1.8b** - InternLM2 is the second generation of the InternLM series. Compared to the previous generation model, it shows significant improvements in various capabilities, including reasoning, mathematics, and coding. 
More details about the model can be found in the [model repository](https://huggingface.co/internlm).\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "f93282b6-f1f1-4153-84af-31aac79c3ef4", "metadata": { "tags": [] }, "outputs": [], "source": [ "from llm_config import SUPPORTED_LLM_MODELS\n", "import ipywidgets as widgets" ] }, { "cell_type": "code", "execution_count": 4, "id": "e02b34fb", "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "74e7ca62c3894910835832538c2c6ee8", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Dropdown(description='Model Language:', options=('English', 'Chinese', 'Japanese'), value='English')" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model_languages = list(SUPPORTED_LLM_MODELS)\n", "\n", "model_language = widgets.Dropdown(\n", " options=model_languages,\n", " value=model_languages[0],\n", " description=\"Model Language:\",\n", " disabled=False,\n", ")\n", "\n", "model_language" ] }, { "cell_type": "code", "execution_count": 5, "id": "8d22fedb-d1f6-4306-b910-efac5b849c7c", "metadata": { "tags": [] }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "8d6a6d46ece14b9a936614ff2dc79cdb", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Dropdown(description='Model:', index=2, options=('tiny-llama-1b-chat', 'gemma-2b-it', 'phi-3-mini-instruct', '…" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model_ids = list(SUPPORTED_LLM_MODELS[model_language.value])\n", "\n", "model_id = widgets.Dropdown(\n", " options=model_ids,\n", " value=model_ids[2],\n", " description=\"Model:\",\n", " disabled=False,\n", ")\n", "\n", "model_id" ] }, { "cell_type": "code", "execution_count": 6, "id": "906022ec-96bf-41a9-9447-789d2e875250", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Selected model phi-3-mini-instruct\n" ] } ], "source": [ "model_configuration = SUPPORTED_LLM_MODELS[model_language.value][model_id.value]\n", "print(f\"Selected model {model_id.value}\")" ] }, { "cell_type": "markdown", "id": "62af3e8a-915a-49b4-8007-803777ba9eaf", "metadata": {}, "source": [ "## Convert model using Optimum-CLI tool\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "🤗 [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) is the interface between the 🤗 [Transformers](https://huggingface.co/docs/transformers/index) and [Diffusers](https://huggingface.co/docs/diffusers/index) libraries and OpenVINO to accelerate end-to-end pipelines on Intel architectures. It provides an easy-to-use CLI interface for exporting models to [OpenVINO Intermediate Representation (IR)](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) format.\n", "\n", "The command below demonstrates the basic syntax for model export with `optimum-cli`:\n", "\n", "```\n", "optimum-cli export openvino --model <model_id_or_path> --task <task> <output_dir>\n", "```\n", "\n", "where the `--model` argument is the model id from the HuggingFace Hub or a local directory with the model (saved using the `.save_pretrained` method), and `--task` is one of the [supported tasks](https://huggingface.co/docs/optimum/exporters/task_manager) that the exported model should solve. For LLMs it will be `text-generation-with-past`. If model initialization requires remote code, the `--trust-remote-code` flag additionally should be passed."
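, "\n", "For example, the command below (shown for illustration, using the TinyLlama chat model from the list above and an arbitrary output directory name) would export the model to OpenVINO IR with FP16 weights:\n", "\n", "```\n", "optimum-cli export openvino --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --task text-generation-with-past --weight-format fp16 tiny-llama-1b-chat/FP16\n", "```"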
] }, { "cell_type": "markdown", "id": "13694bf8-ee7b-4186-a3e0-a8705be9733c", "metadata": {}, "source": [ "<|end|>## Compress model weights\n", "\n", "\n", "The [Weights Compression](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html) algorithm is aimed at compressing the weights of the models and can be used to optimize the model footprint and performance of large models where the size of weights is relatively larger than the size of activations, for example, Large Language Models (LLM). Compared to INT8 compression, INT4 compression improves performance even more, but introduces a minor drop in prediction quality.\n", "\n", "\n", "### Weights Compression using Optimum-CLI\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "You can also apply fp16, 8-bit or 4-bit weight compression on the Linear, Convolutional and Embedding layers when exporting your model with the CLI by setting `--weight-format` to respectively fp16, int8 or int4. This type of optimization allows to reduce the memory footprint and inference latency.\n", "By default the quantization scheme for int8/int4 will be [asymmetric](https://github.com/openvinotoolkit/nncf/blob/develop/docs/compression_algorithms/Quantization.md#asymmetric-quantization), to make it [symmetric](https://github.com/openvinotoolkit/nncf/blob/develop/docs/compression_algorithms/Quantization.md#symmetric-quantization) you can add `--sym`.\n", "\n", "For INT4 quantization you can also specify the following arguments :\n", "- The `--group-size` parameter will define the group size to use for quantization, -1 it will results in per-column quantization.\n", "- The `--ratio` parameter controls the ratio between 4-bit and 8-bit quantization. If set to 0.9, it means that 90% of the layers will be quantized to int4 while 10% will be quantized to int8.\n", "\n", "Smaller group_size and ratio values usually improve accuracy at the sacrifice of the model size and inference latency.\n", "\n", ">**Note**: There may be no speedup for INT4/INT8 compressed models on dGPU." 
] }, { "cell_type": "code", "execution_count": 7, "id": "91eb2ccf", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "70e7a7ed617544d485daec7c67534d91", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Checkbox(value=True, description='Prepare INT4 model')" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "165ddc05daf54902a64b5b3ce739cfdd", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Checkbox(value=False, description='Prepare INT8 model')" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "3244a2e5eaab4fc49d333cc0051b9e14", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Checkbox(value=False, description='Prepare FP16 model')" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from IPython.display import Markdown, display\n", "\n", "prepare_int4_model = widgets.Checkbox(\n", " value=True,\n", " description=\"Prepare INT4 model\",\n", " disabled=False,\n", ")\n", "prepare_int8_model = widgets.Checkbox(\n", " value=False,\n", " description=\"Prepare INT8 model\",\n", " disabled=False,\n", ")\n", "prepare_fp16_model = widgets.Checkbox(\n", " value=False,\n", " description=\"Prepare FP16 model\",\n", " disabled=False,\n", ")\n", "\n", "display(prepare_int4_model)\n", "display(prepare_int8_model)\n", "display(prepare_fp16_model)" ] }, { "cell_type": "markdown", "id": "130a037a-7d98-4152-81ea-92ffb01da5a2", "metadata": {}, "source": [ "We can now save floating point and compressed model variants" ] }, { "cell_type": "code", "execution_count": 8, "id": "c4ef9112", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "from pathlib import Path\n", "\n", "pt_model_id = model_configuration[\"model_id\"]\n", "pt_model_name = model_id.value.split(\"-\")[0]\n", "fp16_model_dir = Path(model_id.value) / \"FP16\"\n", "int8_model_dir = Path(model_id.value) / \"INT8_compressed_weights\"\n", "int4_model_dir = Path(model_id.value) / \"INT4_compressed_weights\"\n", "\n", "\n", "def convert_to_fp16():\n", " if (fp16_model_dir / \"openvino_model.xml\").exists():\n", " return\n", " remote_code = model_configuration.get(\"remote_code\", False)\n", " export_command_base = \"optimum-cli export openvino --model {} --task text-generation-with-past --weight-format fp16\".format(pt_model_id)\n", " if remote_code:\n", " export_command_base += \" --trust-remote-code\"\n", " export_command = export_command_base + \" \" + str(fp16_model_dir)\n", " display(Markdown(\"**Export command:**\"))\n", " display(Markdown(f\"`{export_command}`\"))\n", " ! $export_command\n", "\n", "\n", "def convert_to_int8():\n", " if (int8_model_dir / \"openvino_model.xml\").exists():\n", " return\n", " int8_model_dir.mkdir(parents=True, exist_ok=True)\n", " remote_code = model_configuration.get(\"remote_code\", False)\n", " export_command_base = \"optimum-cli export openvino --model {} --task text-generation-with-past --weight-format int8\".format(pt_model_id)\n", " if remote_code:\n", " export_command_base += \" --trust-remote-code\"\n", " export_command = export_command_base + \" \" + str(int8_model_dir)\n", " display(Markdown(\"**Export command:**\"))\n", " display(Markdown(f\"`{export_command}`\"))\n", " ! 
$export_command\n", "\n", "\n", "def convert_to_int4():\n", " compression_configs = {\n", " \"zephyr-7b-beta\": {\n", " \"sym\": True,\n", " \"group_size\": 64,\n", " \"ratio\": 0.6,\n", " },\n", " \"mistral-7b\": {\n", " \"sym\": True,\n", " \"group_size\": 64,\n", " \"ratio\": 0.6,\n", " },\n", " \"minicpm-2b-dpo\": {\n", " \"sym\": True,\n", " \"group_size\": 64,\n", " \"ratio\": 0.6,\n", " },\n", " \"gemma-2b-it\": {\n", " \"sym\": True,\n", " \"group_size\": 64,\n", " \"ratio\": 0.6,\n", " },\n", " \"notus-7b-v1\": {\n", " \"sym\": True,\n", " \"group_size\": 64,\n", " \"ratio\": 0.6,\n", " },\n", " \"neural-chat-7b-v3-1\": {\n", " \"sym\": True,\n", " \"group_size\": 64,\n", " \"ratio\": 0.6,\n", " },\n", " \"llama-2-chat-7b\": {\n", " \"sym\": True,\n", " \"group_size\": 128,\n", " \"ratio\": 0.8,\n", " },\n", " \"llama-3-8b-instruct\": {\n", " \"sym\": True,\n", " \"group_size\": 128,\n", " \"ratio\": 0.8,\n", " },\n", " \"gemma-7b-it\": {\n", " \"sym\": True,\n", " \"group_size\": 128,\n", " \"ratio\": 0.8,\n", " },\n", " \"chatglm2-6b\": {\n", " \"sym\": True,\n", " \"group_size\": 128,\n", " \"ratio\": 0.72,\n", " },\n", " \"qwen-7b-chat\": {\"sym\": True, \"group_size\": 128, \"ratio\": 0.6},\n", " \"red-pajama-3b-chat\": {\n", " \"sym\": False,\n", " \"group_size\": 128,\n", " \"ratio\": 0.5,\n", " },\n", " \"default\": {\n", " \"sym\": False,\n", " \"group_size\": 128,\n", " \"ratio\": 0.8,\n", " },\n", " }\n", "\n", " model_compression_params = compression_configs.get(model_id.value, compression_configs[\"default\"])\n", " if (int4_model_dir / \"openvino_model.xml\").exists():\n", " return\n", " remote_code = model_configuration.get(\"remote_code\", False)\n", " export_command_base = \"optimum-cli export openvino --model {} --task text-generation-with-past --weight-format int4\".format(pt_model_id)\n", " int4_compression_args = \" --group-size {} --ratio {}\".format(model_compression_params[\"group_size\"], model_compression_params[\"ratio\"])\n", " if model_compression_params[\"sym\"]:\n", " int4_compression_args += \" --sym\"\n", " export_command_base += int4_compression_args\n", " if remote_code:\n", " export_command_base += \" --trust-remote-code\"\n", " export_command = export_command_base + \" \" + str(int4_model_dir)\n", " display(Markdown(\"**Export command:**\"))\n", " display(Markdown(f\"`{export_command}`\"))\n", " ! 
$export_command\n", "\n", "\n", "if prepare_fp16_model.value:\n", " convert_to_fp16()\n", "if prepare_int8_model.value:\n", " convert_to_int8()\n", "if prepare_int4_model.value:\n", " convert_to_int4()" ] }, { "cell_type": "markdown", "id": "671a17d4", "metadata": { "jupyter": { "outputs_hidden": false } }, "source": [ "Let's compare model size for different compression types" ] }, { "cell_type": "code", "execution_count": 9, "id": "281f1d07-998e-4e13-ba95-0264564ede82", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Size of model with INT4 compressed weights is 2339.74 MB\n" ] } ], "source": [ "fp16_weights = fp16_model_dir / \"openvino_model.bin\"\n", "int8_weights = int8_model_dir / \"openvino_model.bin\"\n", "int4_weights = int4_model_dir / \"openvino_model.bin\"\n", "\n", "if fp16_weights.exists():\n", " print(f\"Size of FP16 model is {fp16_weights.stat().st_size / 1024 / 1024:.2f} MB\")\n", "for precision, compressed_weights in zip([8, 4], [int8_weights, int4_weights]):\n", " if compressed_weights.exists():\n", " print(f\"Size of model with INT{precision} compressed weights is {compressed_weights.stat().st_size / 1024 / 1024:.2f} MB\")\n", " if compressed_weights.exists() and fp16_weights.exists():\n", " print(f\"Compression rate for INT{precision} model: {fp16_weights.stat().st_size / compressed_weights.stat().st_size:.3f}\")" ] }, { "cell_type": "markdown", "id": "6d62f9f4-5434-4550-b372-c86b5a5089d5", "metadata": {}, "source": [ "## Select device for inference and model variant\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", ">**Note**: There may be no speedup for INT4/INT8 compressed models on dGPU." ] }, { "cell_type": "code", "execution_count": 6, "id": "837b4a3b-ccc3-4004-9577-2b2c7b802dea", "metadata": { "tags": [] }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "fc7ca826334147d7bbe1694816e9d9e8", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Dropdown(description='Device:', options=('CPU', 'GPU', 'AUTO'), value='CPU')" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import openvino as ov\n", "\n", "core = ov.Core()\n", "\n", "support_devices = core.available_devices\n", "if \"NPU\" in support_devices:\n", " support_devices.remove(\"NPU\")\n", "\n", "device = widgets.Dropdown(\n", " options=support_devices + [\"AUTO\"],\n", " value=\"CPU\",\n", " description=\"Device:\",\n", " disabled=False,\n", ")\n", "\n", "device" ] }, { "cell_type": "markdown", "id": "c53001e7-615f-4eb5-b831-4e2b2ff32826", "metadata": { "tags": [] }, "source": [ "The cell below demonstrates how to instantiate model based on selected variant of model weights and inference device" ] }, { "cell_type": "code", "execution_count": 11, "id": "3536a1a7", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "dec43f40e2524af48919e0e91a12e281", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Dropdown(description='Model to run:', options=('INT4',), value='INT4')" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "available_models = []\n", "if int4_model_dir.exists():\n", " available_models.append(\"INT4\")\n", "if int8_model_dir.exists():\n", " available_models.append(\"INT8\")\n", "if fp16_model_dir.exists():\n", " available_models.append(\"FP16\")\n", "\n", "model_to_run = widgets.Dropdown(\n", " 
options=available_models,\n", " value=available_models[0],\n", " description=\"Model to run:\",\n", " disabled=False,\n", ")\n", "\n", "model_to_run" ] }, { "cell_type": "markdown", "id": "f7f63327-f0f5-4e2d-bfc2-0f764f8c19a8", "metadata": {}, "source": [ "## Instantiate Model using Optimum Intel\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Optimum Intel can be used to load optimized models from the [Hugging Face Hub](https://huggingface.co/docs/optimum/intel/hf.co/models) and create pipelines to run inference with OpenVINO Runtime using Hugging Face APIs. The Optimum Inference models are API compatible with Hugging Face Transformers models. This means we just need to replace the `AutoModelForXxx` class with the corresponding `OVModelForXxx` class.\n", "\n", "Below is an example for the RedPajama model:\n", "\n", "```diff\n", "-from transformers import AutoModelForCausalLM\n", "+from optimum.intel.openvino import OVModelForCausalLM\n", "from transformers import AutoTokenizer, pipeline\n", "\n", "model_id = \"togethercomputer/RedPajama-INCITE-Chat-3B-v1\"\n", "-model = AutoModelForCausalLM.from_pretrained(model_id)\n", "+model = OVModelForCausalLM.from_pretrained(model_id, export=True)\n", "```\n", "\n", "Model class initialization starts with calling the `from_pretrained` method. When downloading and converting a Transformers model, the parameter `export=True` should be added (since we already converted the model above, we do not need to provide this parameter). We can save the converted model for later use with the `save_pretrained` method.\n", "The tokenizer class and pipelines API are compatible with Optimum models.\n", "\n", "You can find more details about OpenVINO LLM inference using the HuggingFace Optimum API in the [LLM inference guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html)." ] }, { "cell_type": "code", "execution_count": 12, "id": "7a041101-7336-40fd-96c9-cd298015a0f3", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'\n", "2024-04-23 22:13:04.208987: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n", "2024-04-23 22:13:04.210866: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n", "2024-04-23 22:13:04.245998: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n", "2024-04-23 22:13:04.246894: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n", "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n", "2024-04-23 22:13:04.941663: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n", "/home/ea/work/my_optimum_intel/optimum_env/lib/python3.8/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 
8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.\n", " warn(\"The installed version of bitsandbytes was compiled without GPU support. \"\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "/home/ea/work/my_optimum_intel/optimum_env/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:\n", " PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.1.2+cpu)\n", " Python 3.8.18 (you have 3.8.10)\n", " Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)\n", " Memory-efficient attention, SwiGLU, sparse and more won't be available.\n", " Set XFORMERS_MORE_DETAILS=1 for more details\n", "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Loading model from phi-3-mini-instruct/INT4_compressed_weights\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "The argument `trust_remote_code` is to be used along with export=True. It will be ignored.\n", "Compiling the model to CPU ...\n" ] } ], "source": [ "from transformers import AutoConfig, AutoTokenizer\n", "from optimum.intel.openvino import OVModelForCausalLM\n", "\n", "if model_to_run.value == \"INT4\":\n", " model_dir = int4_model_dir\n", "elif model_to_run.value == \"INT8\":\n", " model_dir = int8_model_dir\n", "else:\n", " model_dir = fp16_model_dir\n", "print(f\"Loading model from {model_dir}\")\n", "\n", "ov_config = {\"PERFORMANCE_HINT\": \"LATENCY\", \"NUM_STREAMS\": \"1\", \"CACHE_DIR\": \"\"}\n", "\n", "# On a GPU device a model is executed in FP16 precision. 
For the red-pajama-3b-chat model there are known accuracy\n", "# issues caused by this, which we avoid by setting the precision hint to \"f32\".\n", "if model_id.value == \"red-pajama-3b-chat\" and \"GPU\" in core.available_devices and device.value in [\"GPU\", \"AUTO\"]:\n", " ov_config[\"INFERENCE_PRECISION_HINT\"] = \"f32\"\n", "\n", "model_name = model_configuration[\"model_id\"]\n", "tok = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)\n", "\n", "ov_model = OVModelForCausalLM.from_pretrained(\n", " model_dir,\n", " device=device.value,\n", " ov_config=ov_config,\n", " config=AutoConfig.from_pretrained(model_dir, trust_remote_code=True),\n", " trust_remote_code=True,\n", ")" ] }, { "cell_type": "code", "execution_count": 13, "id": "8f6f7596-5677-4931-875b-aaabfa23cabc", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2 + 2 = 4\n" ] } ], "source": [ "tokenizer_kwargs = model_configuration.get(\"tokenizer_kwargs\", {})\n", "test_string = \"2 + 2 =\"\n", "input_tokens = tok(test_string, return_tensors=\"pt\", **tokenizer_kwargs)\n", "answer = ov_model.generate(**input_tokens, max_new_tokens=2)\n", "print(tok.batch_decode(answer, skip_special_tokens=True)[0])" ] }, { "cell_type": "markdown", "id": "24d622d0-be46-47c0-a762-88cb50ab15a9", "metadata": {}, "source": [ "## Run Chatbot\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Now that the model is created, we can set up the chatbot interface using [Gradio](https://www.gradio.app/).\n", "The diagram below illustrates how the chatbot pipeline works:\n", "\n", "![generation pipeline](https://user-images.githubusercontent.com/29454499/255523209-d9336491-c7ba-4dc1-98f0-07f23743ce89.png)\n", "\n", "As can be seen, the pipeline is very similar to the instruction-following one; the only change is that the previous conversation history is additionally passed as input together with the next user question, providing a wider input context. On each iteration, the user-provided question is joined to the conversation history (if it exists) and converted to token ids using a tokenizer; the prepared input is then provided to the model. The model generates probabilities for all tokens in the form of logits. The way the next token is selected over the predicted probabilities is driven by the selected decoding methodology. You can find more information about the most popular decoding methods in this [blog](https://huggingface.co/blog/how-to-generate). The generated result updates the conversation history for the next conversation step; this strengthens the connection between the next question and previously provided ones and allows the user to make clarifications regarding previously provided answers." ] }, { "cell_type": "markdown", "id": "725544ea-05ec-40d7-bbbc-1dc87cf57d04", "metadata": {}, "source": [ "There are several parameters that can control text generation quality: \n", " * `Temperature` is a parameter used to control the level of creativity in AI-generated text. By adjusting the `temperature`, you can influence the AI model's probability distribution, making the text more focused or diverse. 
\n", " Consider the following example: The AI model has to complete the sentence \"The cat is ____.\" with the following token probabilities: \n", "\n", " playing: 0.5 \n", " sleeping: 0.25 \n", " eating: 0.15 \n", " driving: 0.05 \n", " flying: 0.05 \n", "\n", " - **Low temperature** (e.g., 0.2): The AI model becomes more focused and deterministic, choosing tokens with the highest probability, such as \"playing.\" \n", " - **Medium temperature** (e.g., 1.0): The AI model maintains a balance between creativity and focus, selecting tokens based on their probabilities without significant bias, such as \"playing,\" \"sleeping,\" or \"eating.\" \n", " - **High temperature** (e.g., 2.0): The AI model becomes more adventurous, increasing the chances of selecting less likely tokens, such as \"driving\" and \"flying.\"\n", " * `Top-p`, also known as nucleus sampling, is a parameter used to control the range of tokens considered by the AI model based on their cumulative probability. By adjusting the `top-p` value, you can influence the AI model's token selection, making it more focused or diverse.\n", " Using the same example with the cat, consider the following top_p settings: \n", " - **Low top_p** (e.g., 0.5): The AI model considers only tokens with the highest cumulative probability, such as \"playing.\" \n", " - **Medium top_p** (e.g., 0.8): The AI model considers tokens with a higher cumulative probability, such as \"playing,\" \"sleeping,\" and \"eating.\" \n", " - **High top_p** (e.g., 1.0): The AI model considers all tokens, including those with lower probabilities, such as \"driving\" and \"flying.\" \n", " * `Top-k` is an another popular sampling strategy. In comparison with Top-P, which chooses from the smallest possible set of words whose cumulative probability exceeds the probability P, in Top-K sampling K most likely next words are filtered and the probability mass is redistributed among only those K next words. In our example with cat, if k=3, then only \"playing\", \"sleeping\" and \"eating\" will be taken into account as possible next word.\n", " * `Repetition Penalty` This parameter can help penalize tokens based on how frequently they occur in the text, including the input prompt. A token that has already appeared five times is penalized more heavily than a token that has appeared only one time. A value of 1 means that there is no penalty and values larger than 1 discourage repeated tokens.https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html" ] }, { "cell_type": "code", "execution_count": 14, "id": "01f8f7f8-072e-45dc-b7c9-18d8c3c47754", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Running on local URL: http://127.0.0.1:7860\n", "\n", "To create a public link, set `share=True` in `launch()`.\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import torch\n", "from threading import Event, Thread\n", "from uuid import uuid4\n", "from typing import List, Tuple\n", "import gradio as gr\n", "from transformers import (\n", " AutoTokenizer,\n", " StoppingCriteria,\n", " StoppingCriteriaList,\n", " TextIteratorStreamer,\n", ")\n", "\n", "\n", "model_name = model_configuration[\"model_id\"]\n", "start_message = model_configuration[\"start_message\"]\n", "history_template = model_configuration.get(\"history_template\")\n", "current_message_template = model_configuration.get(\"current_message_template\")\n", "stop_tokens = model_configuration.get(\"stop_tokens\")\n", "tokenizer_kwargs = model_configuration.get(\"tokenizer_kwargs\", {})\n", "\n", "chinese_examples = [\n", " [\"你好!\"],\n", " [\"你是谁?\"],\n", " [\"请介绍一下上海\"],\n", " [\"请介绍一下英特尔公司\"],\n", " [\"晚上睡不着怎么办?\"],\n", " [\"给我讲一个年轻人奋斗创业最终取得成功的故事。\"],\n", " [\"给这个故事起一个标题。\"],\n", "]\n", "\n", "english_examples = [\n", " [\"Hello there! How are you doing?\"],\n", " [\"What is OpenVINO?\"],\n", " [\"Who are you?\"],\n", " [\"Can you explain to me briefly what is Python programming language?\"],\n", " [\"Explain the plot of Cinderella in a sentence.\"],\n", " [\"What are some common mistakes to avoid when writing code?\"],\n", " [\"Write a 100-word blog post on “Benefits of Artificial Intelligence and OpenVINO“\"],\n", "]\n", "\n", "japanese_examples = [\n", " [\"こんにちは!調子はどうですか?\"],\n", " [\"OpenVINOとは何ですか?\"],\n", " [\"あなたは誰ですか?\"],\n", " [\"Pythonプログラミング言語とは何か簡単に説明してもらえますか?\"],\n", " [\"シンデレラのあらすじを一文で説明してください。\"],\n", " [\"コードを書くときに避けるべきよくある間違いは何ですか?\"],\n", " [\"人工知能と「OpenVINOの利点」について100語程度のブログ記事を書いてください。\"],\n", "]\n", "\n", "examples = chinese_examples if (model_language.value == \"Chinese\") else japanese_examples if (model_language.value == \"Japanese\") else english_examples\n", "\n", "max_new_tokens = 256\n", "\n", "\n", "class StopOnTokens(StoppingCriteria):\n", " def __init__(self, token_ids):\n", " self.token_ids = token_ids\n", "\n", " def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\n", " for stop_id in self.token_ids:\n", " if input_ids[0][-1] == stop_id:\n", " return True\n", " return False\n", "\n", "\n", "if stop_tokens is not None:\n", " if isinstance(stop_tokens[0], str):\n", " stop_tokens = tok.convert_tokens_to_ids(stop_tokens)\n", "\n", " stop_tokens = [StopOnTokens(stop_tokens)]\n", "\n", "\n", "def default_partial_text_processor(partial_text: str, new_text: str):\n", " \"\"\"\n", " helper for updating partially generated answer, used by default\n", "\n", " Params:\n", " partial_text: text buffer for storing previosly generated text\n", " new_text: text update for the current step\n", " Returns:\n", " updated text string\n", "\n", " \"\"\"\n", " partial_text += new_text\n", " return partial_text\n", "\n", "\n", "text_processor = model_configuration.get(\"partial_text_processor\", default_partial_text_processor)\n", "\n", "\n", "def convert_history_to_token(history: List[Tuple[str, str]]):\n", " \"\"\"\n", " function for conversion history stored as list pairs of user and assistant messages to tokens according to model expected conversation template\n", " Params:\n", " history: dialogue history\n", " Returns:\n", " history in token format\n", " \"\"\"\n", " if pt_model_name == \"baichuan2\":\n", " system_tokens = 
tok.encode(start_message)\n", " history_tokens = []\n", " for old_query, response in history[:-1]:\n", " round_tokens = []\n", " round_tokens.append(195)\n", " round_tokens.extend(tok.encode(old_query))\n", " round_tokens.append(196)\n", " round_tokens.extend(tok.encode(response))\n", " history_tokens = round_tokens + history_tokens\n", " input_tokens = system_tokens + history_tokens\n", " input_tokens.append(195)\n", " input_tokens.extend(tok.encode(history[-1][0]))\n", " input_tokens.append(196)\n", " input_token = torch.LongTensor([input_tokens])\n", " elif history_template is None:\n", " messages = [{\"role\": \"system\", \"content\": start_message}]\n", " for idx, (user_msg, model_msg) in enumerate(history):\n", " if idx == len(history) - 1 and not model_msg:\n", " messages.append({\"role\": \"user\", \"content\": user_msg})\n", " break\n", " if user_msg:\n", " messages.append({\"role\": \"user\", \"content\": user_msg})\n", " if model_msg:\n", " messages.append({\"role\": \"assistant\", \"content\": model_msg})\n", "\n", " input_token = tok.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_tensors=\"pt\")\n", " else:\n", " text = start_message + \"\".join(\n", " [\"\".join([history_template.format(num=round, user=item[0], assistant=item[1])]) for round, item in enumerate(history[:-1])]\n", " )\n", " text += \"\".join(\n", " [\n", " \"\".join(\n", " [\n", " current_message_template.format(\n", " num=len(history) + 1,\n", " user=history[-1][0],\n", " assistant=history[-1][1],\n", " )\n", " ]\n", " )\n", " ]\n", " )\n", " input_token = tok(text, return_tensors=\"pt\", **tokenizer_kwargs).input_ids\n", " return input_token\n", "\n", "\n", "def user(message, history):\n", " \"\"\"\n", " callback function for updating user messages in interface on submit button click\n", "\n", " Params:\n", " message: current message\n", " history: conversation history\n", " Returns:\n", " cleared message box and updated conversation history\n", " \"\"\"\n", " # Append the user's message to the conversation history\n", " return \"\", history + [[message, \"\"]]\n", "\n", "\n", "def bot(history, temperature, top_p, top_k, repetition_penalty, conversation_id):\n", " \"\"\"\n", " callback function for running chatbot on submit button click\n", "\n", " Params:\n", " history: conversation history\n", " temperature: parameter to control the level of creativity in AI-generated text.\n", " By adjusting the `temperature`, you can influence the AI model's probability distribution, making the text more focused or diverse.\n", " top_p: parameter to control the range of tokens considered by the AI model based on their cumulative probability.\n", " top_k: parameter to control the number of tokens considered by the AI model, keeping only the k tokens with the highest probability.\n", " repetition_penalty: parameter for penalizing tokens based on how frequently they occur in the text.\n", " conversation_id: unique conversation identifier.\n", "\n", " \"\"\"\n", "\n", " # Construct the input message string for the model by concatenating the current system message and conversation history\n", " # Tokenize the messages string\n", " input_ids = convert_history_to_token(history)\n", " if input_ids.shape[1] > 2000:\n", " history = [history[-1]]\n", " input_ids = convert_history_to_token(history)\n", " streamer = TextIteratorStreamer(tok, timeout=30.0, skip_prompt=True, skip_special_tokens=True)\n", " generate_kwargs = dict(\n", " input_ids=input_ids,\n", " max_new_tokens=max_new_tokens,\n", " temperature=temperature,\n", " do_sample=temperature > 0.0,\n", " top_p=top_p,\n", " top_k=top_k,\n", " repetition_penalty=repetition_penalty,\n", " streamer=streamer,\n", " )\n", " if stop_tokens is not None:\n", " generate_kwargs[\"stopping_criteria\"] = StoppingCriteriaList(stop_tokens)\n", "\n", " stream_complete = Event()\n", "\n", " def generate_and_signal_complete():\n", " \"\"\"\n", " generation function for a single thread\n", " \"\"\"\n", " global start_time\n", " ov_model.generate(**generate_kwargs)\n", " stream_complete.set()\n", "\n", " t1 = Thread(target=generate_and_signal_complete)\n", " t1.start()\n", "\n", " # Initialize an empty string to store the generated text\n", " partial_text = \"\"\n", " for new_text in streamer:\n", " partial_text = text_processor(partial_text, new_text)\n", " history[-1][1] = partial_text\n", " yield history\n", "\n", "\n", "def request_cancel():\n", " ov_model.request.cancel()\n", "\n", "\n", "def get_uuid():\n", " \"\"\"\n", " universal unique identifier for thread\n", " \"\"\"\n", " return str(uuid4())\n", "\n", "\n", "with gr.Blocks(\n", " theme=gr.themes.Soft(),\n", " css=\".disclaimer {font-variant-caps: all-small-caps;}\",\n", ") as demo:\n", " conversation_id = gr.State(get_uuid)\n", " gr.Markdown(f\"\"\"

<h1><center>OpenVINO {model_id.value} Chatbot</center></h1>

\"\"\")\n", " chatbot = gr.Chatbot(height=500)\n", " with gr.Row():\n", " with gr.Column():\n", " msg = gr.Textbox(\n", " label=\"Chat Message Box\",\n", " placeholder=\"Chat Message Box\",\n", " show_label=False,\n", " container=False,\n", " )\n", " with gr.Column():\n", " with gr.Row():\n", " submit = gr.Button(\"Submit\")\n", " stop = gr.Button(\"Stop\")\n", " clear = gr.Button(\"Clear\")\n", " with gr.Row():\n", " with gr.Accordion(\"Advanced Options:\", open=False):\n", " with gr.Row():\n", " with gr.Column():\n", " with gr.Row():\n", " temperature = gr.Slider(\n", " label=\"Temperature\",\n", " value=0.1,\n", " minimum=0.0,\n", " maximum=1.0,\n", " step=0.1,\n", " interactive=True,\n", " info=\"Higher values produce more diverse outputs\",\n", " )\n", " with gr.Column():\n", " with gr.Row():\n", " top_p = gr.Slider(\n", " label=\"Top-p (nucleus sampling)\",\n", " value=1.0,\n", " minimum=0.0,\n", " maximum=1,\n", " step=0.01,\n", " interactive=True,\n", " info=(\n", " \"Sample from the smallest possible set of tokens whose cumulative probability \"\n", " \"exceeds top_p. Set to 1 to disable and sample from all tokens.\"\n", " ),\n", " )\n", " with gr.Column():\n", " with gr.Row():\n", " top_k = gr.Slider(\n", " label=\"Top-k\",\n", " value=50,\n", " minimum=0.0,\n", " maximum=200,\n", " step=1,\n", " interactive=True,\n", " info=\"Sample from a shortlist of top-k tokens — 0 to disable and sample from all tokens.\",\n", " )\n", " with gr.Column():\n", " with gr.Row():\n", " repetition_penalty = gr.Slider(\n", " label=\"Repetition Penalty\",\n", " value=1.1,\n", " minimum=1.0,\n", " maximum=2.0,\n", " step=0.1,\n", " interactive=True,\n", " info=\"Penalize repetition — 1.0 to disable.\",\n", " )\n", " gr.Examples(examples, inputs=msg, label=\"Click on any example and press the 'Submit' button\")\n", "\n", " submit_event = msg.submit(\n", " fn=user,\n", " inputs=[msg, chatbot],\n", " outputs=[msg, chatbot],\n", " queue=False,\n", " ).then(\n", " fn=bot,\n", " inputs=[\n", " chatbot,\n", " temperature,\n", " top_p,\n", " top_k,\n", " repetition_penalty,\n", " conversation_id,\n", " ],\n", " outputs=chatbot,\n", " queue=True,\n", " )\n", " submit_click_event = submit.click(\n", " fn=user,\n", " inputs=[msg, chatbot],\n", " outputs=[msg, chatbot],\n", " queue=False,\n", " ).then(\n", " fn=bot,\n", " inputs=[\n", " chatbot,\n", " temperature,\n", " top_p,\n", " top_k,\n", " repetition_penalty,\n", " conversation_id,\n", " ],\n", " outputs=chatbot,\n", " queue=True,\n", " )\n", " stop.click(\n", " fn=request_cancel,\n", " inputs=None,\n", " outputs=None,\n", " cancels=[submit_event, submit_click_event],\n", " queue=False,\n", " )\n", " clear.click(lambda: None, None, chatbot, queue=False)\n", "\n", "# if you are launching remotely, specify server_name and server_port\n", "# demo.launch(server_name='your server name', server_port='server port in int')\n", "# if you have any issue to launch on your platform, you can pass share=True to launch method:\n", "# demo.launch(share=True)\n", "# it creates a publicly shareable link for the interface. 
Read more in the docs: https://gradio.app/docs/\n", "demo.launch()" ] }, { "cell_type": "code", "execution_count": 15, "id": "7b837f9e-4152-4a5c-880a-ed874aa64a74", "metadata": { "tags": [] }, "outputs": [], "source": [ "# please uncomment and run this cell to stop the gradio interface\n", "# demo.close()" ] }, { "cell_type": "markdown", "id": "d69ca0a2", "metadata": {}, "source": [ "### Next Step\n", "\n", "Besides the chatbot, we can use LangChain to augment LLM knowledge with additional data, which allows you to build AI applications that can reason about private data or data introduced after a model’s cutoff date. You can find this solution in the [Retrieval-augmented generation (RAG) example](../llm-rag-langchain/)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" }, "openvino_notebooks": { "imageUrl": "https://user-images.githubusercontent.com/29454499/255799218-611e7189-8979-4ef5-8a80-5a75e0136b50.png", "tags": { "categories": [ "Model Demos", "AI Trends" ], "libraries": [], "other": [ "LLM" ], "tasks": [ "Text Generation", "Conversational" ] } }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": {}, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 5 }