---
title: langchain-streamlit-demo
emoji: 🦜
colorFrom: green
colorTo: red
sdk: docker
app_port: 7860
pinned: true
tags: [langchain, streamlit, docker]
---
# langchain-streamlit-demo
[License: MIT](https://opensource.org/licenses/MIT)
[Python](https://www.python.org)
[security: bandit](https://github.com/PyCQA/bandit)
[Ruff](https://github.com/charliermarsh/ruff)
[code style: black](https://github.com/psf/black)
[pre-commit](https://github.com/pre-commit/pre-commit)
[mypy](http://mypy-lang.org/)
[Docker Hub](https://hub.docker.com/r/joshuasundance/langchain-streamlit-demo)
[HuggingFace Spaces](https://huggingface.co/spaces/joshuasundance/langchain-streamlit-demo)
This project shows how to build a simple chatbot UI with [Streamlit](https://streamlit.io) and [LangChain](https://langchain.com).

This `README` was written by [Claude 2](https://www.anthropic.com/index/claude-2), an LLM from [Anthropic](https://www.anthropic.com/).
# Features

- Chat interface for talking to an AI assistant
- Supports models from
  - [OpenAI](https://openai.com/)
    - `gpt-3.5-turbo`
    - `gpt-4`
  - [Anthropic](https://www.anthropic.com/)
    - `claude-instant-v1`
    - `claude-2`
  - [Anyscale Endpoints](https://endpoints.anyscale.com/)
    - `meta-llama/Llama-2-7b-chat-hf`
    - `meta-llama/Llama-2-13b-chat-hf`
    - `meta-llama/Llama-2-70b-chat-hf`
- Streaming output of assistant responses
- Leverages LangChain for dialogue management
- Integrates with [LangSmith](https://smith.langchain.com) for tracing conversations
- Allows giving feedback on the assistant's responses
# Usage

## Run on HuggingFace Spaces

[Try the hosted demo on HuggingFace Spaces](https://huggingface.co/spaces/joshuasundance/langchain-streamlit-demo)

## With Docker (pull from Docker Hub)

1. Run in terminal: `docker run -p 7860:7860 joshuasundance/langchain-streamlit-demo:latest`
2. Open http://localhost:7860 in your browser.
## Docker Compose

1. Clone the repo and navigate to the cloned directory.
2. Run in terminal: `docker compose up`
3. Open http://localhost:7860 in your browser.
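The compose file shipped in the repo is what `docker compose up` uses; as a point of reference, a minimal equivalent would only need to pull the published image and expose the app port. The service name below is hypothetical — the actual file in the repo may differ:

```yaml
# Minimal sketch of a compose file for this demo (not the repo's actual file).
# Assumes the published Docker Hub image and the app_port (7860) noted above.
services:
  langchain-streamlit-demo:   # hypothetical service name
    image: joshuasundance/langchain-streamlit-demo:latest
    ports:
      - "7860:7860"           # host:container, matching app_port
```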
# Configuration

- Select a model from the dropdown
- Enter an API key for the relevant provider
- Optionally enter a LangSmith API key to enable conversation tracing
- Customize the assistant prompt and temperature
# Code Overview

- `app.py` - Main Streamlit app definition
- `llm_stuff.py` - LangChain helper functions
# Deployment

The app is packaged as a Docker image for easy deployment. It is published to Docker Hub and Hugging Face Spaces:

- [DockerHub](https://hub.docker.com/r/joshuasundance/langchain-streamlit-demo)
- [HuggingFace Spaces](https://huggingface.co/spaces/joshuasundance/langchain-streamlit-demo)

CI workflows in `.github/workflows` handle building and publishing the image.
# Links

- [Streamlit](https://streamlit.io)
- [LangChain](https://langchain.com)
- [LangSmith](https://smith.langchain.com)
- [OpenAI](https://openai.com/)
- [Anthropic](https://www.anthropic.com/)
- [Anyscale Endpoints](https://endpoints.anyscale.com/)
# TODO

1. More customization / parameterization in sidebar