Commit 8a4f00d · Parent(s): 52a27ed

feat: added Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M) model

Files changed:
- Dockerfile  +2 -2
- README.md   +5 -5
- index.html  +6 -8
Dockerfile CHANGED

@@ -1,5 +1,5 @@
 # Grab a fresh copy of the Python image
-FROM python:3.
+FROM python:3.11-slim
 
 # Install build and runtime dependencies
 RUN apt-get update && \
@@ -15,7 +15,7 @@ RUN pip install -U pip setuptools wheel && \
 
 # Download model
 RUN mkdir model && \
-    curl -L https://huggingface.co/TheBloke/
+    curl -L https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf -o model/gguf-model.bin
 
 COPY ./start_server.sh ./
 COPY ./main.py ./
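After the build, the quantized model file sits at model/gguf-model.bin inside the image and is served by start_server.sh / main.py (not shown in this commit). As a rough sketch only, the snippet below checks that the downloaded GGUF loads and generates using the llama-cpp-python package mentioned in the README tags; the file name, prompt format, and parameters are illustrative assumptions, not part of the commit:

# smoke_test.py -- illustrative check, not part of this repository
from llama_cpp import Llama

# Path matches the curl -o target in the Dockerfile above
llm = Llama(model_path="model/gguf-model.bin", n_ctx=2048)

# Mistral-instruct style prompt (assumed formatting)
out = llm("[INST] Say hello in one short sentence. [/INST]", max_tokens=32)
print(out["choices"][0]["text"])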
README.md CHANGED

@@ -1,20 +1,20 @@
 ---
-title:
+title: Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)
 colorFrom: purple
 colorTo: blue
 sdk: docker
 models:
-  -
-  - TheBloke/
+  - mistralai/Mistral-7B-Instruct-v0.1
+  - TheBloke/Mistral-7B-Instruct-v0.1-GGUF
 tags:
   - inference api
   - openai-api compatible
   - llama-cpp-python
-  -
+  - Mistral-7B-Instruct-v0.1-GGUF
   - gguf
 pinned: false
 ---
 
-#
+# Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)
 
 Please refer to the [index.html](index.html) for more information.
index.html CHANGED

@@ -1,10 +1,10 @@
 <!DOCTYPE html>
 <html>
   <head>
-    <title>
+    <title>Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)</title>
   </head>
   <body>
-    <h1>
+    <h1>Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)</h1>
     <p>
       With the utilization of the
       <a href="https://github.com/abetlen/llama-cpp-python">llama-cpp-python</a>
@@ -16,16 +16,14 @@
     <ul>
       <li>
         The API endpoint:
-        <a
-
-          >https://limcheekin-codellama-13b-oasst-sft-v10-gguf.hf.space/v1</a
+        <a href="https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/v1"
+          >https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/v1</a
         >
       </li>
       <li>
         The API doc:
-        <a
-
-          >https://limcheekin-codellama-13b-oasst-sft-v10-gguf.hf.space/docs</a
+        <a href="https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/docs"
+          >https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/docs</a
        >
       </li>
     </ul>
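Because the Space advertises an OpenAI-compatible API, any OpenAI client can be pointed at the /v1 base URL listed above. A minimal sketch with the openai Python package follows; the api_key placeholder and model name are assumptions (llama-cpp-python's server typically does not require a real key and serves the single model it loaded):

# query_space.py -- illustrative client call, not part of this repository
from openai import OpenAI

client = OpenAI(
    base_url="https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/v1",
    api_key="not-needed",  # placeholder; the server usually ignores it
)

resp = client.chat.completions.create(
    model="gguf-model",  # placeholder name; check /v1/models for the actual id
    messages=[{"role": "user", "content": "What is a GGUF file? Answer in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)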