Upload folder using huggingface_hub
README.md CHANGED
@@ -522,7 +522,7 @@ from lmdeploy.vl import load_image
 
 model = 'OpenGVLab/InternVL3-1B'
 image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
-pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=
+pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1))
 response = pipe(('describe this image', image))
 print(response.text)
 ```
@@ -539,7 +539,7 @@ from lmdeploy.vl import load_image
 from lmdeploy.vl.constants import IMAGE_TOKEN
 
 model = 'OpenGVLab/InternVL3-1B'
-pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=
+pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1))
 
 image_urls=[
 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
@@ -561,7 +561,7 @@ from lmdeploy import pipeline, TurbomindEngineConfig
 from lmdeploy.vl import load_image
 
 model = 'OpenGVLab/InternVL3-1B'
-pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=
+pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1))
 
 image_urls=[
 "https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg",
@@ -581,7 +581,7 @@ from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig
 from lmdeploy.vl import load_image
 
 model = 'OpenGVLab/InternVL3-1B'
-pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=
+pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1))
 
 image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
 gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
@@ -596,7 +596,7 @@ print(sess.response.text)
 
 LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below are an example of service startup:
 
 ```shell
-lmdeploy serve api_server OpenGVLab/InternVL3-1B --server-port 23333 --tp
+lmdeploy serve api_server OpenGVLab/InternVL3-1B --server-port 23333 --tp 1
 ```
 
 To use the OpenAI-style interface, you need to install OpenAI:
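Taken together, the hunks show what this commit amounts to: each previously truncated `pipeline(...)` call now closes with `tp=1))`. Assembled from the diff context above (the import lines come from the hunk headers; nothing outside the shown context is reproduced), the repaired single-image snippet would read roughly as follows:

```python
# Minimal sketch of the repaired single-image example, assembled from the diff context above.
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL3-1B'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')

# This is the line the commit completes: tensor parallelism is now explicitly tp=1.
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1))

response = pipe(('describe this image', image))
print(response.text)
```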
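The last hunk likewise completes the `--tp 1` flag for the `api_server` launch, and the README's next context line moves on to the OpenAI-style interface. As a hedged sketch only, assuming the server from that command is reachable at `http://0.0.0.0:23333/v1` (the port comes from the diff; the `/v1` route, the image URL, and the message layout are assumptions, not part of this commit), a client call might look like:

```python
# Hedged client-side sketch: query the OpenAI-compatible api_server started above.
# The base_url, image URL, and message layout are assumptions for illustration.
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY',  # unused by a local server, but required by the client
                base_url='http://0.0.0.0:23333/v1')

# Ask the server which model it is serving rather than hard-coding the name.
model_name = client.models.list().data[0].id

response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'describe this image'},
            {'type': 'image_url',
             'image_url': {'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'}},
        ],
    }],
    temperature=0.8,
    top_p=0.8,
)
print(response.choices[0].message.content)
```

The client itself comes from `pip install openai`, which is what the final context line of the diff is leading into.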