Active filters: MiniCPM
ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1 • Image-Text-to-Text • 1.46k downloads • 119 likes
openbmb/MiniCPM-2B-sft-fp32 • Text Generation • 273 downloads • 296 likes
openbmb/MiniCPM-2B-dpo-bf16 • Text Generation • 409 downloads • 48 likes
openbmb/MiniCPM-2B-128k • Text Generation • 227 downloads • 42 likes
openbmb/MiniCPM-S-1B-sft • Text Generation • 242 downloads • 10 likes
mradermacher/Bio-Medical-MultiModal-Llama-3-8B-V1-i1-GGUF • 14.5k downloads • 7 likes
nitsuai/Bio-Medical-MultiModal-Llama-3-8B-V1-i1-GGUF
openbmb/MiniCPM-2B-sft-bf16 • Text Generation • 12.9k downloads • 118 likes
openbmb/MiniCPM-2B-dpo-fp32 • Text Generation • 163 downloads • 32 likes
openbmb/MiniCPM-2B-dpo-fp16 • Text Generation • 471 downloads • 34 likes
openbmb/MiniCPM-2B-dpo-bf16-llama-format • Text Generation • 9 downloads • 12 likes
openbmb/MiniCPM-2B-sft-fp32-llama-format • Text Generation • 7 downloads • 3 likes
openbmb/MiniCPM-2B-sft-bf16-llama-format • Text Generation • 9 downloads • 11 likes
Inv/MoECPM-Untrained-4x2b • Text Generation • 1 download • 1 like
jncraton/MiniCPM-2B-sft-bf16-llama-format-ct2-int8
jncraton/MiniCPM-2B-dpo-bf16-llama-format-ct2-int8
mlx-community/MiniCPM-2B-sft-bf16-llama-format-mlx • 3 downloads • 3 likes
openbmb/MiniCPM-2B-history • Text Generation • 50 downloads • 20 likes
DavidAU/MoECPM-Untrained-4x2b-Q6_K-GGUF
mlx-community/MiniCPM-2B-sft-4bit-llama-format-mlx • 1 download • 1 like
openbmb/MiniCPM-1B-sft-bf16 • Text Generation • 1.76k downloads • 17 likes
Goekdeniz-Guelmez/MiniCPM-2B-sft-bf16 • Text Generation
Goekdeniz-Guelmez/MiniCPM-2B-dpo-fp32-safetensors • 2 downloads • 1 like
Goekdeniz-Guelmez/MiniCPM-2B-sft-fp32-safetensors • 2 downloads • 1 like
Goekdeniz-Guelmez/MiniCPM-2B-dpo-bf16-safetensors • 4 downloads • 1 like
mlx-community/MiniCPM-2B-dpo-bf16-4bit • 1 download • 1 like
mlx-community/MiniCPM-2B-dpo-bf16
SparseLLM/ProSparse-MiniCPM-1B-sft • Text Generation • 7 downloads • 3 likes
openbmb/MiniCPM-S-1B-sft-llama-format • Text Generation • 172 downloads • 4 likes
openbmb/MiniCPM-S-1B-sft-gguf • 67 downloads • 6 likes