Kenshiro-28
AI & ML interests: None yet
Recent Activity
New activity about 1 month ago: deepseek-ai/DeepSeek-R1: Request: Create distill of Mistral Small 24B
New activity about 2 months ago: bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF: Prompt format
Organizations: None yet
Kenshiro-28's activity

Request: Create distill of Mistral Small 24B (3) - #128 opened about 1 month ago by Kenshiro-28
Prompt format (6) - #8 opened about 2 months ago by Kenshiro-28
A Request For reasoning model (2) - #1 opened 3 months ago by santosgamer01
CORRECTION: THIS SYSTEM MESSAGE IS ***PURE GOLD***!!! (16) - #33 opened 6 months ago by jukofyork
"llama.cpp error: 'error loading model vocabulary: unknown pre-tokenizer type: 'dolphin12b''" (23) - #1 opened 8 months ago by jrell
Feedback (14) - #1 opened 9 months ago by TravelingMan
It uses too much RAM for a 7B model (3) - #1 opened 12 months ago by Kenshiro-28
Great model! I suggest switching to ChatML prompt format for next versions. - #8 opened about 1 year ago by Kenshiro-28
Bad EOS (2) - #5 opened about 1 year ago by Kenshiro-28
Bad EOS (2) - #6 opened about 1 year ago by Kenshiro-28
Bad EOS - #1 opened about 1 year ago by Kenshiro-28
Bad EOS token - #1 opened about 1 year ago by Kenshiro-28
Prompt format (1) - #2 opened about 1 year ago by Kenshiro-28
The model doesn't report the ChatML EOS token (2) - #1 opened over 1 year ago by Kenshiro-28
ChatML prompt format confusion - please reconsider (36) - #3 opened over 1 year ago by kalomaze
The best model (3) - #1 opened over 1 year ago by Kenshiro-28
Change prompt format to Vicuna v1.1 (2) - #7 opened over 1 year ago by Kenshiro-28