sometimesanotion posted an update 4 days ago
I am really pleased to see jpacifico/Chocolatine-2-14B-Instruct-v2.0.3 take #4 on the 14B segment of the Open LLM leaderboard. It is a fine-tune of a merge of Arcee's arcee-ai/Virtuoso-Small-v2, my sometimesanotion/Lamarck-14B-v0.7, and my sometimesanotion/Qwenvergence-14B-v12-Prose-DS. Don't let the numbers fool you: in its element, it's quite smooth. I really enjoy merges of Lamarck with near siblings like this one.
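For anyone curious what a three-model merge like this can look like in practice, here is a minimal sketch in mergekit's config format. This is purely illustrative, not the actual Chocolatine recipe (which hasn't been shown here); it assumes a simple model_stock merge with Virtuoso-Small-v2 as the base:

```yaml
# Hypothetical sketch only -- NOT the published Chocolatine-2 recipe.
# Assumes mergekit's model_stock method with Virtuoso as the base model.
merge_method: model_stock
base_model: arcee-ai/Virtuoso-Small-v2
models:
  - model: sometimesanotion/Lamarck-14B-v0.7
  - model: sometimesanotion/Qwenvergence-14B-v12-Prose-DS
dtype: bfloat16
```

A real recipe would likely be more involved, with per-layer weighting or a different merge method, followed by the fine-tuning pass mentioned above.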

Don't be surprised when it's challenging to bring the full reasoning strength of a reasoning-heavy prose model like Qwenvergence v12-DS into a high-IFEval model like Lamarck or Virtuoso Small v2. That's a lot of work to get right, because IFEval, precise reasoning, and prose quality are often in tension with each other. Gaining as much as this merge did is really respectable, and fine-tuning it makes it a more stable base for the coming iterations.