sometimesanotion
AI & ML interests
Agentic LLM services, model merging, finetunes, distillation
Recent Activity
Liked a model about 9 hours ago: google/gemma-3-12b-it
Replied to their own post about 13 hours ago:
I'd like to draw your attention to a Lamarck-based experiment which uses Arcee AI's newly published arcee_fusion merge method for three out of its four merges. Yes, just four. This is a simple one, and its recipe is fully open:
https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7-Fusion
It unifies three branches, each of which features models that bring Lamarck-14B-v0.7 and Qwenvergence-14B-v12-Prose together. One side features @jpacifico's http://huggingface.co/jpacifico/Chocolatine-2-14B-Instruct-v2.0.3, and the other features @suayptalha's http://huggingface.co/suayptalha/Lamarckvergence-14B paired with my models that were their merge ancestors.
A fusion merge (of a fusion merge and a SLERP of a fusion and an older merge) should demonstrate the new merge method's behavior in interesting ways, especially in the first quarter of the model, where the SLERP has less impact.
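For anyone who wants to adapt the recipe rather than just read it, here is a minimal sketch of the two kinds of steps involved, written against mergekit's YAML schema as I understand it. The model names come from this post; the choice of base model and the t values are illustrative assumptions, not the published recipe (see the model card above for that).

```yaml
# Sketch of one arcee_fusion step: the method fuses a base model with
# one other model. Which model serves as the base here is an assumption.
merge_method: arcee_fusion
base_model: sometimesanotion/Lamarck-14B-v0.7
models:
  - model: sometimesanotion/Qwenvergence-14B-v12-Prose
dtype: bfloat16
```

The SLERP branch looks similar but takes a t parameter, which can be a per-layer gradient; a gradient like the one below is one way a SLERP can be made to contribute less in the early layers:

```yaml
# Sketch of the SLERP branch; the t gradient below is made up for
# illustration. Low t keeps early layers close to the base model.
merge_method: slerp
base_model: sometimesanotion/Lamarck-14B-v0.7
models:
  - model: sometimesanotion/Lamarck-14B-v0.7
  - model: sometimesanotion/Qwenvergence-14B-v12-Prose
parameters:
  t: [0.0, 0.3, 0.5, 0.5, 0.5]
dtype: bfloat16
```

Each config runs with `mergekit-yaml config.yaml ./output-directory`.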
I welcome you to kick the tires and learn from it. It has prose quality near Qwenvergence v12's, as you'd expect.
Thank you, @mradermacher and @MaziyarPanahi, for the first-day quantizations! Your work helped get me started. https://huggingface.co/models?other=base_model:quantized:sometimesanotion/Lamarck-14B-v0.7-Fusion
sometimesanotion's activity
Fusion vs. SLERP? (10) · #2 opened 15 days ago by sometimesanotion
I think what you're doing here is really helpful (1) · #2 opened 19 days ago by sometimesanotion
Excellent model! (16) · #3 opened about 1 month ago by nixudos
This merge makes sense (4) · #1 opened about 1 month ago by sometimesanotion
The bar graphs are a bit suspect (1) · #4 opened 29 days ago by sometimesanotion
MATH results have changed (2) · #1102 opened 29 days ago by sometimesanotion
This is starting to look a bit like the Lamarck process · #1 opened about 1 month ago by sometimesanotion
Impressive fusion (1) · #2 opened about 1 month ago by jpacifico
Congratulations! · #2 opened about 1 month ago by sometimesanotion
No, this is promising (6) · #1 opened about 1 month ago by CultriX
A tour of 14B finetuning (1) · #1 opened about 1 month ago by sometimesanotion
Censored (8) · #2 opened about 1 month ago by jongames
What is the instruct template? (1) · #1 opened about 1 month ago by Poro7
This is promising (2) · #1 opened about 1 month ago by sometimesanotion
C4ai-command-r-plus Tokenizing? (3) · #1 opened about 1 month ago by Reithan
How are its various parameters (5) · #1 opened about 1 month ago by Inschrift-Spruch-Raum
This release? No Deepseek R1 · #1 opened about 2 months ago by sometimesanotion
Nuslerp parameters? (2) · #1 opened about 2 months ago by sometimesanotion
Extra SLERP parameters (7) · #1 opened 2 months ago by sometimesanotion
Goals and outcome (1) · #1 opened about 2 months ago by sometimesanotion