Sorry, I was just too excited about the recent progress of the model
Inschrift Spruch Raum (Inschrift-Spruch-Raum)
AI & ML interests: None yet
Recent Activity
replied to sometimesanotion's post · about 10 hours ago
I'd like to draw your attention to a Lamarck-based experiment which uses Arcee AI's newly published arcee_fusion merge method for three out of its four merges. Yes, just four. This is a simple one, and its recipe is fully open:
https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7-Fusion
It unifies three branches, all of which feature models which bring Lamarck-14B-v0.7 and Qwenvergence-14B-v12-Prose together. One side features @jpacifico's http://huggingface.co/jpacifico/Chocolatine-2-14B-Instruct-v2.0.3 and the other features @suayptalha's http://huggingface.co/suayptalha/Lamarckvergence-14B paired with my models which were their merge ancestors.
A fusion merge - of a fusion merge and a SLERP of a fusion and older merge - should demonstrate the new merge method's behavior in interesting ways, especially in the first 1/4th of the model where the SLERP has less impact.
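For anyone curious what the SLERP stage actually computes, here is a rough NumPy sketch of spherical linear interpolation as typically applied per flattened weight tensor; the t value and tensor shape are illustrative placeholders, not values from the recipe, and this is not mergekit's actual implementation:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two flattened weight tensors."""
    # Normalize copies only to measure the angle between the weight vectors
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if np.sin(omega) < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    # Standard SLERP weights, applied to the raw (unnormalized) tensors
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Illustrative only: blend one hypothetical layer's weights at t = 0.3
layer_a = np.random.randn(4096).astype(np.float32)
layer_b = np.random.randn(4096).astype(np.float32)
merged = slerp(0.3, layer_a, layer_b)
```

Varying t across layers is what lets a merge lean on one parent early in the stack and the other later, which is why the first quarter of this model is where the SLERP has the least say.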
I welcome you to kick the tires and learn from it. It has prose quality near Qwenvergence v12's - as you'd expect.
Thank you, @mradermacher and @MaziyarPanahi, for the first-day quantizations! Your work helped get me started. https://huggingface.co/models?other=base_model:quantized:sometimesanotion/Lamarck-14B-v0.7-Fusion
replied to sometimesanotion's post · 1 day ago
replied to sometimesanotion's post · about 1 month ago
I'm just saving today's 14B parameter chart, because big things are about to hit. Lamarck v0.7 has been surpassed by at least two models I know of, and in ways that promise good things to come for the whole scene. I am taking my time to enjoy the progress, and Lamarck v0.8 will come when it's clearly keeping up and keeping its flavor.
There is no one best model for everyone, regardless of these rankings. I aim to make Lamarck good at coding, translating, and rigorously critiquing rhetoric and logic. Always check out the authors' notes on models to see if their intent is close to your use case!
Organizations: None yet
Inschrift-Spruch-Raum's activity
replied to sometimesanotion's post · about 10 hours ago
replied to sometimesanotion's post · 1 day ago
Hello, I noticed that you haven't released a new quantized model in a long time.
At the same time, some wildly hyped 14B models have appeared recently: https://huggingface.co/JungZoona
It's time to keep moving forward.
replied to sometimesanotion's post · about 1 month ago
The list has been updated, and your Lamarck still holds the top spot, which is surprising.
At the same time, we can also see the great potential of Virtuoso-Small-v2 and Qwenvergence-14B-v12-Prose-DS.
However, I don't need formatted output, so from that perspective, Qwenvergence-14B-v12-Prose-DS is truly the number one in my mind.
Unfortunately, although I have quantized Qwenvergence-14B-v12-Prose-DS, I had to give up on uploading it because of my network environment.
How are its various parameters?
5 · #1 opened about 1 month ago by Inschrift-Spruch-Raum