OK never mind, I clicked that blog link and this is really interesting; how come I've never heard of this haha. It states some really promising things right there... :o
the model that calls itself "Qwenconceited-14B-v13-DeepSuffering". <-- That cracked me up, lol!
And yeah, very interesting, but I'm going to have to read it again at another moment to fully understand everything it's saying haha. Sounds like interesting stuff though!
Oh yeah, for sure, I'll hit you up sometime! Just to be clear, I wasn't asking you to upload all the personal tweaks you've probably spent weeks improving haha. I was just curious about some of the things you said. For example, when you said "Extract a small LoRA from this" I was a little confused, actually haha. As in: I have no idea how to do that, let alone apply it to smooth out other models in the merge.
I know about adapter models, and that you can create those with LoRA fine-tuning and either load them on top during inference or merge them with the base model. But extracting a LoRA from an existing model is kind of confusing me haha (sorry!). It sounds interesting though! Do I understand correctly that this would let you "operate" on the model more precisely and with a lot less compute required (i.e., more merges you can make and test in a given time window)?
Would you mind doing a writeup about your customized mergekit workflow, or do you prefer to keep some of the secret sauce to yourself? ;)
Or I guess check the README, though nobody reads that lol: https://huggingface.co/spaces/CultriX/MultiAgent-CodeTask/blob/main/README.md
This version does not rely on AutoGen.
The user simply enters their OPENAI_API_KEY and a task, and the Space goes to work, employing:
1. a prompt-enhancer agent,
2. an orchestrator agent,
3. a coder agent,
4. a code-reviewing agent, and
5. a code documentation generator agent.
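Conceptually, the pipeline is just a chain of transforms over the task string. A toy sketch with stubbed-out agents (the real Space calls the OpenAI API at each stage; these stand-ins only tag the text to show the flow):

```python
def run_pipeline(task, agents):
    """Pass the task through each agent in order, collecting a transcript."""
    transcript = []
    for name, agent in agents:
        task = agent(task)
        transcript.append((name, task))
    return transcript

# Stub agents: each real agent would be an LLM call with its own system prompt.
agents = [
    ("prompt-enhancer", lambda t: f"[enhanced] {t}"),
    ("orchestrator",    lambda t: f"[planned] {t}"),
    ("coder",           lambda t: f"[code for] {t}"),
    ("reviewer",        lambda t: f"[reviewed] {t}"),
    ("documenter",      lambda t: f"[documented] {t}"),
]

transcript = run_pipeline("Write a password generator", agents)
for name, output in transcript:
    print(f"{name}: {output}")
```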
See below image for an example workflow:
CultriX/MultiAgent-CodeTask
I made an improved version of this that does not rely on autogen and produces some really nice code! :)
https://huggingface.co/spaces/CultriX/MultiAgent-CodeTask
DEMO
========== USER PROMPT ==========
Write a random password generator in python.
========== Multi-Agent Conversation ==========
[Prompt Optimizer]: Optimized Task:
Instructions for Password Generator in Python
Objective: Create a Python script that generates secure, random passwords based on user-defined criteria.
Requirements:
- The script should allow users to customize the password:
- Length: Specify the desired length of the password (e.g., 12-16 characters).
- Character Types: Allow users to include or exclude uppercase letters, lowercase letters, numbers, and special characters.
- The password should be randomly generated and accessible for immediate use.
Output Specifications:
- The generated password should be printed to the console.
- The format should be simple: just the password string without any additional text.
Style:
- Write clear, maintainable Python code with appropriate comments explaining the functionality of key sections.
- Use descriptive variable names and adhere to PEP 8 formatting standards.
Example of Desired Output:
If a user requests a 12-character password with all character types included, the script might output something like:
D4v!7l@rTzq%
Guidance for Implementation:
- Start by importing necessary libraries (e.g., `random` and `string`).
- Define a function that takes parameters for password length and character type preferences.
- Use a combination of built-in functions and methods to dynamically create the password based on the specified criteria.
Here's a brief code outline to guide your implementation:
```python
import random
import string

def generate_password(length=12, use_uppercase=True, use_lowercase=True, use_numbers=True, use_special_chars=True):
    # Create a pool of characters based on user preferences
    charset = ''
    if use_uppercase:
        charset += string.ascii_uppercase
    if use_lowercase:
        charset += string.ascii_lowercase
    if use_numbers:
        charset += string.digits
    if use_special_chars:
        charset += string.punctuation
    # Ensure the charset is not empty
    if not charset:
        raise ValueError("At least one character type must be selected.")
    # Generate a random password
    password = ''.join(random.choice(charset) for _ in range(length))
    print(password)
```
Feel free to modify the above snippet based on additional features or functionality you may want to incorporate. Ensure that the code is fully functional, handles user input safely, and returns a strong password.
Here's a step-by-step plan for completing the Password Generator task while coordinating tasks among a Coder Agent, a Code Reviewer Agent, and a Documentation Agent:
Step 1: Task Assignment
- Coder Agent: Responsible for writing the initial implementation of the password generator based on the provided requirements.
- Code Reviewer Agent: Responsible for reviewing the code for correctness, adherence to coding standards, and overall quality.
- Documentation Agent: Responsible for creating user documentation and comments within the code.
Step 2: Coder Agent - Implementation
- Step 2.1: Import necessary libraries (`random` and `string`).
- Step 2.2: Define the function `generate_password()` with parameters for password length and character type preferences.
- Step 2.3: Create a character set based on user preferences:
  - Initialize an empty string for `charset`.
  - Use `if` statements to append to `charset` depending on whether the user has selected uppercase letters, lowercase letters, numbers, and special characters.
- Step 2.4: Check if `charset` is empty and raise a `ValueError` if it is.
- Step 2.5: Generate the password using a list comprehension that iterates for the specified length, selecting random characters from `charset`.
- Step 2.6: Print the generated password.
Step 3: Coder Agent - Testing
- Step 3.1: Write test cases to validate that the password generation meets the specified criteria (length, character types).
- Step 3.2: Ensure that the function behaves as expected with various combinations of parameters.
Step 4: Code Reviewer Agent - Code Review
- Step 4.1: Review the code for:
- Correct functionality (does it generate a valid password?).
- Adherence to PEP 8 formatting standards (indentation, line length, etc.).
- Appropriate comments for understanding the code's functionality.
- Step 4.2: Provide feedback to the Coder Agent regarding any improvements or necessary changes.
Step 5: Coder Agent - Revise Code
- Step 5.1: Implement any changes or improvements suggested during the code review.
- Step 5.2: Re-test the code to ensure that revisions do not introduce any errors.
Step 6: Documentation Agent - Document the Code
- Step 6.1: Write clear and concise comments within the code, explaining the key sections and logic.
- Step 6.2: Create a README file or a user guide that explains how to use the password generator script, including:
- Functionality overview.
- Instructions for specifying parameters.
- Examples of usage and output.
Step 7: Final Review and Integration
- Step 7.1: Perform a final review of the combined code and documentation to ensure that everything is coherent and aligned with the original task requirements.
- Step 7.2: Package the final version of the code and documentation for delivery or deployment.
Step 8: Completion
- Step 8.1: Confirm all tasks are complete and that the password generator meets the specified objectives.
- Step 8.2: Consider potential further enhancements or features for future iterations based on user feedback.
By following this structured plan and assigning specific roles and responsibilities, the collaborative effort from the Coder Agent, Code Reviewer Agent, and Documentation Agent will lead to a well-crafted, maintainable, and user-friendly password generator in Python.
```python
import random
import string

def generate_password(length=12, use_uppercase=True, use_lowercase=True, use_numbers=True, use_special_characters=True):
    """Generate a random password based on user preferences."""
    charset = ""
    if use_uppercase:
        charset += string.ascii_uppercase
    if use_lowercase:
        charset += string.ascii_lowercase
    if use_numbers:
        charset += string.digits
    if use_special_characters:
        charset += string.punctuation
    if not charset:
        raise ValueError("At least one character type must be selected for password generation.")
    password = ''.join(random.choice(charset) for _ in range(length))
    print(password)

# Testing the password generator
if __name__ == "__main__":
    print("Generated Passwords:")
    generate_password(length=16, use_uppercase=True, use_lowercase=True, use_numbers=True, use_special_characters=True)
    generate_password(length=12, use_uppercase=False, use_lowercase=True, use_numbers=True, use_special_characters=False)
    generate_password(length=20, use_uppercase=True, use_lowercase=False, use_numbers=False, use_special_characters=True)
```
[Code Reviewer]: Review (Iteration 0):
The provided code meets the task specifications and correctly implements the password generator based on user-defined criteria. Here is the feedback based on the requirements and code quality:
- Functionality: The function `generate_password` accepts the correct parameters for customizing password length and character types, allowing appropriate flexibility for the user.
- Character Sets: The character sets for uppercase letters, lowercase letters, numbers, and special characters are correctly created based on user preferences.
- Error Handling: The code properly raises a `ValueError` if no character types are selected, which is essential for ensuring that a password can be generated.
- Randomness: The generation of the password using `random.choice` correctly ensures randomness in the selection of characters.
- Output: The password is printed directly to the console as specified, without any additional text or formatting.
- PEP 8 Compliance: The variable names are descriptive, and the overall formatting adheres to PEP 8 standards. The use of double quotes for strings and the overall indentation are consistent and clear.
- Comments: The code includes a comment block describing the function, which is beneficial for maintainability.
- Main Guard: The `if __name__ == "__main__":` construct is used correctly, allowing the code to be tested when run as a script.
- Testing: Sample calls to the password generator function provide a way to check its functionality during development. This is good for demonstration but may not be appropriate in a production setting if this code were to be used as a library.
Overall, the code is well-structured and addresses the task requirements effectively. There are no significant issues to note.
APPROVE
Documentation for generate_password
Function: generate_password
Purpose: Generates a random password based on user-specified criteria.
Parameters:
- `length` (int): Desired length of the password (default is 12).
- `use_uppercase` (bool): Include uppercase letters (default is True).
- `use_lowercase` (bool): Include lowercase letters (default is True).
- `use_numbers` (bool): Include numeric digits (default is True).
- `use_special_characters` (bool): Include special characters (default is True).
Returns:
- A randomly generated password, printed to the console.
Exceptions:
- Raises `ValueError` if no character type is selected.
Notes:
- The function constructs a character set based on the selected options and generates a password by randomly selecting characters from this set.
Help Message
Usage: generate_password(length=12, use_uppercase=True, use_lowercase=True, use_numbers=True, use_special_characters=True)
Generate a random password based on preferences:
- length: Length of the password (default: 12)
- use_uppercase: Include uppercase letters? (default: True)
- use_lowercase: Include lowercase letters? (default: True)
- use_numbers: Include numbers? (default: True)
- use_special_characters: Include special characters? (default: True)
Example: generate_password(16, use_uppercase=True)
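One hedged side note on the generated code (my own observation, not part of the demo transcript): the Python docs recommend the `secrets` module over `random` for anything security-sensitive, since `random` is not cryptographically secure. A drop-in variant of the approved function:

```python
import secrets
import string

def generate_password_secure(length=12, use_uppercase=True, use_lowercase=True,
                             use_numbers=True, use_special_characters=True):
    """Like generate_password, but backed by the CSPRNG in the secrets module."""
    charset = ""
    if use_uppercase:
        charset += string.ascii_uppercase
    if use_lowercase:
        charset += string.ascii_lowercase
    if use_numbers:
        charset += string.digits
    if use_special_characters:
        charset += string.punctuation
    if not charset:
        raise ValueError("At least one character type must be selected.")
    # secrets.choice draws from os.urandom-backed randomness.
    return ''.join(secrets.choice(charset) for _ in range(length))

print(generate_password_secure(length=16))
```

Returning the string (instead of printing inside the function) also makes it usable as a library, which the reviewer hinted at.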
Congrats! Very nicely done :)!
It took a custom toolchain around Arcee AI's mergekit to manage the complex merges, gradients, and LoRAs required to make this happen. I really like seeing features of many quality finetunes in one solid generalist model.
https://huggingface.co/blog/CultriX/reverse-engineering-customgpts
Introducing the Spaces Ranking Tracker, a comprehensive analytics dashboard that tracks and analyzes every AI application in the HuggingFace ecosystem.

✨ Key Features:
• Real-time tracking of daily ranking changes over 30 days
• Detailed analysis of the top 100 trending spaces
• User-based integrated score visualization
• One-click access to space details
• Interactive rank change graphs

Dashboard Components:
1. Main Dashboard
- Daily rank trend graphs
- Top 20 creators' combined score chart
- Detailed space information cards
- Real-time trending score updates
2. Space Detailed Analysis
- Creation date, current rank, and trending score
- 30-day ranking history
- Direct space access
- Custom color coding for intuitive rank display
3. Interactive Features
- Custom filtering options
- Sorting by various metrics
- Detailed performance statistics
- Comprehensive trending scores
- Historical data tracking

🎯 How to Use:
• Monitor the latest AI community trends
• Track your project's performance
• Discover popular AI demos
• Analyze competing projects
• Follow AI ecosystem dynamics

Stay on top of every movement in the HuggingFace ecosystem with daily ranking updates! Try it now!
Access Dashboard: fantos/Ranking-Tracker
#HuggingFace #AI #DataVisualization #TrendAnalysis #AITrends
1. **Code Writing with Function Calls & File Operations**
- [Documentation](https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_function_call_code_writing/)
- [Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_function_call_code_writing.ipynb)
- *Key Tools Shown*:
     - `list_files()` - Directory listing
     - `read_file(filename)` - File reading
     - `edit_file(file, start_line, end_line, new_code)` - Precise code editing
     - Code validation and syntax checking
     - File backup and restore
2. **Auto Feedback from Code Execution**
- [Documentation](https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_auto_feedback_from_code_execution/)
- [Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)
- *Key Tools Shown*:
     - `execute_code(code)` with output capture
     - Error analysis and auto-correction
     - Test case generation
     - Iterative debugging loop
3. **Async Operations & Parallel Execution**
- [Documentation](https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_function_call_async/)
- [Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_function_call_async.ipynb)
- *Key Tools Shown*:
- Async function registration
- Parallel agent operations
- Non-blocking file operations
- Task coordination
4. **LangChain Integration & Advanced Tools**
- [Colab](https://colab.research.google.com/github/sugarforever/LangChain-Advanced/blob/main/Integrations/AutoGen/autogen_langchain_uniswap_ai_agent.ipynb)
- *Key Tools Shown*:
- Vector store integration
- Document QA chains
- Multi-agent coordination
- Custom tool creation
Most relevant for file operations and code editing is Example #1, which demonstrates the core techniques used in autogenie.py for file manipulation and code editing using line numbers and replacement.
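The line-numbered editing trick from Example #1 is easy to replicate outside AutoGen. A minimal sketch (the name echoes the notebook's `edit_file` tool, but this in-memory version is my own simplification, operating on a string rather than a real file):

```python
def edit_lines(text, start_line, end_line, new_code):
    """Replace lines start_line..end_line (1-indexed, inclusive) with new_code."""
    lines = text.splitlines()
    if not (1 <= start_line <= end_line <= len(lines)):
        raise ValueError("line range out of bounds")
    # Slice assignment splices in the replacement, which may differ in length.
    lines[start_line - 1:end_line] = new_code.splitlines()
    return "\n".join(lines)

source = "def f():\n    return 1\n\nprint(f())"
patched = edit_lines(source, 2, 2, "    return 2  # bumped return value")
print(patched)
```

Giving the agent explicit line numbers like this is what lets it make precise, local edits instead of rewriting whole files.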
I updated the LLM Scientist roadmap and added a ton of new information and references. It covers training, datasets, evaluation, quantization, and new trends like test-time compute scaling.
The LLM Course has been incredibly popular (41.3k stars!) and I've been touched to receive many, many messages about how it helped people in their careers.
I know how difficult this stuff can be, so I'm super proud of the impact it had. I want to keep updating it in 2025, especially with the LLM Engineer roadmap.
Thanks everyone, hope you'll enjoy it!
๐ป LLM Course: https://huggingface.co/blog/mlabonne/llm-course
- The science team at @huggingface reproduced and open source the seek r1. https://github.com/huggingface/open-r1
- @qwen released a series of models with 1 million token context! https://qwenlm.github.io/blog/qwen2.5-1m/
- SmolVLM got even smaller with completely new variants at 256m and 500m https://huggingface.co/blog/smolervlm
There's so much you could do with these developments. Especially combining them together into agentic applications or fine-tuning them on your use case.
https://huggingface.co/blog/CultriX/reverse-engineering-customgpts
Hi all, I created this "AutoGen Multi-Agent Workflow" space that allows you to experiment with multi-agent workflows.
By default, it allows code generation with built-in quality control and automatic documentation generation. It achieves this by leveraging multiple AI agents working together to produce high-quality code snippets, ensuring they meet the specified requirements.
In addition to the default, the space allows users to set custom system messages for each assistant, potentially completely changing the workflow.
# Workflow Steps
1. User Input:
- The user defines a prompt, such as "Write a random password generator using python."
- Outcome: A clear task for the primary assistant to accomplish.
2. Primary Assistant Work:
- The primary assistant begins working on the provided prompt.
It generates an initial code snippet based on the user's request.
- Outcome: An initial proposal for the requested code.
3. Critic Feedback:
- The critic reviews the generated code and provides feedback, or (if the output meets the criteria) broadcasts the APPROVED message.
(This process repeats until the output is APPROVED or 10 messages have been exchanged).
- Outcome: A revised Python function that incorporates the critic's feedback.
4. Documentation Generation:
- Once the code is approved, it is passed to a documentation assistant.
The documentation assistant generates a concise documentation for the final code.
- Outcome: A short documentation including function description, parameters, and return values.
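The coder/critic exchange in step 3 boils down to a bounded retry loop: revise until the critic says APPROVED or the message budget runs out. A toy sketch with stubbed agents (the real ones are LLM calls; these stand-ins just demonstrate the control flow):

```python
def critic_loop(task, coder, critic, max_messages=10):
    """Iterate coder -> critic until APPROVED or the message budget is spent."""
    draft, messages = coder(task, feedback=None), 1
    while messages < max_messages:
        verdict = critic(draft)
        messages += 1
        if verdict == "APPROVED":
            return draft, messages
        draft = coder(task, feedback=verdict)
        messages += 1
    return draft, messages

# Stub agents: this coder "fixes" the draft once it has seen any feedback.
def coder(task, feedback):
    return f"code v2 for: {task}" if feedback else f"code v1 for: {task}"

def critic(draft):
    return "APPROVED" if "v2" in draft else "please add input validation"

final, used = critic_loop("password generator", coder, critic)
print(final, used)
```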
Enjoy!
CultriX/AutoGen-MultiAgent-Example
Sorry, that was a typo! It should indeed have said fp16; I also used fp16, so forget the f16. :)
No, what I meant is:
- I export it to `fp16 = f"{MODEL_NAME}/{MODEL_NAME.lower()}.fp16.gguf"` --> so a .gguf file.
- You export it as `fp16 = f"{MODEL_NAME}/{MODEL_NAME.lower()}.fp16.bin"` --> so a .bin file.
This is relevant because you later use this file in the `for method in QUANTIZATION_METHODS:` loop. So I used a .gguf file there and you used a .bin file.
I'm not sure if it makes a lot of difference, but I feel like the .bin file failed for me, and when I changed it to .gguf it worked (I followed: https://github.com/ggerganov/llama.cpp/discussions/2948).
I might be wrong here, but could it be that there is a mistake in the code?
For me it did not work either, but after I changed:
```python
# Convert to fp16
fp16 = f"{MODEL_NAME}/{MODEL_NAME.lower()}.fp16.bin"
!python llama.cpp/convert.py {MODEL_NAME} --outtype f16 --outfile {fp16}

# Quantize the model for each method in the QUANTIZATION_METHODS list
for method in QUANTIZATION_METHODS:
    qtype = f"{MODEL_NAME}/{MODEL_NAME.lower()}.{method.upper()}.gguf"
    !./llama.cpp/quantize {fp16} {qtype} {method}
```
to
```python
# Convert to fp16
fp16 = f"{MODEL_NAME}/{MODEL_NAME.lower()}.fp16.gguf"
!python llama.cpp/convert.py {MODEL_NAME} --outtype fp16 --outfile {fp16}

# Quantize the model for each method in the QUANTIZATION_METHODS list
for method in QUANTIZATION_METHODS:
    qtype = f"{MODEL_NAME}/{MODEL_NAME.lower()}.{method.upper()}.gguf"
    !./llama.cpp/quantize {fp16} {qtype} {method}
```
It did seem to work without error!
I found some documentation as well where they run this:

python llama.cpp/convert.py vicuna-hf \
    --outfile vicuna-13b-v1.5.gguf \
    --outtype q8_0

so they export it to a .gguf file (although they also immediately apply the q8_0 quant, so that might be why; I'm not sure, this is a little beyond my knowledge haha). Figured I'd share this though; it might help you more knowledgeable people figure out what is going on!
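For what it's worth, the path templating itself is easy to sanity-check in isolation, independent of llama.cpp (a toy sketch with a made-up model name, just showing that both the fp16 file and each quant target end in .gguf after the fix):

```python
MODEL_NAME = "MyModel-7B"
QUANTIZATION_METHODS = ["q4_k_m", "q5_k_m"]

# Mirror the notebook's path templating: with the corrected suffix, the
# fp16 input handed to the quantize step is itself a GGUF file.
fp16 = f"{MODEL_NAME}/{MODEL_NAME.lower()}.fp16.gguf"
quant_paths = [
    f"{MODEL_NAME}/{MODEL_NAME.lower()}.{method.upper()}.gguf"
    for method in QUANTIZATION_METHODS
]
print(fp16)
print(quant_paths)
```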