New smolagents update: Safer Local Python Execution! 🦾
With the latest release, we've added security checks to the local Python interpreter: every evaluation is now analyzed for dangerous builtins, modules, and functions.
Here's why this matters & what you need to know! 🧵👇
1️⃣ Why is local execution risky? ⚠️ AI agents that run arbitrary Python code can unintentionally (or maliciously) access system files, run unsafe commands, or exfiltrate data.
2️⃣ New Safety Layer in smolagents 🛡️ We now inspect every return value during execution: ✅ Allowed: Safe built-in types (e.g., numbers, strings, lists) ❌ Blocked: Dangerous functions/modules (e.g., os.system, subprocess, exec, shutil). A minimal sketch follows at the end of this thread.
4️⃣ Security Disclaimer ⚠️ 🚨 Despite these improvements, local Python execution is NEVER 100% safe. 🚨 If you need true isolation, use a remote sandboxed executor like Docker or E2B.
5️⃣ The Best Practice: Use Sandboxed Execution. For production-grade AI agents, we strongly recommend running code in a Docker or E2B sandbox to ensure complete isolation.
6️⃣ Upgrade Now & Stay Safe! Check out the latest smolagents release and start building safer AI agents today.
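Here's a minimal sketch of what the new safety layer (2️⃣ above) looks like in practice. It assumes the documented `CodeAgent` parameters such as `additional_authorized_imports` and the `HfApiModel` wrapper; check them against your installed smolagents version:

```python
from smolagents import CodeAgent, HfApiModel

# Default setup: generated code runs in the local Python interpreter,
# which now screens every evaluation for dangerous builtins, modules,
# and functions (os.system, subprocess, exec, shutil, ...).
agent = CodeAgent(
    tools=[],
    model=HfApiModel(),  # any smolagents model wrapper works here
    additional_authorized_imports=["math", "statistics"],  # extend the import allow-list explicitly
)

# Code built from safe types and authorized modules runs as usual.
print(agent.run("Compute the standard deviation of [2, 4, 4, 4, 5, 5, 7, 9]."))

# If the generated code tries something like `import subprocess` or `os.system(...)`,
# the interpreter rejects it instead of executing it.
```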
Big news for AI agents! With the latest release of smolagents, you can now securely execute Python code in sandboxed Docker or E2B environments. 🦾
Here's why this is a game-changer for agent-based systems: 🧵👇
1️⃣ Security First. Running AI agents in unrestricted Python environments is risky! With sandboxing, your agents are isolated, preventing unintended file access, network abuse, or system modifications.
2️⃣ Deterministic & Reproducible Runs 📦 By running agents in containerized environments, you ensure that every execution happens in a controlled and predictable setting: no more environment mismatches or dependency issues!
3️⃣ Resource Control & Limits. Docker and E2B allow you to enforce CPU, memory, and execution time limits, so rogue or inefficient agents don't spiral out of control.
4️⃣ Safer Code Execution in Production. Deploy AI agents confidently, knowing that any generated code runs in an ephemeral, isolated environment, protecting your host machine and infrastructure.
5️⃣ Easy to Integrate 🛠️ With smolagents, you can simply configure your agent to use Docker or E2B as its execution backend, with no need for complex security setups! (See the sketch after this thread.)
6️⃣ Perfect for Autonomous AI Agents 🤖 If your AI agents generate and execute code dynamically, this is a must-have to avoid security pitfalls while enabling advanced automation.
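A minimal sketch of the sandboxed setup described in this thread, assuming the `executor_type` argument on `CodeAgent` (argument names have shifted between smolagents releases, so treat this as illustrative rather than the definitive API):

```python
from smolagents import CodeAgent, HfApiModel

# Route generated code into an ephemeral sandbox instead of the host interpreter.
# "docker" needs a running Docker daemon; "e2b" needs the E2B_API_KEY
# environment variable set before the agent starts.
agent = CodeAgent(
    tools=[],
    model=HfApiModel(),
    executor_type="docker",  # or "e2b" for a hosted sandbox
)

# Whatever code the model writes, it executes inside the container/sandbox,
# so host files, networks, and processes stay out of reach.
print(agent.run("Compute the 20th Fibonacci number."))
```

Each run then starts from a fresh, isolated environment, which is also what makes the reproducibility and resource-limit points above possible.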
Introducing @huggingface Open Deep-Research 🔥
In just 24 hours, we built an open-source agent that can: ✅ Autonomously browse the web ✅ Search, scroll & extract info ✅ Download & manipulate files ✅ Run calculations on data
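To give a flavor of how such an agent can be assembled with smolagents, here is a simplified stand-in built from the library's stock web tools; the actual Open Deep-Research pipeline in the smolagents repo is considerably more elaborate:

```python
from smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool, VisitWebpageTool

# A small web-research agent: it searches the web, opens pages, and writes
# Python code to post-process whatever it extracts.
agent = CodeAgent(
    tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],
    model=HfApiModel(),
    additional_authorized_imports=["json", "statistics"],
)

answer = agent.run(
    "Find the publication year of the paper 'Attention Is All You Need' "
    "and summarize its main contribution in one sentence."
)
print(answer)
```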
🚨 How green is your model? 🌱 Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research! 👉 open-llm-leaderboard/comparator Now, you can not only compare models by performance, but also by their environmental footprint!
The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... 🛠️ Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
New feature of the Comparator of the 🤗 Open LLM Leaderboard: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!
🛠️ Here's how to use it: 1. Select your model from the leaderboard. 2. Load its model tree. 3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison. 4. Press Load. See side-by-side performance metrics instantly!
Ready to dive in? Try the 🤗 Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: open-llm-leaderboard/comparator
Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs, all in one place. Ready to level up your model comparison game?
🚨 Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids the MATH performance loss! Why? Can they follow the format in the examples? Compare models: open-llm-leaderboard/comparator