# LearningToOptimize

## 1. Introduction

**LearningToOptimize** is an organization dedicated to **learning to optimize (L2O)**, an emerging paradigm in which machine learning models *learn* to solve optimization problems efficiently. The approach is also known as **optimization proxies** or **amortized optimization**. Our mission is to serve as a hub for sharing open datasets, pre-trained models, and tools that accelerate research and practical applications of L2O methods.

This organization is closely linked to the [LearningToOptimize.jl](https://github.com/andrewrosemberg/LearningToOptimize.jl) Julia package, which provides foundational functionality for fitting ML-based surrogate models (or proxies) to complex optimization problems. Here you will find:

- **Datasets**: Collections of problem instances and their optimal solutions, useful for training and benchmarking (a toy generation sketch follows this list).
- **Trained Models**: Ready-to-use optimization proxies for various tasks, enabling rapid inference on new problem instances.
- **Benchmarking Tools**: Utilities for comparing learned proxies against traditional solvers in terms of speed, feasibility, and performance.
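
For concreteness, here is a minimal, hypothetical sketch of how such a dataset of (parameter, optimal solution) pairs might be generated. It uses JuMP with the HiGHS solver on a toy two-unit dispatch LP; the problem, names, and sizes are illustrative assumptions, and this is not the LearningToOptimize.jl API:

```julia
# Hypothetical sketch (not the LearningToOptimize.jl API): collect a toy dataset of
# (parameters, optimal solution) pairs for a small parametric dispatch LP.
using JuMP, HiGHS

function solve_instance(d1, d2)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, 0 <= x[1:2] <= 10)             # output of two generating units
    @constraint(model, x[1] + x[2] >= d1 + d2)      # meet total demand
    @objective(model, Min, 3x[1] + 5x[2])           # linear generation cost
    optimize!(model)
    return value.(x)                                # optimal dispatch for (d1, d2)
end

# Sample demand parameters and record the solver's optimal solutions as labels.
params    = [rand(2) .* 8 for _ in 1:100]
solutions = [solve_instance(p...) for p in params]
```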
## 2. What are Optimization Proxies?

### High-Level Explanation

*Optimization proxies* are machine learning models that approximate or replace traditional optimization solvers. By observing many instances of a problem (and possibly their solutions), a proxy learns to predict near-optimal solutions in a single forward pass. This amortized approach can reduce or eliminate the need to run a time-consuming solver from scratch for each new instance, delivering **major speed-ups** in real-world applications such as power systems, resource allocation, and beyond.
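
To make the "single forward pass" concrete, here is a small sketch in Flux, reusing the toy dispatch setting from the dataset sketch above; the architecture is an arbitrary assumption and the network shown is untrained:

```julia
# Sketch: inference with a proxy is one forward pass, with no solver in the loop.
using Flux

proxy = Chain(Dense(2 => 64, relu), Dense(64 => 2, softplus))  # maps demands -> dispatch

d = Float32[4.0, 3.0]   # parameters of a new problem instance
x̂ = proxy(d)            # approximate solution; compare against solve_instance(d...)
```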
### Technical Explanation

In more technical terms, **amortized optimization** seeks to learn a function \( f_\theta(x) \) that maps problem parameters \( x \) to solutions \( y \) that (approximately) minimize a given objective function subject to constraints. Modern methods leverage techniques like **differentiable optimization layers**, **input-convex neural networks**, or constraint-enforcing architectures (e.g., [DC3](https://openreview.net/pdf?id=0Ow8_1kM5Z)) to ensure that the learned proxy solutions are both feasible and performant. By coupling the solver and the model in an **end-to-end** pipeline, these approaches let the training objective directly reflect downstream metrics, improving speed and reliability.
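
As a deliberately simplified instance of this idea, the amortized problem can be written with a soft penalty on constraint violations,

\[
\min_\theta \; \mathbb{E}_{x}\Big[\, c\big(f_\theta(x); x\big) + \lambda \, \big\| \max\big(0,\, g(f_\theta(x); x)\big) \big\|_1 \,\Big],
\]

where \( c \) is the objective, \( g(\cdot\,; x) \le 0 \) collects the inequality constraints, and \( \lambda \) is a penalty weight. The sketch below trains a proxy this way on the toy dispatch problem from the earlier snippets. It is a generic penalty formulation rather than DC3 or the LearningToOptimize.jl training pipeline, and the architecture, penalty weight, and batch size are assumptions:

```julia
# Hedged sketch of penalty-based amortized training (not DC3, not the
# LearningToOptimize.jl API): minimize expected cost plus a penalty on violations.
using Flux, Statistics

proxy     = Chain(Dense(2 => 64, relu), Dense(64 => 2, softplus))
opt_state = Flux.setup(Adam(1e-3), proxy)
λ         = 10.0f0                                    # penalty weight (assumption)

cost(x)         = 3f0 .* x[1, :] .+ 5f0 .* x[2, :]                  # toy generation cost
violation(x, d) = max.(0f0, vec(sum(d; dims=1) .- sum(x; dims=1)))  # unmet demand
# (upper bounds from the toy LP are ignored for brevity; constraint-enforcing
#  architectures or projections would handle them)

for _ in 1:200
    d = rand(Float32, 2, 128) .* 8                    # batch of sampled demand parameters
    grads = Flux.gradient(proxy) do m
        x = m(d)
        mean(cost(x) .+ λ .* violation(x, d))         # self-supervised: no labelled solutions
    end
    Flux.update!(opt_state, proxy, grads[1])
end
```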
Recent advances also focus on **trustworthy** or **certifiable** proxies, where constraint satisfaction or performance bounds are guaranteed. This is crucial in domains like energy systems or manufacturing, where infeasible solutions can have large penalties or safety concerns. Overall, learning-based optimization frameworks aim to combine the advantages of ML (data-driven generalization) with the rigor of mathematical programming (constraint handling and optimality).
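
One lightweight pattern in that spirit, shown below as a sketch under the toy setting from the previous snippets (it is not one of the certification methods from the cited work), is to verify each prediction and fall back to the exact solver whenever the constraint violation exceeds a tolerance:

```julia
# Sketch: guard proxy predictions with a feasibility check and an exact-solver fallback.
# `proxy` and `solve_instance` refer to the toy definitions in the earlier sketches.
function trusted_solution(d; tol = 1e-3)
    x̂ = proxy(Float32.(d))
    unmet = max(0.0, sum(d) - sum(x̂))                # violation of the demand constraint
    return unmet <= tol ? x̂ : solve_instance(d...)   # too infeasible? solve exactly instead
end

trusted_solution([4.0, 3.0])
```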
For a broader overview, see the [SIAM News article on trustworthy optimization proxies](https://www.siam.org/publications/siam-news/articles/fusing-artificial-intelligence-and-optimization-with-trustworthy-optimization-proxies/), which highlights the growing synergy between AI and classical optimization.

## 3. References and Citations
1. **A. Rosemberg, M. Tanneau, B. Fanzeres, J. Garcia, P. Van Hentenryck (2023)**
   *Learning Optimal Power Flow Value Functions with Input-Convex Neural Networks.*
   Accepted at *PSCC 2024*.

2. **A. Rosemberg, A. Street, D. M. Valladão, P. Van Hentenryck (2023)**
   *Efficiently Training Deep-Learning Parametric Policies using Lagrangian Duality.*
   [arXiv:2405.14973](https://arxiv.org/abs/2405.14973)

3. **P. Donti, B. Amos, J. Z. Kolter (2021)**
   *DC3: A Learning Method for Optimization with Hard Constraints.*
   [ICLR 2021](https://openreview.net/forum?id=0Ow8_1kM5Z)

4. **P. Van Hentenryck (2023)**
   *Fusing Artificial Intelligence and Optimization with Trustworthy Optimization Proxies.*
   [SIAM News](https://www.siam.org/publications/siam-news/articles/fusing-artificial-intelligence-and-optimization-with-trustworthy-optimization-proxies/)

5. **B. Amos (2022)**
   *Tutorial on Amortized Optimization.*
   [arXiv:2202.00665](https://arxiv.org/abs/2202.00665)

---

By sharing our work and resources here, we hope to foster collaboration among researchers and practitioners who are exploring the exciting intersections of AI and optimization. Thank you for visiting **LearningToOptimize**; let’s push the boundaries of what’s possible in end-to-end optimization together!

Please reach out if you want to contribute!