Our backward-mode differentiation of convex QP layers is implemented in C++; we refer to it as QPLayer in what follows. The code leverages the primal-dual augmented Lagrangian solver ProxQP (Bambade et al., 2022), also written in C++, as its internal QP solver. This section illustrates, through a classic Sudoku learning task, that QPLayer allows relaxing primal feasibility constraints, thereby enabling the training of simplified layers.
Figure 1: Deep learning paradigm for Sudoku solving task.
Figure 2: Test MSE loss of QPLayer, OptNet, and their variants QPLayer-learn A and OptNet-learn A, which additionally learn the constraint matrix A. The loss includes the violation of the Sudoku constraint Ax = 1.
Figure 3: Test prediction errors over 1000 puzzles for OptNet, QPLayer, and the variants QPLayer-learn A and OptNet-learn A that additionally learn the constraint matrix A.
Here is a code example showing how to easily use QPLayer in PyTorch:
@article{bambade2024augmented,
  author  = {Bambade, Antoine and Schramm, Fabian and Taylor, Adrien and Carpentier, Justin},
  title   = {Leveraging Augmented-Lagrangian Techniques for Differentiating Over Infeasible Quadratic Programs in Machine Learning},
  journal = {Twelfth International Conference on Learning Representations (ICLR)},
  year    = {2024},
}