What we’re solving
A binary SVM trained on points $x_1, \dots, x_n$ with labels $y_i \in \{-1, +1\}$ has a dual problem in the Lagrange multipliers $\alpha$:

$$\max_{\alpha}\; \sum_{i=1}^{n} \alpha_i \;-\; \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j) \quad \text{s.t.} \quad 0 \le \alpha_i \le C, \;\; \sum_{i=1}^{n} y_i \alpha_i = 0.$$
That’s the formula in every textbook. Now read it again, slowly, and notice three things:
- The objective is quadratic in $\alpha$.
- The box constraints are linear inequalities.
- The single equality $\sum_i y_i \alpha_i = 0$ is two linear inequalities sandwiched together.
So the SVM dual is literally a QP. If we flip the sign to make it a minimization and stack everything into matrices, we can hand it straight to solve_qp.
Translation
Let $K$ be the kernel matrix with entries $K_{ij} = K(x_i, x_j)$, and $y$ the label vector. Then with $A = (y y^\top) \odot K$ (Hadamard product) and $b = -\mathbf{1}$, the dual becomes

$$\min_{\alpha}\; \frac{1}{2} \alpha^\top A \alpha + b^\top \alpha \quad \text{s.t.} \quad 0 \le \alpha_i \le C, \;\; y^\top \alpha = 0.$$
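As a sanity check on the translation, the Hadamard construction can be verified numerically: for any $\alpha$, the quadratic form $\frac{1}{2}\alpha^\top((y y^\top) \odot K)\alpha$ equals the double sum $\frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j K_{ij}$ from the textbook dual. A minimal standalone sketch (random PSD Gram matrix, not the SVM data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
y = rng.choice([-1.0, 1.0], size=n)
G = rng.standard_normal((n, n))
K = G @ G.T                       # any PSD Gram matrix works for the check
alpha = rng.random(n)

A = np.outer(y, y) * K            # (y yᵀ) ⊙ K, same construction as below
quad_matrix = 0.5 * alpha @ A @ alpha
quad_loops = 0.5 * sum(alpha[i] * alpha[j] * y[i] * y[j] * K[i, j]
                       for i in range(n) for j in range(n))
assert np.isclose(quad_matrix, quad_loops)
```

The element-wise product folds the $y_i y_j$ signs into the matrix once, so the solver never needs to see the labels separately.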
The box constraint $0 \le \alpha_i \le C$ is handled by lower_bound=0.0, upper_bound=C in SolverConfig; snn_opt clips the iterates in place rather than treating the bounds as general inequalities, which is more stable. The equality $y^\top \alpha = 0$ is encoded as the inequality pair $\pm y^\top \alpha \le \varepsilon$ for a small slack $\varepsilon$.
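To see why the pair pins down the equality: $y^\top \alpha \le \varepsilon$ together with $-y^\top \alpha \le \varepsilon$ is exactly $|y^\top \alpha| \le \varepsilon$. A quick numeric illustration (standalone, not using snn_opt; the vectors are made up for the demonstration):

```python
import numpy as np

eps = 1e-4
y = np.array([1.0, -1.0, 1.0, -1.0])

def satisfies_pair(alpha):
    # The two one-sided constraints that stand in for yᵀα = 0.
    return (y @ alpha <= eps) and (-(y @ alpha) <= eps)

balanced = np.array([0.3, 0.3, 0.1, 0.1])   # yᵀα = 0
skewed = np.array([0.5, 0.1, 0.2, 0.1])     # yᵀα = 0.5
assert satisfies_pair(balanced)
assert not satisfies_pair(skewed)
assert satisfies_pair(balanced) == (abs(y @ balanced) <= eps)
```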
End-to-end code
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from snn_opt import OptimizationProblem, SNNSolver, SolverConfig
# 1. Generate a small binary problem.
X, y = make_classification(n_samples=80, n_features=10,
                           n_informative=4, random_state=0)
y = 2 * y - 1 # {0,1} → {-1,+1}
# 2. Build the kernel matrix (RBF, gamma = 1/n_features).
gamma = 1.0 / X.shape[1]
sq = np.sum(X**2, axis=1)
K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
# 3. Translate to (A, b, C, d) form.
A = (np.outer(y, y)) * K + 1e-6 * np.eye(len(y)) # tiny ridge for PSD
b = -np.ones_like(y, dtype=float)
eps = 1e-4
C_eq = np.vstack([y, -y]).astype(float) # ±yᵀα ≤ eps
d_eq = -eps * np.ones(2)
C_box, d_box = C_eq, d_eq  # box is handled by solver bounds; only the equality pair goes here
# 4. Solve.
problem = OptimizationProblem(A=A, b=b, C=C_box, d=d_box)
config = SolverConfig(max_iterations=5000,
                      lower_bound=0.0, upper_bound=1.0)
result = SNNSolver(problem, config).solve(np.zeros_like(y, dtype=float))
alpha = result.final_x
# 5. Compare with sklearn.
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
sk_alpha = np.zeros_like(y, dtype=float)
sk_alpha[clf.support_] = clf.dual_coef_[0] * y[clf.support_] # signed → unsigned
print(f"snn_opt objective: {0.5 * alpha @ A @ alpha + b @ alpha:.4f}")
print(f"sklearn objective: {0.5 * sk_alpha @ A @ sk_alpha + b @ sk_alpha:.4f}")
On a well-conditioned dataset like this one, the two objectives agree to three or four decimal places. The snn_opt solution will have a slightly different active set; the spike raster will tell you which support vectors fired and how often.
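Matching objectives is only half the story; to actually classify with a dual vector you recover the bias from a free support vector ($0 < \alpha_i < C$, where the KKT conditions give $y_i = \sum_j \alpha_j y_j K_{ij} + b$) and evaluate the kernel decision function. A standalone sketch using sklearn's $\alpha$ as a stand-in for the one snn_opt returns; it assumes at least one free support vector exists:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Same data and kernel as the script above.
X, y = make_classification(n_samples=80, n_features=10,
                           n_informative=4, random_state=0)
y = 2 * y - 1
gamma = 1.0 / X.shape[1]
sq = np.sum(X**2, axis=1)
K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

C = 1.0
clf = SVC(kernel="precomputed", C=C, tol=1e-6).fit(K, y)
alpha = np.zeros(len(y))
alpha[clf.support_] = clf.dual_coef_[0] * y[clf.support_]  # signed → unsigned

# Bias from free support vectors: y_i = (α ⊙ y)ᵀ K[:, i] + b for 0 < α_i < C.
free = np.where((alpha > 1e-6) & (alpha < C - 1e-6))[0]
b = np.mean([y[i] - (alpha * y) @ K[:, i] for i in free])

# Kernel decision function, rebuilt by hand from the dual variables.
decision = (alpha * y) @ K + b
assert np.allclose(decision, clf.decision_function(K), atol=1e-3)
```

The same two lines (bias recovery, then `(alpha * y) @ K + b`) apply verbatim to the alpha produced by the snn_opt run, up to the equality slack $\varepsilon$.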
Why bother?
You wouldn’t replace sklearn.svm.SVC with this in production. The point of the exercise is different:
- Pedagogy. The translation makes the QP-ness of SVM concrete.
- Diagnostics. The spike raster shows the active-set evolution explicitly; sklearn hides this.
- Composability. Once SVM is a QP, you can mix it with other QP-shaped things — kernel regression, structured-output methods, your own custom losses — all in the same solver.
- Hardware. The same dynamics run on neuromorphic hardware. The classical solver doesn’t.
Next
- Reading the spike raster — turn result.spike_* arrays into a useful debugging plot.
- examples/example7_svm_dual.py — a more complete and configurable version of the code above.