A friendly tour · v0.1

Spiking neural networks are convex optimizers.

A simple population of leaky integrate-and-fire neurons, with the right connectivity, solves a quadratic program. Each spike is a constraint becoming active. Each silence is a gradient step. The page below walks through the idea, the math, and a runnable Python implementation.

Pkg: snn_opt · Deps: NumPy · SciPy · License: Apache-2.0
[Figure: 2-D quadratic with half-space constraints c₁ᵀx ≤ d₁, c₂ᵀx ≤ d₂, c₃ᵀx ≤ d₃; annotated with the start x₀, drift (∇f) segments, spike (projection) events, the unconstrained optimum x*ᵤ, and the constrained optimum x*.]

A 2-D quadratic with three half-space constraints. The trajectory drifts toward x*ᵤ, spikes at each active wall, and settles at the constrained optimum.

Three beats

The idea, distilled.

The whole framework collapses into three repeating moves. Everything else — the convergence proofs, the hardware story, the spike raster — falls out of these.

01

Drift.

Between spikes, each neuron's membrane voltage moves by the gradient of an objective. Plain gradient descent — dressed up as biology.

02

Spike.

When the voltage would push the trajectory past a constraint, a spike fires. The spike re-projects the state onto the feasible boundary — a discrete correction.

03

Settle.

The spike train and the drift balance at the constrained optimum. Active spikes encode the active constraint set. The math is exactly that of a primal-dual QP solver.
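The three beats can be sketched as plain projected gradient descent on a small QP. This is an illustrative stand-in, not the snn_opt API: all names below are made up for the example, and the "spikes" are explicit projections rather than neuron events.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T Q x - b^T x  subject to  c_i^T x <= d_i.
# Drift = gradient step; spike = projection back onto a violated half-space.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite -> convex quadratic
b = np.array([4.0, 4.0])                 # unconstrained optimum at Q^{-1} b = (2, 2)
C = np.array([[1.0, 0.0], [0.0, 1.0]])   # constraint normals: x1 <= 1, x2 <= 1
d = np.array([1.0, 1.0])

x = np.zeros(2)
eta = 0.1                                # step size
for _ in range(200):
    x = x - eta * (Q @ x - b)            # drift: follow the objective's gradient
    for c, di in zip(C, d):
        viol = c @ x - di
        if viol > 0:                     # spike: re-project onto c^T x = d
            x = x - viol * c / (c @ c)

print(x)  # settles at the constrained optimum (1, 1)
```

Both constraints fire on every late iteration (the drift keeps pushing toward (2, 2)), so the persistent spikes mark exactly the active set — the "settle" beat in miniature.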

A picture

What it looks like.

A 4-D quadratic, optimum outside a box constraint. The trajectory bounces against the four "active" faces every iteration; each dot is one projection event. Below it: the objective gap collapsing on the same iteration axis.

Projection-spike raster on a 4-D box-constrained QP
Top: spike raster, one row per inequality, dot size ∝ projection magnitude. Bottom: objective gap on the same iteration axis. Generated by benchmarks/02_spike_raster.py.
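The raster script itself lives in the repository; as a rough stand-in, here is a self-contained sketch (illustrative, not the actual benchmarks/02_spike_raster.py) that produces the same two data series: per-face projection magnitudes and the objective gap, using a reference solve from SciPy.

```python
import numpy as np
from scipy.optimize import minimize

# A 4-D QP with box constraints |x_i| <= 1 whose unconstrained optimum
# lies outside the box, so the faces are hit repeatedly.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q = A @ A.T + 4.0 * np.eye(4)            # positive definite Hessian
x_u = np.array([2.0, -2.0, 2.0, -2.0])   # intended unconstrained optimum
b = Q @ x_u

def f(x):
    return 0.5 * x @ Q @ x - b @ x

# Reference constrained optimum via L-BFGS-B with bounds.
ref = minimize(f, np.zeros(4), jac=lambda x: Q @ x - b,
               method="L-BFGS-B", bounds=[(-1.0, 1.0)] * 4)
f_star = ref.fun

eta = 1.0 / np.linalg.norm(Q, 2)         # step <= 1/L keeps the drift stable
x = np.zeros(4)
spikes, gaps = [], []
for k in range(100):
    x = x - eta * (Q @ x - b)            # drift
    proj = np.clip(x, -1.0, 1.0)         # spike: project onto the box
    spikes.append(np.abs(proj - x))      # projection magnitude, one entry per face
    x = proj
    gaps.append(f(x) - f_star)

print(gaps[-1])  # objective gap shrinks toward zero
```

The `spikes` array is the raster (iterations × faces, dot size ∝ magnitude) and `gaps` is the lower panel; plotting them is a two-line matplotlib exercise left out for brevity.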

Map

What you'll find here.

Four entry points. Pick whichever matches your appetite — the intuition essay, the hands-on tutorials, the published papers, or the source code itself.

For whom

Who this is for.

Students starting a thesis on neuromorphic methods. Researchers from optimization or control who heard "spiking networks" and weren't sure what to make of it. Anyone who finds beauty in connections between fields that look unrelated until they don't.

If you want the formal version: head to the snn_opt repository — it has the full theory document, an academic-style README, and a benchmark suite. The pages here are the friendly version of the same material.