DNN Verification with Farkas Certificates

AI Safety · Peer-Reviewed (ITP 2025)

A verified proof checker for neural network safety properties — using the Farkas lemma to certify unsatisfiability of linear arithmetic systems arising from DNN verification.


Overview

Deep neural networks are increasingly deployed in safety-critical systems — autonomous vehicles, medical diagnostics, financial trading. Verifying their safety properties requires proving that certain input-output relationships hold for all possible inputs.

This case study presents a verified Farkas lemma proof checker written and proved correct in ImandraX. The Farkas lemma provides algebraic certificates that witness the unsatisfiability of systems of linear inequalities — exactly the kind of constraints that arise when encoding neural network properties.
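
In one standard form (stated here for non-strict inequalities; the checker below also handles equalities and strict inequalities), the lemma reads:

  If λ₁, …, λₘ ≥ 0 satisfy Σᵢ λᵢaᵢ = 0 and Σᵢ λᵢbᵢ > 0, then the system { aᵢᵀx ≥ bᵢ } has no solution x.

The argument is one line: multiplying each inequality by its λᵢ and summing gives 0 = (Σᵢ λᵢaᵢ)ᵀx ≥ Σᵢ λᵢbᵢ > 0, a contradiction. The vector λ is the certificate, and checking a candidate λ requires only linear arithmetic.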

ITP 2025

This work was presented at the International Conference on Interactive Theorem Proving (ITP 2025), one of the premier venues for formal verification research.


The Model

Polynomial Representation

A polynomial is represented as a list of coefficients — one for each variable plus a constant term:

type poly = Real.t list
 
(* 3X₁ + 4X₃ + 1 is [3.0; 0.0; 4.0; 1.0] *)

Linear Constraints

type expr =
  | Eq of poly   (*  = 0 *)
  | Geq of poly  (* >= 0 *)
  | Gt of poly   (*  > 0 *)
 
type system = expr list
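
For instance, the constraint X₁ + X₂ - 1 ≥ 0 over two variables is encoded as (an illustrative value, not taken from the source):

(* X1 + X2 - 1 >= 0, coefficients ordered [X1; X2; constant] *)
let c : expr = Geq [1.0; 1.0; -1.0]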

Polynomial Evaluation and Arithmetic

let rec eval_poly (p:poly) (x:Real.t list) : Real.t =
  match p, x with
  | a :: p, b :: x -> Real.(a*b + eval_poly p x)
  | [a], [] -> a
  | _ -> 0.0
 
let eval_expr e x =
  match e with
  | Eq p -> Real.(eval_poly p x = 0.0)
  | Geq p -> Real.(eval_poly p x >= 0.0)
  | Gt p -> Real.(eval_poly p x > 0.0)
 
let rec eval_system (es:system) (x:Real.t list) : bool =
  match es with
  | [] -> true
  | e::es -> eval_expr e x && eval_system es x
 
let rec p_add (p:poly) (q:poly) : poly =
  match p, q with
  | [], [] -> []
  | a::p, b::q -> Real.(a + b) :: p_add p q
  | _ -> []
 
let rec p_scale (p:poly) (r:Real.t) : poly =
  match p with
  | [] -> []
  | c::cs -> Real.(r * c) :: p_scale cs r
 
let rec is_neg_const (p:poly) : bool =
  match p with
  | [] -> false
  | [c] -> Real.(c < 0.0)
  | c::p -> c = 0.0 && is_neg_const p
 
let rec sum_polys (ps:poly list) : poly =
  match ps with
  | [] -> []
  | [p] -> p
  | p::ps -> p_add p (sum_polys ps)
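
As a quick sanity check (illustrative values, not from the source), evaluating the example polynomial 3X₁ + 4X₃ + 1 from above at X₁ = X₂ = X₃ = 1, and exercising the arithmetic helpers:

(* eval_poly walks both lists in lockstep; the final coefficient is the constant *)
eval_poly [3.0; 0.0; 4.0; 1.0] [1.0; 1.0; 1.0]  (* = 3 + 0 + 4 + 1 = 8 *)
p_scale [1.0; -1.0] 2.0                         (* = [2.0; -2.0] *)
p_add [1.0; -1.0] [-1.0; 0.0]                   (* = [0.0; -1.0] *)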

Certificate Construction

The scale_exprs function scales each constraint by its certificate coefficient. This is where soundness is baked in: a Geq constraint is only scaled when its coefficient is non-negative, a Gt constraint only when its coefficient is strictly positive, and any invalid or missing coefficient falls back to the unscaled polynomial (equivalent to using a coefficient of 1, which is always sound):

let rec scale_exprs (cs:Real.t list) (es:system) : poly list =
  match cs, es with
  | c::cs, Eq p :: es -> p_scale p c :: scale_exprs cs es
  | c::cs, Geq p :: es ->
    if Real.(c >= 0.0) then p_scale p c :: scale_exprs cs es
    else p :: scale_exprs cs es
  | c::cs, Gt p :: es ->
    if Real.(c > 0.0) then p_scale p c :: scale_exprs cs es
    else p :: scale_exprs cs es
  | [], Eq p :: es | [], Geq p :: es | [], Gt p :: es ->
    p :: scale_exprs cs es
  | _ -> []
[@@adm es]
 
let mk_cert_poly (cs:Real.t list) (es:system) : poly =
  sum_polys (scale_exprs cs es)

The Proof Checker

The core idea: given a system of linear constraints and a vector of certificate coefficients, form the corresponding linear combination of the constraint polynomials. If the result is a negative constant, the system is unsatisfiable:

let check_cert (cs:Real.t list) (es:system) : bool =
  is_neg_const (mk_cert_poly cs es)
 
let rec good_poly_x_sizes (p:poly) (x:Real.t list) : bool =
  match p, x with
  | [_], [] -> true
  | _::p, _::x -> good_poly_x_sizes p x
  | _ -> false
 
let rec well_formed es x =
  match es with
  | [] -> true
  | Eq p :: es -> good_poly_x_sizes p x && well_formed es x
  | Geq p :: es -> good_poly_x_sizes p x && well_formed es x
  | Gt p :: es -> good_poly_x_sizes p x && well_formed es x

This is remarkably simple — the complexity lies in proving it correct.
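
A tiny end-to-end example (hypothetical, constructed for illustration): the one-variable system x - 1 ≥ 0 ∧ -x ≥ 0 is clearly unsatisfiable, and the coefficient vector [1; 1] certifies it:

let es : system = [ Geq [1.0; -1.0]    (*  x - 1 >= 0 *)
                  ; Geq [-1.0; 0.0] ]  (*   -x   >= 0 *)

(* 1*(x - 1) + 1*(-x) collapses to the constant -1 *)
check_cert [1.0; 1.0] es  (* = true: the system is unsatisfiable *)

Here scale_exprs leaves both polynomials unchanged (both coefficients are 1), sum_polys produces [0.0; -1.0], and is_neg_const accepts it.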


Verification

The proof proceeds through a carefully constructed chain of lemmas:

Key Lemmas

Polynomial arithmetic correctness — scaling and addition of polynomials preserve evaluation:

lemma eval_p_add p1 p2 x =
  good_poly_x_sizes p1 x && good_poly_x_sizes p2 x
  ==> eval_poly (p_add p1 p2) x =
      Real.(eval_poly p1 x + eval_poly p2 x)
[@@by auto]

Non-negativity of solutions — any satisfying assignment makes the certificate polynomial non-negative:

lemma solution_is_not_neg cs es x =
  eval_system es x && well_formed es x && es <> []
  ==> Real.(eval_poly (mk_cert_poly cs es) x >= 0.0)
[@@by auto]

This uses nonlinear arithmetic reasoning to establish that scaling a non-negative value by a non-negative coefficient preserves non-negativity.

Final Correctness Theorem

The main theorem states: if the certificate check passes, then no real-valued assignment can satisfy the system:

theorem check_cert_correct (es:system) (cs:Real.t list) (x:Real.t list) =
  well_formed es x && check_cert cs es
  ==> eval_system es x = false
[@@by [%use check_cert_correct_ne es cs x] @> auto]

This is a powerful result: it means we can trust the proof checker. If it says a system of neural network constraints is unsatisfiable, that conclusion is mathematically guaranteed.

Proof Structure

The complete proof includes 18 lemmas organized in layers, each building on the previous:

Layer 1: Polynomial arithmetic -- eval_p_add, eval_scale_pull, p_scale_p_add_pull. These establish that the polynomial evaluation function is a homomorphism with respect to addition and scalar multiplication.

Layer 2: Size invariants -- good_poly_x_sizes_scale_invariance, good_poly_x_sizes_add_invariance. These ensure that scaling and adding polynomials preserve the well-formedness condition (matching lengths between coefficient lists and variable assignments).

Layer 3: Positivity -- times_psd, times_pd, times_neg. These use the nonlin () tactic to establish fundamental facts about multiplication of real numbers:

lemma times_psd x y =
  Real.(x >= 0.0) && Real.(y >= 0.0)
  ==> Real.((x * y) [@trigger] >= 0.0)
[@@by nonlin ()] [@@fc]
 
lemma times_pd x y =
  Real.(x > 0.0) && Real.(y > 0.0)
  ==> Real.((x * y) [@trigger] > 0.0)
[@@by nonlin ()] [@@fc]

The [@@fc] (forward chaining) annotation tells ImandraX to automatically apply these lemmas whenever their hypotheses are present in the proof context -- essential for the automation of later layers.

Layer 4: Constraint semantics -- eval_non_neg_scale, eval_pos_scale, eval_zero_scale. These connect the abstract constraint types (Geq, Gt, Eq) to concrete arithmetic properties when scaled by certificate coefficients.

Layer 5: Certificate construction -- sum_scale_well_formed, mk_cert_poly_well_formed. These prove that the certificate polynomial constructed from the linear combination is itself well-formed.

Layer 6: Main argument -- solution_is_not_neg establishes that any satisfying assignment makes the certificate non-negative, solution_cert_is_not_neg_poly strengthens this to the full certificate, and check_cert_correct combines everything: a negative certificate implies no satisfying assignment exists.


Application to Neural Networks

In DNN verification, safety properties are encoded as linear constraints over network inputs and outputs. For example:

  • "If the input pixel intensity is above threshold T, the network never classifies the image as class C"
  • "Small perturbations to the input never change the classification"

These properties reduce to checking unsatisfiability of linear arithmetic systems — exactly what the Farkas checker handles. The verified checker provides a trusted kernel for DNN verification tools.
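
As a toy illustration (a made-up one-neuron "network" y = 2x, not drawn from the source), suppose we want to show that any input x ≥ 1 produces an output y ≥ 1. We negate the property and refute the resulting system over the variables (x, y):

(* coefficients ordered [x; y; constant] *)
let es : system =
  [ Eq  [-2.0;  1.0;  0.0]   (* y - 2x  = 0 : the network itself *)
  ; Geq [ 1.0;  0.0; -1.0]   (* x - 1  >= 0 : the precondition *)
  ; Gt  [ 0.0; -1.0;  1.0]   (* 1 - y   > 0 : the negated postcondition y < 1 *)
  ]

(* 1*(y - 2x) + 2*(x - 1) + 1*(1 - y) = -1 < 0 *)
check_cert [1.0; 2.0; 1.0] es  (* = true: no counterexample exists *)

Real DNN encodings follow exactly this pattern, only at much larger scale.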


Results

The full proof includes:

  • 18 lemmas establishing polynomial arithmetic properties
  • Nonlinear arithmetic reasoning for positivity/non-negativity preservation
  • Forward chaining and rewriting to orchestrate the proof
  • A final correctness theorem that requires only two tactic steps

The proof demonstrates ImandraX's ability to handle real-valued arithmetic, polynomial operations, and the kind of book-keeping reasoning that pervades formal verification of numerical algorithms.


Verified Proof Checker

The checker itself is proved correct — not just tested, but mathematically guaranteed to be sound.

AI Safety Foundation

Provides a trusted kernel for neural network verification tools used in safety-critical deployments.

Real Arithmetic

Demonstrates ImandraX's ability to reason about real-valued polynomial arithmetic with nonlinear constraints.


References

  • Source: farkas.iml
  • ITP 2025 paper on Farkas lemma verification in ImandraX