IEEE P3109 Posit Arithmetic
IEEE Standard Hardware Verification
Formal verification of posit number arithmetic for the IEEE P3109 standard — next-generation floating point designed to improve upon IEEE 754.
Overview
Posit numbers are a proposed alternative to IEEE 754 floating-point arithmetic, designed by John Gustafson. They offer several advantages:
- No NaN — every bit pattern represents a valid number (or ±∞)
- Tapered precision — more precision near 1.0 where most computations occur
- Simpler rounding — fewer special cases than IEEE 754
- Better accuracy — more useful bits per operation for typical workloads
Imandra has been used to formally verify key properties of posit arithmetic as part of the IEEE P3109 standardization effort, ensuring that the proposed standard is mathematically correct before silicon implementation.
IEEE Collaboration
This work was conducted in collaboration with the IEEE P3109 working group and presented at ARITH 2025 (IEEE International Symposium on Computer Arithmetic).
The Model
The full verification is a large-scale, multi-file IML development available at imandra-ai/ieee-p3109 on GitHub. Below are representative excerpts showing the key ideas.
Format Specification
The posit format is parameterized by width k (total bits) and precision p. The specification covers all formats from 2-bit to 16-bit:
module Format = struct
type kpt =
| B2P1 | B2P2
| B3P1 | B3P2 | B3P3
| B4P1 | B4P2 | B4P3 | B4P4
(* ... up to B16P16 *)
type t = {
kp: kpt;
s: Signedness.t;
d: Domain.t;
}
let precision (f : t) : int = ...
let width (f : t) : int = ...
let k (f : t) : int = width f
let p (f : t) : int = precision f
end
Utility Functions
Mathematical utilities including exact power-of-two computation with ordinal-based termination:
let rec pow2 (n : int) : real =
if n > 0 then
2.0 *. pow2 (n - 1)
else if n < 0 then
1.0 /. (2.0 *. (pow2 (- (n + 1))))
else
1.0
[@@measure Ordinal.of_int (abs n)]
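As a sanity check, the intended semantics of pow2 (exact 2^n for any integer n, including negative n) can be sketched in Python using exact rationals; this is a simplified recursion for illustration, not a port of the IML definition:

```python
from fractions import Fraction

def pow2(n: int) -> Fraction:
    """Exact 2**n for any integer n. Fractions keep negative
    exponents exact (dyadic rationals) instead of rounding to
    floats. Simplified recursion, not the IML formulation."""
    if n > 0:
        return 2 * pow2(n - 1)
    if n < 0:
        return Fraction(1, 2) * pow2(n + 1)
    return Fraction(1)

assert pow2(4) == 16
assert pow2(-3) == Fraction(1, 8)
assert pow2(0) == 1
```

The IML version instead computes the negative case as 1.0 /. (2.0 *. pow2 (-(n + 1))), which is algebraically the same value; the ordinal measure on abs n is what lets Imandra prove the recursion terminates in both directions.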
let rec ipow2 (x : int) : int =
match x with
| 0 -> 1
| 1 -> 2
| x -> if x <= 0 then 1 else 2 * ipow2 (x - 1)
[@@unroll 5]
Concrete Format Checks
The verification includes checks that format parameters match the standard for specific configurations:
let f83 = {Format.kp = Format.B8P3;
s = Signedness.Signed;
d = Domain.Extended}
lemma fmt_params_p3 (f : Format.t) =
(f = f83) ==>
let k, p, b, _, max, _, _ = Format.parameters f in
k = 8 && p = 3 && b = 16 && max = 49152.0 &&
bitWidthOf f = k && precisionOf f = p
&& exponentBiasOf f = b
&& wDecode f (maxFiniteOf f) = CER.R max
[@@by auto]
Bit-Level Decoding Verification
Individual bit patterns are checked against expected real values:
let is_r (x : CER.t) (r : CER.t) = x = r [@@macro]
lemma real_0 =
is_r (wDecode f83 Float.zero) (CER.R 0.0)
[@@by auto]
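The bit patterns checked in this section can also be cross-checked numerically. Below is a hedged Python sketch of a decoder for the 8-bit, precision-3 format, assuming a 1/5/2 sign/exponent/fraction bit split with bias 16 (a layout consistent with the parameters stated in fmt_params_p3; decode8p3 is an illustrative stand-in, not the normative wDecode):

```python
def decode8p3(byte: int) -> float:
    """Decode an 8-bit value under an assumed 1/5/2 layout with
    exponent bias 16 (k = 8, p = 3, b = 16 as in fmt_params_p3).
    Illustrative cross-check only, not the normative wDecode."""
    sign = -1.0 if byte & 0x80 else 1.0
    e = (byte >> 2) & 0x1F      # 5-bit exponent field
    f = byte & 0x03             # 2 explicit fraction bits
    if e == 0:                  # no implicit leading 1
        return sign * f * 2.0 ** (1 - 16 - 2)
    return sign * (1 + f / 4) * 2.0 ** (e - 16)

assert decode8p3(0x00) == 0.0                    # real_0
assert decode8p3(0x40) == 1.0                    # real_pos_one
assert decode8p3(0xC0) == -1.0                   # real_neg_one
assert decode8p3(0x01) == 0.00000762939453125    # tiny_value, 2^-17
assert decode8p3(0x44) == 2.0                    # two_value
assert decode8p3(0x7E) == 49152.0                # cf. max in fmt_params_p3
```

Each assertion mirrors one of the lemma values; the 0x7E line is an extra check that the largest finite encoding under this assumed layout matches the max = 49152.0 parameter proved in fmt_params_p3.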
lemma real_pos_one =
is_r (wDecode f83 0x40) (CER.R 1.0)
[@@by auto]
lemma real_neg_one =
is_r (wDecode f83 0xC0) (CER.R (- 1.0))
[@@by auto]
lemma tiny_value =
is_r (wDecode f83 0x01) (CER.R 0.00000762939453125)
[@@by auto]
lemma two_value =
is_r (wDecode f83 0x44) (CER.R 2.0)
[@@by auto]
Key Theorems
Format Parameter Bounds
theorem format_parameter_limits (f : Format.t) =
let k, p, b, _, _, s, d =
(Format.parameters f [@trigger]) in
0 < k && k <= 16 &&
0 < p && p <= 16 &&
b <= Util.ipow2 (k - p)
[@@timeout 600]
[@@by unroll_nonlin 20 10 100]
[@@fc]
Decode-Encode Round-Trip
Encoding a decoded number is always valid:
theorem decode_encode_ok (f : Format.t) (x : Float.t) =
Result.is_ok
(Float.wEncode f (Float.wDecode f x) [@trigger])
[@@timeout 3600]
[@@by [%use within_format_range f x] @> auto]
[@@fc]
Decoded Values Are Within Range
theorem decode_within_range (f : Format.t) (x : Float.t) =
Float.is_within_range f
((Float.wDecode f x) [@trigger])
[@@timeout 600]
[@@by [%use within_format_range f x] @> auto]
[@@fc]
Maximum Finite Value
theorem maxFinite_is_m_hi (f : Format.t) =
let k, p, _, _, m_hi, s, _ = Format.parameters f in
(s = Signedness.Unsigned || p < k) ==>
CER.R m_hi = Float.wDecode f (Float.maxFiniteOf f)
[@@timeout 20]
[@@by enumerate ["f"] @> ground_eval ()]
The Posit Format
The regime is the defining feature of posit arithmetic. It is a unary-encoded run of bits immediately following the sign bit:
- A run of k ones followed by a zero encodes regime value k-1
- A run of k zeros followed by a one encodes regime value -k
- The regime determines the coarse magnitude: the value is approximately useed^regime, where useed = 2^(2^es)
For example, in an 8-bit posit with es=1 (so useed = 4):
- Regime bits 1110 encode regime 2, contributing a factor of 4^2 = 16
- Regime bits 001 encode regime -2, contributing a factor of 4^(-2) = 1/16
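These regime examples can be checked with a short Python sketch (regime_value and the es = 1 useed below are illustrative names for this document, not identifiers from the IML model):

```python
from fractions import Fraction

def regime_value(bits: str) -> int:
    """Decode the unary regime field: a run of identical bits
    terminated by the opposite bit (or by the end of the word).
    A run of k ones encodes k-1; a run of k zeros encodes -k."""
    lead = bits[0]
    run = len(bits) - len(bits.lstrip(lead))
    return run - 1 if lead == "1" else -run

useed = 2 ** (2 ** 1)  # es = 1, so useed = 2^(2^1) = 4

assert regime_value("1110") == 2
assert Fraction(useed) ** regime_value("1110") == 16
assert regime_value("001") == -2
assert Fraction(useed) ** regime_value("001") == Fraction(1, 16)
```

Longer runs scale accordingly: each extra regime bit multiplies or divides the magnitude by another factor of useed, which is exactly the tapered-precision trade-off described next.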
This tapered precision design means:
- Near 1.0: the regime is short, leaving many bits for the exponent and fraction -- maximum precision where most computations occur
- Far from 1.0: the regime consumes more bits, reducing fraction precision but providing wider dynamic range
- No subnormals, no NaN -- every bit pattern is meaningful. The only special values are zero (0b00...0) and ±∞ (0b10...0)
The absence of NaN propagation and subnormal handling simplifies both hardware implementation and formal verification. Unlike IEEE 754, there are no "trap" representations that require special-case logic throughout the arithmetic pipeline.
Scale of the Verification
The full ieee-p3109 development spans:
- 28+ IML source files covering specification, implementation, and theorems
- Specification: format definitions, encoding/decoding, arithmetic operations, rounding, saturation, special values, comparison, trigonometric functions, exponentials, logarithms, square roots
- Implementation: a verified implementation matching the specification
- Theorem suites: format ranges, IEEE 754 comparison, block operations, convergence proofs for iterative algorithms (e.g., square root via Newton's method)
Tactics used include auto, unroll, enumerate with ground_eval, and specialized nonlinear arithmetic strategies combining SMT solvers with ImandraX's own reasoning.
Why This Matters
Hardware arithmetic bugs are among the most expensive in computing history (the Intel Pentium FDIV bug cost ~$475 million). For a new arithmetic standard, formal verification before implementation is essential:
- Pre-silicon verification — catch bugs before they're burned into hardware
- Standards confidence — mathematical proof that the specification is self-consistent
- Implementation guidance — the formal model serves as an executable reference specification
IEEE Standards Body
Formal verification integrated into the IEEE P3109 standardization process itself.
Pre-Silicon Verification
Properties proved before any hardware implementation — catching bugs at the cheapest possible stage.
Presented at ARITH 2025
Results presented at the premier venue for computer arithmetic research.