Model Checking

API Reference

Base Class

Exact Model Checking

class masa.common.pctl.ExactModelChecker(formula: BoundedPCTLFormula, label_fn: Callable[[Any], Iterable[str]], atomic_predicates: List[str], transition_matrix: ndarray | None = None, successor_states: ndarray | None = None, probabilities: ndarray | None = None)[source]

Bases: BoundedPCTLModelChecker

Exact model checker for bounded PCTL under a fixed policy.

The exact checker collapses an MDP into a Markov chain by applying a stochastic policy, then evaluates the bounded formula on the resulting Markov chain using the formula’s internal recurrences.
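Concretely, if \(P[s', s, a]\) is the MDP transition tensor and \(\pi(a \mid s)\) the policy, the induced chain kernel is \(M[s', s] = \sum_a \pi(a \mid s)\, P[s', s, a]\). A minimal NumPy sketch of this collapse (the (n_states, n_states, n_actions) tensor layout is an assumption for illustration, chosen to match the column-oriented dense kernel described under Helpers):

```python
import numpy as np

def induced_chain(P, pi):
    """Collapse an MDP into a Markov chain under a stochastic policy.

    P  : (n_states, n_states, n_actions) tensor, P[s_next, s, a],
         so each P[:, s, a] is a next-state distribution.
    pi : (n_actions, n_states) action-major policy, pi[a, s].

    Returns M with M[s_next, s] = sum_a pi[a, s] * P[s_next, s, a].
    """
    return np.einsum("tsa,as->ts", P, pi)

# Two states, two actions: action 0 stays put, action 1 swaps states.
P = np.zeros((2, 2, 2))
P[:, :, 0] = np.eye(2)        # stay
P[:, :, 1] = np.eye(2)[::-1]  # swap
pi = np.array([[0.5, 1.0],    # pi(a=0 | s=0), pi(a=0 | s=1)
               [0.5, 0.0]])   # pi(a=1 | s=0), pi(a=1 | s=1)
M = induced_chain(P, pi)      # columns are next-state distributions
```

Each column of the resulting `M` sums to one, so the formula recurrences can be evaluated directly on it.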

  • check_state() returns per-state satisfaction for the policy-induced chain.

  • check_state_action() returns a state-action value-like array derived from the formula probability sequence (using the vector at horizon B-1).

See also

StatisticalModelChecker: Sampling-based estimation of satisfaction.

check_state(key: Array, policy: ndarray) → ndarray[source]

Evaluate the stored formula on the policy-induced Markov chain.

Parameters:
  • key – Unused PRNG key (kept for API symmetry with StatisticalModelChecker).

  • policy – Stochastic policy probabilities, with shape (n_actions, n_states) (action-major) or (n_states, n_actions) (state-major).

Returns:

Float64 array of shape (n_states,) with entries in {0.0, 1.0}.

Raises:

ValueError – If policy has an unexpected shape.
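Since both policy orientations are accepted, a caller-side normalization could look like the following sketch (the helper name and the tie-break when n_states == n_actions are assumptions, not part of the API):

```python
import numpy as np

def to_action_major(policy, n_states, n_actions):
    """Return the policy in (n_actions, n_states) action-major form.

    Accepts either orientation; raises ValueError for any other shape.
    When n_states == n_actions the two orientations are indistinguishable,
    and the array is assumed to already be action-major.
    """
    if policy.shape == (n_actions, n_states):
        return policy
    if policy.shape == (n_states, n_actions):
        return policy.T
    raise ValueError(f"unexpected policy shape {policy.shape}")
```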

check_state_action(key: Array, policy: ndarray) → ndarray[source]

Compute a state-action satisfaction value for the stored formula.

This method:

  1. builds the policy-induced Markov chain kernel,
  2. computes the formula sequence up to bound B = formula.bound,
  3. uses the vector at time max(B - 1, 0) as a value function,
  4. computes one-step expectations under each action to produce Q.
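Step 4 is a one-step backup of the value vector: \(Q[s, a] = \sum_{s'} P[s' \mid s, a]\, v[s']\). A self-contained sketch (the (n_states, n_states, n_actions) tensor layout is an assumption for illustration):

```python
import numpy as np

def one_step_q(P, v):
    """One-step expectation of a value vector under each action.

    P : (n_states, n_states, n_actions) tensor, P[s_next, s, a].
    v : (n_states,) value vector, e.g. the formula probability
        vector at horizon B - 1.

    Returns Q of shape (n_states, n_actions) with
    Q[s, a] = sum_{s_next} P[s_next, s, a] * v[s_next].
    """
    return np.einsum("tsa,t->sa", P, v)

# Two states: action 0 stays put, action 1 swaps; v rewards state 1 only.
P = np.zeros((2, 2, 2))
P[:, :, 0] = np.eye(2)        # stay
P[:, :, 1] = np.eye(2)[::-1]  # swap
Q = one_step_q(P, np.array([0.0, 1.0]))
```

In the example, swapping out of state 0 earns value 1 while staying earns 0, and vice versa from state 1.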

Parameters:
  • key – Unused PRNG key (kept for API symmetry with StatisticalModelChecker).

  • policy – Stochastic policy probabilities, either (A, S) or (S, A).

Returns:

Float64 array Q of shape (n_states, n_actions) where Q[s, a] is the expected satisfaction value after taking action a in state s and then following the policy.

Raises:

ValueError – If policy has an unexpected shape.

Statistical Model Checking

class masa.common.pctl.StatisticalModelChecker(formula: BoundedPCTLFormula, label_fn: Callable[[Any], Iterable[str]], atomic_predicates: List[str], transition_matrix: ndarray | None = None, successor_states: ndarray | None = None, probabilities: ndarray | None = None)[source]

Bases: BoundedPCTLModelChecker

Statistical model checker (SMC) for bounded PCTL formulas.

The SMC estimates satisfaction probabilities by Monte Carlo sampling of trajectories under a policy and comparing the estimated probability \(\hat{p}\) to the formula’s threshold.

Notes

  • Pure state formulas (Truth, Atom, Neg, And, Or, and, if present, Implies) are evaluated exactly without sampling.

  • Nested probabilistic operators inside state formulas are not supported.

Variables:

vec_label_fn_jax – JAX copy of vec_label_fn.

check_state(key: Array, policy: ndarray, state: int, num_samples: int) → ndarray[source]

Estimate whether state satisfies the formula under policy.

For probabilistic temporal formulas, this samples num_samples paths of length max_steps = max(1, formula.bound) and estimates

\[\hat{p} = \frac{1}{N}\sum_{i=1}^N \mathbf{1}\{\pi_i \models \varphi\}.\]
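The estimator reduces to sampling N bounded trajectories and averaging an indicator. A NumPy sketch for the special case of bounded reachability, \(\varphi = F^{\le B}\,\mathrm{goal}\) (the column-stochastic kernel convention matches the dense form described under Helpers; this is an illustration, not the library's sampler):

```python
import numpy as np

def estimate_reach(M, start, goal, bound, num_samples, rng):
    """Monte Carlo estimate of P(reach `goal` within `bound` steps).

    M : (n, n) column-stochastic chain kernel; M[:, s] is the
        next-state distribution from state s.
    """
    n = M.shape[0]
    hits = 0
    for _ in range(num_samples):
        s = start
        ok = (s == goal)
        for _ in range(bound):
            if ok:
                break
            s = rng.choice(n, p=M[:, s])  # one chain step
            ok = (s == goal)
        hits += ok
    return hits / num_samples
```

The satisfaction verdict is then the comparison of this estimate against the formula's probability threshold.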
Parameters:
  • key – PRNG key used for sampling.

  • policy – Stochastic policy probabilities, either (A, S) or (S, A).

  • state – Start state index.

  • num_samples – Number of trajectories to sample.

Returns:

Scalar float64 JAX array equal to 1.0 if \(\hat{p} \ge p\), else 0.0, where \(p\) is the formula’s probability threshold.

check_state_action(key: Array, policy: ndarray, state: int, action: int, num_samples: int) → ndarray[source]

Estimate satisfaction when forcing the first action, then following the policy.

The first transition uses the forced action action and subsequent steps follow the policy-induced kernel.
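In rollout terms, only the first successor draw changes: it comes from the action-specific distribution P[:, s, a] rather than the policy-induced kernel M[:, s]. A sketch (the tensor layout and helper name are assumptions for illustration):

```python
import numpy as np

def sample_forced_path(P, M, start, action, length, rng):
    """Sample a path of `length` steps whose first step uses a forced action.

    P : (n, n, n_actions) MDP tensor, P[s_next, s, a].
    M : (n, n) policy-induced chain kernel, M[s_next, s].
    """
    n = P.shape[0]
    path = [start]
    s = rng.choice(n, p=P[:, start, action])  # forced first step
    path.append(s)
    for _ in range(length - 1):
        s = rng.choice(n, p=M[:, s])          # then follow the policy
        path.append(s)
    return path
```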

Parameters:
  • key – PRNG key used for sampling.

  • policy – Stochastic policy probabilities, either (A, S) or (S, A).

  • state – Start state index.

  • action – Forced first action index.

  • num_samples – Number of trajectories to sample.

Returns:

Scalar float64 JAX array equal to 1.0 if the estimated probability meets the formula’s threshold, else 0.0.

Helpers

pctl.kernel_n_states(kernel) → int

Return the number of states induced by a transition kernel.

MASA evaluates bounded PCTL formulas on a Markov-chain transition kernel that can be represented in either a dense or compact form.

Dense kernel

A 2D transition matrix m with shape (n_states, n_states) where column s encodes the distribution over next states from state s. (So m[:, s] is a categorical distribution when state s has outgoing probability mass.)

Compact kernel

A tuple (succ, p) where:

  • succ has shape (K, n_states) and stores up to K successor state ids per state.

  • p has shape (K, n_states) and stores aligned successor probabilities.
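The two forms carry the same information; a sketch expanding a compact (succ, p) pair into the dense matrix (np.add.at is used so that repeated successor ids accumulate; this conversion helper is illustrative, not part of the API):

```python
import numpy as np

def compact_to_dense(succ, p, n_states):
    """Expand a compact (succ, p) kernel into a dense column-stochastic matrix.

    succ, p : (K, n_states) arrays of successor ids and probabilities.
    Returns m with m[s_next, s] = total probability mass on slots k
    where succ[k, s] == s_next.
    """
    m = np.zeros((n_states, n_states))
    cols = np.broadcast_to(np.arange(n_states), succ.shape)
    np.add.at(m, (succ, cols), p)  # unbuffered accumulate per (succ, col)
    return m
```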

Parameters:

kernel – The transition kernel, either a dense matrix or a (succ, p) tuple.

Returns:

The number of states n_states.
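The helper’s behavior can be pictured with a minimal re-implementation sketch (not the library’s code):

```python
import numpy as np

def kernel_n_states(kernel):
    """Number of states for either kernel form.

    Dense: an (n_states, n_states) matrix -> its side length.
    Compact: a (succ, p) tuple of (K, n_states) arrays -> the
    shared second dimension.
    """
    if isinstance(kernel, tuple):
        succ, _p = kernel
        return succ.shape[1]
    return kernel.shape[0]
```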