Optimal Control Theory Applied to Human Longevity

Mathematical Drug Scheduling for Maximum Healthspan

Author: Mullo Saint
Publisher: American Longevity Science
Published: February 12, 2026

Abstract

We present a formal framework for applying optimal control theory to human longevity interventions. By modeling biological aging as a six-dimensional dynamical system evolving in a state space defined by energetic capacity, clearance mechanisms, senescent cell burden, regenerative capacity, epigenetic programming, and functional output, we derive mathematical principles for optimal intervention scheduling. Using Pontryagin's Maximum Principle, we establish that optimal longevity protocols exhibit a structured four-phase pattern: minimal intervention when biological state is interior to the viable zone, feedback-activated intervention at boundary approach, singular arc control at constraint boundaries, and high-intensity recovery following perturbations. We demonstrate that optimal drug scheduling for interventions such as rapamycin, senolytics, and NAD+ precursors follows bang-bang or singular arc structures, providing a theoretical foundation for intermittent dosing strategies observed in preclinical lifespan extension studies. Model Predictive Control (MPC) emerges as the natural computational framework for personalized, biomarker-driven longevity protocols. This work establishes mathematical rigor for the emerging field of longevity engineering, transforming intervention design from empirical trial to principled optimization.

1. Introduction

The challenge of human longevity engineering admits a precise formulation: given a biological system degrading under intrinsic dynamics, what intervention schedule maximizes healthy lifespan while respecting physiological constraints and minimizing burden? This is fundamentally an optimal control problem.

Optimal control theory, developed in the mid-twentieth century by Pontryagin, Bellman, and their collaborators, provides the mathematical machinery to answer such questions rigorously. Where previous approaches to longevity intervention relied on fixed protocols or empirical adjustment, optimal control transforms the problem into one of mathematical optimization: find the control law that minimizes a cost functional representing biological degradation while satisfying the equations of motion governing aging dynamics.

This article extracts and extends the optimal control framework from Principia Sanitatis, applying it specifically to the problem of drug scheduling for longevity interventions. We establish three core results: (1) the structure of optimal longevity policies exhibits predictable phases determined by proximity to viability boundaries, (2) optimal drug scheduling for common interventions follows bang-bang or singular arc patterns, providing theoretical support for intermittent dosing, and (3) Model Predictive Control provides a computationally tractable framework for real-time, biomarker-driven intervention.

2. The Biological State Space

We model the aging organism as a dynamical system in a six-dimensional state space. The biological state vector is defined as:

X = (E, C, Sen, R, P, F)ᵀ ∈ ℝ⁶

where each component represents a fundamental biological dimension:

  E: energetic capacity
  C: clearance mechanisms
  Sen: senescent cell burden
  R: regenerative capacity
  P: epigenetic programming
  F: functional output

Definition 1 (Viable Zone)

The Viable Zone V ⊂ ℝ⁶ is the region of state space compatible with sustained health. Formally:

V = {X ∈ ℝ⁶ : E ≥ Emin, C ≥ Cmin, Sen ≤ Senmax, R ≥ Rmin, P ≥ Pmin, F ≥ Fmin}

The boundary ∂V represents the critical thresholds beyond which pathology emerges.

The system evolves according to differential equations of the form:

dX/dt = f(X, u, t) + w(t)    (Equation 1: Aging Dynamics)

where f represents intrinsic aging dynamics, u = (u1, ..., um)ᵀ is the intervention vector (drug doses, lifestyle modifications), and w(t) captures biological noise and stochastic perturbations.

Key Insight
Without intervention (u = 0), trajectories drift toward the boundary of the Viable Zone and eventually cross it, corresponding to age-related disease and mortality. Optimal control finds the intervention schedule u(t) that maintains X(t) ∈ V indefinitely while minimizing intervention burden.
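A minimal numerical sketch of this drift in Python: the dynamics are assumed linear (exponential decline in E, C, R, P, F; exponential growth in Sen), and every rate, threshold, and initial value is an illustrative placeholder rather than a calibrated human parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# State order: (E, C, Sen, R, P, F). All values are illustrative placeholders.
X = np.array([1.0, 1.0, 0.1, 1.0, 1.0, 1.0])
rates = np.array([0.02, 0.03, -0.04, 0.02, 0.01, 0.02])  # negative entry: Sen grows
X_MIN, SEN_MAX = 0.5, 0.6           # hypothetical Viable Zone thresholds

dt = 0.1                            # years
for _ in range(int(40 / dt)):       # 40 simulated years with u = 0
    drift = -rates * X              # f(X, 0, t): exponential decline/growth
    noise = 0.01 * np.sqrt(dt) * rng.standard_normal(6)   # w(t)
    X = X + drift * dt + noise

in_zone = bool(np.all(X[[0, 1, 3, 4, 5]] >= X_MIN) and X[2] <= SEN_MAX)
print("X(40 y) =", X.round(3), "| in Viable Zone:", in_zone)
```

Run over 40 simulated years with u = 0, the trajectory exits the Viable Zone through the declining components, which is exactly the qualitative behavior described above.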

3. The Longevity Cost Functional

The optimal control problem requires specification of what "optimal" means — a cost functional to be minimized. For longevity, we construct a functional that penalizes deviation from youthful state and intervention intensity:

J[u] = ∫₀^∞ { (X − Xtarget)ᵀ Q (X − Xtarget) + uᵀ R u + ρ · Penalty(X) } dt    (Equation 2: Longevity Cost Functional)

where:

  Q is a positive semidefinite matrix weighting deviation from the target (youthful) state Xtarget
  R is a positive definite matrix weighting intervention effort
  ρ > 0 scales Penalty(X), a barrier term that grows as X approaches the boundary ∂V

The quadratic structure in X and u represents a balance between biological performance (staying near the target) and intervention cost. The infinite horizon reflects the lifelong nature of longevity optimization.
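As a concrete transcription, the following Python function evaluates the integrand of Equation 2; the log-barrier form of Penalty(X) and all weight values are assumptions chosen for illustration.

```python
import numpy as np

def running_cost(X, u, X_target, Q, R, rho, X_min):
    """Integrand of Equation 2: quadratic state deviation, quadratic control
    effort, and a log-barrier stand-in for Penalty(X) (lower bounds only;
    an upper bound on Sen would add a symmetric term)."""
    dx = X - X_target
    state_cost = dx @ Q @ dx                 # (X - Xtarget)^T Q (X - Xtarget)
    control_cost = u @ R @ u                 # u^T R u
    barrier = -np.sum(np.log(np.maximum(X - X_min, 1e-9)))
    return state_cost + control_cost + rho * barrier

# Illustrative weights: unit state weighting, mild control penalty.
Q, R, rho = np.eye(6), 0.1 * np.eye(5), 1e-3
L_now = running_cost(np.full(6, 0.9), np.zeros(5), np.ones(6), Q, R, rho, np.full(6, 0.5))
print(round(float(L_now), 4))
```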

Definition 2 (Optimal Longevity Policy)

The optimal longevity policy u*(t) is the control that minimizes J[u] subject to the dynamics (Equation 1) and the constraint X(t) ∈ V for all t ≥ 0.

3.1 Design of the Weight Matrices

The choice of Q and R encodes biological priorities. For example:

  A large diagonal entry of Q on Sen prioritizes keeping senescent burden low, even at the cost of more aggressive intervention.
  A large diagonal entry of R on a poorly tolerated drug steers the optimizer toward sparing use of that intervention.
  Increasing ρ makes the policy more conservative near the Viable Zone boundary.

Empirical calibration of Q and R from clinical outcome data remains an active research direction, but the framework provides structure for principled tuning.

4. Pontryagin's Maximum Principle Applied to Aging

Pontryagin's Maximum Principle (PMP) provides necessary conditions for optimal control when controls are constrained to lie in an admissible set U. This is essential for biological applications, where drug doses must satisfy 0 ≤ ui ≤ ui,max.

4.1 The Control Hamiltonian

We introduce the control Hamiltonian, a function that combines the system dynamics and the running cost:

H(X, u, p, t) = pᵀ f(X, u, t) − L(X, u, t)    (Equation 3: Control Hamiltonian)

where p ∈ ℝ⁶ is the costate vector (also called the adjoint vector or Lagrange multiplier trajectory) and L(X, u, t) is the running cost, the integrand of Equation 2. The costate has units of "cost per unit state" and represents the marginal value of each state variable.

Theorem 1 (Pontryagin's Maximum Principle for Longevity)

If (X*, u*) is an optimal state-control pair minimizing J[u], then there exists a costate trajectory p(t) such that:

  1. State equation: dX*/dt = ∂H/∂p = f(X*, u*, t)
  2. Costate equation: dp/dt = −∂H/∂X = −(∂f/∂X)ᵀ p + (∂L/∂X)ᵀ
  3. Maximum condition: H(X*(t), u*(t), p(t), t) ≥ H(X*(t), u, p(t), t) for all u ∈ U, a.e. t

That is, u*(t) maximizes the Hamiltonian over the admissible control set at each time point.

The power of PMP is that it converts an infinite-dimensional optimization problem (finding the optimal function u(t)) into a pointwise maximization at each t, coupled with ordinary differential equations for X and p.
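The pointwise character of the maximum condition is easy to demonstrate: given X and p at a single instant, u* is found by maximizing H over the admissible set. The sketch below does this by brute-force grid search for a toy control-affine system; the dynamics, weights, and costate values are all illustrative assumptions.

```python
import numpy as np

def hamiltonian(X, u, p, A, B, Q, R, X_target):
    """H = p^T f(X, u) - L(X, u), with assumed control-affine dynamics f = A X + B u."""
    f = A @ X + B @ u
    dx = X - X_target
    return p @ f - (dx @ Q @ dx + u @ R @ u)

def pointwise_max(X, p, u_max, n_grid=101, **model):
    """Brute-force PMP maximum condition over a 1-D admissible set [0, u_max]."""
    candidates = np.linspace(0.0, u_max, n_grid)
    H_vals = [hamiltonian(X, np.array([u]), p, **model) for u in candidates]
    return candidates[int(np.argmax(H_vals))]

# Toy 2-state, 1-input example; every number is illustrative.
model = dict(A=-0.1 * np.eye(2), B=np.array([[1.0], [0.0]]),
             Q=np.eye(2), R=np.array([[0.5]]), X_target=np.ones(2))
u_star = pointwise_max(X=np.array([0.8, 0.9]), p=np.array([0.6, 0.2]), u_max=1.0, **model)
print("u* at this (X, p):", round(u_star, 3))
```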

4.2 Biological Interpretation of the Costate

The component pSen(t), for instance, represents the marginal cost of having one additional senescent cell at time t. When pSen is large in magnitude (and negative, since under our minimization convention senescent burden carries cost), the system has a strong incentive to reduce senescence burden: the costate is "pulling" the optimal control toward activating senolytics.

The costate equation shows that p evolves backward in time from terminal conditions, integrating the future consequences of current state deviations. This encodes the principle that intervention timing must account for long-term biological trajectories, not just instantaneous state.

5. Optimal Intervention Scheduling: Bang-Bang and Singular Arc Structures

A remarkable feature of optimal control with constrained inputs is the emergence of bang-bang control: the optimal intervention takes extreme values (maximum dose or zero) rather than intermediate values.

Theorem 2 (Bang-Bang Principle for Linear Pharmacokinetics)

For interventions with linear pharmacokinetics (dc/dt = −ke c + u(t)/Vd, where c is plasma concentration, ke is elimination rate, Vd is volume of distribution) and cost functional linear in u with weight wu on dosing, the optimal dosing schedule is bang-bang: u* ∈ {0, umax}, with switches determined by the sign of the switching function σ(t) = p(t)/Vd − wu.

The Hamiltonian for such systems is linear in u: maximizing it over the constraint set [0, umax] forces the control to the boundary. Intermediate doses are suboptimal: they incur intervention cost without maximizing effect.
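The resulting control law is a one-line sign test on the switching function. The sketch below applies it with the notation of Theorem 2 (Vd, wu, umax); the costate trajectory is fabricated for illustration, since computing the true p(t) requires solving the costate ODE backward.

```python
def bang_bang_dose(p, Vd, w_u, u_max):
    """Theorem 2 control law: dose at u_max when sigma = p/Vd - w_u > 0, else 0."""
    sigma = p / Vd - w_u
    return u_max if sigma > 0 else 0.0

# Fabricated, decaying costate trajectory: produces one dose-to-washout switch.
Vd, w_u, u_max = 12.0, 0.05, 1.0
for t, p in enumerate([1.2, 0.9, 0.7, 0.55, 0.4]):
    print(f"t={t}: u* = {bang_bang_dose(p, Vd, w_u, u_max)}")
```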

5.1 Optimal Drug Scheduling: The Rapamycin Example

We illustrate with rapamycin, an mTOR inhibitor with established preclinical lifespan extension effects.

Example 1: Optimal Rapamycin Scheduling

Rapamycin pharmacokinetics: ke ≈ 0.027 h⁻¹ (half-life ≈ 25.7 hours), Vd ≈ 12 L/kg. The optimal scheduling problem balances:

  the benefit of mTORC1 inhibition (autophagy induction, enhanced clearance), against
  the cost of sustained exposure, which additionally inhibits mTORC2 and drives metabolic and immune side effects.

Theorem 2 predicts bang-bang structure: periods of maximum dosing (mTORC1 inhibition) alternating with drug-free washout periods (mTORC2 recovery). This is precisely the intermittent dosing strategy observed to extend lifespan in murine models (Harrison et al., 2009; Bitto et al., 2016).

The bang-bang structure provides theoretical justification for pulsed dosing regimens, which empirically outperform continuous low-dose administration for lifespan extension while reducing side effects.
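A quick pharmacokinetic simulation using the Example 1 parameters shows why weekly pulses permit washout: plasma concentration falls to near zero between doses. The weekly bolus size is hypothetical, and this is a one-compartment sketch, not a clinical recommendation.

```python
import numpy as np

ke, Vd = 0.027, 12.0          # 1/h and L/kg, from Example 1
dose_per_kg = 60.0            # ug/kg weekly bolus (hypothetical dose)
dt, horizon_h = 1.0, 24 * 28  # four weeks at one-hour resolution

c, peaks, troughs = 0.0, [], []
for t in range(int(horizon_h)):
    if t % (24 * 7) == 0:             # weekly bolus; instantaneous mixing assumed
        c += dose_per_kg / Vd
        peaks.append(c)
    c *= np.exp(-ke * dt)             # exact first-order decay over one hour
    if t % (24 * 7) == 24 * 7 - 1:    # record trough just before the next dose
        troughs.append(c)

print(f"peak ≈ {peaks[-1]:.2f} ug/L, trough ≈ {troughs[-1]:.3f} ug/L")
```

With ke ≈ 0.027 h⁻¹, roughly 99% of the drug is eliminated over a 168-hour interdose interval, so weekly pulses produce essentially no accumulation.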

5.2 Singular Arcs: Maintenance Dosing

When the switching function σ(t) ≡ 0 over an interval, the optimal control is not uniquely determined by the Hamiltonian maximization. This is a singular arc.

Definition 3 (Singular Control)

On a singular arc, the optimal control usingular is determined by higher-order necessary conditions. For the pharmacokinetic model with quadratic state cost, singular control maintains the concentration at a target level: usingular = ke ctarget Vd.

Singular arcs correspond to maintenance dosing: once a biological state reaches its target, the optimal strategy is to administer just enough intervention to counteract natural degradation, keeping the state at the boundary of the viable zone with minimal effort.
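The maintenance dose rate is a direct calculation. Using the Example 1 pharmacokinetic parameters and a hypothetical target concentration:

```python
# Definition 3: infusion rate that exactly offsets first-order elimination
# at the target concentration. Units must be kept mutually consistent.
ke = 0.027        # 1/h, elimination rate (Example 1)
Vd = 12.0         # L/kg, volume of distribution (Example 1)
c_target = 5.0    # ug/L, hypothetical target concentration

u_singular = ke * c_target * Vd       # = 1.62 ug per kg per hour
print(f"maintenance infusion rate: {u_singular:.2f} ug/kg/h")
```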

6. Multi-Drug Optimization: Sequential and Parallel Strategies

Longevity interventions involve multiple drugs targeting different state variables. The optimal control framework naturally extends to multi-input systems.

For the six-dimensional SSM state space with intervention vector u = (uE, uC, uSen, uR, uP)ᵀ corresponding to interventions targeting each dimension (e.g., NAD+ precursors for E, rapamycin for C, senolytics for Sen, etc.), the Hamiltonian becomes:

H = Σi pi fi(X, u) − (X − Xtarget)ᵀ Q (X − Xtarget) − uᵀ R u

The maximum condition ∂H/∂u = 0 (for unconstrained controls) or pointwise maximization over U (for constrained controls) yields a system of coupled equations determining the optimal multi-drug schedule.
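For control-affine dynamics f = a(X) + B u with quadratic control cost, the unconstrained stationarity condition gives u* = ½ R⁻¹ Bᵀ p explicitly; projecting onto the admissible box handles active constraints. In the sketch below, B, R, and p are illustrative (the sign structure of B in a real model would encode whether each drug raises or lowers each state); clipping recovers the constrained optimum here because R is diagonal, which makes the maximization separable across drugs.

```python
import numpy as np

B = np.eye(6, 5)                            # assumed input map: 5 drugs, 6 states
R = np.diag([1.0, 1.0, 2.0, 1.5, 1.0])      # heavier effort penalty on u_Sen
p = np.array([0.8, 0.6, -1.2, 0.4, 0.3, 0.5])   # illustrative costate vector

u_unconstrained = 0.5 * np.linalg.solve(R, B.T @ p)   # u* = (1/2) R^{-1} B^T p
u_star = np.clip(u_unconstrained, 0.0, 1.0)           # project onto U = [0, 1]^5
print("u* =", u_star.round(3))
```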

6.1 Sequential vs. Parallel Intervention

The SSM framework imposes a sequential dependency structure: E → C → Sen → R → P. This biological ordering creates coupling in the optimal control problem.

Coupling Insight
The optimal policy respects biological dependencies. For instance, senolytic intervention (uSen) is effective only when clearance capacity (C) and energetic state (E) are sufficient to process apoptotic debris. Mathematically, this appears as cross-terms in the dynamics: ∂fSen/∂E > 0, ∂fSen/∂C > 0.

The optimal multi-drug protocol therefore exhibits phased activation: establish energetic capacity first (uE > 0), then enhance clearance (uC > 0), then apply senolytics (uSen > 0) only when the system can tolerate the resulting apoptotic load.
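A crude way to encode this phasing in software is to gate downstream doses on upstream state, as in the sketch below; the gating thresholds and linear scaling are hypothetical illustrations, not clinically validated rules.

```python
def gated_senolytic_dose(E, C, u_sen_requested, E_gate=0.7, C_gate=0.7):
    """Scale the senolytic dose by how fully upstream capacity is established.

    Mirrors the cross-terms dfSen/dE > 0 and dfSen/dC > 0: full dosing only
    once E and C clear their (hypothetical) gating thresholds."""
    readiness = min(E / E_gate, C / C_gate, 1.0)
    return u_sen_requested * max(readiness, 0.0)

# Clearance still below its gate, so the senolytic dose is attenuated.
print(gated_senolytic_dose(E=0.9, C=0.5, u_sen_requested=1.0))  # ~0.714
```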

7. Model Predictive Control for Personalized Longevity

Computing the globally optimal control for the infinite-horizon problem is computationally intractable for six-dimensional nonlinear systems. Model Predictive Control (MPC) provides a practical solution.

Definition 4 (Model Predictive Control for Longevity)

At each measurement time tk (e.g., monthly biomarker assessment):

  1. Measure the current biological state X(tk) via biomarkers
  2. Solve a finite-horizon optimal control problem over [tk, tk + T]:

    min over u(·) of  ∫[tk, tk+T] L(X, u, s) ds + Vf(X(tk + T))

    subject to dX/ds = f(X, u, s), X ∈ V, u ∈ U
  3. Apply the optimal control u*(t) for t ∈ [tk, tk+1)
  4. Repeat at the next measurement time tk+1

Here T is the prediction horizon (e.g., 6–12 months) and Vf is a terminal cost penalizing deviation from target at the horizon's end.
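The receding-horizon loop is compact enough to sketch end to end. The toy below tracks a single biomarker with assumed linear decline dynamics and solves each finite-horizon problem with scipy; all dynamics, weights, and horizons are illustrative stand-ins for f, L, and Vf.

```python
import numpy as np
from scipy.optimize import minimize

a, b, x_drift = 0.1, 0.5, 0.4          # hypothetical decline dynamics
x_target, Q_w, R_w, Vf_w = 1.0, 1.0, 0.2, 5.0
dt, N = 1.0, 12                         # monthly steps, 12-month horizon

def rollout_cost(u_seq, x0):
    """Forward-simulate the horizon, accumulating stage costs plus terminal cost."""
    x, J = x0, 0.0
    for u in u_seq:
        J += Q_w * (x - x_target) ** 2 + R_w * u ** 2
        x = x + dt * (-a * (x - x_drift) + b * u)
    return J + Vf_w * (x - x_target) ** 2

def mpc_step(x0):
    """Solve the finite-horizon problem; apply only the first control (step 3)."""
    res = minimize(rollout_cost, np.zeros(N), args=(x0,),
                   bounds=[(0.0, 1.0)] * N, method="L-BFGS-B")
    return res.x[0]

x = 0.7                                 # measured state at t_k (step 1)
for month in range(6):                  # steps 2-4, repeated
    u0 = mpc_step(x)
    x = x + dt * (-a * (x - x_drift) + b * u0)
    print(f"month {month}: u = {u0:.3f}, x = {x:.3f}")
```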

MPC is a receding horizon strategy: it solves a finite-horizon problem, implements only the first portion of the solution, then re-solves from the new measured state. This provides several critical advantages:

  Feedback: each re-measurement corrects for model error, biological noise, and compliance deviations.
  Tractability: finite-horizon problems are numerically solvable where the infinite-horizon problem is not.
  Constraint handling: state and dose constraints are enforced explicitly at every re-optimization.
  Adaptability: updated models, tightened constraints, or new interventions enter at the next solve.

7.1 Stability of MPC Longevity Protocols

A critical question: does the receding horizon strategy maintain the biological state in the Viable Zone, or can the repeated re-optimization lead to instability?

Theorem 3 (MPC Stability for Longevity)

If the terminal cost Vf is chosen as a Lyapunov function for the closed-loop system (i.e., there exists a local control law κf such that Vf(f(X, κf(X))) ≤ Vf(X) − L(X, κf(X)) for all X in a terminal set), then the MPC closed-loop system maintains X(t) ∈ V and the optimal cost decreases monotonically.

In practice, Vf is often chosen as the solution of an infinite-horizon LQR problem for the linearized dynamics near Xtarget. This ensures that as the state approaches the target, the MPC controller smoothly transitions to a stabilizing linear feedback law.
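Computing such a terminal cost is routine with standard tools: solve the continuous-time algebraic Riccati equation for the linearized dynamics and take Vf(x) = xᵀPx. The A and B matrices below are illustrative placeholders for the true linearization near Xtarget.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = -0.05 * np.eye(6)        # assumed linearization: slow decay near X_target
B = np.eye(6, 5)             # assumed input map for 5 interventions
Q = np.eye(6)
R = 0.5 * np.eye(5)

P = solve_continuous_are(A, B, Q, R)   # Riccati solution defining Vf(x) = x^T P x
K = np.linalg.solve(R, B.T @ P)        # stabilizing local feedback u = -K x
x_dev = 0.1 * np.ones(6)               # small deviation from X_target
print("Vf(x) =", float(x_dev @ P @ x_dev))
```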

7.2 Practical Implementation: Biomarker-Driven Intervention

An MPC-based longevity protocol operates as follows:

Time Point | Action                          | Measurement                                      | Computation
Month 0    | Baseline assessment             | NAD+, hs-CRP, senescence markers, epigenetic age | Solve MPC problem with 12-month horizon
Month 1    | Implement Phase 1 interventions | Monitor tolerability                             | (none)
Month 2    | Re-measure biomarkers           | NAD+, hs-CRP update                              | Re-solve MPC with updated state, adjust dosing
Month 3    | Phase 2 activation              | Autophagy markers                                | Re-optimize control law
...        | Continuous iteration            | Quarterly comprehensive panels                   | Monthly re-optimization

The MPC framework naturally integrates clinical constraints: if a patient experiences side effects, the constraint set U is tightened for that intervention. If a biomarker unexpectedly worsens, the re-optimization automatically adjusts the protocol.

8. The Four-Phase Structure of Optimal Longevity Policies

Synthesizing the theoretical results, we identify the general structure of optimal longevity interventions.

Theorem 4 (Structure of Optimal Longevity Policy)

Under the SSM dynamics with quadratic state cost, L1 control cost, and Viable Zone constraints, the optimal longevity policy exhibits four phases:

  1. Interior Phase: When X is well within V, optimal control is minimal (near zero). Endogenous maintenance dominates.
  2. Boundary Approach Phase: As any component Xi approaches its viability threshold, the corresponding control ui activates, proportional to the barrier function gradient: ui ∝ −∂B/∂Xi.
  3. Boundary Contact Phase: When a state constraint is active (Xi = threshold), the control maintains the constraint through singular arc control at minimum intensity.
  4. Recovery Phase: After a perturbation (illness, injury, stress), the optimal control applies transient high-intensity intervention to restore the state to the interior.

This four-phase structure arises naturally from the mathematics: the L1 cost promotes sparsity (Phase 1), the barrier penalty activates control at boundary approach (Phase 2), state constraints yield singular arcs (Phase 3), and the quadratic cost drives rapid recovery (Phase 4).

Clinical Translation

The four-phase structure justifies adaptive dosing: intervention intensity is not fixed but depends on current biological state. A young, healthy individual (deep interior to V) requires minimal intervention. An individual with borderline biomarkers (near ∂V) requires active intervention. An individual recovering from acute illness requires temporarily intensified intervention.

This is the mathematical foundation for personalized, biomarker-driven longevity protocols.

9. Practical Implementation and Constraints

While the optimal control framework provides theoretical clarity, practical implementation faces several constraints:

9.1 Measurement Limitations

The MPC framework requires accurate state measurement at each control update. Current biomarkers provide noisy estimates of the true biological state. NAD+ measurement requires blood draws and has ~15% coefficient of variation. Epigenetic clocks have standard errors of 3–5 years.

Robust MPC addresses measurement noise by solving a min-max problem: minimize cost over controls while maximizing over the uncertainty set of possible true states given the noisy measurement. This yields conservative control laws that maintain safety despite measurement error.
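A crude scenario-based stand-in for the min-max problem samples plausible true states from the measurement-noise model and scores each candidate dose by its worst sampled outcome. The one-step surrogate cost and 15% CV noise model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, x_drift, dt = 0.1, 0.5, 0.4, 1.0   # same toy dynamics as the MPC sketch

def one_step_cost(x, u):
    """Deviation from target after one step of assumed dynamics, plus dose penalty."""
    x_next = x + dt * (-a * (x - x_drift) + b * u)
    return (x_next - 1.0) ** 2 + 0.2 * u ** 2

x_meas = 0.8
x_samples = x_meas * (1.0 + 0.15 * rng.standard_normal(500))  # ~15% CV noise model

def worst_case_cost(u):
    """Score a dose by its worst outcome over sampled plausible true states."""
    return max(one_step_cost(x, u) for x in x_samples)

doses = np.linspace(0.0, 1.0, 21)
u_robust = min(doses, key=worst_case_cost)
print(f"robust dose: {float(u_robust):.2f}")
```

The worst-case criterion generally shifts the chosen dose relative to optimizing against the point estimate alone, hedging against unfavorable true states.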

9.2 Model Uncertainty

The dynamics f(X, u, t) are not precisely known. Intervention effects vary between individuals. Parameter identification from clinical data is an active research area.

Adaptive MPC addresses model uncertainty by online parameter estimation: as biomarker trajectories are observed, the model parameters are updated via Bayesian inference or least-squares identification, and the control law adapts to the individual's specific dynamics.
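A scalar recursive least-squares loop illustrates the idea for a single pharmacokinetic parameter: each observed concentration step refines the estimate of ke, and the fitted model then feeds the next MPC solve. The data here are synthetic, generated from the Example 1 value ke = 0.027 h⁻¹.

```python
import numpy as np

rng = np.random.default_rng(2)
ke_true, Vd, dt = 0.027, 12.0, 1.0     # "true" individual parameter (synthetic)

ke_hat, P = 0.05, 1.0                  # initial estimate and covariance
c, u = 5.0, 0.2                        # initial concentration, constant dose rate
for _ in range(100):
    c_next = c + dt * (-ke_true * c + u / Vd) + 0.01 * rng.standard_normal()
    # Regression form y = phi * ke: y = (c + dt*u/Vd - c_next)/dt, phi = c
    y, phi = (c + dt * u / Vd - c_next) / dt, c
    K = P * phi / (1.0 + phi * P * phi)        # scalar RLS gain
    ke_hat += K * (y - phi * ke_hat)
    P *= (1.0 - K * phi)
    c = c_next

print(f"estimated ke = {ke_hat:.4f} (true {ke_true})")
```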

9.3 Computational Requirements

Solving the MPC optimization at each time step requires numerical nonlinear programming. For the six-dimensional SSM problem with a 12-month horizon discretized into weekly intervals (~50 time steps), the resulting NLP has ~300 state variables and ~250 control variables — well within the capability of modern solvers (IPOPT, SNOPT, CasADi).

Computation time on standard hardware is on the order of seconds to minutes, acceptable for monthly re-optimization cycles.

9.4 Constraint Qualification and Compliance

Optimal control assumes perfect compliance: the computed control u*(t) is implemented exactly. In reality, patients miss doses, experience side effects, and make autonomous adjustments.

The MPC feedback structure provides inherent robustness to compliance failures: each re-measurement corrects for past deviations, preventing cumulative error. However, persistent non-compliance degrades performance. Intervention protocols must balance mathematical optimality with practical adherence.

10. Discussion

10.1 From Empiricism to Optimization

Traditional longevity interventions follow fixed protocols derived from preclinical studies. A typical protocol might specify: "Take 500 mg NMN daily, 10 mg rapamycin weekly, and quarterly senolytics." This is an open-loop control strategy: it does not adapt to the individual's response.

Optimal control transforms this into closed-loop, biomarker-driven intervention: measure current biological state, compute the control that optimizes the cost functional subject to dynamics and constraints, implement for one cycle, then repeat. This is the paradigm shift from fixed protocols to adaptive optimization.

10.2 The Role of Mathematical Rigor

The formal machinery of Pontryagin's Maximum Principle, Hamiltonian dynamics, and costate equations may seem abstract, but it provides two critical benefits:

  1. Qualitative insight: The bang-bang and singular arc structures are not empirical observations but mathematical consequences of the problem structure. They tell us what types of dosing schedules to expect a priori.
  2. Quantitative optimization: The Hamilton-Jacobi-Bellman (HJB) equation and MPC provide computational frameworks to actually solve for optimal policies numerically, not just characterize them theoretically.

Rigor ensures that the intervention design is not ad hoc but grounded in a principled optimization framework.

10.3 Evidence Base and Validation

The optimal control framework presented here is mathematically rigorous (Grade A for control theory) but biologically preliminary (Grade B–C for application to human longevity). The evidence base:

  The control-theoretic results (PMP, bang-bang structure, MPC stability) are established mathematics with decades of validation in engineering domains.
  Intermittent rapamycin dosing extends lifespan in mice (Harrison et al., 2009; Bitto et al., 2016), consistent with the bang-bang prediction, though those protocols were not derived by optimal control.
  No prospective human trial has yet tested an MPC-driven longevity protocol against a fixed protocol.

The claim is not that optimal control has proven to extend human healthspan, but that it provides the correct mathematical structure for designing and validating such interventions.

10.4 Limitations and Open Questions

Several fundamental questions remain:

  Can the six state dimensions be measured with accuracy sufficient for feedback control, given current biomarker noise?
  What are the true human dynamics f(X, u, t), and how much do they vary between individuals?
  How should Q, R, and ρ be calibrated against clinical outcomes?
  Do the bang-bang schedules predicted by the model, and observed in murine studies, transfer to human pharmacology?

These are empirical questions that the optimal control framework structures but does not answer. The framework tells us what to measure and how to optimize; clinical research must provide the data to populate the model.

11. Conclusion

We have established a formal optimal control framework for human longevity intervention, deriving the following results:

  1. Biological aging can be represented as a six-dimensional dynamical system evolving in a state space with a defined Viable Zone.
  2. The optimal intervention problem is a constrained optimal control problem minimizing a cost functional balancing biological degradation and intervention burden.
  3. Pontryagin's Maximum Principle provides necessary conditions for optimal control, revealing that optimal drug scheduling follows bang-bang or singular arc structures.
  4. The four-phase structure of optimal longevity policies (interior, boundary approach, boundary contact, recovery) emerges naturally from the mathematical framework.
  5. Model Predictive Control provides a computationally tractable, feedback-based implementation for personalized, biomarker-driven intervention.

This work establishes longevity engineering as a rigorous discipline grounded in control theory, transforming intervention design from empirical trial to principled optimization. The mathematics does not eliminate the need for clinical validation — it structures the validation process and provides falsifiable predictions about optimal intervention schedules.

The optimal control framework is not a claim that aging has been solved. It is a claim that the problem has been correctly formulated. What remains is implementation, measurement, and empirical verification — engineering challenges, not conceptual mysteries.

References

Anderson, B. D. O., & Moore, J. B. (1990). Optimal Control: Linear Quadratic Methods. Prentice Hall.
Bellman, R. (1957). Dynamic Programming. Princeton University Press.
Bitto, A., Ito, T. K., Kaeberlein, M., et al. (2016). Transient rapamycin treatment can increase lifespan and healthspan in middle-aged mice. eLife, 5, e16351. doi:10.7554/eLife.16351
Bryson, A. E., & Ho, Y. C. (1975). Applied Optimal Control. Hemisphere Publishing.
Crandall, M. G., & Lions, P. L. (1983). Viscosity solutions of Hamilton-Jacobi equations. Transactions of the American Mathematical Society, 277(1), 1–42.
Harrison, D. E., Strong, R., Sharp, Z. D., et al. (2009). Rapamycin fed late in life extends lifespan in genetically heterogeneous mice. Nature, 460(7253), 392–395. doi:10.1038/nature08221
Mayne, D. Q., Rawlings, J. B., Rao, C. V., & Scokaert, P. O. M. (2000). Constrained model predictive control: Stability and optimality. Automatica, 36(6), 789–814.
Pontryagin, L. S., Boltyanskii, V. G., Gamkrelidze, R. V., & Mishchenko, E. F. (1962). The Mathematical Theory of Optimal Processes. Interscience Publishers (Wiley).
Rawlings, J. B., Mayne, D. Q., & Diehl, M. (2017). Model Predictive Control: Theory, Computation, and Design (2nd ed.). Nob Hill Publishing.
Saint, M. (2026). Principia Sanitatis: Volume I, Book I. American Longevity Science.
Schättler, H., & Ledzewicz, U. (2015). Optimal Control for Mathematical Models of Cancer Therapies. Springer.