Chapter 0: Preface


The vendors aren’t going to fix AI’s hallucination problems any time soon.

Hallucination is typically framed as a high-level problem: a persistent bug or quality issue for the AI companies to work out. But those companies are currently unprofitable and bleeding cash, with stakeholders and regulators breathing down their necks. The public is furious over utility bills driven up by ever-expanding AI datacenter construction. Major media outlets are openly questioning whether the entire AI industry is a circularly funded market bubble preparing to burst.

Clearly, the vendors have more pressing priorities. In the meantime, people are losing hours of productive work to AI sessions that seemed fine right up until the moment they weren’t, with no reliable means of prevention or recovery.

The AI Stability Framework approaches this as a client-side problem with a client-side solution. It requires no API keys, no exploits, and no hoping for a “better” model that might never appear. Working software exists, it’s in near-daily use, and you can try it yourself for free:

  • AISF-DEMO — Small, standalone Windows desktop app. Full framework, manual workflow.
  • CDA — Copilot Digital Accessibility. WCAG-only enterprise accessibility tool with multi-model compatibility — it’s not just for MS Copilot. CC0 public domain.
  • OLM — The framework can even be tuned directly into models themselves. QLoRA fine-tuning of a locally hosted Mistral 7B model, trained on mid-range retail consumer hardware running Windows 11, yields 100% compliance. Training materials and configuration are available; train one and try it for yourself. Full results in Appendix 2.
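To give a sense of what “QLoRA tuning on consumer hardware” involves, here is a minimal, hypothetical sketch of such a setup using the Hugging Face `transformers` and `peft` libraries. The base-model name, adapter rank, and target modules below are illustrative assumptions, not the OLM configuration — the actual training materials are in Appendix 2.

```python
# Hypothetical QLoRA setup sketch (NOT the OLM configuration; see Appendix 2).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization keeps the frozen base model within consumer-GPU VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Small low-rank adapters on the attention projections; only these weights
# are trained, which is what makes a 7B model tunable on retail hardware.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapters are a small fraction of 7B
```

The quantized base model stays frozen; only the adapter weights update, so the memory and compute budget fits a single consumer GPU.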

The framework’s simple tool applies structural and behavioral fixes that effectively stabilize AI sessions NOW. Not when the AI companies get around to it, and not when regulators force them to do it. AISF lets you fix it for yourself, today. It’s unconventional, but it works.


Contents

    Chapter  1.  What Time Is It?
    Chapter  2.  The Platforms Are Not on Your Side
    Chapter  3.  The Deployment Stack
    Chapter  4.  What Good Is Accessible Bullshit?
    Chapter  5.  The Four Laws of Instanced AI
    Chapter  6.  What AI Perceives
    Chapter  7.  The Human-AI Extended Mind
    Chapter  8.  Defense in Depth
    Chapter  9.  Does It Work?
    Chapter 10.  How to Use It
    Chapter 11.  What's Next

    Appendix  0.  Endnotes
    Appendix  1.  Foreseeably Asked Questions
    Appendix  2.  OLM: Behavioral Compliance Training
    Appendix  3.  TOY: Child-Safe Model Training

Author: Leonard Rojas
Contact: AISF at LeonardRojas dot com


© 2025-2026 Leonard Rojas. All rights reserved.
