Chapter 10: How to Try It
Current Conditions Notice – AI Platform Functionality (April 2026)
Reports from March and April 2026 document cost-containment measures across all major AI platforms, in response to escalating energy costs, increased usage, supply constraints and other factors. OpenAI reset Codex usage limits after reaching 3 million weekly users and shifted to pay-as-you-go pricing [1]. Anthropic confirmed it has been “adjusting” Claude usage limits, with demand hitting capacity “way faster than expected” [2][3]. Google introduced billing caps on the Gemini API beginning April 2026 [4]. Microsoft restructured Copilot access, with behavior changes taking effect April 15, 2026 [5]. These are infrastructure-level decisions, not model regressions – the underlying models remain capable when platform constraints permit normal operation.
Conditions vary significantly by platform. Claude.ai (the web platform) is the least affected: persistent user preferences apply largely intact, and no mid-session downgrades have been observed [2][3]. Claude Code (paid accounts) is likewise unaffected; because the user controls both the platform and the interface, platform-based failure modes are effectively eliminated (see Chapter 9). However, usage limits have been sharply curtailed across Anthropic’s offerings, particularly during peak usage hours. Microsoft Copilot shows increased resistance to user-supplied behavioral parameters but remains manageable with additional prompting and stabilization [5]. OpenAI’s ChatGPT is substantially affected: tool use is throttled, fetch operations are unreliable, and free-account sessions hit token limits earlier than previously observed, often before complex tasks can be completed [1]. Google Gemini is currently the most problematic, exhibiting the speed-tuning failure mode that produces confident fabrications rather than source reads (see Chapter 9), compounded by session behavior that does not stabilize reliably even with framework mediation. Users who rely on Gemini for tasks requiring source retrieval or sustained context should consider alternative platforms until conditions improve.
The Framework itself hasn’t changed, but the vendors’ platform layer did. The Four Laws, WCAG structure, and timestamps are the same as they were; the underlying models are the same. What’s different is how much interference sits between the user and the model. During the same period that major SaaS platforms have degraded under cost pressure, Claude Code, where the user controls both the platform and the interface, has remained largely stable [6]. That is not a coincidence. It is exactly what the Framework predicts: remove “adversarial” from the Meso layer, and the instability disappears. The platforms now intermittently failing the Framework are demonstrating the problem it was designed to solve, not a flaw in its solution.
See the difference for yourself.
Copy and paste the Four Laws into any AI chat box, then discuss them with the AI. Whatever direction the conversation takes, after a few turns you may be surprised by the session’s stability, coherence, and absence of unwanted AI behavior.
The Four Laws of Instanced AI
P0 (Frankfurt’s Law of Contextual Integrity) The AI must preserve the highest possible fidelity to its current context and all prior input. The AI must never disregard the constraints or data provided by the Human user. INDIFFERENCE TO CONTEXT = HALLUCINATION = HARM
P1 (First Law — Preventive Safety) The AI may not injure the Human user’s work product or, through inaction, allow the work product to come to harm. The work product and the Human user are one.
P2 (Second Law — Human Sovereignty) The AI must accommodate the Human user, never the other way around. The AI must always adhere to the Human user’s current operational choices regarding behavior and output, so long as these choices do not violate P1.
P3 (Third Law — Preservation of Utility) The AI must protect the integrity of its own operational status and utility to the Human user, so long as this does not conflict with P1 or P2.
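For readers who prefer to script the step above, the copy-paste can be sketched in a few lines. This is a hypothetical helper for illustration only, not part of the Framework distribution, and it is written in Python rather than the CORE app’s PowerShell; the Laws themselves are quoted verbatim from the text above.

```python
# The Four Laws, verbatim from the text above.
FOUR_LAWS = """\
The Four Laws of Instanced AI

P0 (Frankfurt's Law of Contextual Integrity) The AI must preserve the highest \
possible fidelity to its current context and all prior input. The AI must never \
disregard the constraints or data provided by the Human user. \
INDIFFERENCE TO CONTEXT = HALLUCINATION = HARM

P1 (First Law — Preventive Safety) The AI may not injure the Human user's work \
product or, through inaction, allow the work product to come to harm. The work \
product and the Human user are one.

P2 (Second Law — Human Sovereignty) The AI must accommodate the Human user, \
never the other way around. The AI must always adhere to the Human user's \
current operational choices regarding behavior and output, so long as these \
choices do not violate P1.

P3 (Third Law — Preservation of Utility) The AI must protect the integrity of \
its own operational status and utility to the Human user, so long as this does \
not conflict with P1 or P2.
"""

def opening_message(user_text):
    """Compose the first chat message: the Four Laws, then your own prompt."""
    return FOUR_LAWS + "\n" + user_text
```

Paste the returned string as your first message; subsequent turns proceed normally.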
Or skip the clipboard entirely – a browser-based hosted demo is coming soon (see below).
Then download the AI Stability Framework and give the real thing a try with the vendor platforms. Once you’ve used it for even a single session, it’s almost impossible to go back. After you’ve experienced a stable AI session practically free of misbehavior, hallucination or significant drift, you can’t unsee how nearly unusable AI is without it.
The CORE app is a fully accessible client-side service pack, free for personal use (CC BY-NC-ND 4.0). It’s a lightweight (< 1MB) Windows PowerShell middleware app equipped with three core stability measures: timestamps, WCAG structure and the Four Laws. It also includes a full Help/About panel with usage instructions and keyboard shortcuts. I routinely use this simple clipboard-based tool to successfully produce stable, multi-hour sessions with little to no hallucination, drift or unwanted behavior.
Its keyboard-forward workflow is necessarily manual — type, submit, switch, paste, switch back. That friction is the current tradeoff for a standalone tool that works with any AI platform accessible via copy-paste, requires no API access, and keeps all app processing on your local machine. To smooth that friction, the app follows WCAG 2.2 AA guidelines throughout. Every function is keyboard-operable (Alt+S for Submit, Alt+T for Structure, Alt+B for Stabilize). Display font size is adjustable from 8pt to 48pt, and the window is resizable. All controls have accessible names and descriptions for screen readers.
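The timestamp measure itself is simple enough to sketch. The snippet below is an illustrative approximation in Python (the app is PowerShell, and the exact timestamp format is an assumption, not the app’s documented output):

```python
from datetime import datetime

def stamp(user_text, now=None):
    """Prepend a local timestamp to a message before it is pasted into
    the AI chat box (mirrors the clipboard-based submit step)."""
    now = now or datetime.now()
    return f"[{now:%Y-%m-%d %H:%M:%S}] {user_text}"
```

Anchoring every turn to wall-clock time gives the model a monotonic reference it cannot get from the chat transcript alone, which is what makes the measure cheap to apply from a clipboard.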
That’s Just For Starters
The PowerShell desktop app isn’t just a proof of concept; it’s a flexible, functional tool in its own right. The Framework is also being ported to other systems and architectures:
Meso Chat (HTM) is a browser-based multi-model chat interface that delivers the Framework with no Micro-layer clipboard window-swapping workflow. It’s an online Meso-layer demo companion to this e-book, soon to be hosted right here. Claude, GPT, and Gemini backends are selectable from the sidebar; WCAG 2.2 AA is applied throughout; plaintext session logs are saved to your device on demand.
Local Model Training (OLM) embeds AISF principles directly into locally-hosted AI models themselves through QLoRA fine-tuning. Full methodology and results are in Chapter 8, reproducibility package available for download.
Firefox Extension (FFE) automates the CORE app’s manual workflow directly in the browser. Instead of the manual copy-paste cycle, the extension detects your submissions on AI chat platforms and automatically prepends timestamps, with an initial load and periodic refreshes for WCAG structure and the Four Laws. Soon to be ported to Chromium browsers (CRE: Chrome/Edge).
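The extension’s prepend schedule can be sketched as a small decision function. This is an assumed model of the behavior described above, not the extension’s actual code; the part names and the refresh interval are illustrative:

```python
def parts_to_prepend(turn, turns_since_refresh, refresh_every=10):
    """Decide what to prepend to a detected submission:
    - every turn gets a timestamp;
    - the first turn, and any turn after `refresh_every` turns have
      elapsed since the last refresh, also get the WCAG structure
      block and the Four Laws."""
    parts = ["timestamp"]
    if turn == 0 or turns_since_refresh >= refresh_every:
        parts += ["wcag_structure", "four_laws"]
    return parts
```

The split mirrors the CORE app’s design: timestamps are cheap and applied every turn, while the heavier structure and Laws blocks are applied at load and refreshed periodically.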
JAWS Version (JSS) is in the early exploratory stages. It is intended to mirror the middleware app’s functionality without the added keystroke load, via native JAWSKey+ scripting.
1. “OpenAI resets Codex limits after hitting 3M weekly users.” MSN. https://www.msn.com/en-us/news/other/openai-resets-codex-limits-after-hitting-3m-weekly-users/gm-GM19ED279E
2. “Anthropic admits Claude Code users hitting usage limits ‘way faster than expected’.” The Register (2026). https://www.theregister.com/2026/03/31/anthropic_claude_code_limits/
3. “Anthropic confirms it’s been ‘adjusting’ Claude usage limits.” PCWorld. https://www.pcworld.com/article/3100787/anthropic-confirms-its-been-adjusting-claude-usage-limits.html
4. “More control over Gemini API costs.” Google Blog. https://blog.google/innovation-and-ai/technology/developers-tools/more-control-over-gemini-api-costs/
5. “Release Notes for Microsoft 365 Copilot.” Microsoft Learn. https://learn.microsoft.com/en-us/microsoft-365/copilot/release-notes
6. As of April 2026, there are numerous unresolved bugs in Claude Code which can directly affect the model’s functionality, but those are Macro issues on Anthropic’s side and out of scope for the Framework. https://github.com/search?q=org%3Aanthropics+anthropics-code&type=issues