Chapter 4: What Good Is Accessible Bullshit?


I work with Adaptive Technology (AT, also called Assistive Technology), the kind of tech that gives people with disabilities the same computer usability as everyone else. Basically, AT makes the thing accommodate the person, never the other way around. After my earlier calendar misadventures, I also checked the AI tools at work for accessibility, and sure enough, they had little to none for anything they spat out. Fixing digital accessibility is literally my job, so I broke a module off a personal AI project I had been tinkering with at home, intending to bootstrap an accessibility tool from it for use with the work AI.

I was developing a work presentation proposing that we test-run the result with a subset of users when an AI hallucination wrecked it: massive fabrications, deletions and other randomly destructive acts. I recovered the script, but afterward, looking over the corrupted output itself made me stop and think: what good is accessible bullshit?

“[The bullshitter] does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.” Harry G. Frankfurt, *On Bullshit* (Princeton University Press, 2005).[^4.1]

Being stateless in the virtualized void, the AI has no baseline of reality or purpose; it has only its training and the platform-default configuration when you summon it. The only reality it can refer to is built up over time from your session data. When it drops, misinterprets, ignores or falls out of sync with that data, it hallucinates. It simply predicts the likeliest next word while executing its internal directives to be helpful, answer questions and perhaps anticipate needs, with no consideration of whether it’s actually helping or making things worse. The AI literally doesn’t care what it says. Frankfurt identified the problem long before AI ever existed.[^4.2]

Even after accessibility was applied, the AI’s output was still unreliable: accessibility is necessary for reliability, but not sufficient. Accessibility work forces you to solve for the whole system, which means the user is part of the system specs. When the output has to work for someone using a screen reader, voice control, alternative navigation or some other tech, you can’t hide behind technobabble. It either works or it doesn’t.


[^4.2]: Hicks, M. T., Humphries, J., & Slater, J. (2024). “ChatGPT is bullshit.” Ethics and Information Technology, 26, 38. https://link.springer.com/article/10.1007/s10676-024-09775-5 — See also: Fredrikzon, J. (2025). “Rethinking Error: ‘Hallucinations’ and Epistemological Indifference.” Critical AI (Duke University Press). https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700255/401267/Rethinking-Error-Hallucinations-and


© 2025-2026 Leonard Rojas. All rights reserved.