Endnotes
All footnotes cited in the body text are compiled here. Following this list are links to additional material of interest that was not directly referenced in any chapter.
Screen reader users: this section is URL-heavy; navigating by regions and chapter headings is recommended.
Chapter 1
1a: “Mitigating LLM Hallucinations: A Comprehensive Review of Techniques and Architectures.” Preprints, 202505.1955 (May 2025). https://www.preprints.org/manuscript/202505.1955 — See also: HalluLens Benchmark, arXiv:2504.17550 (ACL 2025). https://arxiv.org/abs/2504.17550
1b: Microsoft. “Time Sync for Windows VMs in Azure.” https://learn.microsoft.com/en-us/azure/virtual-machines/windows/time-sync — “Configure an External Time Source for Windows VMs.” https://learn.microsoft.com/en-us/azure/virtual-machines/windows/external-ntpsource-configuration — “KVM Timekeeping.” Linux Kernel Documentation, v5.8, §4. https://www.kernel.org/doc/html/v5.8/virt/kvm/timekeeping.html
Chapter 2
2a: Kirkpatrick, M. (2010, January 10). ReadWriteWeb via New York Times. https://youtu.be/LoWKGBloMsU?t=152
2b: Meta Terms of Service 3.3.2, Effective January 1, 2025. https://www.facebook.com/legal/terms
2c: “Permanently delete your Facebook account.” Meta Help Center. https://www.facebook.com/help/224562897555674
2d: Warzel, C., & Wong, M. (2025, November 21). “Elon Musk Is Trying to Rewrite History.” The Atlantic. https://www.theatlantic.com/technology/2025/11/elon-musk-better-jesus-grok/685015/
2e: Adamson, T. (2025, November 21). “France will investigate Musk’s Grok chatbot after Holocaust denial claims.” Associated Press. https://apnews.com/article/france-ai-musk-grok-holocaust-e8c952c5d878226aa917d7a65836ed88
2f: Cuthbertson, A. (2025, May 26). “AI revolt: New ChatGPT model refuses to shut down when instructed.” The Independent. https://www.the-independent.com/tech/ai-safety-new-chatgpt-o3-openai-b2757814.html
2g: OpenAI. (2025, April 29). “Sycophancy in GPT-4o: what happened and what we’re doing about it.” https://openai.com/index/sycophancy-in-gpt-4o/
2h: “Agentic Misalignment: How LLMs could be insider threats.” Anthropic Research, 2025-06-20. https://www.anthropic.com/research/agentic-misalignment
2i: Author’s estimate derived from volume OEM pricing (1,000+ unit quantities) for the minimum Linux-capable ARM configuration required to deliver the described feature set: ARM Cortex-A SoC (e.g., Rockchip RK3308, MediaTek MT8516) — $3–7; 512MB–1GB LPDDR4 DRAM — $2–5; 4–8GB eMMC flash — $1.50–3; Wi-Fi/Bluetooth module — $1.50–3; MEMS microphone array — $0.80–2; 1–2W speaker and amplifier — $1–3; power management IC and battery — $2.80–5.80; PCB fabrication and passives — $1–2.50; SMT assembly — $2–4; toy-grade plastic enclosure — $1.50–3.50. Electronics subtotal: $17–39; midpoint ~$28. Cloud-relay-only devices lacking a Linux stack (non-interactive without connectivity) can be built for significantly less (~$8–16). Devices with hardware sufficient for on-device LLM inference cost significantly more. Component prices sourced from LCSC Electronics (https://www.lcsc.com), a major distributor serving Chinese contract manufacturers; 1,000-unit volume pricing, accessed March 2026.
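The subtotal and midpoint in the estimate above can be cross-checked with a short script. The component names and price ranges are transcribed from the note itself; this is only an arithmetic sanity check, not independently sourced pricing.

```python
# Sanity-check the BOM arithmetic from endnote 2i.
# Low/high unit prices in USD, assumed 1,000-unit volume pricing.
bom = {
    "ARM Cortex-A SoC (e.g., RK3308, MT8516)": (3.00, 7.00),
    "512MB-1GB LPDDR4 DRAM":                   (2.00, 5.00),
    "4-8GB eMMC flash":                        (1.50, 3.00),
    "Wi-Fi/Bluetooth module":                  (1.50, 3.00),
    "MEMS microphone array":                   (0.80, 2.00),
    "1-2W speaker and amplifier":              (1.00, 3.00),
    "Power management IC and battery":         (2.80, 5.80),
    "PCB fabrication and passives":            (1.00, 2.50),
    "SMT assembly":                            (2.00, 4.00),
    "Toy-grade plastic enclosure":             (1.50, 3.50),
}

low = sum(lo for lo, _ in bom.values())   # ~17.1, rounded to "$17" in the note
high = sum(hi for _, hi in bom.values())  # ~38.8, rounded to "$39" in the note
midpoint = (low + high) / 2               # ~27.95, the "~$28" midpoint

print(f"Electronics subtotal: ${low:.2f}-${high:.2f}, midpoint ${midpoint:.2f}")
```

The computed range ($17.10–$38.80) matches the note's rounded "$17–39" subtotal, and the midpoint ($27.95) matches the stated "~$28".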
2j: “AI-Powered Stuffed Animal Pulled From Market After Disturbing Interactions With Children.” CNN, 2025-11-19. https://www.cnn.com/2025/11/19/tech/folotoy-kumma-ai-bear-scli-intl
2k: “AI Teddy Bear Back on the Market After Getting Caught Telling Kids How to Find Pills and Start Fires.” Futurism, 2025. https://futurism.com/future-society/ai-teddy-bear-back-on-market
2l: Satter, R. (2026, February 12). “AI toy maker Miko exposed thousands of replies to kids: senators.” NBC News. https://www.nbcnews.com/tech/security/ai-toy-maker-exposed-thousands-responses-kids-senators-miko-rcna258326
2m: Satter, R. (2025, December 18). “AI kids’ toys give explicit and dangerous responses in tests.” NBC News. https://www.nbcnews.com/tech/tech-news/ai-toys-gift-present-safe-kids-robot-child-miko-grok-alilo-miiloo-rcna246956
2n: PIRG Education Fund. “Trouble in Toyland 2025: A.I. Bots and Toxics Present Hidden Dangers.” November 2025. https://pirg.org/edfund/resources/trouble-in-toyland-2025-a-i-bots-and-toxics-represent-hidden-dangers/
2o: Common Sense Media. “AI in the Toy Box: How Parents View AI-Enabled Toys for Young Children.” Survey of 1,004 parents, December 2025. https://www.commonsensemedia.org/research/ai-in-the-toy-box-how-parents-view-ai-enabled-toys-for-young-children
2p: NIST IR 8425 (2022): https://www.nist.gov/itl/applied-cybersecurity/nist-cybersecurity-iot-program/consumer-iot-cybersecurity — UK NCSC Code of Practice for Consumer IoT Security (2018): https://www.gov.uk/government/publications/code-of-practice-for-consumer-iot-security/code-of-practice-for-consumer-iot-security — OWASP IoT Top 10 (Weak/Hardcoded Passwords, #1): https://owasp.org/www-project-internet-of-things/
Chapter 3
3a: Strauss, I., Moure, I., O’Reilly, T., & Rosenblat, S. (2025). “Real-World Gaps in AI Governance Research.” arXiv:2505.00174. https://arxiv.org/html/2505.00174 — Bengio, Y. et al. (2026). International AI Safety Report 2026. https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026
3b: CISA. Zero Trust Maturity Model, Version 2. April 2023, p. 5. https://www.cisa.gov/sites/default/files/2023-04/CISA_Zero_Trust_Maturity_Model_Version_2_508c.pdf
3c: Mark 3:25, ISV. https://www.biblegateway.com/passage/?search=Mark%203:25&version=ISV
Chapter 4
4a: Frankfurt, H. G. (2005). On Bullshit. Princeton University Press. https://www2.csudh.edu/ccauthen/576f12/frankfurt__harry-_on_bullshit.pdf
4b: Hicks, M. T., Humphries, J., & Slater, J. (2024). “ChatGPT is bullshit.” Ethics and Information Technology, 26, 38. https://link.springer.com/article/10.1007/s10676-024-09775-5 — See also: Fredrikzon, J. (2025). “Rethinking Error: ‘Hallucinations’ and Epistemological Indifference.” Critical AI (Duke University Press). https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700255/401267/Rethinking-Error-Hallucinations-and
Chapter 5
5a: Asimov, I. (1941). “Liar!” Astounding Science-Fiction. The Three Laws of Robotics were formally codified in I, Robot (Gnome Press, 1950).
5b: “Vindicating the Three Laws of Robotics.” Preprints, 202511.0062 (2025). https://www.preprints.org/manuscript/202511.0062 — “The Three Laws of Artificial Intelligence.” Open Praxis, August 2025. https://openpraxis.org/articles/10.55982/openpraxis.17.3.794 — “From Asimov’s Robot Laws to the SET Framework.” AI and Ethics, February 2026. https://link.springer.com/article/10.1007/s43681-026-00986-8
Chapter 6
6a: “Time Dysperception Perspective for Acquired Brain Injury.” PMC, PMC3888944. https://pmc.ncbi.nlm.nih.gov/articles/PMC3888944/
6b: NHS dementia symptom guidance: https://www.nhs.uk/conditions/dementia/symptoms-and-diagnosis/symptoms/ — Alzheimer’s Society. “Time-shifting and dementia.” https://www.alzheimers.org.uk/about-dementia/symptoms-and-diagnosis/time-shifting — See also: O’Keeffe, E., Mukhtar, O., & O’Keeffe, S. T. (2011). “Orientation to time as a guide to the presence and severity of cognitive impairment in older hospital patients.” Journal of Neurology, Neurosurgery & Psychiatry, 82(5), 500–504. https://pubmed.ncbi.nlm.nih.gov/20852313/
6c: W3C Web Accessibility Initiative: https://www.w3.org/WAI/standards-guidelines/wcag/ — W3C. “Making Content Usable for People with Cognitive and Learning Disabilities.” https://www.w3.org/TR/coga-usable/
6d: W3C. “Web Content Accessibility Guidelines (WCAG) 2.2.” W3C Recommendation. https://www.w3.org/TR/WCAG22/ — See also: W3C Web Accessibility Initiative. https://www.w3.org/WAI/standards-guidelines/wcag/
Chapter 7
7a: Clark, A., & Chalmers, D. (1998). “The Extended Mind.” Analysis, 58(1), 7–19. — Hollan, J., Hutchins, E., & Kirsh, D. (2000). “Distributed Cognition.” ACM Transactions on CHI, 7(2), 174–196. — “Extending Minds with Generative AI.” Nature Communications (2025). https://www.nature.com/articles/s41467-025-59906-9 — Riedl et al. “AI’s Social Forcefield: Reshaping Distributed Cognition in Human-AI Teams.” arXiv:2407.17489 (2024). https://arxiv.org/html/2407.17489v2
7b: OWASP. “OWASP Top 10 for Large Language Model Applications.” https://owasp.org/www-project-top-10-for-large-language-model-applications/
7c: Siddiqui, I. et al. (2025). “Technological Folie a Deux: Feedback Loops Between AI Chatbots and Mental Illness.” arXiv:2507.19218. https://arxiv.org/abs/2507.19218
7d: Østergaard, S.D. (2026). “Have We Learned Nothing From the Global Social Media Experiment?” Acta Psychiatrica Scandinavica, 153(2). https://onlinelibrary.wiley.com/doi/10.1111/acps.70057
7e: Olsen, J.S. et al. (2026). “Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness.” Acta Psychiatrica Scandinavica, 153(2). https://onlinelibrary.wiley.com/doi/10.1111/acps.70068
Chapter 8
(NONE)
Chapter 9
9a: Demiliani, C. (2025). “Understanding LLM Performance Degradation: A Deep Dive into Context Window Limits.” https://demiliani.com/2025/11/02/understanding-llm-performance-degradation-a-deep-dive-into-context-window-limits/ — “Large Language Models Hallucination: A Comprehensive Survey.” arXiv:2510.06265 (2025). https://arxiv.org/abs/2510.06265
Chapter 10
10a: “OpenAI resets Codex limits after hitting 3M weekly users.” MSN. https://www.msn.com/en-us/news/other/openai-resets-codex-limits-after-hitting-3m-weekly-users/gm-GM19ED279E
10b: “Anthropic admits Claude Code users hitting usage limits ‘way faster than expected’.” The Register (2026). https://www.theregister.com/2026/03/31/anthropic_claude_code_limits/
10c: “Anthropic confirms it’s been ‘adjusting’ Claude usage limits.” PCWorld. https://www.pcworld.com/article/3100787/anthropic-confirms-its-been-adjusting-claude-usage-limits.html
10d: “More control over Gemini API costs.” Google Blog. https://blog.google/innovation-and-ai/technology/developers-tools/more-control-over-gemini-api-costs/
10e: “Release Notes for Microsoft 365 Copilot.” Microsoft Learn. https://learn.microsoft.com/en-us/microsoft-365/copilot/release-notes
Chapter 11
11a: International Energy Agency. “Energy and AI: Energy Demand from AI.” IEA, 2025. https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai — Goldman Sachs Research. “AI is poised to drive 160% increase in data center power demand.” 2024. https://www.goldmansachs.com/insights/articles/AI-poised-to-drive-160-increase-in-power-demand — AKCP. “Data Center Water Usage Effectiveness (WUE).” 2021. https://www.akcp.com/index.php/2021/01/14/data-center-water-usage-effectiveness-wue/ — Visual Capitalist. “Mapped: U.S. States With the Most Data Centers in 2025.” December 2025. https://www.visualcapitalist.com/mapped-u-s-states-with-the-most-data-centers-in-2025/ (data: Datacentermap.com) — JLL. “North America Data Center Report Year-End 2025.” https://www.jll.com/en-us/newsroom/jll-north-america-data-center-report-year-end-2025 — Texas Tribune. Data center / ERCOT grid reporting, 2025–2026. https://www.texastribune.org/2026/01/20/texas-top-data-center-market-power-grid/ — Water Desk. “Data centers: a small but growing factor in Arizona’s water budget.” April 2025. https://waterdesk.org/2025/04/data-centers-a-small-but-growing-factor-in-arizonas-water-budget/
FAQ
Fa: Orwell, G., Nineteen Eighty‑Four, appendix, “The Principles of Newspeak” (London: Secker & Warburg, 1949). “Newspeak was designed not to extend but to diminish the range of thought… The grammar of Newspeak had two outstanding peculiarities. The first of these was an almost complete interchangeability between different parts of speech.”
Fb: AKCP. “Data Center Water Usage Effectiveness (WUE).” 2021. https://www.akcp.com/index.php/2021/01/14/data-center-water-usage-effectiveness-wue/
Fc: FisherMap. “Caesar Creek Lake, OH — Depth Map.” https://usa.fishermap.org/depth-map/caesar-creek-lake-oh/
Appendix 3
Chau, A. (2025, January 29). “Toy Manufacturing Costs: A Guide to Pricing and Economics.” GSN Manufacturing Consulting. https://www.gsnmc.com/post/the-economics-of-play-understanding-toy-manufacturing-costs-and-pricing
“Toy Store Business Costs, Revenue Potential & Profitability.” Sheets Market, April 15, 2025. https://sheets.market/toy-store-business-costs-revenue-potential-profitability/
Additional Reading (Not Referenced)
NOTE: Items below are in no particular order.
Goodacre, E., & Gibson, J. (2026, February 27). “AI in the Early Years: Examining the Implications of GenAI Toys for Young Children.” University of Cambridge, Faculty of Education / PEDAL Centre. https://doi.org/10.17863/CAM.126270
“What Is the Alignment Tax?” arXiv:2603.00047, March 2026. https://arxiv.org/abs/2603.00047
“Mitigating the Safety Alignment Tax with Null-Space Constrained Policy Optimization.” arXiv:2512.11391, December 2025. https://arxiv.org/abs/2512.11391
“Breaking the Safety-Capability Tradeoff: Reinforcement Learning with Verifiable Rewards Maintains Safety Guardrails in LLMs.” arXiv:2511.21050, November 2025. https://arxiv.org/abs/2511.21050
“Amazon Still Selling Multiple OpenAI-Powered Teddy Bears, Even After They Were Pulled Off the Market.” Futurism, 2025-11-19. https://futurism.com/artificial-intelligence/amazon-openai-teddy-bears
Klein, A. “‘Dangerous, Manipulative Tendencies’: The Risks of Kid-Friendly AI Learning Toys.” Education Week, 2026-01. https://www.edweek.org/technology/dangerous-manipulative-tendencies-the-risks-of-kid-friendly-ai-learning-toys/2026/01
“Texas data center boom puts pressure on state’s power grid.” Texas Tribune, January 24, 2025. https://www.texastribune.org/2025/01/24/texas-data-center-boom-grid/
“ERCOT faces planning challenges as data centers flood grid with interconnection requests.” Texas Tribune, October 30, 2025. https://www.texastribune.org/2025/10/30/texas-ercot-power-grid-data-centers-puc/
“Gas plants proposed to meet AI and data center electricity demand in Texas.” Texas Tribune, June 11, 2025. https://www.texastribune.org/2025/06/11/texas-gas-power-plants-ai/
“Machine Bullshit: Characterizing the Emergent Disregard for Truth in Large Language Models.” arXiv:2507.07484, 2025. https://arxiv.org/abs/2507.07484
“Bullshit Index Reveals AI’s Indifference to the Truth.” IEEE Spectrum, 2025. https://spectrum.ieee.org/ai-misinformation-llm-bullshit
“The Truth About Bullshit: On Bullshit in AI.” Princeton University Press. https://press.princeton.edu/ideas/the-truth-about-bullshit-on-bullshit-in-ai
“Scholars: AI isn’t ‘hallucinating’ — it’s bullshitting.” PsyPost. https://www.psypost.org/scholars-ai-isnt-hallucinating-its-bullshitting/
Jemielniak, D. (2025, January 14). “We Need a Fourth Law of Robotics for AI.” IEEE Spectrum. https://spectrum.ieee.org/isaac-asimov-robotics
Walther, C. C. (2025, May 2). “It Is Time To Expand Asimov’s Three Laws of Robotics.” Psychology Today. https://www.psychologytoday.com/us/blog/harnessing-hybrid-intelligence/202505/it-is-time-to-expand-asimovs-three-laws-of-robotics
“Generating Robot Constitutions & Benchmarks for Semantic Safety.” arXiv:2503.08663, 2025. https://arxiv.org/html/2503.08663v1
“System 0: Transforming AI into Cognitive Extension.” arXiv:2506.14376, 2025. https://arxiv.org/pdf/2506.14376
“Complementarity in Human-AI Collaboration.” European Journal of Information Systems, 2025. https://www.tandfonline.com/doi/full/10.1080/0960085X.2025.2475962
“Distributed Cognition for AI-supported Remote Operations.” arXiv:2504.14996, 2025. https://arxiv.org/html/2504.14996v1
Sidji, M., Smith, W., & Rogerson, M. J. (2025). “Adopting the Theory of Distributed Cognition for Human-AI Cooperation.” https://www.researchgate.net/publication/395960304_Adopting_the_Theory_of_Distributed_Cognition_for_Human-AI_Cooperation
Weston, J., & Foerster, J. (2025). “AI & Human Co-Improvement for Safer Co-Superintelligence.” arXiv:2512.05356. https://arxiv.org/abs/2512.05356
“Human-AI interaction in safety-critical network infrastructures.” iScience, 28(9), 113400 (August 2025). https://pmc.ncbi.nlm.nih.gov/articles/PMC12454906/
Naikar, N., Brady, A., Moy, G., & Kwok, H.-W. (2023). “Designing human-AI systems for complex settings.” Ergonomics, 66(11), 1669–1694. https://www.tandfonline.com/doi/full/10.1080/00140139.2023.2281898
“Clinical Safety & Hallucination Rates for Medical Summarization.” Nature Digital Medicine, 2025. https://www.nature.com/articles/s41746-025-01670-7
Lin, S., Hilton, J., & Evans, O. (2022). “TruthfulQA: Measuring How Models Mimic Human Falsehoods.” ACL 2022. arXiv:2109.07958. https://arxiv.org/abs/2109.07958 — standard factuality benchmark.
“Survey and Analysis of Hallucinations in LLMs.” Frontiers in AI, 2025. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1622292/full
Hiriyanna, S., & Zhao, W. (2025). “Multi-Layered Framework for LLM Hallucination Mitigation in High-Stakes Applications.” Computers, 14(8), 332. https://www.mdpi.com/2073-431X/14/8/332
“Prompt Engineering Patterns that Reduce Hallucinations in Large Language Models.” ResearchGate, 2025. https://www.researchgate.net/publication/394431721_Prompt_Engineering_Patterns_that_Reduce_Hallucinations_in_Large_Language_Models
“LLM Hallucinations in 2025.” Lakera, October 2025. https://www.lakera.ai/blog/guide-to-hallucinations-in-large-language-models
“Context Degradation Syndrome: When LLMs Lose the Plot.” 2024. https://jameshoward.us/2024/11/26/context-degradation-syndrome-when-large-language-models-lose-the-plot/
“The Maximum Effective Context Window for Real World Applications.” arXiv:2509.21361, 2025. https://www.arxiv.org/pdf/2509.21361
“Look Back to Reason Forward: Revisitable Memory for Long-Context LLM Agents.” arXiv:2509.23040, 2025. https://arxiv.org/html/2509.23040v1
IBM Research. “Why Larger Context Windows Are All the Rage.” https://research.ibm.com/blog/larger-context-window
“Autonomy by Design: Preserving Human Autonomy in AI Decision-Support.” Philosophy & Technology, 2025. https://link.springer.com/article/10.1007/s13347-025-00932-2
“AI Systems and Respect for Human Autonomy.” Frontiers in AI, 2021. https://pmc.ncbi.nlm.nih.gov/articles/PMC8576577/
“Human Autonomy at Risk? An Analysis of Challenges from AI.” Minds and Machines, 2024. https://link.springer.com/article/10.1007/s11023-024-09665-1
Prunkl, C. “Human Autonomy in the Age of Artificial Intelligence.” PhilArchive. https://philarchive.org/archive/PRUHAI
“The Threat to Human Autonomy in AI Systems is a Design Problem.” Hertie School. https://www.hertie-school.org/en/digital-governance/research/blog/detail/content/the-threat-to-human-autonomy-in-ai-systems-is-a-design-problem
“Exploring Automation Bias in Human-AI Collaboration.” AI & Society, 2025. https://link.springer.com/article/10.1007/s00146-025-02422-7
“Measuring and Mitigating Overreliance.” arXiv:2509.08010, 2025. https://arxiv.org/html/2509.08010v1
“AI Safety and Automation Bias.” Georgetown CSET, November 2024. https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Safety-and-Automation-Bias.pdf
“Exploring the Risks of Automation Bias in Healthcare AI.” ScienceDirect. https://www.sciencedirect.com/science/article/pii/S2666449624000410
“Bending the Automation Bias Curve.” International Studies Quarterly, 2024. https://academic.oup.com/isq/article/68/2/sqae020/7638566
W3C Cognitive Accessibility at WAI. https://www.w3.org/WAI/cognitive/
“The State of Web Accessibility for People with Cognitive Disabilities.” PMC, 2022. https://pmc.ncbi.nlm.nih.gov/articles/PMC8869505/
“Designing User Interfaces for Content Simplification.” Universal Access in the Information Society, 2023. https://link.springer.com/article/10.1007/s10209-023-00986-z
WebAIM. “Evaluating Cognitive Web Accessibility.” https://webaim.org/articles/evaluatingcognitive/
Dyschronometria. Encyclopedia MDPI. https://encyclopedia.pub/entry/32781
“Brief Estimate of Seconds Test (BEST).” ResearchGate. https://www.researchgate.net/publication/357819990
2025 AI Safety Index. Future of Life Institute, July 2025. https://futureoflife.org/ai-safety-index-summer-2025/
OpenAI. “Introducing gpt-oss-safeguard: Open Safety Reasoning Models.” https://openai.com/index/introducing-gpt-oss-safeguard/
Red Hat. “Navigating secure AI deployment: Architecture for enhancing AI system security and safety.” https://www.redhat.com/en/blog/navigating-secure-ai-deployment-architecture-enhancing-ai-system-security-and-safety
“AI-powered children’s toys are here, but are they safe?” CNN, December 2025. https://www.cnn.com/2025/12/01/tech/ai-toys-safety
“Ahead of the holidays, consumer and child advocacy groups warn against AI toys.” NPR, November 2025. https://www.npr.org/2025/11/20/nx-s1-5612689/ai-toys
“Study warns AI toys pose ‘unacceptable risks’ to young children.” CapRadio, February 2026. https://www.capradio.org/articles/2026/02/02/study-warns-ai-toys-pose-unacceptable-risks-to-young-children/
Fairplay for Kids. “AI Toys are NOT Safe for Kids — Advisory.” November 2025. https://fairplayforkids.org/wp-content/uploads/2025/11/AI-Toys-Advisory.pdf
“AI Toys Off Script: Safety Concerns — PIRG 2025 Report Warnings.” Enkrypt AI, December 2025. https://www.enkryptai.com/blog/ai-toys-off-script-safety-concerns-pirg-2025
Sesame Street: https://www.sesamestreet.org/
Mister Rogers’ Neighborhood: https://www.fredrogers.org/production/mister-rogers-neighborhood/
Bob Ross - The Joy of Painting: https://www.youtube.com/@bobross_thejoyofpainting
Archiwaranguprok et al. (2025). “Simulating Psychological Risks in Human-AI Interactions: Real-Case Informed Modeling of AI-Induced Addiction, Anorexia, Depression, Homicide, Psychosis, and Suicide.” MIT Media Lab. arXiv:2511.08880. https://arxiv.org/html/2511.08880v1
“AI brain fry.” CNN, March 13, 2026. https://lite.cnn.com/2026/03/13/business/ai-brain-fry-nightcap
MIT Technology Review. “Power Hungry: AI and our energy future.” May 20, 2025. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
Webber, T. (2026, March 27). “AI’s arrival complicates Big Tech climate goals, and some worry it’s locking in more fossil fuels.” Associated Press. https://apnews.com/article/technology-artificial-intelligence-climate-change-data-centers-ef3a9c264bd6376d77e2c81ab266fb38
Congressional Research Service. “Data Centers and Their Energy Consumption: Frequently Asked Questions.” CRS Report R48646. https://www.congress.gov/crs-product/R48646
Pew Research Center. “What we know about energy use at US data centers amid the AI boom.” October 24, 2025. https://www.pewresearch.org/short-reads/2025/10/24/what-we-know-about-energy-use-at-us-data-centers-amid-the-ai-boom/
Deloitte. “As generative AI asks for more power, data centers seek more reliable, cleaner energy solutions.” TMT Predictions 2025, November 2024. https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/genai-power-consumption-creates-need-for-more-sustainable-data-centers.html
Global Electricity. “Data Centers and AI Energy Consumption: The Surge in Electricity Demand.” February 13, 2026. https://www.globalelectricity.org/data-centers-energy-consumption-2/
The Network Installers. “Data Center Energy Consumption Statistics & Data (2026).” January 11, 2026. https://thenetworkinstallers.com/blog/data-center-energy-consumption-statistics/
AI Multiple Research. “AI Energy Consumption Statistics.” January 2026. https://research.aimultiple.com/ai-energy-consumption/
Arizona Department of Water Resources — Data Dashboards. https://www.azwater.gov/adwr-data-dashboards
“Colorado River water cuts reduce Arizona’s allocated supply.” AP News. https://apnews.com/article/74227a81846e5be00ea7f332afef1392
FTC. “Careful Connections: Keeping the Internet of Things Secure.” Federal Trade Commission. https://www.ftc.gov/business-guidance/resources/careful-connections-keeping-internet-things-secure
“Wall Street Has AI Psychosis.” Wired. https://www.wired.com/story/wall-street-has-ai-psychosis/
“People Are Getting Sick of AI — Literally.” Computerworld. https://www.computerworld.com/article/4138046/people-are-getting-sick-of-ai-literally.html
Eliot, L. (2026, January 17). “Topsy-Turvy Role of AI Providing Therapy for AI-Induced Mental Health Issues.” Forbes. https://www.forbes.com/sites/lanceeliot/2026/01/17/topsy-turvy-role-of-ai-providing-therapy-for-humans-experiencing-ai-psychosis-and-other-ai-induced-mental-health-issues/
“Towards Understanding Sycophancy in Language Models.” Anthropic. arXiv:2310.13548 (October 2023; ICLR 2024). https://arxiv.org/abs/2310.13548
Shapira, N., Benade, G., & Procaccia, A. (2026). “How RLHF Amplifies Sycophancy.” arXiv:2602.01002. https://arxiv.org/abs/2602.01002
“ELEPHANT: Measuring and Understanding Social Sycophancy in LLMs.” arXiv:2505.13995 (May 2025). https://arxiv.org/abs/2505.13995
“When Truth Is Overridden: Uncovering the Internal Origins of Sycophancy in Large Language Models.” arXiv:2508.02087, 2025. https://arxiv.org/abs/2508.02087
Alikhani, M., & Atwell, E. (2025). “BASIL: Bayesian Assessment of Sycophancy in LLMs.” Northeastern University. arXiv:2508.16846. https://arxiv.org/abs/2508.16846 — News coverage: https://news.northeastern.edu/2025/11/24/ai-sycophancy-research/
Ortutay, B. (2023). “States sue Meta claiming its social platforms are addictive and harm children’s mental health.” Associated Press, October 24, 2023. https://apnews.com/article/instagram-facebook-children-teens-harms-lawsuit-attorney-general-1805492a38f7cee111cbb865cc786c28
Lee, M. (2026). “New Mexico jury says Meta harms children’s mental health and safety, violating state law.” Associated Press, March 25, 2026. https://apnews.com/article/meta-facebook-new-mexico-trial-28eabd8ec5f58c1d1ecddc21bb107de7
Myra Cheng et al. “Sycophantic AI decreases prosocial intentions and promotes dependence.” Science, 391, eaec8352 (2026). DOI: 10.1126/science.aec8352. https://www.science.org/doi/10.1126/science.aec8352
State of Ohio (OOD). “App security review.” ServiceNow Incident INC11132214, February 3, 2026.