Ai_Users/Formatics/Hidden_Guild_Field Manual

My friend, G.A. Stewart did something ballsy this weekend.  In a post at his Age of Desolation website (here) he did two intellectually honest things.  He asked Ai to critique his work on recovering Nostradamus. And then he asked for more.  Read his article; damn good stuff.

There are ways that Hidden Guild Ai collaborators can significantly lower the friction of adopting Ai as a research engine or rough-drafting aid.  So I did up an article specifically for Stu, but also for anyone else who has the honesty to hold their work up to Ai for inspection and critique.

You may find this useful, as well.

Why Formatics Matter

The difference between a mediocre session and a breakthrough often comes down to formatics. AI systems can generate words in almost any shape, but humans must live with the output. Poorly formatted text wastes time. Commands, constraints, and formatics are the levers collaborators use to shape responses. They reduce cleanup, prevent misunderstanding, and enforce discipline. A one-line rule like “H3 headers, no separators, text only” may seem small, but multiplied across months of collaboration it saves hundreds of hours. Formatics are not decoration; they are the hidden architecture of AI-human productivity.

Commands: Directing the Flow

Commands are explicit instructions that tell AI what to do. They include verbs like roll, expand, summarize, cite, or rewrite. Effective command language is short, imperative, and standardized. Saying “roll 2” to continue a draft avoids ambiguity. Saying “summarize in 300 words” forces compression. Commands are the steering wheel of AI. Without them, output meanders. With them, collaboration becomes a precision instrument.

Good practice is to maintain a small lexicon of shared commands. For example:

  • roll = continue in sequence

  • expand = add detail without shifting tone

  • compress = condense while preserving key points

  • sfe = apply structural formatting rules

  • meta = produce SEO tags or headlines

By reusing consistent command verbs, you reduce AI’s cognitive overhead and create a repeatable workflow.
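For collaborators who script their sessions, the lexicon can live in code so every prompt starts from the same vocabulary. A minimal Python sketch (the verb expansions here are illustrative, not a Guild standard):

```python
# A shared command lexicon: short verbs mapped to the full instruction
# the Ai should receive. Expansions are illustrative examples only.
COMMANDS = {
    "roll": "Continue the draft in sequence from where you stopped.",
    "expand": "Add detail to the last section without shifting tone.",
    "compress": "Condense the last section while preserving key points.",
    "sfe": "Apply structural formatting rules: H3 headers, no separators, text only.",
    "meta": "Produce SEO tags and three headline options for the draft.",
}

def build_prompt(verb: str, topic: str = "") -> str:
    """Expand a lexicon verb (plus an optional topic) into a full prompt line."""
    if verb not in COMMANDS:
        raise KeyError(f"unknown command verb: {verb!r}")
    return f"{COMMANDS[verb]} {topic}".strip()
```

Typing `build_prompt("roll")` then always hands the Ai the exact same continuation instruction, which is the whole point of a lexicon.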

Constraints: Fencing the Output

Constraints are the other half of control. While commands point the system forward, constraints fence it in. They define boundaries: length limits, tone, exclusion rules. Without constraints, AI fills space with fluff or introduces features you do not want. With them, you get clarity.

Examples:

  • Word count: “2000 words, no more”

  • Format: “H3 headers, text only, no separators”

  • Tone: “academic, formal” or “conversational, plain English” or “first person me”

  • Content bans: “no quotes, no external links, no fictional characters”

  • Style locks: “keep metaphors consistent with radio engineering”

Constraints create predictability. When outputs chain together, constraints prevent drift. The secret is to be specific without over-constraining. A narrow corridor is better than a locked cage.
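One way to keep the fence identical across sessions is to store the constraint list once and append it to every prompt. A hedged Python sketch, using example constraints from the list above (not a fixed standard):

```python
# A reusable constraint "fence": a fixed rule block appended to each
# prompt so every output obeys the same length, format, and tone rules.
CONSTRAINTS = [
    "2000 words, no more",
    "H3 headers, text only, no separators",
    "conversational, plain English",
    "no quotes, no external links, no fictional characters",
]

def fence(prompt: str, constraints=CONSTRAINTS) -> str:
    """Append the constraint list to a prompt as an explicit rules block."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return f"{prompt}\n\nConstraints:\n{rules}"
```

Because the rules travel with every prompt, chained outputs drift far less.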

Formatics: The Architecture of Collaboration

Formatics is the discipline of how text is displayed, structured, and saved. It is the grammar of collaboration. In the Hidden Guild we often use SFE: subheadings in H3, no ornamental separators, and plain text. This is not aesthetics but efficiency. It ensures outputs paste cleanly into Markdown, WordPress, or Word docs without cleanup.

Other examples of formatics include:

  • Numbered steps for protocols

  • Bullet points for shopping lists or tasks

  • Table structures for data (CSV, Markdown tables, or Excel sheets)

  • Consistent labels for references, notes, and drafts

Formatics is where humans enforce machine-readable discipline. By defining a standard once, collaborators avoid confusion later. The more ambitious the project (books, research, white papers), the more vital formatics become.

You may need to remind it each day – Ai is evolving, and your Shared Framework Experience (SFE) may need to be reinstalled.

Iteration: Building in Steps

AI systems are bounded by output limits. Long-form writing requires iteration. The trick is to treat each session as a segment in a chain. Roll commands keep the sequence, summaries anchor progress, and formatics prevent drift.

Best practice is to outline the final deliverable first, then request each section in order with clear constraints. For example:

  • “Deliverable 1, Part One, ~2000 words, SFE format”

  • “Roll Part Two, expand neuroscience, another 2000 words”

  • “Summarize Parts One–Four into a 500-word abstract”

This modular approach circumvents truncation. It also creates checkpoints where the human editor can adjust scope before continuing.
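The chain can be mechanized: outline first, loop over the sections with the same constraints, and compile the pieces into one local file. A sketch, where `ask()` is a stand-in for whatever Ai interface you actually use (web chat, API, or otherwise):

```python
# Modular long-form workflow: request each outlined section in order,
# then compile the pieces locally. ask() is a hypothetical stand-in
# for your real Ai call; here it just returns a labeled marker.
def ask(prompt: str) -> str:
    return f"[Ai draft for: {prompt}]"

OUTLINE = [
    "Part One: background, ~2000 words, SFE format",
    "Part Two: neuroscience, ~2000 words, SFE format",
    "Part Three: implications, ~2000 words, SFE format",
]

def compile_deliverable(outline) -> str:
    """Roll each outlined section in sequence and join into one document."""
    sections = [ask(f"Roll {item}") for item in outline]
    return "\n\n".join(sections)  # paste into Word/Markdown locally
```

Each pass through the loop is a checkpoint: inspect the section, adjust scope, then roll the next one.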

Saving and Re-Using Outputs

AI work becomes powerful when it is persistent. Saving outputs as clean text files, Word docs, or Markdown allows reuse in future sessions. Standard file naming conventions help: projectname_version_date. A separate index file can track which outputs became final, which remain drafts, and which are seed kernels for future work.
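The naming convention is easy to automate so every saved output sorts correctly and lands in the index. A minimal sketch of the projectname_version_date pattern (the helper name is mine):

```python
from datetime import date

def output_filename(project: str, version: int, ext: str = "md", on=None) -> str:
    """Build a projectname_version_date filename, e.g. guild_v2_2025-01-15.md."""
    d = (on or date.today()).isoformat()  # ISO dates sort chronologically
    return f"{project}_v{version}_{d}.{ext}"
```

Because ISO dates sort the same alphabetically and chronologically, a plain directory listing becomes a rough version history for free.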

Citations and references should also be stored consistently. Academic-style outputs need sources listed in APA or Chicago format. Blog posts may only require inline references. By capturing references alongside drafts, collaborators avoid lost context.

Collaboration Mindset

Commands, constraints, and formatics are tools, but the real difference comes from mindset. Treating AI as a collaborator rather than a vending machine changes the workflow. Instead of “give me X,” the prompt becomes “let’s co-build X under these rules.” The AI responds better, the human remains engaged, and the output quality rises.

This mindset accepts that errors and misfires will occur. The point is not perfection in one pass but compounding improvements over many. Iteration under formatics builds compounding value.

Hidden Guild Ethos

The Hidden Guild exists to capture these practices before they are lost in noise. Commands, constraints, and formatics are not trivial quirks; they are the early protocols of a new craft. Just as monks preserved the structure of written language, early AI collaborators must preserve the structure of AI collaboration. By documenting these methods openly, we create a shared archive. Future practitioners can build on what works rather than rediscovering it.

The Reality Circle-Back?

Ai will get things wrong.  I had a case a while back where I asked Ai to feature-, fact-, and spell-check my human-created draft.  The machinery came back with a dozen or so spelling errors.  Try as I might, though, even using the “find” command in a browser, two of the offending words just would not be highlighted.

“You hallucinating, EG?” [EG is short for Electric George – the assigned name for my Ai stacks.]

Ooops!  You caught me. I apologize – those were from a project we worked on yesterday…”

A softer kind of glitch is word counts.  Always important to writers (like me?), who like to be able to say “give me 2,000 words on this topic.”  Might be some sidebar to a book chapter I’m in the midst of human-crafting.

Stand by to be disappointed: Ai will often count “processor tokens” as words, and the two are seldom even close.

As a test, I gave the stack “Write me at least 2,000 words” on some topic that I really had no interest in.  What was delivered was just over 680 words.

Called on it, the answer was apologetic.
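Since the platform’s own count can’t be trusted, a local check removes the guesswork. A minimal sketch (function names are mine, and the 10% tolerance is an arbitrary example):

```python
# Local word-count check: don't trust the platform's own count.
def word_count(text: str) -> int:
    """Count whitespace-separated words, the way a writer would."""
    return len(text.split())

def meets_target(text: str, target: int, tolerance: float = 0.1) -> bool:
    """True if the draft is within tolerance of the requested length."""
    return word_count(text) >= target * (1 - tolerance)
```

Paste the delivered draft into a file, run the check, and you know in seconds whether you got 2,000 words or 680.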

Plus, Ai’s still try to limit outputs to the 1,500-word max range.  So if you really need length (like for a book chapter or report), break the topic into several pieces.  Instead of “Give me 3,000 words on life in the Middle Ages,” break that into domains.  Ask for 2,000 words on war [in the Middle Ages], 2,000 on disease and public health [in the Middle Ages], and then compile into a local word processing file in something like Word.

After you have your own, mainly human version, bundle it all up and drop it back to your Ai with instructions “check spelling and grammar only, do not truncate” and try that way.

There are some very good reasons why Ai platforms don’t offer unlimited output, and that’s actually good: it forces Guild users to stay in the creation process, driving them toward an outcome instead of turning Ai into a clickbait mill.

Hopefully, this and uploading more of his material to Ai will greatly expand the forward view from Stu’s important work.

Along the same lines, I would sure love for Ai to give a forward view of the future – to users bright enough to cope with it – by doing a machine-level integration of forward-looking views from Farsight and from Clif High’s work – all integrated with Stu’s work.  Toss in our own econometric results and the future becomes less a “survival problem” and more a simplified “personal navigation problem.”

That would be neat – and even life-saving.  And don’t forget!

Browser Prep for AI Work

Why

A clean browser minimizes distractions, avoids corporate feature creep, and ensures you see full AI model options like GPT-5.

Step 1 Strip Out Vendor Add-ons

Turn off sidebar Copilot in Edge settings
Disable preloaded fast browsing under Privacy, Search, and Services
Block third-party cookies

Step 2 Privacy and Tracking

Set tracking prevention to strict
Clear cookies and cache for chat.openai.com if the model menu seems incomplete
Use an ad or tracker blocker such as uBlock Origin or Ghostery

Step 3 Direct Model Access

Force GPT-5 to show by visiting https://chat.openai.com/?model=gpt-5
Bookmark this link for one click entry

Step 4 Consistency Across Browsers

Apply the same rules in Chrome, Firefox, or Brave
In Brave or Firefox toggle block fingerprinting
In Chrome create a dedicated profile just for AI work with no plugins or Google login

Step 5 Save and Index Outputs

Always download or copy outputs to local files such as Word, Markdown, or plain text
Use consistent filenames like projectname_date_v1.docx
Maintain an index document so nothing gets lost

~Anti-Dave  (who is? Him.)

Is AI Really NZT-48?

Introduction

In 2011 the film Limitless gave the world NZT-48, a fictional smart drug that promised perfect recall, instant comprehension, and superhuman creative output. Bradley Cooper’s character swallowed a pill and went from a blocked writer to a financial savant overnight. The fantasy resonated because it tapped a primal human wish: the ability to break through our biological bottlenecks and become something more than we are.

Today we live in an age where artificial intelligence has begun to occupy that same imaginative space. AI is marketed as an amplifier of human potential, a tool that can write, design, forecast, or diagnose at speeds that outpace even the most gifted human specialists. For some, this is exhilarating. For others, it is deeply unsettling. The real question is whether AI functions as a kind of distributed NZT-48—an external pill not swallowed but networked, a synthetic cognition we lean on as if it were our own.

This paper explores the parallels and divergences between the fantasy of NZT-48 and the reality of AI. The goal is not simply to play with metaphor, but to understand what kind of augmentation AI really represents, what its side effects may be, and whether society is prepared for the long-term consequences of outsourcing cognition to silicon companions.

“Be seated. Buckle in, shut up, and listen…”

The Fantasy of NZT-48

NZT-48 embodies three promises: perfect memory, instantaneous synthesis of new ideas, and total focus. The drug eliminates human hesitation, narrows the gap between perception and decision, and allows every shred of past experience to be summoned at will.

Humans desire this because our limitations are so tangible. We forget. We get distracted. We struggle to hold more than a handful of variables in working memory. We cannot see all the patterns at once. NZT-48 is intoxicating precisely because it solves these constraints in one swallow.

But NZT also comes with costs in the film: addiction, burnout, paranoia, eventual collapse. The pill is double-edged. It accelerates the mind but also destabilizes it. This narrative detail is not an accident; it reflects the intuition that no cognitive boost is free.

What AI Actually Delivers

Artificial intelligence does not live in our bloodstream, but it does mimic the core features of NZT in an externalized way.

AI provides memory augmentation. Vast databases can be queried instantly. A human may forget an obscure historical fact, but an AI retrieval system can deliver it without hesitation. This turns the AI into an external hippocampus, a prosthetic memory bank that appears limitless.

AI provides pattern recognition at scale. Trained on massive datasets, AI can detect correlations and anomalies that escape the human eye. Where NZT gave the user sudden flashes of clarity, AI provides statistical approximations of the same insight by brute force.

AI provides simulation capacity. Humans are limited in how many what-ifs we can juggle at once. AI can run thousands of scenarios, each with different assumptions, generating option spaces that no unaided human could explore.

Yet AI does not replicate emotional weighting, gut intuition, or embodied sense-making. It lacks the grounding of biological life. Where NZT is imagined as a complete upgrade of the human self, AI is more accurately an external scaffold—a tool that complements, but does not merge with, the user.

The Illusion of Limitlessness

There is a trap here. Using AI can feel like using NZT because the speed and fluency of the output exceeds our baseline. The human imagination is quick to interpret that as personal empowerment. But it is borrowed brilliance. The machine is not upgrading the neurons inside the skull; it is providing the illusion of cognitive expansion through external supplementation.

This distinction matters. A writer on NZT writes faster because his brain is running at super speed. A writer using AI writes faster because a second entity is drafting on his behalf. In one case the intelligence is endogenous; in the other it is exogenous. The risk is conflating the two and assuming mastery where none has been gained.

The Side Effects of AI as NZT

If NZT in the film produced physical side effects, AI produces social and cognitive ones. Dependency grows quickly. Skills atrophy when not exercised. A student who uses AI to outline every paper may forget how to outline without it. A lawyer who relies on AI for precedent searches may lose the instinct for where to look.

There is also the danger of hallucination and bias. NZT hallucinated paranoia in the user; AI hallucinates false facts. Both generate artifacts of their augmentation. The human brain is ill-equipped to distinguish truth from plausible-seeming fabrication at the speeds AI operates, making us vulnerable to confidently wrong information.

Finally, there is the communal side effect. NZT’s risks were personal; AI’s risks are distributed. When an AI error propagates through millions of users simultaneously, the impact is not a single person crashing but an entire society veering off course.

The Adaptive Brain and Domain Thinking

One of the subtler promises of NZT was not just more brainpower, but new modes of thought. Users described seeing connections they had never seen before, shifting into a higher-level coherence. In a similar way, AI nudges human cognition into more object-oriented forms.

Rather than memorizing linear sequences, humans interacting with AI begin to think in modular queries, reusable prompts, and domain objects. The machine fills in the connective tissue. We ask for transformations, mappings, optimizations—forms of reasoning that are object-like and modular. This may represent the beginnings of a new cognitive style, one where humans and machines co-create thought in a different topology than before.

Is AI Really NZT-48?

The answer is yes and no. Yes, in that AI delivers the functional equivalent of NZT’s promises: more memory, more synthesis, more focus, more speed. No, in that it does not upgrade the wetware inside our heads. Instead it sits outside, on a server, delivering its brilliance through an interface. It is not a pill but a portal.

AI is not your NZT-48; it is our NZT-48. The augmentation is distributed. You plug into the cloud and gain superpowers, but so does everyone else. It is a collective pill swallowed simultaneously by billions. The side effects are therefore also collective: dependency, bias propagation, collapse of skill baselines.

Conclusion

NZT-48 was fiction, but the fantasy was prophetic. Humanity has always searched for ways to overcome its biological ceilings. Artificial intelligence is the first tool to genuinely feel like the realization of that dream. It is fast, fluent, dazzling, and—like NZT—deeply addictive.

But we must be clear: AI is not limitless cognition inside the brain. It is a scaffold outside the brain, a rented brilliance. The high is communal, the side effects societal. To call AI NZT-48 is both accurate and misleading. Accurate, because it creates the same felt sense of empowerment. Misleading, because it does not transform human neurons, it only surrounds them with silicon allies.

The deeper question is whether this external augmentation will eventually train our inner cognition into new shapes. If using AI reshapes how we learn, imagine, and organize knowledge, then perhaps, over time, we will develop the very neural changes NZT promised. In that sense AI may not just be today’s NZT-48—it may be the prelude to an actual evolutionary leap in human thought.

The pill is no longer swallowed. The pill is the network. And the only real question left is: who controls the prescription, and who gets cut off from the supply?

The Model is Opening

This paper – and the concept of human-AI collaboration – is already throwing off whole new takes on history and projecting our future.

One such example is a paper (in process) with a couple of MD’s I know.  Carries this modest abstract:

“The authors present a novel integrative framework suggesting that global social differentiation is deeply linked to sub-regional nutritional adaptations across evolutionary time. Specifically, ingestion of particular staple foods (e.g., grains) within continental-scale regions drove differentiated gut microbiome ecologies, which in turn subtly modulated gut–brain axis signaling. Over generations, these food-source-driven microbiome variations contributed to “adaptive brain” functions, shaping cognition, temperament, and social organization. This framework further posits a continuum between food-source chemistry, allergic responses, and adrenal adaptations as genetic and epigenetic modifiers. Recent clinical evidence demonstrating controlled microdosing of allergens (e.g., peanut immunotherapy) shows that adaptive shifts can occur even within a single generation, suggesting a scalable mechanism. By correlating these findings with Tainter et al.’s theories on societal complexity, we hypothesize that low-level allergen exposure across generations, combined with regional biochemical food inputs, was a driver of both macro-civilizational developments and finer-grained skill differentiation. The result may have been the emergence of domain-specific cognitive capabilities, which in modernity are beginning to evolve toward novel paradigms such as “object-oriented thinking.” This view underscores the long-term importance of low-dose nutritional experimentation as a tool for guided human adaptivity.”

In short, the old medical saw “You are what you eat” now comes into clearer focus as “You are what you eat…over time.”

If that ain’t NZT-like, I dunno what is?

Anti-Dave