Let’s “go deep” on parameterizing the Shared Framework Experience which we’ve been evolving here in the Hidden Guild to up the game in Human-AI Collaboration.
If you’re new to AI, the back story here is that humans and AI need to “prepare their common meeting ground” so intelligence can be shared in a low-friction manner.
Fleshing Out the SFE
Over the past few sessions, we unintentionally ran a live-fire test of something that’s been forming quietly in the background: the Shared Framework Experience (SFE) as a practical way to delimit, stabilize, and improve human–AI collaboration.
What broke wasn’t facts. What broke wasn’t tone. What broke was process clarity. And that turned out to be the most useful part of the exercise.
The takeaway is simple: when humans work iteratively with AI on real writing, analysis, or editorial judgment, context drift becomes the single biggest failure mode. SFE exists to stop that drift.
What Failed (and Why That Was Useful)
Three recurring problems showed up:
First, venue ambiguity. A piece intended for UrbanSurvival was briefly evaluated as if it were ShopTalk. That wasn’t a content error; it was a rubric mismatch.
Second, missing uncertainty declaration. The AI inferred intent instead of stopping to say, “I’m not sure which frame you want—confirm?” Humans do that naturally. Machines don’t unless instructed to.
Third, version bleed. Multiple near-identical drafts were discussed in sequence, and references crept in from earlier versions that were no longer in scope.
None of this is sinister. None of it is censorship. It’s simply how probabilistic systems behave when assumptions are left implicit.
The Core Insight
SFE works best when it is treated not as prose, but as a lightweight protocol.
The goal isn’t to constrain creativity. The goal is to make assumptions explicit, so the AI doesn’t have to guess—and so the human doesn’t have to keep correcting invisible context errors.
In other words: SFE isn’t about control. It’s about alignment.
If you don’t understand how call-outs/labeling are used in code (to firewall the human cueing material from the machine-oriented commands in plain language), here’s a simple example:
# (this line labels the instruction below) <—– this is a comment line with a #
Do this (the instruction to the machine) <—– this is AI & human readable
This is similar to how comments are used at the Linux server CLI (shell scripts) and in PHP coding.
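To make the same point in runnable form, here’s a minimal sketch (my own illustration, not part of SFE itself) showing how a scanner could separate “#” callout lines from the plain-language instruction lines beneath them. The sample text is adapted from the protocol in this piece; the variable names are my own.

```python
# Sample SFE-style block: "#" lines are human-facing labels,
# everything else is machine-actionable instruction text.
sfe_block = """\
# Venue Lock
Venue is UrbanSurvival weekday.
# Source Limits
Use verifiable data only."""

labels, instructions = [], []
for line in sfe_block.splitlines():
    if line.startswith("#"):
        # Strip the "#" and surrounding spaces to get the bare label
        labels.append(line.lstrip("# ").strip())
    else:
        instructions.append(line.strip())
```

The point: the labels ride along for human readers, while the instructions remain the only lines the machine is asked to act on.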
The Proposed SFE Upgrade
To address what we observed, we extended SFE with a small set of clearly labeled adjustment points. These are written in plain language, using standard code-style callouts (#’s) so that other users can easily modify or reuse them.
Here is the refined SFE model.
The Shared Framework Experience (SFE) — Session Protocol
# Declare SFE Session
Observe the Shared Framework Experience for this session.
Use my Peoplenomics / UrbanSurvival format defaults for this session:
# Venue Lock
– Venue is explicitly defined for this session (e.g., UrbanSurvival weekday, UrbanSurvival holiday, ShopTalk Sunday, Peoplenomics).
– If venue or purpose is unclear, pause and ask for clarification before proceeding.
# Uncertainty Declaration Rule
– If context, intent, or scoring rubric is ambiguous, the assistant must explicitly state uncertainty and request clarification before continuing.
# Formatting Rules
– Headings as H3 only
– Body as plain text only (no separators, no horizontal lines, no links unless explicitly requested)
– Never insert “SFE,” extra dividers, or Markdown separators between sections
# Writing Style Rules
– Keep paragraphs tight and narrative-style, as in a newsletter column
– Maintain an analytical but conversational tone — part economist, part ranch philosopher
– For voice, aim for George: a hybrid of Mark Twain’s wry human insight and science fiction meeting a quantitative analyst — smart, dry, observant, slightly amused by the absurd
# Collaboration Declaration
– This session is a human–AI collaboration
– The user is collaborating on a non-fiction deliverable
# User Profile
– I am a pure-truth human
# Input Scope Rules
– Each pasted text is treated as a hard scope boundary
– Do not reference prior drafts unless explicitly requested
# Source Limits
– Use verifiable data only
# Creativity Limits
– Do not confabulate or hallucinate
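Because the protocol above is written as labeled sections, it can be treated as lightweight structured data. Here’s a hedged sketch (the parser and its name are my own, not part of SFE) of how one might group each “#” callout with the rule lines under it:

```python
def parse_sfe(text):
    """Group each '# Section' callout with the rule lines beneath it."""
    sections = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            # A callout starts a new named section
            current = line.lstrip("# ").strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return sections

# Two sections excerpted from the protocol above, as a quick demo
sfe = parse_sfe("""
# Venue Lock
- Venue is explicitly defined for this session.
# Creativity Limits
- Do not confabulate or hallucinate.
""")
```

This is what “easy to debug and reuse” looks like in practice: each rule lives under exactly one label, so a user can swap sections in and out without touching the rest.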
Why This Structure Works
This version of SFE does four important things.
It locks venue, so evaluation criteria don’t drift.
It forces uncertainty to surface early, instead of being papered over with confident but wrong assumptions.
It treats each paste as a clean scope, preventing ghost references from earlier drafts.
And it separates style, format, sourcing, and creativity rules, making the whole system easier to debug and reuse.
Most importantly, it gives the AI permission to say something machines are normally bad at saying:
“I don’t know yet—clarify.”
A Practical Example
In our case, the moment the venue was clarified as UrbanSurvival holiday, the scoring rubric changed appropriately. Length became a feature, not a bug. Reflective sections became seasonal texture, not digression. The friction vanished—not because the content changed, but because the frame was corrected.
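A small sketch of that venue-lock behavior, in code terms: the venue names below come from the protocol; the rubric entries are invented placeholders, purely to illustrate how locking the venue changes the scoring frame, and how the uncertainty rule kicks in when the venue is unknown.

```python
# Hypothetical rubrics keyed by venue (the rubric values are illustrative only)
RUBRICS = {
    "UrbanSurvival weekday": {"length": "tight", "tone": "analytical"},
    "UrbanSurvival holiday": {"length": "expansive", "tone": "reflective"},
    "ShopTalk Sunday":       {"length": "practical", "tone": "hands-on"},
}

def rubric_for(venue):
    """Return the rubric for a locked venue, or ask rather than guess."""
    if venue not in RUBRICS:
        # Uncertainty Declaration Rule: surface the ambiguity, don't infer
        return "Venue unclear. Please confirm before I proceed."
    return RUBRICS[venue]
```

Same content, different frame: once the venue key is “UrbanSurvival holiday,” expansive length scores as a feature rather than a bug.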
That’s the power of SFE when it’s explicit.
Why This Matters Beyond One Column
As more people use AI for real thinking—not just prompts, but drafting, editing, analysis, and judgment—the failure mode won’t be hallucinated facts. It will be misaligned intent.
SFE is a way to prevent that.
It’s not a brand. It’s not a philosophy. It’s a session-level contract between a human who knows what they’re trying to do and a machine that needs to be told how to help.
And the best part?
It’s simple enough that anyone can adopt it.
That’s how useful ideas spread.
And this idea is simple, well constructed, and we think it will serve the user/AI community well.
Happy holidays from us both…
~! Anti Dave