Why Self-Ownership Has to Be the Answer
For the deeper case on the principle behind all of this (why ownership of your own cognition is the fight that defines the next decade of work), read John Wolpert’s How to Own Your Brain at Work. Wolpert (Chief Technical Product Officer, startup founder, AI applications developer, and author of The Two But Rule, Wiley) has been making this argument longer and louder than most. Start there.
Every solution to brain-scraping that keeps the expert’s cognition in someone else’s cloud is a negotiation with the people doing the scraping. That’s the wedge.
The problem isn’t surveillance. Surveillance is the symptom. The actual business model is capture, converting decades of human judgment into model weights that can be copied, redeployed, and rented forever, while the human who produced the judgment gets paid once or not at all. This is structurally different from prior labor extraction. A factory worker’s labor disappears when the shift ends. An expert’s cognition, once captured into weights, becomes infinitely reproducible inventory. Capture-once, monetize-forever is the asymmetry that makes brain-scraping a venture-scale business.
Look at the alternatives on offer:
Regulation. Slow, jurisdictional, already lapped. Meta’s MCI runs in the US precisely because EU rules block it. The data finds the permissive jurisdiction.
Unions and collective bargaining. Useful, but addresses pay, not the underlying flow of cognition into someone else’s training pipeline. You can negotiate your hourly rate and still wake up replaced.
Marketplaces with “expert rights” frameworks, clean rooms, royalty stacks, residual cashflows. Better than the status quo. Still middleman-shaped. The expert is a tenant in someone else’s architecture, with rights enforced by someone else’s contracts on someone else’s servers. [Inference] Every tenant contract eventually gets renegotiated in the landlord’s favor. Ask any creator who built on a platform.
Self-hosted brains break the pattern at the physics layer, not the contract layer. That’s the difference that matters.
What it means in practice: the agent’s weights live on hardware the expert controls (a laptop, a personal server, a co-located box). Training data is the expert’s own corpus: notes, case files, drafts, decisions. Inference happens locally, or through an encrypted relay that exposes nothing to the relay operator. When the expert engages a client, the agent leaves the machine only as outputs (answers, analyses, deliverables), never as weights, never as training signal. When the engagement ends, nothing has been deposited anywhere it can be re-mined.
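A minimal Python sketch of that boundary, with hypothetical names throughout: the model itself is stubbed, but the no-export rule is made structural rather than contractual, so weights cannot be serialized off-machine even by accident.

```python
import pickle


class LocalExpertAgent:
    """Hypothetical sketch of the self-hosted boundary.

    In practice this would wrap a local inference runtime running an
    open-weights model on the expert's own hardware; here the model is
    a stand-in so the boundary itself is the point.
    """

    def __init__(self, weights):
        self._weights = weights  # lives only on hardware the expert controls

    def deliver(self, prompt: str) -> str:
        # Only finished outputs (answers, analyses, deliverables) cross
        # the boundary to a client -- never weights, never training signal.
        return f"analysis of: {prompt}"  # stand-in for real inference

    def __reduce__(self):
        # Refuse serialization outright: the weights have no export path.
        raise TypeError("weights do not leave this machine")


agent = LocalExpertAgent(weights=object())
print(agent.deliver("review this contract"))
try:
    pickle.dumps(agent)
except TypeError as err:
    print(err)
```

The deliberate choice here is making the rule enforceable by the object itself, not by a clause someone has to honor.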
The asymmetry that powers brain-scraping depends on the data leaving the expert’s environment. Remove that step and the unit economics collapse. There is no inventory to clone. No corpus to dilute. No “expert” abstraction sitting on someone else’s GPUs while the original gets a thank-you and a severance package.
Honest counterarguments:
Compute. Frontier-grade reasoning still wants serious hardware. But the curve is moving fast: small models with strong reasoning, on-device inference on consumer silicon, tool-calling architectures that offload heavy lifting to stateless APIs. The professional-tier laptop of 2028 likely runs a serviceable expert agent. The hardware floor is dropping faster than the regulatory ceiling is rising.
Distribution. A self-hosted expert is invisible to clients who shop on platforms. Solvable with federated discovery, directories that index agents without ingesting their weights. Mechanically similar to how email finds you without Gmail owning your inbox.
Quality gap. The hyper-scaler agent will be smarter than yours for a while. True. But for high-stakes professional work (M&A, complex litigation, specialist medicine, security review), “narrower agent with the actual expert in the loop and full IP ownership” beats “smarter generic agent owned by the firm trying to replace you.” Stakes select for accountability, and accountability lives with the human who can be sued.
Drive the wedge all the way down: every other model treats the expert as a labor input whose output happens to be cognition. Self-hosting treats the expert’s cognition as capital: owned, depreciable, rentable, inheritable. Capital you keep, not labor you sell once and watch get cloned.
Brain-scraping wins by default. Self-ownership only wins by deliberate choice, by professionals refusing the deal, by tooling getting good enough to make refusal practical, by clients learning that “the expert and their agent, together, on their hardware” is a better product than “the firm’s agent trained on the expert’s exit interview.”
The default settings of the next five years are being chosen right now. If “your brain on their cloud” becomes normal, the professional class becomes the next gig economy, then the next obsolete category. If “your brain on your machine” becomes normal, the hyper-scalers become a service tier, not a landlord.
Pick which world you want to wake up in. Then build like it.
What that looks like, concretely, this week:
Stop feeding the scrapers. Read your employment agreement and any “AI training” or “model improvement” clauses. If there’s a Mercor-style side gig in front of you, price the perpetual cloning of your judgment into the rate — or walk.
Stand up a local model. Install Ollama or LM Studio. Pull a capable open-weights model (Llama, Qwen, Mistral, gpt-oss). Run it. Confirm it works offline. This is the one-hour version of the future.
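A sketch of that one-hour version: once Ollama is installed and a model is pulled, its local HTTP API (by default at localhost:11434) can be called from a few lines of standard-library Python. The model tag `llama3.1` is just an example; substitute whatever you pulled.

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing leaves localhost.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(prompt: str, model: str = "llama3.1") -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama server."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )


def ask_local(prompt: str, model: str = "llama3.1") -> str:
    """Send the prompt to the locally running model and return its reply."""
    with urllib.request.urlopen(build_request(prompt, model)) as resp:
        return json.loads(resp.read())["response"]


# Example (requires the Ollama server running and the model pulled):
# print(ask_local("Summarize attorney-client privilege in two sentences."))
```

Unplug the network, call `ask_local` again, and it still answers. That is the confirmation the step asks for.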
Build your corpus. Start a private, encrypted directory of your own work product: notes, decisions, drafts, reasoning traces. This is your training data. No one else’s. Back it up to hardware you own.
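A small sketch of the backup habit, assuming nothing beyond the standard library: fingerprint every file in the corpus with SHA-256 so you can later verify that a backup on your own hardware is complete and unmodified. (Encryption of the directory itself is left to full-disk encryption or similar.)

```python
import hashlib
import json
from pathlib import Path


def build_manifest(corpus_dir: str) -> dict:
    """Map each file in the corpus to its SHA-256 digest.

    Comparing manifests between the working copy and a backup reveals
    any file that changed, appeared, or went missing.
    """
    root = Path(corpus_dir)
    manifest = {}
    for path in sorted(root.rglob("*")):
        # Skip the manifest itself so it never fingerprints its own output.
        if path.is_file() and path.name != "MANIFEST.json":
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest


def write_manifest(corpus_dir: str) -> Path:
    """Write the manifest alongside the corpus for later verification."""
    out = Path(corpus_dir) / "MANIFEST.json"
    out.write_text(json.dumps(build_manifest(corpus_dir), indent=2, sort_keys=True))
    return out
```

Run `write_manifest` before each backup, then rerun `build_manifest` on the backup drive and diff the two.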
Fine-tune or RAG against it. Use a local fine-tuning stack (Unsloth, Axolotl, MLX on Apple silicon) or, simpler, a local RAG setup (LlamaIndex, LangChain, AnythingLLM) so your model can actually retrieve and reason over your corpus.
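The libraries named above do retrieval properly, with real embeddings and vector stores; as a sketch of the idea under the hood, here is toy TF-IDF retrieval in plain Python over an in-memory stand-in for your corpus. Names and scoring are illustrative, not any library’s API. The point is that ranking your own documents against a query needs nothing that leaves your machine.

```python
import math
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    """Lowercase alphanumeric tokens; a toy stand-in for real embeddings."""
    return re.findall(r"[a-z0-9]+", text.lower())


def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Return the names of the k corpus documents most similar to the query,
    ranked by TF-IDF cosine similarity computed entirely locally."""
    docs = {name: Counter(tokenize(text)) for name, text in corpus.items()}
    n = len(docs)
    df = Counter()  # document frequency per term
    for counts in docs.values():
        df.update(set(counts))

    def vec(counts: Counter) -> dict[str, float]:
        # Terms appearing in every document get weight zero.
        return {t: c * math.log((1 + n) / (1 + df[t])) for t, c in counts.items()}

    q = vec(Counter(tokenize(query)))

    def score(counts: Counter) -> float:
        dv = vec(counts)
        num = sum(q[t] * dv.get(t, 0.0) for t in q)
        den = (math.sqrt(sum(v * v for v in q.values()))
               * math.sqrt(sum(v * v for v in dv.values()))) or 1.0
        return num / den

    return sorted(corpus, key=lambda name: score(docs[name]), reverse=True)[:k]
```

In a real setup, the retrieved documents are then stuffed into the local model’s prompt, so the model reasons over your corpus without the corpus ever being uploaded.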
Add tools, not data leaks. Wire your agent to APIs through stateless tool calls (MCP servers, function calling) so it can act in the world without uploading your corpus to do it.
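A minimal sketch of the stateless pattern, with hypothetical tool names: the agent emits a JSON call naming a tool and the minimal arguments, and only the tool’s result comes back. Real MCP servers and function calling wrap protocol plumbing around exactly this shape; nothing in the call carries the corpus.

```python
import json

# Hypothetical tool registry: each tool is a pure function of its arguments,
# so a call carries only what the task needs -- never your data.
TOOLS = {
    "word_count": lambda args: {"count": len(args["text"].split())},
    "echo": lambda args: {"echoed": args["message"]},  # stubbed external API
}


def dispatch(call_json: str) -> str:
    """Execute one stateless tool call emitted by the local agent.

    The agent decides what to call and with what arguments; the tool
    returns a result and retains nothing between calls.
    """
    call = json.loads(call_json)
    result = TOOLS[call["name"]](call.get("arguments", {}))
    return json.dumps(result)
```

Swapping a stub for a live API changes the function body, not the boundary: arguments out, result back, no state and no corpus in between.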
Pick a deployment floor. A laptop for now. A Mac mini or a used workstation with a decent GPU as your “always-on expert” tier. A co-located box if your work warrants it. Own the hardware your brain runs on.
Negotiate accordingly. When a client engages you, the deliverable is the output. Not the weights, not the corpus, not the agent. Put it in the contract. “Model and training data remain property of {you}” is a clause, not a moonshot.
Find the others. Federated discovery only works if there’s a federation. Join or start a directory of self-hosted experts in your field. Make it cheaper for clients to find you than to fund your replacement.
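A toy sketch of what such a directory could index, with invented names and reserved `.example` endpoints: public metadata and a route to the agent, never the weights or corpus behind it. This is the email analogy from earlier made concrete, since the directory points at you without owning what runs at the endpoint.

```python
# Hypothetical federated directory: each entry is public metadata only.
DIRECTORY = [
    {"name": "A. Rivera", "field": "securities law",
     "endpoint": "https://agent.rivera.example"},
    {"name": "K. Osei", "field": "medical imaging",
     "endpoint": "https://agent.osei.example"},
]


def find_experts(field_query: str) -> list[dict]:
    """Match clients to self-hosted experts by field.

    The directory routes the request; the expert's own machine answers it.
    """
    q = field_query.lower()
    return [entry for entry in DIRECTORY if q in entry["field"].lower()]
```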
None of these steps are speculative. Every tool and pattern named in the eight steps above exists today. The gap between “possible” and “normal” closes the week professionals decide to close it.


