Markdown Tidy
An experiment in structured context.
A minimal intervention to maintain clean input for both humans and AI agents.
Systems where AI judges and humans calibrate.
Building experimental AI tools to explore autonomous reasoning and human-in-the-loop systems.
Shipping raw, failing fast, and iterating in public.
We build small tools to test our hypotheses on human–system interaction.
Most will fail, but the logs will remain.
An experiment in attention control.
A prototype that resists systemic interruption and reclaims human attention.
An experiment in reasoning transparency.
Tracing how agents form and adjust decisions.
Freedom is sustained by order.
We design structures that enable calm, deliberate action.
TryBase explores AI judgment under human calibration: how we can trust AI agents with real-world decisions without losing control of the outcome.
Making agent reasoning transparent.
Measuring and preventing intent drift.
Designing protocols for human oversight.