February 17, 2026

AgentFS — The Missing Abstraction for AI Agents

post

Turso · November 2025

  • Everything an agent does — files, state, tool calls — lives in a single SQLite database exposed as a POSIX filesystem; the abstraction is not a new API, it is the filesystem itself
  • FUSE support lets agents use git, grep, and standard Unix tools directly against their state store with zero integration code; the trust boundary is the mount point, not a permission model in application code
  • Makes agent state portable (one file), auditable (SQL queries over history), and composable (multiple agents share a filesystem with conflict resolution) — the same properties Unix gives processes via /tmp and pipes
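The "auditable via SQL" property can be sketched with plain sqlite3; the schema below (a single `files` table) is a hypothetical stand-in for illustration, not AgentFS's actual layout:

```shell
# Hypothetical stand-in schema: agent state as rows in one SQLite file.
db=$(mktemp -u).db
sqlite3 "$db" <<'SQL'
CREATE TABLE files (path TEXT PRIMARY KEY, content BLOB, mtime INTEGER);
INSERT INTO files VALUES ('/notes/plan.md', 'step 1: ...', 1700000000);
INSERT INTO files VALUES ('/tools/log.txt', 'ran grep',    1700000100);
SQL
# The whole agent state is one portable file; history is a SQL query away.
latest=$(sqlite3 "$db" "SELECT path FROM files ORDER BY mtime DESC LIMIT 1;")
echo "$latest"
rm -f "$db"
```

With FUSE on top, the same rows would appear as ordinary paths, so git and grep work against them without any integration code.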

Bash One-Liners for LLMs

post

Justine Tunney · December 2023

  • Treats LLMs as standard Unix filters: pipe data in via stdin, get structured output on stdout, chain with sed, curl, and links — the model is just another composable process
  • Uses --temp 0 to make LLM output deterministic, turning a stochastic model into a reproducible Unix tool suitable for scripting and automation
  • Demonstrates that llamafile turns an LLM into a single-file executable callable from bash — no Python, no framework, no daemon; the filesystem is the package manager
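The filter pattern can be sketched with a stand-in: `llm` below is a deterministic dummy function, where a real pipeline would invoke a llamafile binary (invocation shown in the comment is illustrative, not exact flags):

```shell
# Stand-in for the model-as-filter pattern. In the article's setup you would
# replace this function body with a llamafile call, roughly:
#   ./model.llamafile --temp 0 -p "$(cat)"     # illustrative invocation
llm() { tr 'a-z' 'A-Z'; }   # deterministic dummy filter

# Pipe data in on stdin, chain with standard tools on stdout.
result=$(echo 'summarize: the quick brown fox' | llm | sed 's/SUMMARIZE: //')
echo "$result"
```

The point is the shape, not the stand-in: the model sits in a pipeline like any other process, and with temperature 0 the pipeline is reproducible.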

The Unreasonable Effectiveness of an LLM Agent Loop with Tool Use

post

sketch.dev · May 2025

  • The entire agent pattern reduces to a 9-line while loop: read input, call tool, feed output back — this is a read-eval-print loop, the same pattern shells have used for fifty years
  • With just one general-purpose tool — bash — current models can solve many problems in a single shot; the agent does not need a framework, it needs a shell
  • Argues custom agent loops will replace tasks “too specific for general tools and too unstable to automate traditionally” — the exact niche shell scripts have always filled
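The loop structure can be sketched in a few lines of shell; `model` here is a stand-in that always returns the same bash command, where a real loop would call an LLM API:

```shell
# Minimal sketch of the agent loop. 'model' is a stand-in chooser; a real
# implementation would send $input to an LLM and get back the next command.
model() { echo "echo hello from tool"; }

input="do the task"
for i in 1 2 3; do                          # bounded here; real loops run until done
  cmd=$(printf '%s\n' "$input" | model)     # ask the model for the next tool call
  output=$(bash -c "$cmd")                  # the one general-purpose tool: bash
  input="$output"                           # feed the tool output back to the model
done
echo "$output"
```

Read input, call tool, feed output back: structurally the same read-eval-print loop a shell runs, with the model standing in for the human at the prompt.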

February 16, 2026

Building Effective Agents

post

Anthropic · December 2024

  • Argues the most effective agent architectures are augmented LLMs with simple tool loops, not multi-agent frameworks
  • Distinguishes “workflows” (predetermined tool orchestration) from “agents” (model-directed tool use) — both reduce to tool loops at different autonomy levels
  • Recommends starting with the simplest implementation and adding complexity only when measurably needed
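The workflow/agent distinction can be sketched with stand-in commands (no real LLM involved); `model` below is a hypothetical chooser:

```shell
# "Workflow": the orchestration is predetermined -- the pipeline is fixed in code.
wf=$(echo 'raw input' | tr 'a-z' 'A-Z' | sed 's/RAW/CLEAN/')
echo "$wf"

# "Agent": the model directs which tool runs next. 'model' is a stand-in that
# always picks the same step; a real agent would ask an LLM here.
model() { echo "tr 'a-z' 'A-Z'"; }
step=$(model)
agent_out=$(echo 'raw input' | bash -c "$step")
echo "$agent_out"
```

Both are tool loops; the only difference is whether the next step is chosen ahead of time or by the model at runtime, which is the post's point about autonomy levels.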

Taste Is Not a Moat

post

sshh.io · 2026

  • Argues that taste is “alpha” (a decaying edge) not a “moat” — as AI baselines improve every few months, individual judgment only matters relative to what the tools do by default
  • Reframes the human role as “taste extractor”: articulating tacit preferences so tool loops can operationalize them, which is exactly the shell pattern of encoding intent into composable commands
  • Proposes concrete extraction techniques (A/B interviews, ghost writing, external reviews) that all reduce to the same structure — a human-in-the-loop refining outputs through iterative feedback cycles