temporal intelligence for AI?

A viable system is one that exists in the real world, where time passes. A viable system is inherently intelligent, and a viable system that accomplishes tasks more efficiently (in time, money, or other costs) is more intelligent than another.

LLMs are clearly “intelligent” in some sense, but clearly not viable systems – they’re effectively stateless. You can add memories or a filesystem and tools, but viability needs more than that. I think the “more” is a meta-harness that the system can iterate on itself. This is not a new idea, but I think it’s increasingly practical with today’s commoditized LLMs and agent CLIs.

Note: For economic reasons (subsidized plans), I think relying on existing agent CLIs (claude, codex, copilot) is the optimal path for now. But we should be prepared to replace them with our own minimalist core “LLM with tool use in a loop” agent harness CLI that calls AI service providers (OpenAI, Anthropic, public clouds) for the LLMs. Even further out, those may be replaced by self-hosted or local LLMs (probably not anytime soon).
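The minimalist core is small enough to sketch. Below is a hedged illustration of an “LLM with tool use in a loop” harness, assuming a convention where the model replies with JSON to request a tool and with plain text to give a final answer; the tool registry, message shapes, and the scripted stand-in for a real provider API call are all hypothetical:

```python
import json

# Hypothetical tool registry: name -> callable. A real harness would
# expose file, shell, and web tools; a pure function keeps this
# sketch self-contained.
def word_count(text: str) -> str:
    return str(len(text.split()))

TOOLS = {"word_count": word_count}

def agent_loop(model, task: str, max_steps: int = 5) -> str:
    """Run the model until it answers in plain text instead of
    requesting a tool, or until max_steps is exhausted."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = model(messages)
        try:
            call = json.loads(reply)  # JSON reply = tool request
        except ValueError:
            return reply              # plain text = final answer
        result = TOOLS[call["tool"]](**call["args"])
        messages.append({"role": "tool", "name": call["tool"],
                         "content": result})
    return "stopped: max_steps reached"

# Scripted stand-in for a real LLM API call (illustrative only):
# first asks for a tool, then answers using the tool's result.
def scripted_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return json.dumps({"tool": "word_count",
                           "args": {"text": "a viable system"}})
    return "answer: " + messages[-1]["content"]

print(agent_loop(scripted_model, "count the words"))
```

The meta-harness idea is that the system could rewrite this very loop – add tools, change the stopping rule, swap the model provider – rather than having those choices fixed by us.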