Open-source research on three hard problems: temporal context graphs, multi-agent coordination, and AI-augmented decision-making.
AI systems need more than prompts and models. They need context about what changed and when, coordination mechanisms for multi-agent workflows, and judgment frameworks for high-stakes decisions.
These three projects explore each layer—from extracting structured knowledge from conversations, to coordinating agent swarms, to testing AI architectures against real decisions with known outcomes.
Temporal knowledge graphs that track how understanding evolves
Multi-agent systems that collaborate, track, and visualize work
Testing AI reasoning against high-stakes human decisions
What AI knows, when it knew it, and how it changed.
Most AI memory systems store snapshots. They remember what was said, but not when understanding shifted or why it matters now. Engram is different: it tracks how knowledge evolves.
Imagine a conversation over weeks. A client mentions they prefer Nike running shoes. Later, they switch to Adidas because of arch support. A typical AI memory system retains only the latest preference. Engram remembers the evolution—when it changed, what triggered it, and what it replaced. So when you ask "what did they prefer before the switch?" you get Nike, not confusion.
This is a temporal context graph: not just facts, but the relationships between facts and how they change over time. Built for conversations, not documents. To our knowledge, the only open-source system that does this.
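The core idea can be sketched as a bitemporal fact store: each fact carries a validity interval, asserting a new value closes the old one, and queries are answered "as of" a point in time. This is a minimal illustration of the concept, not Engram's actual API — the class and method names here are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Fact:
    subject: str
    predicate: str
    value: str
    valid_from: datetime
    valid_to: Optional[datetime] = None  # None means still current

class TemporalGraph:
    """Toy temporal context graph: facts are never overwritten, only superseded."""

    def __init__(self):
        self.facts: list[Fact] = []

    def assert_fact(self, subject: str, predicate: str, value: str, at: datetime) -> Fact:
        # Close out any currently-valid fact for the same (subject, predicate),
        # preserving the old value and recording when it stopped being true.
        for f in self.facts:
            if f.subject == subject and f.predicate == predicate and f.valid_to is None:
                f.valid_to = at
        new = Fact(subject, predicate, value, valid_from=at)
        self.facts.append(new)
        return new

    def value_at(self, subject: str, predicate: str, at: datetime) -> Optional[str]:
        # Return the value that was valid at the given moment, if any.
        for f in self.facts:
            if (f.subject == subject and f.predicate == predicate
                    and f.valid_from <= at and (f.valid_to is None or at < f.valid_to)):
                return f.value
        return None
```

With this, the shoe example above becomes a point-in-time query: assert "prefers Nike" in January and "prefers Adidas" in February, then ask `value_at(..., mid-January)` and get Nike back, while a query for today returns Adidas.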
Coordination infrastructure for AI agent swarms.
As AI agents proliferate, we need infrastructure to coordinate them—track what each agent is doing, visualize work in progress, and manage shared context. Swarm Control explores patterns for multi-agent coordination between AI systems and human developers.
Weekend prototypes exploring: event hooks from agent activity, dashboard UIs for swarm monitoring, three-tier memory systems, and timeline visualizations for agent workflows.
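The event-hook and monitoring patterns above can be sketched as a tiny in-process event bus: agents emit activity events, and a monitor keeps both a per-agent status (what a dashboard would render) and an ordered event log (what a timeline visualization would consume). This is an illustrative sketch under assumed interfaces, not Swarm Control's implementation; `SwarmMonitor` and its methods are hypothetical names.

```python
import time

class SwarmMonitor:
    """Minimal sketch of swarm observability: collect agent events,
    track current status per agent, and replay per-agent timelines."""

    def __init__(self):
        self.events: list[dict] = []   # append-only activity log
        self.status: dict[str, str] = {}  # latest event type per agent

    def emit(self, agent_id: str, event_type: str, detail: str = "") -> None:
        # An agent hook would call this on state changes (started, blocked, finished).
        record = {"agent": agent_id, "type": event_type,
                  "detail": detail, "ts": time.time()}
        self.events.append(record)
        self.status[agent_id] = event_type

    def timeline(self, agent_id: str) -> list[dict]:
        # Ordered history for one agent, suitable for a timeline view.
        return [e for e in self.events if e["agent"] == agent_id]
```

A dashboard UI would poll `status` for the live view and `timeline` for drill-down; a three-tier memory system would layer persistence behind the same emit interface.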
Testing AI architectures against decisions with known outcomes.
The WeWork/BowX SPAC merger (October 2021) is a case study in judgment failure. The numbers said proceed. Many sophisticated investors agreed. The outcome was catastrophic. We're testing whether AI architectures can surface signals that human decision-makers rationalized away.
This is applied research: feed pre-close documents to multiple AI systems and evaluate what they catch, what they miss, and how they compare to human judgment under uncertainty.
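The evaluation loop described above — same documents, multiple systems, scored against a known outcome — can be sketched as a small harness. The interface here is assumed for illustration: each model is a callable returning the set of signals it flagged in a document, and `known_red_flags` encodes what hindsight says should have been caught.

```python
def evaluate_models(documents, models, known_red_flags):
    """Run each model over the same pre-close documents and score which
    known red flags it surfaced. `models` maps a model name to a callable
    `doc -> set of flagged signals` (a hypothetical interface)."""
    results = {}
    for name, analyze in models.items():
        flagged = set()
        for doc in documents:
            flagged |= analyze(doc)  # union of signals across all documents
        caught = flagged & known_red_flags
        missed = known_red_flags - flagged
        results[name] = {
            "caught": sorted(caught),
            "missed": sorted(missed),
            "recall": len(caught) / len(known_red_flags),
        }
    return results
```

The interesting comparisons are in `missed`: signals the documents contained but every system (and, historically, many human decision-makers) rationalized away.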
| Project | Layer | Problem | Status |
|---|---|---|---|
| Engram | Context | AI systems lack temporal understanding of evolving knowledge | In Dev |
| Swarm Control | Coordination | No infrastructure for multi-agent AI workflows | Prototype |
| Deal Signals | Judgment | Testing whether AI can augment human decision-making | Active |
| MRI Triage | Judgment | Medical imaging urgency detection (exploratory) | Archive |
30 minutes. Blunt feedback. Examples of where this breaks. No pitch. No deck. Just conversation.