Writing
- Insights 16: Claude Code — A Playbook for the Trillion-Dollar Opportunity
Foundation Capital's "Context Graphs" thesis names a trillion-dollar prize. A builder's blueprint for capturing it, drawn from an unlikely source.
- The Claude Code Hypothesis
ChatGPT, Perplexity, NotebookLM — they're all demos. Claude Code is the actual product for knowledge work. A hot take on why the magic is context, not code.
- AI2 Incubator Eats Its Own Dog Food: The AI-Native Chapter
Moving into AI House at Pier 70, and committing to eat our own dog food. Two experimental AI-native projects for the AI2 Incubator: systematic idea discovery, and customer discovery simulators for founders.
- Insights 15: The State of AI Agents in 2025 — Balancing Optimism with Reality
2025, the year of AI agents, or the year of inflated agentic expectations? A deep research report on where agents actually stand.
- Open Versus Closed AI Development: A Balanced Perspective
The open-vs-closed AI debate is often idealistic and self-interested. A deep-research take that cuts through the noise — generated by OpenAI's Deep Research, lightly annotated.
- Insights 14: Navigating Up the Slope of Enlightenment
Launching Harmonious AI — how to go from picking an idea to raising seed in 2024.
- Insights 13: Trough of Disillusionment
2024 will be a tough year for GenAI as we struggle to teach LLMs to learn from mistakes. AI development will become accessible to a broader audience.
- Insights 12: Generative Voiceover, the Next Multi-Billion Dollar Opportunity
The grand challenge of creating AI capable of true voiceover — approaching the infinitely rich, nuanced, and expressive qualities of human spoken communication.
- Insights 11: Peak of Inflated Expectation?
A review of AI progress six months after ChatGPT's release. Agents, community models, AI infrastructure and tooling, and AI safety.
- Insights 10: Conversational Programming, AI Assistants, Foundation Model Operations
A survey of startup opportunities around foundation models: AI assistants, foundation model operations (FMOps), and "conversational programming" — what the industry would later call vibe coding. Written before Cursor, Devin, and the coding-agent wave.
- Insights 9: Stable Diffusion, Code Generation, Oren Joining Incubator
The incubator adds a world-renowned AI expert and entrepreneur as technical director, and our portfolio raised $35M over the summer. Plus Stable Diffusion and code generation.
- Insights 8: Founder Technical Toolkits, Flowdex, Yoodli, FM for Commerce
Tools handy for founders building AI-first companies, the launch of AI-powered note-taking tool Flowdex, and foundation models applied to commerce.
- Insights 7: Open Source Large Models, Vespa
Open-sourced models — GPT-J, BLOOM, PolyCoder. Plus the Vespa search engine and few-shot entity extraction with Cohere.
- Insights 6: Measure Labs, Birch, and Augment
Measure Labs, Birch, and Augment raised seed rounds. Updates on foundation models from Big Tech.
- Insights 5: WhyLabs, Lexion, and WellSaid Labs
WhyLabs, Lexion, and WellSaid Labs each raised A rounds. Why foundation models need to be large.
- Insights 4: Yoodli Unstealthed, Large Language Models, Task-Centric AI
Yoodli comes out of stealth, TheSequence profiles WhyLabs, and a brief history of large language models — calling out the rise of LLMs more than a year before ChatGPT arrived.
- Insights 3: Ozette, Modulus, and the Transformers Effect
At the intersection of AI and the life sciences — Ozette and Modulus. Plus the rise of Transformers beyond NLP.
- Insights 2: Weakly and Self-Supervised AI, Perceiver IO, OpenAI Codex
Profiles of alumni Emad Elwany and Greg Finak, weakly-supervised and self-supervised learning, DeepMind's Perceiver IO, and OpenAI Codex.
- ML In Startups: Some Observations
Observations on building AI-first products in early-stage startups: bootstrap with pre-trained models, aim for minimum algorithmic performance, measure on real product data, and lean on weak supervision.
- First Insight: Modulus, WellSaid, Applied AI
Michael Carlon joins the incubator team; Modulus Therapeutics' seed round and WellSaid's Series A; modeling with limited labeled data, and neural networks catching up to tree-based methods.
- Conversational AI: What To Expect In The Next Five Years
By 2022, chatbots will take coffee orders, help with tech support, and recommend restaurants — albeit without small talk and good humor. A look at task-oriented dialog agents and why the open-ended chat problem is much harder.
- alexafsm: A Finite-State Machine Python Library for Building Complex Alexa Skills
An open-source Python library from the Allen Institute for AI for modeling Alexa dialog agents as finite-state machines, with first-class concepts for states, attributes, transitions, and actions.