Notes

Skills May Matter More Than Agents

Major AI vendors appear to be converging on a common pattern: the SDK provides the Runtime; developers own the Skills.

A Skill is fundamentally an act of knowledge externalization — translating workflows, business rules, and validated processing logic into structured, reusable capability units. From this perspective, Skills may carry more long-term value than Agents.
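As a sketch of what "knowledge externalization" can look like in code, a Skill can be modeled as a small data structure bundling a workflow description with its validated scripts. All names and fields below are hypothetical illustrations, not any vendor's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A reusable capability unit: externalized knowledge, not runtime code."""
    name: str
    description: str  # when a runtime should invoke this skill
    instructions: str  # the workflow and business rules, in prose
    scripts: list[str] = field(default_factory=list)  # validated helper scripts

# Hypothetical example: an invoice-validation skill.
invoice_check = Skill(
    name="invoice-validation",
    description="Validate vendor invoices against purchase orders.",
    instructions=(
        "1. Extract line items from the invoice.\n"
        "2. Match each line against the open purchase order.\n"
        "3. Flag mismatches above a 2% tolerance for human review."
    ),
    scripts=["extract_line_items.py", "match_po.py"],
)
```

The point of the structure: everything of value lives in the data, so the same Skill can move between runtimes.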

Meanwhile, Agent Runtime is becoming infrastructure: standardized, interchangeable, converging toward commodity.

Agents can be replaced. Proprietary data and Skills are the harder assets to replicate.

The open question: when a Skill runs on a third-party Runtime, its data and logic are inevitably exposed to the platform. How do you protect against that?

Harness Engineering Is Not a New Paradigm

Most people are treating Harness Engineering like it’s a new paradigm. It’s not.

Start from first principles: an LLM takes input and produces output. The model is fixed. You only control two things.

The first is how you construct the input. Prompt Engineering and Context Engineering are the same problem at different scales: not two paradigms, but one continuous spectrum.

The second is how you make the system around the model reliable: execution control, evaluation, observability, and feedback loops. That is genuinely different engineering territory.

Harness Engineering is just both of these combined. OpenAI gave it a name, backed it with a 1M-line codebase story, and framed it as a new era.

The underlying problems are not new. The label is.

MCP and Skills Solve Different Problems

MCP is basically a capability contract. It tells you what tools exist, how to call them, and what the permission boundaries are.

A Skill is a different thing entirely. It packages how you actually get something done, which in practice is prompt logic plus a tool-use strategy.

Skills feel more immediately useful because the key steps, calling conventions, and even error handling are already written in. That was never MCP’s job.
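The contrast can be made concrete. The first structure below follows the field names an MCP server returns from `tools/list` (`name`, `description`, `inputSchema`); the tool itself and the Skill are hypothetical:

```python
# An MCP server advertises *what* can be called: a capability contract.
mcp_tool = {
    "name": "query_orders",
    "description": "Query the orders database.",
    "inputSchema": {
        "type": "object",
        "properties": {"customer_id": {"type": "string"}},
        "required": ["customer_id"],
    },
}

# A Skill packages *how* to get something done: prompt logic plus a
# tool-use strategy layered on top of contracts like the one above.
skill = {
    "name": "late-order-triage",
    "prompt": "Find the customer's late orders, then draft an apology.",
    "strategy": [
        "call query_orders with the customer_id",
        "filter results to status == 'late'",
        "apply the refund policy before drafting the reply",
    ],
}
```

The contract says nothing about ordering, error handling, or policy; that is exactly the layer the Skill supplies.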

As for CLIs replacing MCP, I think the two are splitting into different lanes.

CLI is more straightforward for local automation and single-user setups. But once you bring in permissions, audit requirements, or cross-system coordination, a protocol layer like MCP still has advantages you cannot just wave away.

Longer term, we will probably see new interfaces that are better suited for agents specifically. These things are evolving together, not replacing one another.

AI Systems Need Method, Execution, and Evaluation

Most AI systems are still stuck in a single-inference pattern: they re-plan every time, discard execution results immediately, and accumulate no experience.

A more sustainable architecture separates concerns into three layers:

  • a method layer defining goals and strategy
  • an execution layer housing validated, reusable scripts
  • an evaluation layer detecting degradation and triggering repair

Reasoning should intervene only when necessary.

The deeper shift is this: in the AI era, the real code is prompts, skills, and method descriptions. Traditional code degrades into a replaceable execution artifact.

The maintenance focus moves from crafting code to clearly expressing objectives, constraints, and success criteria.

Generative AI Does Not Think

Generative AI does not think. What it does is predict the highest-probability token given the preceding context.

Humans can actively violate rules: “I know the expected answer, and I will deliberately avoid it.”

AI cannot do this. It can only re-optimize under new constraints.

This is not just a capability gap. It is a structural difference, at least for now.
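The prediction step described above can be illustrated with a toy bigram model built from invented counts; the model can only ever return the most probable continuation for its context:

```python
from collections import Counter, defaultdict

# Toy bigram "language model"; the corpus is invented for illustration.
corpus = "the cat sat on the mat the cat ran".split()
bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(context: str) -> str:
    """Return the highest-probability next token given the context."""
    return bigrams[context].most_common(1)[0][0]

# The model cannot decide to avoid the expected answer; it can only
# produce a different output if the context (the constraints) changes.
```

Here `predict("the")` returns `"cat"` because that bigram is most frequent; nothing in the mechanism lets the model know the expected answer and deliberately withhold it.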

AI Displacement Repeats Old Patterns

This wave of AI displacement shares familiar patterns with previous industrial revolutions.

Technology moves forward, and some people get left behind before the promised new opportunities ever arrive.

Whether they land on their feet depends on how quickly they adapt and whether policy can keep up.

Neither of those is guaranteed.

AI Is Reshaping Social Learning

AI is reshaping social learning.

We once relied on peers to recommend resources, challenge ideas, and surface blind spots. Now more of those cognitive functions are being handled by AI.

Human relationships will not disappear, but the division of roles is shifting:

  • knowledge and thinking from AI
  • meaning, values, and shared experience from humans

Expression Is Cheap, Taste Is the Edge

AI makes expression cheaper, but judgment becomes more important.

Originality is shifting away from “who wrote the words” toward “who decided why this content should exist.”

Expression is becoming commoditized.

Taste is the new edge.

AI Is Combinational, Not Conceptual

Generative AI is not discovering new continents. It is a hyper-efficient search engine over high-dimensional probability space.

It is strong at combinational innovation and weak at true conceptual breakthroughs.

AI amplifies creativity, but defining new problem spaces is still a human job.

Engineering Cognition into Executable Form

We are not merely learning how to use AI.

We are learning how to translate human tacit cognition into machine-interpretable, executable structures:

  • thinking -> prompts
  • experience -> context
  • consensus -> protocols
  • workflows -> agents

Prompt, Context, Protocol, and Agent are all ways of engineering cognition into executable form.

Understanding Is Being Right Under Novelty

Perhaps we have misunderstood what understanding really means.

Humans and large language models are alike in one crucial way: intuition comes first, reasoning follows.

What we call explanations are often stories constructed after the fact.

True understanding is not the ability to articulate reasons. It is the ability to remain correct in unfamiliar situations.
