The Most Important Ideas in AI Right Now
Author: danielmiessler.com

Self-improvement and transparency change everything in unexpected ways

March 28, 2026


After thinking about this for about a week, and attending the RSA conference during that time, I think there are a few main AI ideas that are going to change things more than anything else.

  1. Autonomous Component Improvement
  2. The Transition to Intent-Based Engineering
  3. The Move from Opacity to Transparency
  4. The Realization That Most Work is Scaffolding
  5. Expertise Gets Diffused into Public Knowledge

The first of these, Autonomous Component Improvement, connects to the current-to-ideal-state concept, the Algorithm, general verifiability, etc.

But one thing that really made it tangible was Karpathy's Autoresearch project.

His was focused on AI research itself, as in "Autoresearch of the research portion of AI research": automatically handling all the gross stuff around model parameter tweaking, wrangling fragile environments, juggling combinations of options, and so on.

His release lets you give it some ideas in a PROGRAM.md file, and the system handles all of that grossness itself; you go to sleep, and by morning it has used ML optimization to produce better results than what you had.
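Based purely on the description above, the input is just plain-language ideas and goals in a markdown file. The sketch below is an invented illustration of what such a file might contain, not Karpathy's actual format:

```markdown
# PROGRAM.md (hypothetical example)

## Goal
Improve validation accuracy of the baseline model without
increasing training time by more than 20%.

## Ideas to try
- Sweep learning rate and warmup schedule together
- Try two or three optimizer variants
- Test smaller batch sizes with gradient accumulation

## Constraints
- Single GPU; every run must finish overnight
- Log each run's full config and final metrics
```

The point is that the human supplies intent and constraints; the system owns the environment wrangling and the search.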

Extending Autoresearch

But now there's "Autoresearch for X", meaning it's becoming a paradigm. A movement. A tool, basically.

He has lots of people thinking:

Could I apply a similar thing to this thing I'm working on?

It's extraordinary.

Combining Autoresearch with what I've been working on

So my thing has been this whole general verifiability concept; or, general hill-climbing. Again pivoting off of something Karpathy said a long time ago in Software 2.0, and a recent tweet, where he talked about the future of software being one where everything is verifiable.

So what I do inside of the Algorithm within PAI is break everything into ideal state criteria, which together construct the ideal state for the outcome that I want.

And from there the Algorithm can hill-climb towards it.
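To make the loop concrete, here is a minimal sketch of hill-climbing toward a set of ideal-state criteria. Everything in it (the toy config, the criteria, the proposal function) is invented for illustration; only the loop shape matters: score a candidate against binary criteria, keep changes that don't lose ground, stop when everything passes.

```python
import random

def score(artifact, criteria):
    """Fraction of binary ideal-state criteria the artifact passes."""
    return sum(check(artifact) for check in criteria) / len(criteria)

def hill_climb(artifact, criteria, propose, steps=1000):
    """Keep any proposed change that scores at least as well as the current best."""
    best = artifact
    best_score = score(best, criteria)
    for _ in range(steps):
        candidate = propose(best)
        s = score(candidate, criteria)
        if s >= best_score:
            best, best_score = candidate, s
        if best_score == 1.0:  # every criterion passes: ideal state reached
            break
    return best, best_score

# Toy "ideal state" for a hypothetical config dict.
criteria = [
    lambda a: a["timeout"] <= 30,
    lambda a: a["retries"] >= 3,
    lambda a: a["log_level"] == "info",
]

def propose(a):
    """Mutate one field at random -- a stand-in for an AI proposing a change."""
    c = dict(a)
    key = random.choice(list(c))
    if key == "timeout":
        c[key] = random.randint(1, 60)
    elif key == "retries":
        c[key] = random.randint(0, 5)
    else:
        c[key] = random.choice(["debug", "info", "warn"])
    return c

random.seed(0)  # deterministic for the example
start = {"timeout": 120, "retries": 0, "log_level": "debug"}
final, final_score = hill_climb(start, criteria, propose)
```

In practice the proposal step is an AI generating a revised artifact and the criteria are whatever "done" means for your domain, but the control flow is exactly this simple.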

Evals for everything

Related to this is the concept of Evals for everything. It's very much like my general verifiability or general hill-climbing. It's the idea that everything we do becomes measurable, but more importantly: improvable.

And the thing that makes evals possible for everything is transparency.

The real power of AI is moving from current state to ideal state. Define where you are, define where you want to be, and let AI close the gap. Simple concept, but there's a step before any of that works: you have to be able to articulate what you actually want. And it turns out this is incredibly hard. If you can't describe what good looks like, no amount of tooling helps you.

This is a massive problem for companies. Ask a CEO what their ideal security program looks like and you'll get hand-waving. Ask a team lead what "done" means for their project and you'll get a paragraph that three people interpret three different ways. The articulation gap isn't just between experts and AI—it's between leaders and their own organizations. Most companies can't clearly state what they're trying to do, let alone break it into components you could measure or optimize.

What I've been building inside the Algorithm is exactly this—a way to reverse engineer any request into discrete, testable ideal state criteria. Eight to twelve words each, binary pass/fail. Once you have those, you can hill-climb. You can eval. You can automate improvement. But the whole thing starts with being able to say what you want. That's the new engineering skill—not coding, not prompting. Articulating intent clearly enough that it becomes verifiable.
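As a sketch of what "discrete, testable ideal state criteria" might look like in code, here is one possible representation: a short statement paired with a binary check, evaluated against an artifact. The example request (a product release announcement) and every check are invented for illustration, not the Algorithm's actual internals.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    statement: str                 # 8-12 words, stated as the ideal state
    check: Callable[[str], bool]   # binary pass/fail against the artifact

criteria = [
    Criterion("Announcement opens by naming the product and version",
              lambda t: t.startswith("Widget 2.0")),
    Criterion("Announcement stays under one hundred and fifty words",
              lambda t: len(t.split()) < 150),
    Criterion("Announcement includes a link for readers to upgrade",
              lambda t: "https://" in t),
]

def evaluate(artifact: str):
    """Return (score, report) -- the eval you can hill-climb against."""
    results = [(c.statement, c.check(artifact)) for c in criteria]
    passed = sum(ok for _, ok in results)
    return passed / len(results), results

draft = ("Widget 2.0 is out today. It is twice as fast and half the size. "
         "Upgrade now at https://example.com/widget.")
score, report = evaluate(draft)
```

Once a request is decomposed this way, "did it work" stops being a matter of opinion: the same criteria that define the goal double as the eval.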

Companies have never really been able to see what's happening inside their own walls. How much does this process actually cost? How long does it really take? What's the quality of the output? Who's doing the work vs. who's doing the scaffolding around the work?

Most organizations run on vibes and spreadsheets. AI makes all of that visible. The actual work, the actual costs, the actual quality—all of it becomes measurable in ways that were never practical before. And once you can see it, you can improve it. That applies to businesses, governments, teams of three people—anything you want to point it at.

And one of the first things transparency reveals is how much of the work was never really the work.

AI is revealing that 75-99% of knowledge work is scaffolding overhead. In security testing, development, consulting—most of the time goes to maintaining tooling, workflows, templates, and knowledge bases. The actual hard thinking is a tiny percentage, done by a tiny percentage of people, a tiny percentage of the time.

AI absolutely crushes the scaffolding part. Agent Skills have shown you can package all that context, methodology, and tooling into a skill, and the AI executes as well as or better than most professionals. The work wasn't hard—maintaining the scaffolding was.
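To make "packaging context, methodology, and tooling into a skill" tangible, here is an entirely invented sketch of what a captured skill file might contain. The filename, fields, and task are assumptions for illustration, not any particular product's format:

```markdown
# SKILL: Quarterly vendor security review

## Context
- Applies to any vendor that handles customer data
- Inputs: vendor questionnaire, SOC 2 report, pen test summary

## Method
1. Pull the vendor's latest SOC 2 report and pen test summary
2. Diff their questionnaire answers against last quarter's
3. Flag any new subprocessors or changed data flows

## Checks (binary)
- [ ] Report is less than 12 months old
- [ ] No unresolved high-severity findings
- [ ] Subprocessor list matches the contract
```

Notice that most of the file is scaffolding: where inputs live, what order to do things in, what "done" means. That is exactly the part the expert used to carry in their head, and exactly the part an AI can execute once it's written down.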

There's an articulation gap between what experts know and what's written down. Most expertise lives in people's heads. Cliff, the 62-year-old who knows how everything works but never documented any of it. When Cliff retires, that knowledge dies with him.

What's happening now is that expertise is dispersing from brains into skills, SOPs, context files, open source projects. And once it's captured it never comes back out. It's like pee in the pool. Every skill published, every process documented, every expert debrief captured—it permanently enters the collective knowledge base. And it makes every AI instance smarter. Not one. All of them. Simultaneously.

This is a one-way ratchet. Humans take 20-30 years to develop deep expertise in a single domain. They forget things, they retire, they leave companies. AI absorbs all captured expertise instantly, never forgets, and can be duplicated infinitely. The gap between human expertise accumulation and AI expertise accumulation is widening every single day.

Autonomous improvement changes the speed of everything

The speed of improvement in many fields is about to accelerate beyond anything we've seen. When you can define what good looks like, measure against it, and iterate automatically—things that used to take months of manual tuning happen overnight. Autoresearch showed this for ML research. But this applies to security programs, consulting deliverables, content pipelines, hiring processes. Anything with a definable ideal state becomes autonomously improvable.

Intent becomes the bottleneck

The new scarce skill isn't coding or prompting—it's being able to say what you actually want. And it has to be high-quality intent. The quality of the idea is always the most important thing. But the second most important is the ability to articulate it, define it as your actual goal, and orient the entire company around it. Most leaders can't do this. Most companies can't do this. The ones who figure it out first get to point all these optimization tools at real targets while everyone else is still hand-waving about OKRs.

Everything becomes transparent

We are about to see the world go from opaque vibes to transparent and optimizable components. Frauds and gatekeepers will have increasingly few places to hide.

This also makes it way harder to compete when you're selling products or services. Because the first thing the agents are going to ask is, "What are your metrics?" Not your marketing copy. Not your customer quotes. Your actual, verifiable performance data. If you don't have it, you lose to someone who does.

Scaffolding gets commoditized

The wizardry around certain fields and professions will be revealed as scaffolding that was simply not understood by most people. Such as how to stand up a particular dev environment and keep things running long enough to write some code. Same with law, consulting, and other high-paying professions.

Expert knowledge becomes public infrastructure

The knowledge that only experts used to have will soon be possessible by everyone—and most importantly by AIs. A person's 50 years of experience in a field won't be an advantage for much longer, because that knowledge will have been extracted and published—by them, by their colleagues, or by someone doing the same work elsewhere in the world.

The craziest thing about this is that they all play off and magnify each other.

It's not just that we're going to be able to improve all these different components; it's the fact that the speed of the improvement itself will improve.

Of all these ideas, this is the most important to take away.

I cannot express to you how insane this is about to be.


Source: https://danielmiessler.com/blog/the-most-important-ideas-in-ai?utm_source=rss&utm_medium=feed&utm_campaign=website