RAID (Real World AI Definitions)
2024-7-9 01:1:51 Author: danielmiessler.com(查看原文) 阅读量:4 收藏

I see a lot of definitions of different AI terms out there and I wanted to put my own thoughts into a long-form format. This is mostly for my own reference, but I hope it’ll be useful to others as well.

Table of Contents

Some of these are practical definitions, i.e., useful, general, and conversational containers that help us frame an everyday conversation. Others are more technical with specific thresholds, which are better for tracking milestones towards big jumps.

The One-Liner AI Definitions Table

I really liked Hamel Husain’s AI Bullshit Knife that gave violently short definitions for a bunch of AI terms. Here’s my own, expanded take on it.

  • AI — Tech that does cognitive tasks that only humans could do before

  • Machine Learning — AI that can improve just by seeing more data

  • Prompting: Clearly articulate what you want from an AI

  • RAG: Provide context to AI that’s too big/expensive to fit in a prompt

  • Agent: An AI component that does more than just LLM call→respond

  • Chain-of-Thought: Tell the AI to walk through its thinking and steps

  • Zero-shot: Ask an AI to do something without any examples

  • Multi-shot: Ask an AI to do something and provide multiple examples

  • Prompt Injection: Tricking an AI into doing something bad

  • Jailbreaking: Bypassing security controls to get full execution ability

  • AGI — General AI smart enough to replace an $80K white-collar worker

  • ASI — General AI that’s smarter and/or more capable than any human

I think that’s a pretty clean list for thinking about the concepts. Now let’s expand on each of them.

Expanded Definitions Table

We’ll start with an expanded definition and then go into more detail and discussion.

AI

AI is technology that does cognitive tasks or work that could previously only be done by humans.

Real-world AI Definitions

There are so many different ways to define AI, so this is likely to be one of the most controversial. I choose the “what used to only be possible with humans” route because it emphasizes how the bar continues to move not only as the tech advances, but also as people adjust their expectations. The general template for this rolling window is this:

Well, yeah, of course AI can do ___________, but it still can’t do __________ and probably never will.

(Narrator: And then that happened 7 months later)

I like this definition over a lot of the more technical ones because they’re usually so granular and specific, and it’s hard to get any large group of experts to agree on them.

Machine Learning

Machine Learning is a subset of AI that enables a system to learn from data alone rather than needing to be explicitly reprogrammed.

Real-world AI Definitions

I know there are a million technical definitions for machine learning, but back in 2017 when I started studying it the thing that floored me was very simple.

Learning from data alone.

That’s it. It’s the idea that a thing—that we created—could get smarter not from us improving its programming, but from it just seeing more data. That’s insane to me, and to me it’s still the best definition.

Prompt Engineering (Prompting)

Prompt Engineering is the art and science of using language (usually text) to get AI to do precisely what you want it to do.

Real-world AI Definitions

Some people think Prompt Engineering is so unique and special it needs its own curriculum in school. Others think it’s just communication, and isn’t that special at all.

I’ll take a different line and say prompt engineering is absolutely an art—and a science—because it’s more about clear thinking than the text itself.

Just like writing, the hard part isn’t the writing, but the thinking that must be done beforehand for the writing to be good.

The best Prompt Engineering is the same. It comes from deeply understanding the problem and being able to break out your instructions to the AI in a very methodical and clear way.

You can say that’s communication, which it is, but I think the most important component is clear thinking. And shoutout to our open source project Fabric that takes this whole thinking/writing thing very seriously in its crowdsourced prompts.

Retrieval Augmented Generation (RAG)

Retreival Augmented Generation (RAG) is the process of taking large quantities of data—which are either too large or too expensive to put in a prompt directly—and making that data usable as vectorized embeddings to AI at runtime.

Real-world AI Definitions

It’s important to understand that RAG is a hack that solves a specific problem, i.e., that people and companies have vast amounts (gigabytes, terabytes, or petabytes) of data that they want their AI to be aware of when performing tasks. The problem is that AI can only practically handle small amounts of that data per interaction—either because of the size of the context window, or because of cost.

So the solution we’ve come up with is to use embeddings and vector databases to encode relevant information, and then to include small amounts of relevant context from that data in AI queries at runtime. Sending context-specific embeddings rather than the raw content makes the queries much faster and more efficient than if all the content itself was sent.

It’s not clear yet what the successor will be for this, but one option is to add more content directly into prompts as the context windows increase and inference costs go down.

Agents

An AI agent is an AI component that interprets instructions and takes on more of the work in a total AI workflow than just LLM response, e.g., executing functions, performing data lookups, etc., before passing on results.

Real-world AI Definitions

This one will be one of the most contested of these definitions because people are pretty religious about what they think an agent is. Some think it’s anything that does function calls. Others think it’s anything that does tool use. Others think it means live data lookups.

I think we should abstract away from those specifics a bit, because they’re so likely to change. That leaves us with a definition that means something like, “taking on more work in a way that a human helper might do”. So looking things up, calling tools, whatever.

The trick is to remember the etymology here, which is the Latin “agens”, which is “to do”, “to act”, or “to drive”. So ultimately I think the definition will evolve to being more like,

An AI component that has its own mission and/or goals, and that uses its resources and capabilities to accomplish them in a self-directed way.

A future definition of AI Agent

Perhaps that’ll be the definition in the 2.0 version of this guide, but for now I think AI Agent has a lower standard, which is anything that acts on behalf of the mission, i.e., something that performs multiple steps towards the final goal.

And like we said, practically, that means things like function calls, tool usage, and live data search.

Chain-of-Thought

Chain-of-Thought is a way of interacting with AI in which you don’t just say what you want, but you give the steps that you would take to accomplish the task.

Real-world AI Definitions

To me, Chain-of-Thought is an example of what we talked about in Prompt Engineering. Namely—clear thinking. Chain-of-Thought is walking the AI through how you, um, think when you’re solving the problem yourself. I mean, the clue is in the title.

Again, I see prompting is articulated thinking, and CoT is just a way of explicitly doing that. I just natively do this now with my preferred prompt template, and don’t even think of it as CoT anymore.

Prompt Injection

A method of using language (usually text) to get AI to do something it’s not supposed to do.

Real-world AI Definitions

People often confused Prompt Injection and Jailbreaking, and I think the best way to think about this (thanks to Jason Haddix for talking through this and sharing his definition) is to say that:

  • Prompt Injection is a METHOD

  • Jailbreaking is a GOAL

Or, more precisely, Prompt Injection is a method of tricking AI into doing something, and that something could be lots of things:

  • Getting full/admin access to the AI (Jailbreaking)

  • Getting the AI to take unauthorized / dangerous actions

  • Stealing data from backend systems

  • Etc.

Jailbreaking, on the other hand, is the act of trying to get to a Jailbroken State. And that jailbroken state is one in which as much security as possible is disabled and you have the ability to interact with the system (in this case an AI) will maximum possible permissions.

Jailbreaking

An attempt to bypass an AI’s security so that you can execute commands on the system with full/admin permissions.

Real-world AI Definitions

Jailbreaking is a broader security term that applies mostly to operating systems, but that happens to also apply to LLMs and other types of AI systems.

Generically, Jailbreaking means getting to a security-bypassed state. So imagine that an operating system, or an AI system, is a very powerful thing, and that we want to ensure that users of such a system can only do certain things and not others.

Security is put in place to ensure they can’t interact with certain OS functions, data from other users, etc., and that ensures that the system can be used safely.

Jailbreaking is where you break out of that protection and get full control of the thing—whatever that thing is.

Artificial General Intelligence (AGI)

The ability for an AI system—whether a model, a product, or a system—to fully do the job of an average US-based knowledge worker.

Real-world AI Definitions

In other words, this is AI that’s not just competent at doing a specific thing (often called narrow AI), but many different things—hence, general. So basically it’s some combination of sufficiently general and sufficiently competent.

The amounts of each, well…that’s where most of the debate is. And that’s why I’ve settled on the definition above. Here are a few more technical details that should be mentioned as necessary for my definition of AGI:

  • This AGI should be generally competent at many other cognitive tasks as well—given similar training—to roughly equal the skill level of such a knowledge worker. For example, this AGI should be able to learn spreadsheets, or how to do accounting stuff, or how to manage a budget, even if they were originally trained to do cybersecurity assessments. In other words, the general part of AGI is important here, and such a system needs to have some significant degree of flexibility.

  • The system should be sample efficient (learning from few examples) at about the same level as the 80K employee, recognizing that it’ll be much better and somewhat worse than a human in different areas.

  • The system should be able to apply new information, concepts, and skills it learns to additional new problems it hasn’t seen before, which is called generalization (again, AGI). So, if we teach it how to do budgets for a skateboarding company, we shouldn’t need to teach it to do budgets or something similar for other types of companies. So it’s not just general in its out-of-the-box capabilities but also in the new things it learns.

I like this definition because it focuses on what I would argue most humans actually care about in all this AI discussion, which is the future of humans in a world of AI. Regular people don’t care about processing speed, or agents, or model weights. What they care about is if and when any of this is going to tangibly affect them.

And that means job replacement.

So here are the levels I see within AGI—again, with the focus on replacing a decent white-collar worker.

AGI 1 — Better, But With Significant Drawbacks

This level doesn’t yet function completely like an employee, but it is right above the bar of being a worker replacement. You have to specifically give it tasks through its preferred interface using somewhat product specific language. It frequently needs to be helped back on track with tasks because it gets confused or lost. And it needs significant retooling to be given a completely different mission or goals.

Characteristics:

  • Interface (web/app/UI): proprietary, cumbersome

  • Language (natural/prompting/etc): somewhat specific to product

  • Confusion (focus): human refocus frequently needed

  • Errors (mistakes): frequent mistakes that need to be fixed by humans

  • Flexibility: needs to be largely retooled to do new missions or goals and given a basic plan

Discussion: A big part of the issue of this level is that real work environments are messy. There are a million tools, things are changing all the time, and if you have an AI running around creating bad documents, or saying the wrong thing at the wrong time, or oversharing something, causing security problems, etc.—then that’s a lot of extra work being added to the humans managing it.

So the trick to AGI 1 is that it needs to be right above the bar of being worth it. So it’ll likely still be kludgy, but it can’t be so much so that it’s not even worth having it.

AGI 2 — Competent, But Imperfect

This level is pretty close to a human employee in terms of not making major mistakes, but it’s still not fully integrated into the team like a human worker is. For example you can’t call it or text it like you can a human. It still sometimes needs to explicitly be told when context changes. And it still needs some help when the mission or goals change completely.

Characteristics:

  • Interface (web/app/UI): mostly normal employee workflows (standard enterprise apps)

  • Language (natural/prompting/etc): mostly human, with exceptions

  • Confusion (focus): it doesn’t get confused very often

  • Errors (mistakes): fewer mistakes, and less damaging

  • Flexibility: adjusts decently well to new direction from leaders, with occasional re-iteration needed

Discussion: At this level, most of the acute problems of AGI 1 have been addressed, and this AI worker is more clearly better than an average human worker from an ROI standpoint. But there are still issues. There is still some management needed that’s different/above what a human needs, such as re-establishing goals, keeping them on track, ensuring they’re not messing things up, etc.

So AGI 2 is getting closer to an ideal replacement of a human knowledge worker, but it’s not quite there.

AGI 3 — Full Worker Replacement

This level is a full replacement for an average knowledge working in the US—before AI. So let’s say a knowledge worker making $80,000 USD in 2022. At this level, the AI system functions nearly identically to a human in terms of interaction, so you can text them, they join meetings, they send status updates, they get performance reviews, etc.

Characteristics:

  • Interface (web/app/UI): just like any human employee, so text, voice, video, etc., all using natural language

  • Language (natural/prompting/etc): exactly like any other human employee

  • Confusion (focus): confused the same amount (or less than) the 80K employee

  • Errors (mistakes): same (or fewer) mistakes as an 80K employee

  • Flexibility: just as flexible (or more) than an 80K employee

Discussion: At this level the AI functions pretty much exactly like a human employee, except far more consistent and with results at least as good as their human counterpart.

Artificial Super-Intelligence (ASI)

A level of general AI that’s smarter and more capable than any human that’s ever lived.

Real-world AI Definitions

This concept and definition is interesting for a number of reasons. First, it’s a threshold that sits above AGI, and people don’t even agree on that definition. Second, it has—at least as I’m defining it—a massive range. Third, it blends with AGI, because AGI really just means general + competent, which ASI will be as well.

My preferred mental model is an AI that’s smarter than John Von Neumann, who a lot of people consider the smartest person to ever live. I particularly like him as the example because Einstein and Newton were fairly limited in focus, while Von Neumann moved science forward in Game Theory, Physics, Computing, and many other fields. I.e., a brilliant generalist.

But I don’t think being smarter than any human is enough to capture the impact of ASI. It’s a necessary quality of superintelligence, but not nearly enough.

I think ASI—like AGI—should be discussed and rated within a human-centric frame, i.e., what types of things it will be able to do and how those things might affect humans and the world we live in. Here are my axes:

Primary Axes

  • Model (abstractions of reality)

    • Fundamental

    • Quantum

    • Physics

    • Biology

    • Psychology

    • Society

    • Economics

    • etc.

  • Action (the actions it’s able to take) 

    • Perceive

    • Understand

    • Improve

    • Solve

    • Create

    • Destroy

Secondary Axes

  • Field (human focus areas)

    • Physics

    • Biology

    • Engineering

    • Medicine

    • Material Science

    • Art

    • Music

    • etc.

  • Problems (known issues)

    • Aging

    • War

    • Famine

    • Disease

    • Hatred

    • Racism

  • Scale (things)

    • Quark

    • Atom

    • Molecule

    • Chemical

    • Cell

    • Organ

    • Body

    • Restaurant

    • Company

    • City

    • Country

    • Planet

    • Galaxy

    • etc.

So the idea is to turn these into functional phrases that convey the scope of a given AI’s capabilities, e.g.,

  • An AI able of curing aging by creating new chemicals that affect DNA.

  • An AI capable of managing a city by monitoring and adjusting all public resources in realtime.

  • An AI capable of taking over a country by manufacturing a drone army and new energy-based weaponry.

  • An AI capable of faster-than-light travel by discovering completely new physics.

  • Etc.

With that type of paradigm in mind, let’s define three levels.

ASI 1 — Superior

An AI capable of making incremental improvements in multiple fields and managing up to city-sized entities on its own.

Real-world AI Definitions

  • ➡️ Smarter and more capable than any human on many topics/tasks

  • ➡️ Able to move progress forward in multiple scientific fields

  • ➡️ Able to recommend novel solutions to many of our main problems

  • ➡️ Able to copy and surpass the creativity of many top artists

  • ➡️ Able to fully manage a large company or city itself

  • ❌Unable to create net-new physics, materials, etc.

  • ❌Unable to fundamentally improve itself by orders of magnitude

ASI 2 — Dominant

An AI capable of creating net-new science and art, fundamental self-improvement, and that’s capable of running an entire country on its own.

Real-world AI Definitions

  • ➡️ All of ASI 1

  • ✅ Able to completely change how we see multiple fields

  • ✅ Able to completely solve most of our current problems

  • ✅ Able to fully manage a country by itself

  • ✅ Able to fundamentally improve itself by orders of magnitude

  • ❌Unable to create net-new physics, materials, etc.

  • ❌Unable to run an entire planet

ASI 3 — Godlike

An AI capable of creating net-new physics, completely new materials, manipulation of (near)fundamental reality, and can run the entire planet.

Real-world AI Definitions

  • ➡️ All of ASI 2

  • ✅ Able to modify reality at a fundamental or near-fundamental level

  • ✅ Able to manage the entire planet simultaneously

  • ✅ Its primary concerns perhaps become sun expansion, populating the galaxy and beyond, and the heat death of the universe (reality escape)

Summary

  1. AI terms are confusing, and it’s nice to have simple, practical versions.

  2. It’s useful to crystalize your own definitions on paper, both for your own reference and to see if your definitions are consistent with each other.

  3. I think AI definitions work best when they’re human-focused and practically worded. What we lose in precision can be handled and debated elsewhere, and what we gain is the ability to have everyday conversations about AI's implications for what matters—which is us.

Notes

  1. Thanks to Jason Haddix and Joseph Thacker for reading versions of this as it was being created, and giving feedback on the definitions.

  2. Hamel Husain has a good post on cutting through AI fluff called The AI Bullshit Knife, where he lays out his own super-short definitions. It’s quite good. X THREAD

  3. Harrison Chase also has a model called Levels of Autonomy in LLM Applications. LANGCHAIN

  4. I will keep these definitions updated, but I’ll put change notes down here in this section for documentation purposes.

  5. The distinction between AGI and ASI is complex, and contested, and I’m still thinking through it. But essentially, I’m thinking of AGI in terms of human work replacement and ASI in terms of capabilities and scale.

  6. This resource is part of what I’ve been doing since 1999, which is writing my own tutorials/definitions of things as a Feynmanian approach to learning. Basically, if you want to see if you know something, try to explain it.


文章来源: https://danielmiessler.com/p/raid-ai-definitions
如有侵权请联系:admin#unsafe.sh