Defining the Jargon
We'll be throwing some terms around in this workshop that you should become familiar with. Until recently, AI tools were created by engineers with almost no product or marketing help, which means the naming is, in most cases, arbitrary and meaningless.
In other words: don't try to figure out why a name or a term is the way it is. Naming is hard, that's it.
- Model or LLM. This refers to the "Large Language Model": the raw artifact that results from the algorithmic training process. Scraping GitHub repos, StackOverflow, or other online resources to train a model results in a given LLM, such as Claude Sonnet 4 or Gemini 2.5 Pro.
- Prompt. This is the thing you write to the AI to have it do a thing. It doesn't have to be well-formed sentences, but grammar does play a part, as do CAPS and punctuation.
- Agent. This is when you want something done for you, rather than just reading an answer. You give the tooling "agency" to complete a task. The LLM does not do the task, the tooling does. We'll get into this.
- Instructions. Most AI tooling has the ability to use "instructions", which usually involve one or more markdown documents that explain how you like things done. The tone of the response, coding standards, templates for tests or classes: these all go in the instruction files.
- MCP. This is "Model Context Protocol": three generic words that, when strung together, mean even less than they did before. When you use MCP, you're "wiring up" your own set of data as the context for a given prompt. It's like instructions on overdrive. The "wiring up" part is enabled by the "protocol". For instance: you might want to hook up the PostgreSQL documentation to your chat session. MCP allows for this.
- Cutoff Date. Every model is trained on a window of data and then versioned for use. For instance, a model created for working with code (such as Claude Opus 4) might scrape public repositories, StackOverflow, and other resources for a given period of time. At some point, that scraping needs to end so the model can be built. This is the "cutoff date", and it's critical to understand when reviewing the answers your LLM gives you, as many things can be out of date.
- Hallucination. I'm not a fan of this term because it humanizes the LLM, suggesting that it's somehow intoxicated or otherwise affected. The LLM is a computer algorithm that is going to tell you what you want to see, that's it. Either way, the term has stuck, and it refers to when LLMs add incorrect details or embellishments to their answer.
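To make the "instructions" idea concrete: an instruction file is usually just plain markdown. The exact filename and location depend on the tool (GitHub Copilot, for example, looks for `.github/copilot-instructions.md`, while Claude Code reads `CLAUDE.md`), and the content below is a hypothetical sketch, not a template any particular tool requires:

```markdown
# Project Instructions

## Tone
- Keep answers short; skip the apologies and the recaps.

## Coding standards
- Use TypeScript strict mode; avoid `any`.
- Prefer named exports over default exports.

## Test template
- Every new function gets a unit test, structured as Arrange / Act / Assert.
```

The tooling quietly prepends this to your prompts, which is why a few lines of instructions can change the flavor of every answer you get.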
Another word you'll see a lot is the ever-present "context". I would have added it to the list above, but its meaning shifts depending on how it's being used. You set context with instructions; you also set it by referencing a document in a chat prompt.
Context is one of the words in MCP, for some reason. You'll just have to get used to it.
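Since MCP came up again: the "wiring up" described earlier is usually just a bit of configuration. The sketch below is in the style of Claude Desktop's `claude_desktop_config.json`; the server name, package, and connection string are illustrative assumptions, not something to copy verbatim:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/mydb"
      ]
    }
  }
}
```

Once the tool picks this up, your chat session can ask that server for data from the database, and the results land in the prompt's context. That's the whole trick: the "protocol" is just a standard way for tools and data sources to talk.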