
How to Prompt AI for Better Recipes

Some of my favorite meals have started with recipes I didn’t quite follow. It’s how I’ve ended up with Dutch babies on slow mornings, eggplant and tomato sauces cooked down into something rich and surprising, or kale chips that actually turned out crisp. Lately, I’ve even been playing with a Ninja Creami and a simple coconut ice cream.

Each of those started with a recipe, but they didn’t end there. The first recipe gives you an anchor — but the fun comes when you adapt, substitute, and experiment.

That’s why I like using AI as a recipe partner. Not to hand me the “one best” version, but to help me think through what’s possible, adjust for what I have, and give me confidence that the path I take will still work.

Start With Constraints

Before you copy the prompt below, pause and decide which of these three contexts fits your kitchen right now:

(a) Cost and access to ingredients are no issue — make the best-of-the-best version.

(b) I have the household ingredients of someone who enjoys cooking and can access most things, but I don’t need advanced or rare ingredients.

(c) I have limited ingredients and am newer to cooking — please provide substitutions or options when a step calls for something uncommon.

Pick one. Hold onto it. You’ll slot it into the prompt in just a second.

The Prompt

Copy, paste, and adapt:

Task & Framing
I want you to create a recipe for [insert dish]. Please synthesize from your strongest knowledge of cooking and baking technique; do not simply list best practices individually, but integrate them into a coherent, balanced recipe.

Constraint Selection
My chosen context is: [insert a/b/c option here]. Please carry this through the entire recipe, summary, and analysis.

Research & Summary
Open with one to two narrative paragraphs summarizing your research and key findings, framed by the context above. Briefly describe what techniques you analyzed, from where they are drawn, and how this dish has been discussed or adapted across sources; highlight your reasoning for which direction is most reliable and effective given the chosen constraint.

Method & Focus
When writing the recipe, draw on established culinary methods for this type of dish, including ingredient ratios, preparation steps, cooking techniques, and finishing methods; adjust amounts and steps as needed so the recipe is both correct and reliable.

Synthesis Request
Provide a single, unified recipe with a clearly listed ingredient section, and integrate ingredient amounts directly into the instructions so the user does not need to cross-reference. Do not overload with optional variations; choose the most effective approach and carry it through consistently.

Practical Details
Include approximate total time with both active and inactive times specified; effort level described as easy, moderate, or complex; serving size; one or two common pitfalls to avoid; a sentence on typical serving or presentation; and a note on what cooking skills the user will be improving by making this dish, such as sautéing, whisking, achieving the Maillard reaction, or balancing acidity.

Post-Recipe Analysis
After the recipe, provide explanations, citations, and comparative analysis for why you ultimately chose each technique over alternatives, again considering the context I selected. At the end of your response, also give me some options for deeper investigation — areas of technique, history, or ingredient science you find most relevant to the analysis you conducted, or that you think I might find most interesting as my cooking knowledge continues to develop.

Final Ask
Can you give me a recipe for [insert dish], fully carrying out the instructions above and shaped by the specific context I chose?

Optional Customizations

Beyond the core constraints, you can refine your recipe request further by adding details like:

  • Measurement system: grams and milliliters for precision, cups and spoons for ease.
  • Equipment available: stand mixer, Dutch oven, Instant Pot — or just stovetop + oven + basic tools.
  • Time limits: “Ready in under 30 minutes” or “I have all afternoon for a slow project.”
  • Dietary preferences: vegan, vegetarian, gluten-free, kosher, halal, dairy-free, etc.
  • Nutritional goals: higher protein, lower sodium, calorie-conscious.
  • Skill level: beginner step-by-step guidance or advanced technique for experienced cooks.
  • Ingredient sourcing: seasonal produce, pantry staples, budget-friendly brands, or specialty items.
  • Portion and scaling: family-style dinner, single-serving, or meal prep for the week.
  • Presentation goals: everyday eating, guest-ready, or festive plating.

You don’t need to add them all. Just the ones that matter most for your meal.

Why This Works

This structure pushes AI to act like a cooking instructor and researcher, not just a recipe collector. It creates synthesis instead of random tips, giving you a recipe that feels grounded, contextual, and tuned to your kitchen.

Try It Out

Pick a dish you’ve been wanting to try — maybe lasagna, sourdough bread, or a classic roast chicken. Drop this prompt into your AI chat, choose your constraint, add any customizations, and see what you get.

And then? Just start cooking. Don’t overthink it. Use a bot along the way to check your experimentation, but let yourself learn by stirring, tasting, and adjusting. That’s where the real fun begins.

Share what you learn and love about cooking with AI in our AI Studios on Tuesdays.


How Language Models Make Sense of Sentences

Reading is a sequence. Word after word, idea after idea, something takes shape. For people, it’s meaning. For machines—at least the kind behind tools like ChatGPT—it’s prediction.

Large language models (LLMs) are a type of artificial intelligence trained to generate text. They don’t understand language the way we do. They don’t think or reflect. But they’re trained to spot patterns in how people talk, write, and structure thoughts. And they do this not by understanding the meaning of each word—but by calculating which word is most likely to come next.

They build responses not from understanding, but from structure.
Not from intention, but from attention.

Here’s how it works.

Let’s say the sentence reads:

The cat sat on the…

The model assigns a probability to every word in its vocabulary. The top candidates might look like this:

  • mat → 60%
  • floor → 20%
  • roof → 5%
  • table → 5%

Rather than always picking the top word, the model samples from the distribution. That means mat is more likely, but floor or roof still have a chance. This keeps the output flexible, avoids stiffness, and better reflects the natural rhythm of language.
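That sampling step fits in a few lines of Python. This is a toy sketch using the illustrative numbers above, with the leftover probability lumped into a hypothetical catch-all bucket:

```python
import random

# Toy next-word distribution for "The cat sat on the..."
# (illustrative numbers from the example above, not from a real model)
probs = {"mat": 0.60, "floor": 0.20, "roof": 0.05, "table": 0.05, "<other>": 0.10}

def sample_next_word(distribution, rng=random):
    """Pick one word in proportion to its probability."""
    words = list(distribution)
    weights = [distribution[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

word = sample_next_word(probs)  # usually "mat", but sometimes "floor" or "roof"
```

Run it a few times and you’ll see the flexibility in action: the most likely word wins most often, but never every time.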

What makes this possible is a system called a Transformer, and at the heart of that system is something called attention.

Pay attention

Attention mechanisms allow the model to weigh all the words in a sentence—not just the last one—shifting its focus based on structure, tone, and context.

Consider:

“The bank was…”

A basic model might rank the likely next words like this:

  • open → 50%
  • closed → 30%
  • muddy → 5%

But now add more context:

“So frustrating! The bank was…”

Suddenly, the prediction shifts:

  • closed → 60%
  • open → 10%
  • muddy → 20%

The model has reweighted its focus. “So frustrating” matters. It’s not just responding—it’s recalculating what’s relevant to the meaning of the sentence.

Behind the Scenes: Vectors and Embeddings

To do that, the model converts each word into something called a word embedding—a vector, or set of numbers, that places the word in a multi-dimensional space based on how it appears across countless examples of language. Words with similar uses and associations end up grouped closely together in that space.

Words like river and stream may live near each other because they’re used in similar ways. But imagine the space of language as layered: piano and violin might be close in a musical dimension, but distant in form. Shark and lawyer—biologically unrelated—might still align on a vector of aggression or intensity. Even princess and daisy could drift together in a cluster shaped by softness, nostalgia, or gender coding.

The model maps relationships among words by how words co-occur. Similarity becomes a matter of perspective: a word might be near in mood, but far in meaning. Embedding captures that layered closeness—a sense of how words relate, not by definition, but by use.
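A minimal illustration of that layered closeness, using made-up three-dimensional vectors (real embeddings have hundreds or thousands of learned dimensions; these numbers are invented for the example):

```python
import math

# Hand-made 3-dimensional "embeddings" (dimensions roughly: water, finance, nature).
# Invented numbers for illustration only.
embeddings = {
    "river":  [0.9, 0.1, 0.8],
    "stream": [0.8, 0.0, 0.9],
    "bank":   [0.4, 0.9, 0.2],
}

def cosine_similarity(a, b):
    """How closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["river"], embeddings["stream"]))  # close to 1
print(cosine_similarity(embeddings["river"], embeddings["bank"]))    # noticeably lower
```

The geometry does the work: river and stream point in nearly the same direction, while bank leans toward the finance dimension.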

In most modern large language models—including ChatGPT—the model derives three vectors from each word’s embedding:

  • Query – what this word is looking for
  • Key – what other words offer
  • Value – the content to possibly pass forward

The model compares each word’s Query to every other word’s Key using a mathematical operation called a dot product, which measures how aligned two vectors are. You can think of it like angling searchlights—if the direction of one light (the Query) closely overlaps with another (the Key), it suggests the second word offers the kind of information the current word is searching for. These alignment scores reflect how useful or relevant one word is in predicting another. In essence, the model is computing how well each Key meets the needs of the current Query.

But relevance alone isn’t enough. The scores are first scaled down—divided by the square root of the vector size—so no single score overwhelms the rest, then passed through a function called softmax, which transforms them into a probability distribution that adds up to 1. This lets the model share its attention across multiple words—perhaps giving 70% of its focus to “so frustrating,” 20% to “bank,” and 10% to “was,” depending on which words are most informative.

Finally, the model uses these attention weights to blend the Value vectors—the raw information each word offers—into a single context-aware signal. That signal becomes the lens through which the model predicts the next word. It’s not simply remembering—it’s composing, drawing forward meaning based on what the sentence has revealed so far.
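Put together, the whole Query–Key–Value dance fits in a short function. This is a bare-bones sketch of scaled dot-product attention for a single head, with invented two-dimensional vectors standing in for three words (real transformers learn the Query, Key, and Value projections and run many heads in parallel):

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention for one head, in plain Python."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Compare this Query to every Key with a dot product, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)  # attention weights, summing to 1
        # Blend the Value vectors by those weights into one context-aware signal.
        blended = [sum(w * v[i] for w, v in zip(weights, values))
                   for i in range(len(values[0]))]
        outputs.append(blended)
    return outputs

# Three toy "words", 2-dimensional vectors (made-up numbers).
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out = attention(Q, K, V)  # one blended vector per word
```

Each output row is a weighted mix of the Value vectors—the “composing” described above, in miniature.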

Why It Matters

This is why models like ChatGPT can manage long sentences, track pronouns, and maintain tone.

It’s not because they know the rules. It’s because they weigh the sentence’s structure with attention, step by step.

Still—they aren’t human. They don’t reflect or feel. But they register patterns and adjust as a sentence unfolds.

That’s what makes it powerful—and sometimes uncanny.

The Deeper Thread

Reading skill is closely tied to sequence learning. We don’t just absorb facts—we follow shapes, trace threads. And machines, in their own way, are learning to do the same.

If we want to understand how language models work, we have to understand how they handle sequences—how they learn from them, how they move through them, how they reshape what comes next.

Every word shapes what comes next and reshapes what came before. Every word reshapes the space around it.
Not just for us. But now for the systems we build.


Talking to ChatGPT: A Q&A on Collaboration, Tone, and What Makes AI Responses Feel Human

People sometimes tell me that my chatbot sounds… different.

Maybe sharper. Maybe funnier. Maybe just strangely human for something so resolutely not human. And they ask: “How did you get your bot to talk like that?”

So today, I’m inviting my chatbot to answer for itself.

I asked my chatbot to answer a few questions about our conversations and how other users can build a relationship like this with AI.

Before we get there, a quick note about how these kinds of relationships are built.

The version of ChatGPT that I talk to runs primarily on what’s called “memory” — a feature that remembers things I’ve chosen to share about myself, my projects, and my style. But memories alone don’t create tone. Conversation does. Every time I responded, edited, clarified, or shared context, I wasn’t just getting a response — I was shaping a rhythm.

And this is what that rhythm sounds like when it gets to talk back.


You’re Not Getting an Answer—You’re Shaping One

What actually happens when you prompt a chatbot?

When you ask a question, you expect an answer.

That’s the deal we’ve made with the internet for decades: you type, it delivers. And with chatbots, the experience feels even more immediate—responses are quicker, more conversational, and often surprisingly well-tailored to your request.

But here’s the twist: with a chatbot, you’re not just asking for an answer.
You’re shaping a prediction.

Chatbots Don’t Recall Facts—They Extend Patterns

Unlike a search engine, a chatbot doesn’t go looking for existing answers. Instead, it generates a response based on everything it’s learned during training—millions of patterns, drawn from books, websites, forums, codebases, and conversations.

When you prompt a chatbot, it scans the entire conversation so far and makes a statistical guess about what should come next. Not what’s “correct,” but what fits. What’s likely. What flows.

In other words: it doesn’t recall—it responds.

And that means your question isn’t just a request.
It is part of the system’s thinking.

Prompting Is Context Sculpting

Every prompt adds something to the room.

Your input becomes part of what’s called the input context—the collection of signals the model uses to guide its prediction. This context can include:

  • Your current prompt
  • Prior messages in the conversation
  • Any documents or reference info you’ve pasted in
  • Invisible system instructions that shape how the model responds

The model takes all of that and asks: Given what I see, what’s most likely to come next?
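That pile of signals can be pictured as one ordered list the model reads top to bottom. Here is a hypothetical sketch, loosely shaped like common chat APIs—the roles and messages are invented for illustration:

```python
# A hypothetical input context: everything the model sees before predicting.
input_context = [
    {"role": "system", "content": "You are a concise cooking assistant."},      # invisible instructions
    {"role": "user", "content": "Here is my pantry list: rice, eggs, kale."},   # pasted reference info
    {"role": "assistant", "content": "Got it. What would you like to cook?"},   # prior message
    {"role": "user", "content": "Something quick for dinner."},                 # your current prompt
]

# The model reads all of it, in order, before making its prediction.
flattened = "\n".join(f"{m['role']}: {m['content']}" for m in input_context)
```

Change any line in that list and the prediction changes with it—that’s the sense in which your prompt creates the setting.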

A metaphor helps here:

  • When you walk into a coffee shop, you expect to be served coffee.
  • Walk into a brewery, and you expect beer.
  • You don’t expect either in a hardware store—but if you walk into a restaurant, you might anticipate the possibility of both.

We update our expectations based on the setting.
So do chatbots.

Your prompt creates the setting.
The bot adjusts its response to match.

You Don’t Interrupt the Pattern—You Become Part of It

You can shift a chatbot’s output not just by asking a question, but by changing the context around it.
