Many years ago, on the first day of school, my science teacher gave us this homework assignment: Write down the instructions to make a peanut butter and jelly sandwich.

If you haven’t ever heard of this exercise, take a moment to think about what you would write down. Go ahead, I’ll wait…

I don’t remember exactly what sixth-grade Lybbi wrote, but it was probably something like “take out two pieces of bread, spread peanut butter on one side and jelly on the other, and put the two pieces together.”

The next day, my teacher stood at the front of the class with all the ingredients and carefully followed everyone’s instructions. And when I say “carefully,” I mean he did EXACTLY what was written.

  • Take out two pieces of bread. He ripped two small pieces off a slice of bread and threw them out the door.
  • Spread peanut butter on one side and jelly on the other. He went outside to grab one of the pieces, then used his whole hand to collect a big glob of peanut butter (à la Winnie The Pooh) and smeared it on the bread chunk. He then flipped it over and did the same to the other side with the jelly.
  • Put the two pieces together. He took the goopy little piece of bread back outside and set it back on the ground next to the other one.

This wasn’t just a fun day in an otherwise boring class, but my teacher’s way of demonstrating the importance of being crystal clear when working in a science lab. I don’t think this memory stuck with me 20+ years later because of the valuable lesson, though; it stuck because of how funny and gross the demonstration was. But during a recent conversation about why some people struggle to use AI effectively, this little experiment popped into my head.

If you have ever felt that your AI tool of choice just isn’t getting it right, or that it doesn’t understand what you want from the output, there’s a pretty good chance it’s because you aren’t clearly telling it what you want. When we humans think something through, so much context is already stored in our brains that we don’t have to spell out all the little details. It doesn’t occur to us to specify that you should use a dull kitchen knife to scoop up about 2 TBSP of peanut butter, and then spread that peanut butter evenly across the whole surface of only one side of a slice of bread.

To be fair, if we talked like this in real life, it would raise red flags. When I ask my husband if he can take the dog out, I don’t need to clarify that I want him to take the black leash off the hook by the front door and clip it to the silver metal ring on Oscar’s collar and then hold onto the leash with one hand while opening the door with the other and on and on. In fact, I think if I did provide that much context, it would probably lead to a fight.

A lot of people are talking to AI as if they’re talking to my husband (or whatever other human they might ask to take their dog out), and that’s where the wheels fall off. But the problem isn’t a lack of context; it’s actually the opposite. AI has a whole world of context to pull from, and no idea which parts apply to your exact situation.

When writing your prompts, think of it like the PB&J test and add in those little “obvious” details. Tell it exactly who your target audience is, what your goal is for whatever you’re working on, and what you want (and don’t want) included or highlighted. If the output isn’t working for you, tell it exactly what’s wrong and how it can be improved. Or reply to the chat and literally say, “I don’t like this response. Ask me some clarifying questions to help me better communicate what I want from you.”
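For example, instead of “write a post about my new offer,” you might say something like: “Write a 150-word Instagram caption announcing my new four-week group coaching program for first-time Etsy sellers. Keep the tone warm and casual, mention that enrollment closes Friday, and don’t use emojis.” Those extra details are your version of “use a dull kitchen knife and spread it evenly on one side.”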

Even better, create a document with a big brain dump of everything you want it to know about you and your business. Tools like ChatGPT and Claude have a “Projects” feature where you can upload materials like your brain dump document and write rules to build a foundation for whatever you’re working on. Any new chats within that project can then pull from those resources.
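That document doesn’t have to be fancy. It might cover, for example, who your customers are, how you describe your products, the tone you like to write in, and any words or claims you never want to see in your marketing. Once it’s uploaded, you don’t have to re-explain yourself at the start of every chat.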

The good news is that the more you chat with your AI tool, the more context it will gather about you and what you want from it. Eventually you will get to the point where you can just ask it to make you a PB&J, and it won’t hand you a chunk of bread smeared with sticky ingredients, but a fully formed sandwich.