The Art of AI Prompting: Refining Instructions for Precision and Control
Prompt like a pro – Learn how to craft effective prompts by combining role-based instructions with iterative refinement, so you can get consistent, high-quality results from AI.
Ciao,
This edition of Radical Curiosity is all about working smarter—with AI, with your team, and with your time.
I’ve been deep-diving into prompt design and testing strategies to get more consistent, useful responses from AI. If you’ve ever struggled to get the output you wanted, you’ll find some practical techniques in the first section—including how I’m using role-based prompts and iterative refinement to build better learning tools.
You’ll also find a guide on cutting down unnecessary meetings (without making enemies) and a roundup of the most inspiring real-world uses of ChatGPT—stories that go well beyond the usual productivity hacks.
Table of Contents
The Art of AI Prompting: Refining Instructions for Precision and Control
How to Cut Down on Useless Meetings
The Most Impressive Uses of ChatGPT
The Art of AI Prompting: Refining Instructions for Precision and Control
Creating precise AI-generated responses isn’t just about luck—it’s about mastering prompt engineering. This skill has become essential for knowledge workers, allowing us to give AI the right instructions to generate exactly what we need—with the right level of detail, in the correct style, and with a clear structure.
Lately, I’ve been experimenting with different techniques to develop an instructional designer co-pilot—an AI assistant that helps course creators refine learning objectives with the precision of a human expert.
Defining clear learning objectives is one of the most challenging aspects of course creation. Vague goals like “understand marketing strategy” aren’t useful because they don’t specify what a learner should actually be able to do. To ensure AI-generated learning objectives align with best practices, I needed a systematic approach—one that instructional designers already use.
One of the most popular frameworks in instructional design is Bloom’s Taxonomy, which categorizes cognitive skills—such as remembering, analyzing, and evaluating—helping to transform broad objectives into measurable and actionable learning outcomes.
My goal was to create a prompt that converts vague, generic learning goals into well-defined objectives based on Bloom’s Taxonomy. To achieve this, I tested different methods, and one approach stood out—iterative prompting.
By refining prompts step by step, I consistently improved the quality of AI-generated instructional content, making it clearer, more structured, and more effective. Now, let me walk you through my process—how I craft, test, and refine prompts using iterative prompting to achieve the best possible results.
Step 1: Drafting the Initial Prompt
I start with a role-based prompt, a technique where the AI is assigned a specific identity and task. Instead of giving a generic instruction, I define who the AI is and what it should do, providing context that shapes its responses. This approach makes outputs more precise, structured, and relevant to the task. A well-defined role improves AI-generated responses in several ways:
Improves Accuracy – Assigning a domain-specific role ensures more relevant answers. For example, “You are an instructional designer. Improve this learning objective using Bloom’s Taxonomy” yields structured educational goals instead of generic suggestions.
Enhances Relevance and Focus – A role keeps responses on-topic. Asking, “Explain machine translation” may result in a broad answer, but framing it as “You are a computational linguist. Explain machine translation in simple terms to a non-technical audience” ensures clarity and accessibility.
Provides Implicit Constraints – Defining a role naturally limits the scope of responses. “You are a legal expert in intellectual property” keeps the AI focused on that field without needing extra instructions.
Improves Instruction Following – When given a clear persona, AI interprets and follows instructions more reliably. A prompt like “You are an instructional designer. Convert vague learning goals into measurable objectives using Bloom’s Taxonomy” ensures a structured response aligned with best practices.
You can use this template to structure an effective prompt:
You are a prompt engineer and an expert in [topic].
Your task is to help me create a prompt for [specific task].
The prompt should follow these guidelines: [list important instructions].
It should include [key details, context, and constraints]
and produce an output in [desired format, tone, or style].
To apply this template to course design, we could write:
You are a prompt engineer and an expert in instructional design.
Your task is to help me create a prompt that converts vague learning goals into well-defined learning objectives using Bloom’s Taxonomy.
Ensure that each learning objective is clear, measurable, and action-oriented.
Use Bloom’s Taxonomy to structure objectives according to cognitive levels (e.g., Remember, Understand, Apply, Analyze, Evaluate, Create).
Ensure the output is formatted as a bullet list with each learning objective categorized by its cognitive level.
I use this prompt to generate the specific prompt I need. Here’s the result:
You are an instructional design expert skilled in developing precise learning objectives using Bloom’s Taxonomy. Your task is to take vague learning goals and convert them into clear, measurable, and action-oriented learning objectives, categorized by cognitive levels.
Instructions:
Analyze the provided vague learning goals and determine their intended outcomes.
Apply Bloom’s Taxonomy to create structured learning objectives at different cognitive levels (Remember, Understand, Apply, Analyze, Evaluate, Create).
Ensure clarity and measurability by using appropriate action verbs (e.g., "Define," "Explain," "Demonstrate," "Compare," "Justify," "Design").
Format the response as a bullet-point list, grouping objectives by cognitive level.
Example Input:
Vague Learning Goal: Students should understand the basics of project management.
Example Output:
- Remember: Define key project management terms (e.g., scope, milestones, stakeholders).
- Understand: Explain the importance of project planning and scheduling.
- Apply: Develop a simple project timeline using a Gantt chart.
- Analyze: Compare different project management methodologies (e.g., Agile vs. Waterfall).
- Evaluate: Assess the effectiveness of a project plan based on given constraints.
- Create: Design a project management plan for a small business initiative.
Now, convert the following vague learning goals into structured learning objectives using the same format:
[Insert vague learning goals here]
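If you want to run a role-based prompt like this programmatically rather than in the chat window, the role maps naturally onto the system message of a chat API. Here is a minimal sketch using the OpenAI Python SDK; the model name, the refine_objectives helper, and the condensed role text are illustrative choices, not part of the original prompt.

```python
# Minimal sketch of a role-based prompt sent through the OpenAI Python SDK.
# Model name, helper name, and the condensed role text are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROLE = (
    "You are an instructional design expert skilled in developing precise "
    "learning objectives using Bloom's Taxonomy. Convert vague learning goals "
    "into clear, measurable, action-oriented objectives grouped by cognitive level."
)

def refine_objectives(vague_goal: str) -> str:
    """Send the role as the system message and the vague goal as the user message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": ROLE},
            {"role": "user", "content": f"Vague learning goal: {vague_goal}"},
        ],
    )
    return response.choices[0].message.content

print(refine_objectives("Students should understand the basics of project management."))
```

Because the persona lives in the system message, every subsequent user turn inherits it without restating the role.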
Step 2: Using AI Self-Analysis to Improve Prompt Engineering
After running the initial prompt, I assess the AI’s response to determine whether it meets expectations or needs refinement. However, objectively evaluating AI-generated content can be challenging, especially when dealing with complex prompts. To make this process more effective, I prompt the AI to analyze its own output, using a secondary prompt designed for self-assessment.
LLMs process vast amounts of data and follow patterns, but they don’t inherently understand why a response may be weak or how to improve it. Asking the AI to critically evaluate its own output encourages a more structured review, leading to:
Improved clarity and coherence – The AI reassesses its wording and structure, refining vague or inconsistent sections.
More precise and relevant content – Self-analysis helps identify missing details or misaligned elements, ensuring the response better serves its purpose.
A more efficient and iterative workflow – Instead of manually diagnosing weaknesses, I leverage the AI’s pattern recognition to pinpoint areas for refinement faster.
To achieve this, I use a structured self-analysis prompt:
Analyze this prompt by assessing its clarity, specificity, structure, and effectiveness in generating the desired response. Identify weaknesses and strengths, and suggest targeted improvements. Then, rewrite the prompt in a more optimized version. Structure your response as follows:
1. Strengths
2. Weaknesses
3. Suggested Improvements
4. Optimized Version of the Prompt
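Mechanically, this is just a second call that feeds the first prompt back to the model for critique. A rough sketch, assuming the same OpenAI client setup as in the earlier snippet (the self_analyze helper and the condensed critique text are illustrative):

```python
# Sketch of the self-analysis step: ask the model to critique an existing prompt
# and return an optimized rewrite. Helper and template names are illustrative.
from openai import OpenAI

client = OpenAI()

CRITIQUE_TEMPLATE = """Analyze this prompt by assessing its clarity, specificity,
structure, and effectiveness in generating the desired response. Identify strengths
and weaknesses, suggest targeted improvements, then rewrite the prompt.
Structure your response as follows:
1. Strengths
2. Weaknesses
3. Suggested Improvements
4. Optimized Version of the Prompt

Prompt to analyze:
{prompt}"""

def self_analyze(prompt_to_review: str) -> str:
    """Run the structured self-assessment against a draft prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": CRITIQUE_TEMPLATE.format(prompt=prompt_to_review)}],
    )
    return response.choices[0].message.content
```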
This self-assessment method helps detect common problems, such as:
Lack of specificity – The AI may generate responses that are too broad.
Inconsistencies – Instructions may lead to varied or unpredictable outputs.
Missing key details – Important context or structure may need reinforcement.
Here’s the result of the self-assessment:
Strengths:
Clarity – The prompt provides a clear task and instructions for converting vague learning goals into well-defined objectives.
Use of Bloom’s Taxonomy – It explicitly instructs the AI to structure learning objectives using different cognitive levels.
Actionability – It emphasizes measurable objectives by suggesting action verbs from Bloom’s Taxonomy.
Well-structured output format – The use of bullet points and categorization by cognitive levels ensures an organized response.
Example provided – Demonstrating a sample transformation improves AI comprehension and alignment with expectations.
Weaknesses:
Overly broad instructions – While the prompt instructs the AI to apply Bloom’s Taxonomy, it does not explicitly guide how many objectives to generate per level or whether all levels must be covered.
Potential redundancy – The instruction to apply Bloom’s Taxonomy appears in multiple places, which could be streamlined.
Lack of constraints – There are no guidelines for subject specificity, depth, or complexity, which might lead to inconsistent outputs.
Missing refinement step – There is no directive to refine vague learning goals before converting them into structured objectives.
Suggested Improvements:
Clarify the expected number of learning objectives per cognitive level to ensure a balanced distribution.
Streamline instructions by removing repetitive mentions of Bloom’s Taxonomy.
Introduce optional constraints (e.g., subject-specificity, difficulty level, learning context).
Add a refinement step where the AI first clarifies the vague learning goal before transforming it into objectives.
Ensure flexibility by allowing the user to request only specific cognitive levels instead of always covering all six.
Optimized Version of the Prompt:
You are an instructional design expert specializing in learning objectives. Your task is to transform vague learning goals into clear, measurable, and action-oriented learning objectives using Bloom’s Taxonomy.
Instructions:
1. Refine the input by clarifying any ambiguous learning goals before transformation.
2. Convert the refined goal into structured learning objectives, categorizing them by cognitive level (Remember, Understand, Apply, Analyze, Evaluate, Create).
3. Ensure each objective is measurable by using action verbs aligned with Bloom’s Taxonomy.
4. Provide 1–2 objectives per level (unless otherwise specified).
5. Format the response as a bullet-point list, grouping objectives by cognitive level.
Customization Options:
- If applicable, adapt the objectives to a specific subject or learning level (e.g., beginner, advanced).
- If the user requests, focus only on specific Bloom’s levels instead of all six.
Example Input:
Vague Learning Goal: Students should understand the basics of project management.
Example Output:
- Remember: Define key project management terms (e.g., scope, milestones, stakeholders).
- Understand: Explain the importance of project planning and scheduling.
- Apply: Develop a simple project timeline using a Gantt chart.
- Analyze: Compare different project management methodologies (e.g., Agile vs. Waterfall).
- Evaluate: Assess the effectiveness of a project plan based on given constraints.
- Create: Design a project management plan for a small business initiative.
Now, refine and convert the following vague learning goals into structured learning objectives:
[Insert vague learning goals here]
As you can see, the LLM made minor refinements and added structured instructions. Even small wording changes can significantly impact the output—this is a language model, and words carry meaning and nuance.
When crafting prompts, expertise in language and communication is just as important as technical knowledge. While engineers and data scientists understand the technical mechanisms behind LLMs, those with a strong linguistic background often excel at designing effective prompts because they anticipate subtle shifts in meaning and interpretation.
One aspect I would reconsider is the constraint of generating one or two learning objectives per cognitive level. While it’s true that overly rigid constraints can limit variation, they can also improve clarity and consistency in structured outputs. The impact depends on the specific use case.
Step 3: Test, test, test
The best way to assess any instruction’s effect is through systematic testing. Instead of assuming a constraint will restrict creativity, it’s useful to compare multiple prompt variations and analyze the differences in output quality. Structured experimentation, rather than intuition alone, leads to the best results. Here’s a step-by-step approach to systematically evaluate prompt effectiveness.
1. Define the Testing Objectives
Before testing, it’s crucial to clarify what we’re measuring. Some key questions include:
Does adding constraints (e.g., “Generate 1-2 learning objectives per Bloom’s level”) enhance clarity or restrict creativity?
Does rewording the prompt improve the relevance and completeness of responses?
Do explicit examples improve the quality and consistency of generated outputs?
By defining the goal upfront, we can focus on measurable improvements rather than subjective impressions.
2. Create Variations of the Prompt
A robust test requires multiple prompt variations, each adjusting only one element at a time. For instance:
Baseline Prompt (No Constraints): “Convert this vague learning goal into structured learning objectives using Bloom’s Taxonomy.”
Constrained Prompt: “Generate exactly 1-2 learning objectives per Bloom’s level.”
Refined Prompt: “First refine the vague learning goal, then generate clear, well-structured objectives.”
Example-Enriched Prompt: “Use the following example as a guide: [Insert Example].”
Testing these different versions helps us identify which refinements enhance performance without introducing unnecessary limitations.
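To keep the comparison fair, it helps to pin the variations down as data so every test run uses exactly the same wording. A small illustrative sketch (the keys and phrasing are placeholders, not a fixed part of the method):

```python
# Prompt variations stored as data, so each test run reuses identical wording.
# Keys and wording are illustrative placeholders.
PROMPT_VARIATIONS = {
    "baseline": (
        "Convert this vague learning goal into structured learning objectives "
        "using Bloom's Taxonomy.\n\nGoal: {goal}"
    ),
    "constrained": (
        "Convert this vague learning goal into structured learning objectives "
        "using Bloom's Taxonomy. Generate exactly 1-2 objectives per level.\n\nGoal: {goal}"
    ),
    "refined_first": (
        "First refine this vague learning goal, then convert it into clear, "
        "well-structured objectives using Bloom's Taxonomy.\n\nGoal: {goal}"
    ),
}
```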
3. Select a Representative Sample of Inputs
To ensure results are generalizable, test prompts with diverse learning goals across different domains. Some examples:
Students should understand the basics of project management.
Learners will get familiar with marketing strategies.
Employees will learn about workplace safety.
Students should improve their critical thinking skills.
Learners will explore how AI is transforming industries.
Using a broad set of inputs helps avoid bias in the results.
4. Generate Outputs & Collect Data
Each input should be processed using all prompt variations, and the results should be systematically recorded in a structured format, such as a spreadsheet.
Key data points to evaluate:
Relevance – Do the objectives align with the learning goal?
Clarity – Are they easy to understand and well-structured?
Completeness – Do they effectively cover different cognitive levels?
Creativity – Are the objectives varied in phrasing and approach?
Conciseness – Are they overly long, too brief, or just right?
A simple scoring system (e.g., 1 to 5 scale) for each criterion allows for quantitative comparisons between different prompt versions.
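A spreadsheet works fine, but if you are already calling the API from code, the same loop can collect everything for you. A minimal sketch that reuses the PROMPT_VARIATIONS dictionary from the earlier snippet and leaves the 1-5 scores to be filled in by hand (the file name, goal list, and column names are illustrative):

```python
# Run every prompt variation against every test input and save the outputs to a CSV
# with empty score columns for manual 1-5 rating. Reuses PROMPT_VARIATIONS from above.
import csv
from openai import OpenAI

client = OpenAI()

TEST_GOALS = [
    "Students should understand the basics of project management.",
    "Learners will get familiar with marketing strategies.",
    "Employees will learn about workplace safety.",
]

CRITERIA = ["relevance", "clarity", "completeness", "creativity", "conciseness"]

with open("prompt_test_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["variation", "goal", "output", *CRITERIA])  # scores added later
    for name, template in PROMPT_VARIATIONS.items():
        for goal in TEST_GOALS:
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": template.format(goal=goal)}],
            )
            output = response.choices[0].message.content
            writer.writerow([name, goal, output] + [""] * len(CRITERIA))
```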
5. Analyze the Results
After testing, compare outputs across the different prompts and look for patterns, for example in a table with one row per prompt variation, one column per criterion (relevance, clarity, completeness, creativity, conciseness), and the average score in each cell.
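If the scores live in the CSV from the sketch above, pandas can produce that comparison table directly (column names match the illustrative CSV, so adjust them to whatever you actually record):

```python
# Average each manual 1-5 score per prompt variation to see which version wins overall.
import pandas as pd

scores = pd.read_csv("prompt_test_results.csv")
summary = scores.groupby("variation")[
    ["relevance", "clarity", "completeness", "creativity", "conciseness"]
].mean()
print(summary.round(2))
```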
This structured evaluation allows us to make data-driven refinements instead of relying on subjective preference. Once results are analyzed, adjustments can be made:
If constraints limit creativity, loosen them.
If examples improve clarity, make them a standard part of the prompt.
If objectives lack variety, modify wording to encourage more diverse outputs.
How did the prompt perform?
Curious about how the prompt we created in this article performed in generating learning objectives using Bloom’s Taxonomy? Here is the output for “Students should understand the basics of project management.”
Refined Goal:
Students will gain foundational knowledge of project management principles, methodologies, and tools, enabling them to apply best practices in planning, executing, and evaluating projects effectively.
Structured Learning Objectives:
Remember (Knowledge Recall)
Define key project management terms (e.g., scope, milestones, stakeholders, risk management).
List the five phases of the project management lifecycle (initiation, planning, execution, monitoring & controlling, closure).
Understand (Comprehension)
Explain the importance of project planning and how it impacts project success.
Describe the differences between predictive (Waterfall) and adaptive (Agile) project management approaches.
Apply (Practical Execution)
Develop a simple project timeline using a Gantt chart.
Use a Work Breakdown Structure (WBS) to organize project tasks into manageable components.
Analyze (Critical Thinking)
Compare different project management methodologies (e.g., Agile vs. Waterfall) in terms of flexibility, risk, and stakeholder involvement.
Identify common project risks and propose mitigation strategies.
Evaluate (Judgment and Assessment)
Assess the effectiveness of a project plan based on scope, time, cost, and quality constraints.
Critique real-world project case studies to determine the key factors leading to success or failure.
Create (Synthesis and Innovation)
Design a project management plan for a small business initiative, incorporating stakeholder communication and risk management strategies.
Develop a project charter that outlines objectives, scope, and key deliverables for a hypothetical project.
Would you like me to tailor these objectives for a specific audience (e.g., beginners, advanced professionals, students in a particular field)?
Conclusions
Prompt engineering is not about getting the perfect response on the first try—it’s a process of patience, trial, and refinement. AI-generated content improves dramatically when you move beyond one-shot prompting and embrace more sophisticated techniques like iterative prompting and AI self-analysis.
Once you apply these methods, you’ll unlock a new level of precision and control over AI-generated responses. Instead of passively accepting AI’s output, you’ll become an active collaborator, refining prompts to achieve optimal results. The difference is transformative—your prompts will evolve from generic queries to structured instructions that consistently generate high-quality responses.
However, the real leap happens when you integrate these techniques into building custom GPTs or leveraging advanced features like OpenAI’s Projects. At this stage, you’re not just crafting individual prompts but designing AI-powered workflows that scale and adapt to your needs.
The key takeaway? Have fun, be curious, and experiment continuously. The more you refine and test, the better you’ll understand how to guide AI effectively. With the right approach, prompt engineering becomes more than just a skill—it’s a superpower that enhances productivity, creativity, and problem-solving in countless domains.
Reference: Prompt engineering best practices for ChatGPT
How to Cut Down on Useless Meetings
Let’s face it: most of us are drowning in meetings. The kind that starts with “quick sync?” and ends with “let’s schedule a follow-up.” When calls block every hour on your calendar, you start wondering when you're supposed to get any work done.
The good news? You’re not the only one drowning in meetings—most of your colleagues are quietly enduring the same endless string of calls, often wondering why they’re there. That’s why you’ll likely face little resistance when you gently suggest an alternative to another back-to-back meeting. A shift toward more writing, a bit more structure, and the thoughtful use of AI isn’t just better for you—it’s a relief for everyone. Here’s how to start reclaiming your time and helping your team breathe a little easier.
Start with a Meeting Doc
One of the simplest and most effective strategies is using a meeting doc as a pre-call filter. Before scheduling a call, create a shared document where all participants can add discussion points, questions, and relevant context in advance. This alone changes the dynamic: instead of jumping into a meeting to figure out what needs to be discussed, you force clarity beforehand.
Even better, many of the questions get answered in the doc before the meeting ever happens. People comment, resolve doubts, and suggest alternatives asynchronously. Often, the meeting becomes shorter—or disappears entirely—because the core issues have already been resolved. And when you do meet, you come in prepared to make decisions, not just exchange status updates.
Use AI as a Writing Co-Pilot (Not a Meeting Substitute)
There’s a lot of hype about AI replacing meetings, but the more interesting (and sustainable) path is to use AI to smooth asynchronous collaboration. For example, instead of recapping a long Slack thread or digging through email chains, use AI to generate a concise summary you can share with the team. Train a GPT on your internal docs to answer recurring questions so you don’t have to explain the same thing to five different people in five different meetings.
The goal here isn’t to avoid human interaction—it’s to reduce the noise. When AI helps you communicate more clearly and efficiently, you create less need for “just to clarify” meetings.
Replace Email Chains with a Single Source of Truth
As a product manager, I constantly engage with stakeholders across different teams. For a long time, this meant long email chains, scattered conversations, and lots of “looping in” at the last minute. However, moving those conversations into Asana changed everything.
Now, every feature, request, or bug has a home. When someone has a question or needs an update, they go to the ticket. That ticket becomes a timeline, a reference point, a record of decisions. Conversations are threaded and visible, and you avoid the meeting that exists only to bring someone up to speed. This small shift—from fragmented email to centralized async updates—can save you dozens of meetings a month.
Record It Once, Use It Often
When you need to present something dynamic—like a new feature, a concept, or an idea—it often makes more sense to replace a long descriptive document or static presentation with a video. I personally use Loom for this. I’ll usually sketch a quick wireframe in Miro and then record a short video to walk through the idea. It works really well: people can absorb the information on their own time, rewatch if needed, and come to the next conversation with a clearer understanding of what’s being proposed.
Meetings Aren’t the Enemy—Disorganized Ones Are
None of this means you should cancel every meeting. Some conversations need to happen in real-time, and face-to-face (or screen-to-screen) is often the fastest way to get alignment. But when you start with documentation, use AI to reduce friction, and treat your time as a shared resource—not an always-available commodity—you start filtering out the noise.
The Most Impressive Uses of ChatGPT
Recently, I started using Reddit. Initially, I was skeptical about it, assuming it was just another chaotic social media platform filled with shallow discussions. However, after exploring various threads, I realized there is a lot of hidden value in the conversations. What struck me the most was the constant effort to keep interactions genuinely human. Unlike other platforms, where discourse often feels performative or dominated by algorithms, Reddit seems to foster organic, sometimes brutally honest discussions that reflect real-life experiences, challenges, and breakthroughs.
One such discussion that caught my attention was about the most impressive or unexpected ways people have used ChatGPT. As someone deeply interested in AI, I found it fascinating to see how people were not just using it for trivial tasks but were integrating it into their lives in meaningful, transformative ways. The stories shared ranged from personal growth journeys to professional breakthroughs, revealing a landscape of possibilities that extend far beyond simple chatbot interactions.
A striking example came from a user who had struggled with fitness for years. Through continuous conversations with ChatGPT, they found the motivation to change their lifestyle completely. What started as casual exchanges about diet and exercise evolved into a structured fitness journey, leading to weight loss, improved health, and even participation in ultra-hiking marathons. For them, AI became more than just a tool: a virtual accountability partner that adapted to their progress and kept them engaged.
Another story that stood out was about a parent navigating their child's serious medical condition. When their daughter was diagnosed with a brain tumor, they turned to ChatGPT to educate themselves about the diagnosis and possible treatments. The AI helped them structure their conversations with doctors, making it easier to process complex medical information. In a particularly validating moment, they later showed the transcript to a doctor, who confirmed that the AI’s responses were well-structured and accurate. It was a reminder of how AI, when used thoughtfully, can serve as a powerful ally in moments of crisis.
Beyond personal life, ChatGPT has also become a crucial assistant in professional and educational spaces. One teacher described how they used it to refine grading rubrics, generate personalized exercises for students, and create a more structured learning environment. Another person, with no prior coding experience, built an entire inventory management system for their family business with ChatGPT’s guidance. In both cases, AI wasn’t replacing human expertise but rather enhancing it, acting as an on-demand collaborator that empowered users to accomplish things they previously thought were out of reach.
What’s interesting about these use cases is that they all share a common theme: AI as a companion in problem-solving. Whether it’s fitness, medical research, professional development, or creative endeavors, people aren’t passively consuming AI-generated content—they’re co-creating with it. One user, for instance, described how they had ChatGPT critique their writing, not in a mechanical, grammar-checking way, but with thoughtful insights that helped them improve their style and storytelling. Another used it as a sparring partner for political debates, prompting the AI to argue against their views to refine their understanding of different perspectives.
The thread also surfaced more unconventional, yet equally fascinating, applications. Someone shared how they used ChatGPT to overcome their aversion to running by analyzing their stride and recommending technique adjustments. Another person built a text-based adventure game that became the foundation for a novel they were developing. Some even used it as a therapist, engaging in deep, reflective conversations that helped them process emotions, manage anxiety, and set goals for self-improvement.
These experiences paint a picture of AI not as a distant, futuristic force but as something deeply integrated into daily life. What started as a tool for answering questions is evolving into an adaptable, personalized assistant that people rely on in profound ways. The most compelling takeaway from these stories isn’t just that ChatGPT is useful—it’s that people are shaping it into what they need it to be. And in that sense, it’s less about the technology itself and more about human ingenuity in finding ways to make it work for them.
If you were forwarded this email or you found it on social media, you can sign up to receive an article like this every Sunday.
Thanks for reading this episode of my newsletter. I hope I’ve been helpful. If you think my sketchbook might interest someone else, I’d appreciate it if you shared it on social media and forwarded it to your friends and colleagues.
Nicola