<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Radical Curiosity]]></title><description><![CDATA[An independent publication exploring how generative AI is transforming innovation, business models, and the way we work.]]></description><link>https://www.radicalcuriosity.xyz</link><image><url>https://substackcdn.com/image/fetch/$s_!_2rx!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F563d87c6-a42f-4f06-a6e0-861d983c2292_752x752.png</url><title>Radical Curiosity</title><link>https://www.radicalcuriosity.xyz</link></image><generator>Substack</generator><lastBuildDate>Tue, 05 May 2026 07:20:10 GMT</lastBuildDate><atom:link href="https://www.radicalcuriosity.xyz/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Nicola Mattina]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[radicalcuriosity@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[radicalcuriosity@substack.com]]></itunes:email><itunes:name><![CDATA[Nicola Mattina]]></itunes:name></itunes:owner><itunes:author><![CDATA[Nicola Mattina]]></itunes:author><googleplay:owner><![CDATA[radicalcuriosity@substack.com]]></googleplay:owner><googleplay:email><![CDATA[radicalcuriosity@substack.com]]></googleplay:email><googleplay:author><![CDATA[Nicola Mattina]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI Collaboration Blueprint. 
Artificial intelligence as an organizational change project]]></title><description><![CDATA[A method for guiding companies from isolated AI experiments to measurable adoption, and its evolution into a platform that delivers consulting at software-as-a-service scale.]]></description><link>https://www.radicalcuriosity.xyz/p/ai-collaboration-blueprint-artificial</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/ai-collaboration-blueprint-artificial</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Mon, 20 Apr 2026 06:31:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!BfMn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>Last October, I published the <strong><a href="https://www.radicalcuriosity.xyz/p/the-ai-collaboration-canvas-how-to">AI Collaboration Canvas</a></strong>, a first attempt to build a tool to help people define a strategy for collaborating with artificial intelligence.</p><p>Over the past months, I&#8217;ve brought my method into training sessions and workshops across very different organizational settings. This allowed me to refine my approach for helping companies move from occasional AI experimentation to systematic, measurable, and responsible adoption. 
The canvas has thus become the starting point for something larger, which I&#8217;m now bringing into focus under the name AI Collaboration Blueprint.</p><p>Nicola &#10084;&#65039;</p><div><hr></div><h2>Artificial intelligence as an organizational change project</h2><p>Introducing artificial intelligence into a company is, first and foremost, a change management problem, which requires working in parallel on two fronts.</p><p>The first is the <strong>adoption strategy</strong>: which processes are candidates, with what level of AI integration, in what temporal order, and against what evaluation criteria. This strategy cannot be drafted at a desk. It has to be built starting from real processes, from the people who execute them, from the actual constraints of the technology stack and the regulatory framework.</p><p>The second is <strong>skill development</strong>: not so much technical skills in the strict sense, but people&#8217;s ability to work with AI within their own processes. Not everyone needs to know how to build complex prompts or automations. But those who use AI must be able to recognize when output is reliable, when it needs to be verified, when it should be rejected, and they must integrate this judgment into their workflow.</p><p>A strategy without skills produces projects that never take off. Skills without strategy produce isolated experiments that never consolidate. 
Both are needed, and they must be built together.</p><h3>The AI Collaboration Blueprint</h3><p>The AI Collaboration Blueprint is the method I developed to move an organization from the situation of &#8220;a few people are experimenting, nobody knows whether it&#8217;s working&#8221; to the situation of &#8220;we have an adoption strategy, people have the skills to execute it, we know how to measure results and manage compliance.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BfMn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BfMn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png 424w, https://substackcdn.com/image/fetch/$s_!BfMn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png 848w, https://substackcdn.com/image/fetch/$s_!BfMn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png 1272w, https://substackcdn.com/image/fetch/$s_!BfMn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!BfMn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png" width="1456" height="818" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:818,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:437032,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/194699318?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BfMn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png 424w, https://substackcdn.com/image/fetch/$s_!BfMn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png 848w, https://substackcdn.com/image/fetch/$s_!BfMn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png 1272w, https://substackcdn.com/image/fetch/$s_!BfMn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e7342a6-6806-47e3-a4d0-f21c09366859_2438x1370.png 
1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">AI Collaboration Blueprint</figcaption></figure></div><p>The path is structured in phases, each of which produces tools the organization can use. It opens with a check-up, whose first step is <strong>process mapping</strong>, carried out in two stages. In the first, an AI agent interviews people and returns structured descriptions of their individual workflows (including workarounds, exceptions, and the tacit knowledge that often doesn&#8217;t appear in operating manuals).
In the second, we move to consolidation, comparing the individual variants of the same process and surfacing gray areas, duplications, and critical junctures. This is often the phase in which pre-AI inefficiencies also emerge, and they are better addressed before introducing new technology.</p><p>In parallel, we proceed with:</p><ul><li><p><strong>Mapping the technology stack</strong> available within the company, to understand what can realistically be activated within the boundaries of the tools already in use and the policies in force;</p></li><li><p>An <strong>assessment of the skills</strong> the team has already developed in using artificial intelligence (how they use it, what they know about it, what they think they can do with it).</p></li></ul><p>We then define an <strong>AI integration strategy</strong> for each process step, choosing among four modes:</p><ul><li><p><strong>Built-in AI.</strong> The artificial intelligence features already embedded in the productivity tools the company uses every day: Copilot inside Office, Gemini inside Google Workspace, and the AI functions of CRMs and business management systems. It&#8217;s the most accessible mode because it requires no investment and no new skills: the license is often already active, and the end user only needs to learn to recognize when it&#8217;s worth invoking. It&#8217;s also the most limited, because the AI does only what the vendor has decided to let it do. Still, it&#8217;s the natural starting point for most people in a company.</p></li><li><p><strong>Collaborative AI.</strong> The structured use of a conversational assistant like ChatGPT, Claude, Gemini, or a dedicated enterprise client based on the same models. The person works with the assistant iteratively to draft and refine texts, analyze documents, do targeted research, and structure complex reasoning. This strategy requires prompting skills, context management, and critical evaluation of output.
It&#8217;s the mode that today has the greatest impact on individual productivity.</p></li><li><p><strong>Operational AI.</strong> AI is embedded in automated workflows that execute repetitive tasks: an email arrives, an agent classifies it, routes it, and possibly prepares a draft reply. Here, the person intervenes only at verification checkpoints, not during execution. This strategy often requires engineers for workflow design, integration with corporate tools, and testing, along with serious thinking about responsibility and error handling.</p></li><li><p><strong>Builder AI.</strong> AI is used as an accelerator to build, in very short timeframes, custom tools that previously wouldn&#8217;t have been economically viable. Typically, these are deterministic utilities (automatic comparisons between two Excel sheets, account reconcilers, scripts that generate reports from scattered files) developed by business people with AI assistance, a practice now known as <em>vibe coding</em>. It doesn&#8217;t replace IT in managing complex systems, but it fills the gap between &#8220;I have an Excel sheet&#8221; and &#8220;I&#8217;d need a real information system&#8221;, a gap that, in mid-complexity organizations, is much wider than people think.</p></li></ul><p>After determining the strategy, we move to <strong>implementation</strong>. A well-built pilot has four ingredients: a single business function, a single process (or subprocess), a small group of people willing to work actively on the change, and a defined time horizon of a few weeks. In this phase, we define the instructions to give the AI, the context it needs to produce output aligned with business objectives, and the tools it has access to. Often, we discover that the information isn&#8217;t available because it&#8217;s locked in silos or stored in formats like PDFs that require an intermediate step of extraction and organization.</p><p>The path closes with <strong>responsibility</strong>.
On the <strong>evaluation</strong> front, three metrics are essential: the AI&#8217;s error rate during the process; the actual impact on time, measured net of the cost of human verification; and the impact on the quality of the final output as perceived by its recipient. On the <strong>compliance</strong> front, we need periodic audits, documentation kept up to date, logs of significant uses, and a clear chain of responsibility: who approved AI use for this process, who verifies that it continues to work, and who intervenes when something goes wrong.</p><p>The output of this phase feeds back into the strategy: what we learn from monitoring helps us review initial choices, expand where it works, and pull back where it doesn&#8217;t.</p><h3>The value of the method</h3><p>The AI Collaboration Blueprint starts from processes, not tools, so it works even with a constrained technology stack. It integrates compliance from the outset, which is indispensable in regulated sectors. It makes explicit the tacit knowledge that is the real bottleneck of any AI adoption program. And it distinguishes different levels of complexity, allowing organizations to start at the most accessible level and raise the bar as skills mature and organizational choices consolidate.</p><p>The work produces communicable artifacts that also serve to report to decision makers, whether internal or at group level, on what&#8217;s being done and the results achieved.</p><div><hr></div><h2>Radical Blueprint: the method becomes a product</h2><p>There&#8217;s a problem with the work I&#8217;ve described so far: it doesn&#8217;t scale. A path like this, done traditionally, requires weeks of interviews, workshops, consolidation sessions, and debriefs.
It&#8217;s a service that makes sense for companies that can afford dedicated consulting, and it leaves out exactly the segment that would need it most: organizations of twenty employees or more that understand they need to do something with AI but have neither the budget for traditional consulting nor the critical mass to build internal skills from scratch.</p><p>For the past few months, I&#8217;ve been working on <strong>Radical Blueprint</strong>, a platform prototype that turns the method into a self-service offering. The underlying idea is to build what, put plainly, is a consulting firm made of agents: AI agents serve as the team of consultants who administer questionnaires, conduct interviews, produce individual and collective reports, and translate data into operational recommendations. The human (for now, just me) stays where humans are actually needed: in methodological supervision and in direct engagement with those who lead the company or the team.</p><p>The path to get here has been deliberately long. First came a zero-technology MVP: live and online seminars, Claude skills, and working documents shared with clients. Months were spent exploring the problem with traditional methods, understanding what people in companies really ask for, where the process gets stuck, which questions lead to useful answers, and which don&#8217;t. Only after consolidating that knowledge did I start turning it into a prototype built with Lovable.</p><p>The platform is multi-tenant: a dedicated workspace is created for each company or team.
The workspace owner can invite colleagues who begin the journey through questionnaires administered by an agent in a chat interface.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9ien!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9ien!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png 424w, https://substackcdn.com/image/fetch/$s_!9ien!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png 848w, https://substackcdn.com/image/fetch/$s_!9ien!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png 1272w, https://substackcdn.com/image/fetch/$s_!9ien!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9ien!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png" width="1456" height="1071" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1071,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:423228,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/194699318?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9ien!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png 424w, https://substackcdn.com/image/fetch/$s_!9ien!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png 848w, https://substackcdn.com/image/fetch/$s_!9ien!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png 1272w, https://substackcdn.com/image/fetch/$s_!9ien!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca26327-ddab-44f8-a293-6962071ca111_2746x2020.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Radical Blueprint - Homepage</figcaption></figure></div><p>The questionnaires are analyzed in real time by another agent, which returns an individual report designed to help the user improve their ability to collaborate with AI.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ScVY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!ScVY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png 424w, https://substackcdn.com/image/fetch/$s_!ScVY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png 848w, https://substackcdn.com/image/fetch/$s_!ScVY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png 1272w, https://substackcdn.com/image/fetch/$s_!ScVY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ScVY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png" width="1456" height="1094" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1094,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:369343,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/194699318?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ScVY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png 424w, https://substackcdn.com/image/fetch/$s_!ScVY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png 848w, https://substackcdn.com/image/fetch/$s_!ScVY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png 1272w, https://substackcdn.com/image/fetch/$s_!ScVY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7badd7e-76ed-4cd8-8739-6be971bbcc15_1788x1344.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Radical Blueprint - Individual Report</figcaption></figure></div><p>This kind of analysis would be impossible for a human, whereas an LLM can conduct it effectively at an extraordinarily low cost (a few euros) relative to the value it generates for the user.</p><p>But it doesn&#8217;t stop there: the workspace owner can also generate a collective report for the entire team using an advanced reasoning model like Gemini Pro or Anthropic Opus 4.7. This work would take days of analyst time and would cost enough to make it entirely inaccessible to the vast majority of companies. Thanks to agents, an accurate and in-depth snapshot of AI adoption levels can be produced in a few days, self-service. With the right reference framework, a reasoning model can generate a highly detailed analysis. 
This, for example, is the beginning of the report I generated during a test with my students in the Digital Entrepreneurship course at Roma3 University.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!whwv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!whwv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png 424w, https://substackcdn.com/image/fetch/$s_!whwv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png 848w, https://substackcdn.com/image/fetch/$s_!whwv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png 1272w, https://substackcdn.com/image/fetch/$s_!whwv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!whwv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png" width="1456" height="1189" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1189,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:408648,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/194699318?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!whwv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png 424w, https://substackcdn.com/image/fetch/$s_!whwv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png 848w, https://substackcdn.com/image/fetch/$s_!whwv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png 1272w, https://substackcdn.com/image/fetch/$s_!whwv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a15920f-3658-4f6a-9a64-1c4a75d27d98_2104x1718.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Radical Blueprint - Team report</figcaption></figure></div><p>The underlying logic is that every part of the method that can be executed by an agent must be executed by an agent, for two reasons. The first is access: making the service sustainable for companies that otherwise couldn&#8217;t use it. The second is consistency with the method itself: if I&#8217;m proposing to companies a model for building an AI adoption strategy within their processes, I have to start by applying it to my own processes.
A consultant who preaches AI and only works with traditional tools (webinars, classroom lectures, e-learning) is a contradiction in terms.</p><h2>Where I am now</h2><p>Radical Blueprint is still an MVP, and I&#8217;m testing it successfully with pioneer clients: so far, I&#8217;ve transferred the check-up activity onto the platform, increasing the value of the assessment for clients. The support of AI agents in delivering and analyzing questionnaires goes well beyond what a basic Typeform form can do. The rest of the blueprint is still a set of prompts and skills for Claude, of varying complexity, that I use during my training sessions.</p><p>Just a few months ago, work like this would have required a team, much longer timelines, and a lot more money. Today, a product manager with a clear idea can independently build a prototype of a complex product, such as a consulting firm without consultants. So far, the challenge has taught me a great deal, from minimizing AI agent errors to managing a vibe-coded project; meanwhile, an engineer friend and I have also developed a process for using Lovable as a prototyping environment inside a CI/CD pipeline typical of professional software development. But that will be the subject of another article.</p><p><strong>The vision taking shape, piece by piece, is that of a consulting firm with no consultants, scaling like software-as-a-service</strong>. Six months ago, it couldn&#8217;t have been done.
Today, it is possible.</p>]]></content:encoded></item><item><title><![CDATA[What AI Memory Is and How It Can Make Your Work More Effective]]></title><description><![CDATA[A practical guide to understanding how AI memory works, and how to turn it into a real advantage in your daily workflow.]]></description><link>https://www.radicalcuriosity.xyz/p/what-ai-memory-is-and-how-it-can</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/what-ai-memory-is-and-how-it-can</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Fri, 12 Dec 2025 07:31:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Peiz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>This week&#8217;s newsletter is arriving a day later than planned. My goal was to publish every Thursday, but the topic turned out to be far less straightforward than I expected, and I ended up rewriting the whole piece several times before it felt clear enough.</p><p>This article is part of the ongoing series on AI agents. Memory is one of the most critical components in building an agent, and it&#8217;s worth understanding how it actually works before diving into more advanced design patterns.</p><p>Today&#8217;s issue focuses on <strong>memory inside a conversational assistant</strong>. 
These systems provide a surprisingly rich set of features that allow us to manage memory effectively, often without dealing with any technical details ourselves.</p><p>Nicola &#10084;&#65039;</p><p></p><div><hr></div><h2>Table of Contents</h2><ul><li><p><em><strong>Understanding AI</strong></em> - What AI Memory Is and How It Can Make Your Work More Effective</p></li><li><p><em><strong>Curated Curiosity</strong></em>:</p><ul><li><p>The Future of Listening: Qualitative Research at Scale According to Anthropic</p></li></ul></li></ul><p></p><div><hr></div><p><em>Understanding AI</em></p><h2>What AI Memory Is and How It Can Make Your Work More Effective</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Peiz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Peiz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Peiz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!Peiz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!Peiz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png 
1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Peiz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6714426,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/181380586?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Peiz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Peiz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!Peiz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Peiz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914eb027-6a1c-4963-8f83-310331f48813_2816x1536.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Gemini - AI memories</figcaption></figure></div><p>Imagine you are a manager who has to welcome a young colleague who is brilliant, motivated, and has excellent learning abilities. On the first day of work, you greet him, walk him to his desk, and ask him to get started right away.
No introductory document, no onboarding meeting, no explanation of expectations. Just a generic request: &#8220;Write me a report on this month&#8217;s sales; here are the data.&#8221; And you hand him a spreadsheet full of codes and figures that might be amounts.</p><p>No new hire would be able to carry out the task without context. And I am willing to bet that as you read this description, you thought: &#8220;Well, of course. Who would treat a new colleague that way?&#8221;</p><p>Now <strong>imagine that the new hire is an artificial intelligence</strong>. How many of you have opened ChatGPT, or another conversational assistant, and started making requests? Perhaps by uploading a spreadsheet with no additional context, accompanied only by a few sparse instructions, and expecting the model to produce a flawless report, ready to hand over to your boss?</p><p>Unfortunately, as with humans, we cannot expect good results from AI unless we provide it with adequate context to begin with.</p><p>Moreover, when dealing with artificial intelligence, we must remember that it is not truly &#8220;intelligent&#8221; in the way we usually understand the term. The technology we use every day through ChatGPT, Claude, Gemini, and similar tools&#8212;namely Large Language Models (LLMs)&#8212;does not understand, interpret, or process concepts; it does not possess real-world knowledge. Knowledge, in fact, requires structure, intentionality, and the ability to attribute meaning. Qualities that, for now, belong only to human beings.</p><p>However, because language models can produce fluid, coherent texts, we tend to project onto them a competence they do not actually possess. It almost feels natural to expect them to &#8220;figure things out,&#8221; grasp nuances, and fill in the gaps in an incomplete context, as an experienced colleague would. 
But this is not the case.</p><p>The result is that <strong>many users end up disappointed, not because AI works poorly, but because they assign it tasks without providing the necessary elements</strong> to carry them out, just as the young new hire was left alone in front of an indecipherable spreadsheet.</p><p>The only way to obtain real value from these tools is to reverse our perspective: we must not expect the machine to compensate for the absence of context; we must be the ones to build it. Clear documents, precise instructions, explicit goals, relevant examples&#8212;these are the pieces of information needed to complete and guide the memory that an LLM has acquired during training.</p><h3><strong>The World&#8217;s Memory: What LLMs Have Learned During Training</strong></h3><p>Large Language Models are trained on vast corpora of documents, through which they accumulate a collective &#8220;memory of the world.&#8221; This memory, however, is not universal. On the contrary, it is essential to note that most of the sources come from WEIRD contexts (Western, Educated, Industrialized, Rich, Democratic), with a strong predominance of the Anglosphere. As a result, the representation of the world embedded in the most widely used models is inevitably skewed toward Western&#8212;particularly American&#8212;perspectives. AI does not &#8220;know&#8221; the world in a neutral sense; it knows the world as described by the cultures that produce most digital content.</p><p>This is, in essence, our starting point. <strong>Our artificial colleague resembles a young Western professional</strong>: highly educated, well-resourced, and strongly exposed to democratic values. A bright graduate straight out of an American or European campus, enthusiastic and well-prepared, yet still lacking the experience required to navigate an unfamiliar context autonomously.</p><p>Besides being non-universal, a language model&#8217;s memory cannot be updated in real time.
Every LLM has a cutoff date: the moment when training ceases and the model is released to the public. Beyond that temporal threshold, the model has no representation of the world: it has not read the news, followed regulatory developments, or absorbed the most recent cultural or technological changes. It therefore requires a dynamic memory, one that enables the model to bridge the gap between its static knowledge and the continuously evolving reality in which we operate daily.</p><p>This dynamic memory relies on Internet searches and allows AI systems to retrieve recent, up-to-date, and contextually relevant information when a request is made. It is a <strong>non-persistent memory</strong>: the model does not retain what it finds; instead, it consults online sources each time we ask it. It is the equivalent of a collaborator who briefly opens an article, an institutional website, or a database to verify a detail before continuing their work.</p><p>Conversational assistants do not automatically trigger an online search every time they receive a question. Activation depends on internal signals that differ from model to model but generally include:</p><ul><li><p>references to recent dates or temporal events (&#8220;this year,&#8221; &#8220;today,&#8221; &#8220;last week&#8221;);</p></li><li><p>requests for updated data, statistics, or indicators that change over time;</p></li><li><p>mentions of dynamic entities (prices, software versions, weather conditions, news);</p></li><li><p>prompts that explicitly require external verification.</p></li></ul><p>When no such signals are detected, the model may attempt to answer relying solely on its static memory, generating responses that are plausible but not necessarily accurate. This is one reason why, if we want reliable results, explicit activation of an online search is often necessary.</p><p>To ensure that the model draws on an updated memory of the world, it is useful to specify this clearly in the prompt.
For example:</p><ul><li><p>&#8220;Before responding, carry out an online search and verify the most authoritative sources.&#8221;</p></li><li><p>&#8220;Consult the available scientific literature and cite the relevant studies.&#8221;</p></li><li><p>&#8220;Update any data used by comparing it with publications from the past two years.&#8221;</p></li></ul><p>This explicit instruction not only reduces the risk of error but also allows us to guide the quality of the sources: we may request that the model prioritise academic documents, institutional datasets, official regulations, or professional journalism.</p><p>Naturally, relying on online search requires the same critical assessment of sources we would apply to any search engine results. The responsibility for evaluating the reliability of the retrieved information, therefore, remains with the user, who must validate the material and decide how to integrate it into their decision-making process.</p><h3>Session Memory: How LLMs Manage a Conversation</h3><p>The conversational assistant manages session memory and enables the LLM to &#8220;follow the thread&#8221; of the dialogue, recalling what has been said so far and behaving coherently. With each new message, the assistant sends the model our latest input, along with all previous exchanges, concatenated in chronological order.</p><p>This allows the model to &#8220;see&#8221; the full conversation script and respond consistently with what has already been discussed. The technique, however, has an inherent limitation: models can process only a finite amount of information at once, the so-called <em>context window</em>. When a conversation becomes long, older messages are either truncated or summarised to make room for more recent ones. This is when we get the impression that the AI is &#8220;forgetting&#8221; something. 
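The mechanics are easy to picture in code. Below is a toy sketch (my own illustration, not any vendor&#8217;s actual implementation; the word-based budget and the 50-word limit are invented for brevity) of a session that resends the history with every request but silently drops the oldest turns once the window is full:

```python
# Toy model of a context window: the assistant resends the conversation
# history with every request, but only as much as fits in a fixed budget.
# The budget is counted in words purely for illustration; real models
# count tokens, and the limits are far larger.

CONTEXT_WINDOW = 50  # invented budget, in words

def build_prompt(history, new_message):
    """Return only the turns that fit, always keeping the newest ones."""
    messages = history + [new_message]
    kept, used = [], 0
    # Walk backwards from the newest turn, keeping turns while they fit.
    for msg in reversed(messages):
        cost = len(msg.split())
        if used + cost > CONTEXT_WINDOW:
            break  # everything older than this is silently "forgotten"
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [f"turn {i}: " + "detail " * 10 for i in range(8)]
prompt = build_prompt(history, "latest question")
# Only the most recent turns survive; the earliest ones never reach the model.
```

Once the history outgrows the budget, `build_prompt` returns only the tail of the conversation, and that is exactly the moment the assistant appears to lose track of earlier details.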
In reality, those pieces of information are no longer present in the prompt, and the model cannot take them into account.</p><p>We can influence the content of session memory by providing more or fewer details in our messages, or by attaching documents and images. Yet precisely because this form of memory is transitory and entirely dependent on the prompt, the responsibility for keeping it coherent and informative lies entirely with the user. In other words, we must consciously decide what should remain within the system&#8217;s operational memory and what can be omitted.</p><p>It follows that, to obtain practical answers, it is often necessary to repeat or summarise key points, to structure information carefully, and to adopt an incremental and orderly form of communication. Each relevant element should be considered not merely as data to transmit, but as a component to be positioned within a conversation that, despite appearing continuous, is always reconstructed one prompt at a time, within the narrow&#8212;yet manageable&#8212;boundaries of the context window.</p><h3>Personal Memory: What We Want AI to Remember About Us in the Long Term</h3><p>Unlike session memory, personal memory is designed to persist over time. It is likewise managed directly by the conversational assistant and consists of a stable collection of information extracted from past interactions, stored and associated with the user&#8217;s account. 
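Conceptually, such a store behaves like a small profile saved to disk between sessions. The sketch below is only an analogy (the file name and fields are invented; actual assistants manage this storage server-side):

```python
import json
from pathlib import Path

PROFILE_PATH = Path("user_profile.json")  # hypothetical storage location

def recall():
    """Load everything remembered so far; empty on first use."""
    if PROFILE_PATH.exists():
        return json.loads(PROFILE_PATH.read_text())
    return {}

def remember(key, value):
    """Persist one fact about the user so it survives across sessions."""
    profile = recall()
    profile[key] = value
    PROFILE_PATH.write_text(json.dumps(profile))

remember("style", "concise answers")
remember("language", "formal Italian")
preferences = recall()  # available again in any later session
```

Unlike session memory, nothing here depends on the current conversation: the profile outlives it, which is precisely why it must also be reviewable and erasable.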
These preferences allow the AI to progressively adapt to your way of expressing yourself, recall your recurring interests, and maintain your preferred formats.</p><p>This memory can include a variety of elements, such as:</p><ul><li><p>stylistic or linguistic preferences (&#8220;I prefer concise answers,&#8221; &#8220;write in formal Italian&#8221;),</p></li><li><p>basic demographic or professional details (role, sector, area of activity),</p></li><li><p>references to recurring projects, clients, or tools you use,</p></li><li><p>access to the history of past conversations (when enabled).</p></li></ul><p>In ChatGPT, this memory can be managed in the <em>Personalization</em> menu, where users may also enable access to all previous conversations.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hbDr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5409876-542d-4231-91c7-327f63b3bd13_1600x1291.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hbDr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5409876-542d-4231-91c7-327f63b3bd13_1600x1291.png 424w, https://substackcdn.com/image/fetch/$s_!hbDr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5409876-542d-4231-91c7-327f63b3bd13_1600x1291.png 848w, https://substackcdn.com/image/fetch/$s_!hbDr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5409876-542d-4231-91c7-327f63b3bd13_1600x1291.png 1272w, 
https://substackcdn.com/image/fetch/$s_!hbDr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5409876-542d-4231-91c7-327f63b3bd13_1600x1291.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hbDr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5409876-542d-4231-91c7-327f63b3bd13_1600x1291.png" width="1456" height="1175" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c5409876-542d-4231-91c7-327f63b3bd13_1600x1291.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1175,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hbDr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5409876-542d-4231-91c7-327f63b3bd13_1600x1291.png 424w, https://substackcdn.com/image/fetch/$s_!hbDr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5409876-542d-4231-91c7-327f63b3bd13_1600x1291.png 848w, https://substackcdn.com/image/fetch/$s_!hbDr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5409876-542d-4231-91c7-327f63b3bd13_1600x1291.png 1272w, 
https://substackcdn.com/image/fetch/$s_!hbDr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5409876-542d-4231-91c7-327f63b3bd13_1600x1291.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">ChatGPT - Personalization</figcaption></figure></div><p>It is essential to underline that this memory is customizable and revocable.
At any time, the user may:</p><ul><li><p>review what has been stored,</p></li><li><p>update or correct inaccurate information,</p></li><li><p>temporarily disable the personal memory function,</p></li><li><p>delete it entirely, thereby resetting the history.</p></li></ul><p>This form of memory can make the interaction with the assistant more fluid, familiar, and efficient. However, for it to work effectively, it requires some maintenance. ChatGPT, for instance, autonomously decides what to remember, and the information it selects is not always relevant or valuable in the long term. Moreover, the available space is limited: memory tends to fill up quickly, and if you want the system to continue learning new aspects of you, you need to clean or reorganize previously stored data periodically.</p><p>Alongside permanently saved information, some assistants also allow searching through past conversations and reusing relevant excerpts from earlier exchanges to answer new prompts. This function is available only when memory is enabled, and the user has authorized access to the full chat history. Retrieval may occur automatically or upon explicit request (&#8220;check if we have already discussed this,&#8221; &#8220;resume the conversation about project X&#8221;).</p><p>As for my own experience, I have found this process overly cumbersome and have preferred to disable both memory and access to the full conversation history.</p><h3>Accessing the Memory Contained in the Tools We Use Every Day</h3><p>One of the most significant breakthroughs in the practical use of artificial intelligence is the ability to integrate it with the tools you already use daily: email, calendars, cloud storage, notes, and task managers. 
When this happens, the conversational assistant becomes an intelligent extension of your working environment.</p><p>This form of external personal memory is activated through <em>connectors</em>: modules that authorize the AI to access data stored on third-party platforms such as Google Drive, Microsoft 365, Notion, Dropbox, Slack, Trello, and many others. Once connected, these tools can be queried and navigated directly in natural language.</p><p>Here are some concrete examples of what becomes possible:</p><ul><li><p>&#8220;Check my calendar and tell me when I have a free hour for a call this week.&#8221;</p></li><li><p>&#8220;Search my Drive for the presentation I used for the September pitch.&#8221;</p></li><li><p>&#8220;Summarize today&#8217;s emails that include attachments.&#8221;</p></li><li><p>&#8220;Find the notes for project X and compare them with last week&#8217;s.&#8221;</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8-4G!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98fb4501-588e-4b6c-a5bc-e04520bd9221_1600x1012.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8-4G!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98fb4501-588e-4b6c-a5bc-e04520bd9221_1600x1012.png 424w, https://substackcdn.com/image/fetch/$s_!8-4G!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98fb4501-588e-4b6c-a5bc-e04520bd9221_1600x1012.png 848w, 
https://substackcdn.com/image/fetch/$s_!8-4G!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98fb4501-588e-4b6c-a5bc-e04520bd9221_1600x1012.png 1272w, https://substackcdn.com/image/fetch/$s_!8-4G!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98fb4501-588e-4b6c-a5bc-e04520bd9221_1600x1012.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8-4G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98fb4501-588e-4b6c-a5bc-e04520bd9221_1600x1012.png" width="1456" height="921" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/98fb4501-588e-4b6c-a5bc-e04520bd9221_1600x1012.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:921,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8-4G!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98fb4501-588e-4b6c-a5bc-e04520bd9221_1600x1012.png 424w, https://substackcdn.com/image/fetch/$s_!8-4G!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98fb4501-588e-4b6c-a5bc-e04520bd9221_1600x1012.png 848w, 
https://substackcdn.com/image/fetch/$s_!8-4G!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98fb4501-588e-4b6c-a5bc-e04520bd9221_1600x1012.png 1272w, https://substackcdn.com/image/fetch/$s_!8-4G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98fb4501-588e-4b6c-a5bc-e04520bd9221_1600x1012.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">ChatGPT can access my Google Calendar</figcaption></figure></div><p>In all these cases, the AI does not store the
contents directly; instead, it accesses them in real time, indexing and interpreting them as needed. Naturally, for this type of memory to function effectively, it requires careful configuration of permissions and accessible sources, as well as high-quality information stored across your personal tools.</p><h3>Project Memory: Integrating AI into the Team&#8217;s Workflow</h3><p>When working on a structured initiative, it is no longer sufficient for artificial intelligence to know our personal preferences or access our individual documents. Something more robust is needed: a <em>project memory</em> capable of gathering, organizing, and maintaining, over time, everything relevant to a specific purpose.</p><p>To address this need, tools such as ChatGPT and Claude have introduced <strong>projects</strong>: dedicated containers for managing specialized memories, designed to support complex activities. Similar mechanisms exist in Custom GPTs or in Anthropic&#8217;s Skills.</p><p>In all these cases, the user can:</p><ul><li><p>upload documents containing reference materials;</p></li><li><p>define rules and instructions for how the AI should behave within the project&#8217;s context;</p></li><li><p>update the context dynamically, modifying or replacing elements of memory over time.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZF5z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07ca38b-64b5-4369-a169-c644bf469587_1600x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZF5z!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07ca38b-64b5-4369-a169-c644bf469587_1600x900.png 424w, 
https://substackcdn.com/image/fetch/$s_!ZF5z!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07ca38b-64b5-4369-a169-c644bf469587_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!ZF5z!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07ca38b-64b5-4369-a169-c644bf469587_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!ZF5z!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07ca38b-64b5-4369-a169-c644bf469587_1600x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZF5z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07ca38b-64b5-4369-a169-c644bf469587_1600x900.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e07ca38b-64b5-4369-a169-c644bf469587_1600x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZF5z!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07ca38b-64b5-4369-a169-c644bf469587_1600x900.png 424w, 
https://substackcdn.com/image/fetch/$s_!ZF5z!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07ca38b-64b5-4369-a169-c644bf469587_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!ZF5z!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07ca38b-64b5-4369-a169-c644bf469587_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!ZF5z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07ca38b-64b5-4369-a169-c644bf469587_1600x900.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Claude - Projects</figcaption></figure></div><p>For example, in an editorial project, the AI can access outlines, previous briefs, published material, and stylistic references; in a marketing campaign, it can consult historical performance data, demographic targets, and archives of past campaigns&#8212;both from the brand in question and from its competitors.</p><p>One of the main advantages of project workspaces is the ability to share them with other team members, provided a business account with collaborative features is used. In this way, memories become shared resources: accessible and updatable by multiple people working toward the same objective.</p><h3>Organizational Memory: Accessing Company Databases Through Connectors</h3><p>Up to this point, we have discussed features designed for individual use of AI: personal preferences, conversation history, and cloud storage, all of which remain closely tied to a single user. Only projects within enterprise versions can be shared with colleagues.</p><p>Yet most of an organization&#8217;s information resides in its databases: the CRM, the ERP, and the systems that track sales, contracts, suppliers, subscriptions, and operational metrics. These archives constitute the company&#8217;s <em>memory</em>. And if we want AI to become a genuine work tool, we must give it access to this memory.</p><p>There are two ways to achieve this. The first is the most rudimentary: exporting the necessary data each time and manually providing it to the AI as Excel or CSV files, either within a session or within a project. This works, but it is cumbersome and hardly scalable.</p><p>The second method involves using a dedicated connector that allows the AI to query company systems directly.
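As a minimal sketch of this second approach, a connector can expose the database schema to the model, let the model generate a query from the natural-language question, and execute the result. The model call below is stubbed with a canned answer, and every table, column, and function name is hypothetical:

```python
import sqlite3

# Sketch of a database connector: the model receives the schema and a
# natural-language question, and returns SQL to run. The LLM call is
# stubbed with a canned answer; all names here are hypothetical.
SCHEMA = "CREATE TABLE orders (customer TEXT, amount REAL, placed_on TEXT)"

def translate_to_sql(question: str) -> str:
    # Stand-in for the model call: a real connector would send SCHEMA
    # plus the question to an LLM and receive the SQL back.
    canned = {
        "How many customers spent more than 500 in the past six months?":
            "SELECT COUNT(DISTINCT customer) FROM orders "
            "WHERE amount > 500 AND placed_on >= date('now', '-6 months')"
    }
    return canned[question]

def ask(db: sqlite3.Connection, question: str):
    sql = translate_to_sql(question)  # generated SQL should be validated before use
    return db.execute(sql).fetchall()

db = sqlite3.connect(":memory:")
db.execute(SCHEMA)
db.executemany("INSERT INTO orders VALUES (?, ?, date('now'))",
               [("acme", 900.0), ("bob", 100.0), ("acme", 700.0)])
rows = ask(db, "How many customers spent more than 500 in the past six months?")
print(rows)  # [(1,)] — only one distinct customer exceeds the threshold
```

The essential point the sketch illustrates is that the query logic lives in the connector layer, not in the user's head: the user asks in natural language, and the schema knowledge needed to form correct SQL travels with the connector.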
This approach is far more effective because it enables the model to translate a request expressed in natural language into a structured query. Questions such as &#8220;How many customers spent more than a certain amount in the past six months?&#8221; presuppose a query logic that depends entirely on how the underlying database is designed. A connector can embed this knowledge and automatically convert the user&#8217;s request into an appropriate SQL query.</p><p>If your organization relies on proprietary solutions, the development team will need to build the necessary connectors. Conversely, if you use widely adopted SaaS platforms, the conversational assistant may already provide ready-to-use integrations.</p><p>Alternatively, you can rely on an integration platform such as Zapier or Workato, which acts as an intermediary layer and offers preconfigured connectors for thousands of online services.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8-SL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f13d7a2-fa16-4426-a6d8-6820a5c764a6_1592x1860.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8-SL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f13d7a2-fa16-4426-a6d8-6820a5c764a6_1592x1860.png 424w, https://substackcdn.com/image/fetch/$s_!8-SL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f13d7a2-fa16-4426-a6d8-6820a5c764a6_1592x1860.png 848w,
https://substackcdn.com/image/fetch/$s_!8-SL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f13d7a2-fa16-4426-a6d8-6820a5c764a6_1592x1860.png 1272w, https://substackcdn.com/image/fetch/$s_!8-SL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f13d7a2-fa16-4426-a6d8-6820a5c764a6_1592x1860.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8-SL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f13d7a2-fa16-4426-a6d8-6820a5c764a6_1592x1860.png" width="1456" height="1701" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f13d7a2-fa16-4426-a6d8-6820a5c764a6_1592x1860.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1701,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8-SL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f13d7a2-fa16-4426-a6d8-6820a5c764a6_1592x1860.png 424w, https://substackcdn.com/image/fetch/$s_!8-SL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f13d7a2-fa16-4426-a6d8-6820a5c764a6_1592x1860.png 848w, 
https://substackcdn.com/image/fetch/$s_!8-SL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f13d7a2-fa16-4426-a6d8-6820a5c764a6_1592x1860.png 1272w, https://substackcdn.com/image/fetch/$s_!8-SL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f13d7a2-fa16-4426-a6d8-6820a5c764a6_1592x1860.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Claude - Connectors</figcaption></figure></div><h3>A Memory for Every Purpose</h3><p>Understanding the different
forms of memory that can be activated in a conversational assistant is essential to turning it into a genuine digital collaborator. Most of the memory types described do not require advanced technical skills: it is enough to be aware of them, understand how they work, and develop a strategy to feed and use them effectively. Even at this basic level, it is possible to achieve significantly better results than those obtained through the generic and uninformed use that still characterizes most interactions with AI.</p><p>The next step, however, requires specialists skilled in technologies such as Retrieval-Augmented Generation (RAG), knowledge graphs, and ontologies. These tools enable us to go beyond simple access to data and instead build real knowledge architectures, systems capable not only of storing information but also of organizing, interconnecting, and putting it at the service of more advanced decision-making processes.</p><p>I will discuss these topics in an upcoming issue of&nbsp;<em>Radical Curiosity</em>, dedicated&nbsp;to how information can be transformed into knowledge.</p><p></p><div><hr></div><p><em>Curated Curiosity</em></p><h3>The Future of Listening: Qualitative Research at Scale According to Anthropic</h3><p>Anthropic has been experimenting with a new idea: what if AI could dramatically expand an organization&#8217;s ability to listen?<br>Their tool, <strong>Anthropic Interviewer</strong>, automates the entire qualitative research process, generating questions, conducting interviews, and analyzing transcripts. 
In just a few days, it handled <strong>1,250 interviews</strong> about how professionals use (and emotionally relate to) AI in their daily work.</p><p>What makes this interesting is not only the technology, but the <em>method</em>: a scalable way to capture doubts, aspirations, and shifting professional identities, insights that rarely surface in dashboards or surveys.</p><p>A few patterns emerged clearly:</p><ul><li><p><strong>People say they use AI as support</strong>, not as full delegation. Yet usage data shows automation and assistance are almost balanced. There&#8217;s a gap between perception and reality.</p></li><li><p><strong>Creatives use AI but hide it</strong>, fearing it diminishes the perceived value of their work.</p></li><li><p><strong>Scientists are rigorous</strong>, adopting AI for preparatory tasks but excluding it from tasks that require critical judgment.</p></li><li><p>Across all groups, <strong>identity defines delegation</strong>: we automate what feels peripheral, and defend what feels core to who we are professionally.</p></li></ul><p>The broader point is powerful: AI can help organizations <em>listen at scale</em>, revealing tensions and expectations that traditional research methods often overlook. This matters for companies designing policies, startups validating products, and public institutions shaping informed regulation.</p><p>I tested the tool myself. The interview flowed naturally, deepened at the right moments, and even surfaced themes I hadn&#8217;t anticipated.
It felt like speaking with a genuinely skilled qualitative researcher &#8212; just infinitely faster.</p><p>If innovation must start with real listening, tools like this won&#8217;t replace human research, but they will expand it in ways that were previously impossible.</p><p><em><strong><a href="https://www.anthropic.com/research/anthropic-interviewer?_bhlid=82048650ab237eb7ff1141d7316fb29984599651">Introducing Anthropic Interviewer: What 1,250 professionals told us about working with AI</a></strong></em></p>]]></content:encoded></item><item><title><![CDATA[Practical Guide to AI Agents: How They Work and How to Integrate Them into Daily Work]]></title><description><![CDATA[A new series to help teams understand how AI agents work&#8212;and how to design them as reliable collaborators inside real-world workflows.]]></description><link>https://www.radicalcuriosity.xyz/p/practical-guide-to-ai-agents-how</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/practical-guide-to-ai-agents-how</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Thu, 04 Dec 2025 07:30:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!EJDx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>The article I published last week was unexpectedly well received&#8212;it even got picked up by TLDR AI. As a result, the newsletter saw a 10% spike in subscribers. So, to all the new readers: welcome! I hope I&#8217;ll live up to your expectations.</p><p>This week, I&#8217;m kicking off a new series dedicated to <strong>AI agents</strong>&#8212;how to design them properly and how to integrate them into your team as actual coworkers. Today&#8217;s piece is a broad introduction. 
In the coming weeks, I&#8217;ll dive into more hands-on tutorials to explore design patterns, workflows, and orchestration strategies.</p><p>Nicola &#10084;&#65039;</p><p></p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Understanding AI</strong></em> - Practical Guide to AI Agents: How They Work and How to Integrate Them into Daily Work</p></li><li><p><em><strong>Curated curiosity</strong></em>:</p><ul><li><p>The Future of Language Tech is Platformized, Not Tool-Based</p></li></ul></li></ul><p></p><div><hr></div><p><em>Understanding AI</em></p><h2><strong>Practical Guide to AI Agents: How They Work and How to Integrate Them into Daily Work</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EJDx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EJDx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png 424w, https://substackcdn.com/image/fetch/$s_!EJDx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png 848w, https://substackcdn.com/image/fetch/$s_!EJDx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!EJDx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EJDx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png" width="1456" height="803" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:803,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6152136,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/180657646?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EJDx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png 424w, https://substackcdn.com/image/fetch/$s_!EJDx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!EJDx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!EJDx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9625775-dea7-41ec-b9e4-43c1c5538d89_2784x1536.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>In the past two years, the term &#8220;AI agent&#8221; has become increasingly widespread, yet its usage is often vague&#8212;if not
outright misleading. Agents are frequently described as autonomous and intelligent entities, but in practice, it is essential to adopt a clearer, more concrete model.</p><p>This guide aims to provide a concise yet structured understanding of what an AI agent truly is and how it can be used to build operational solutions. It begins with a crucial distinction:</p><ul><li><p>Every agent is composed of <strong>three core elements</strong> (prompt, context, and tools), which define what the agent is technically capable of doing.</p></li><li><p>An agent&#8212;regardless of its level of complexity&#8212;can operate in <strong>three distinct modes</strong> (as an assistant within a client, as a step in an automated process, or within systems of coordinated autonomous agents).</p></li></ul><p>One must first grasp the agent&#8217;s capabilities before exploring how it can be deployed within workflows, applications, or tools already in use.</p><h3><strong>1. The Core Elements of an AI Agent</strong></h3><p>Let us imagine welcoming an intern into our team. No one would expect them to deliver results on day one without first explaining what they are supposed to do, providing the right tools, or creating the basic conditions for them to work. Typically, we ensure they have at least some basic skills, orient them towards the objectives, provide them with a computer, share documents, and explain how internal processes work.</p><p>The same principle applies to AI agents. Although the underlying model may possess &#8220;general&#8221; knowledge of the world, it is entirely unfamiliar with our specific context: it does not know our organization&#8217;s data, it has no access to the software we use daily, and it is unaware of our operational goals. 
If we begin to think of agents as interns to be integrated into a team, designing ways to make them effective becomes significantly more straightforward.</p><p>Every agent, even the most basic one, is a design-driven combination of three fundamental components: the prompt (which defines what it should do), the context (which provides supporting information), and the tools (which enable it to take action). Let us examine each of them in turn.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!k-yj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!k-yj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png 424w, https://substackcdn.com/image/fetch/$s_!k-yj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png 848w, https://substackcdn.com/image/fetch/$s_!k-yj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png 1272w, https://substackcdn.com/image/fetch/$s_!k-yj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!k-yj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png" width="1456" height="769" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:769,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4362147,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/180657646?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!k-yj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png 424w, https://substackcdn.com/image/fetch/$s_!k-yj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png 848w, https://substackcdn.com/image/fetch/$s_!k-yj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png 1272w, https://substackcdn.com/image/fetch/$s_!k-yj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a77ae11-90b8-465c-9adc-b7ebda2833f4_2848x1504.png 
1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Gemini - The Core Elements of an AI Agent</figcaption></figure></div><h4><strong>1.1. The Prompt and Reasoning</strong></h4><p>The prompt is the set of instructions and constraints that defines what the agent is expected to do. It can be straightforward&#8212;for example: &#8220;determine the sentiment of the message&#8221;&#8212;but in most cases, such a minimal approach yields generic or inconsistent results.
It is not enough to ask; one must clearly explain the desired outcome.</p><p>To understand this, let us return to the intern analogy: no one would expect them, on their first day, to write a report without knowing the intended audience, which data to use, what tone to adopt, or in what format to deliver it. The same principle applies to an AI agent: an effective prompt is a well-crafted assignment that clarifies roles, objectives, style, output structure, and constraints.</p><p>To design high-quality prompts, the <strong><a href="https://www.radicalcuriosity.xyz/p/designing-better-prompts-a-practical">Prompt Canvas</a></strong> can be helpful. This framework supports the breakdown and formalization of all the elements contributing to the agent&#8217;s behavior. The canvas includes, among other aspects:</p><ul><li><p>the role the agent should assume (&#8220;you are a legal advisor,&#8221; &#8220;you are a data analyst&#8221;);</p></li><li><p>the step-by-step task description;</p></li><li><p>the informational context;</p></li><li><p>the intended goals;</p></li><li><p>the tone and style of the response;</p></li><li><p>the output format (a table, a summary, a bullet-point list, etc.);</p></li><li><p>any constraints (avoid technical jargon, do not make unsupported assumptions, etc.).</p></li></ul><p>This approach offers at least two key advantages. On the one hand, it transforms prompting from a trial-and-error activity into a structured, replicable, and modular design practice. On the other hand, it enhances the quality and consistency of the outputs: a well-constructed prompt, supported by the canvas, generates responses that are more coherent, relevant, and aligned with expectations.</p><p>Nonetheless, it is essential to remember that the agent&#8217;s ability to correctly interpret instructions and produce coherent outputs does not depend solely on the quality of the prompt, but also, crucially, on the underlying language model (LLM) that the agent relies on.
Not all models possess the same capabilities: some are designed for simple tasks and brief responses, while others can perform complex reasoning, formulate hypotheses, evaluate alternatives, and construct a genuine response strategy.</p><p>Smaller models are also significantly more cost-effective: in contexts involving repetitive, low-variability, or low-criticality tasks (such as message classification, summarizing known content, or generating standard emails), opting for a more economical model may be entirely appropriate. Conversely, when complex reasoning or highly contextualized and personalized responses are required, more advanced models are necessary&#8212;along with the computational costs they entail.</p><h4><strong>1.2. Context: Memory, Documents, and External Sources</strong></h4><p>The second fundamental element of an agent is the availability of an informational context&#8212;that is, the set of data and knowledge the agent can access beyond the initial prompt. This context can take various forms, depending on the environment in which the agent operates and the technologies available. The most common include:</p><ul><li><p>conversation memory, which allows for coherence over an extended exchange;</p></li><li><p>persistent memory, which retains preferences, historical data, or specific elements related to the user or project;</p></li><li><p>documents and images provided manually;</p></li><li><p>access to traditional databases;</p></li><li><p>semantic knowledge bases, used in RAG (Retrieval-Augmented Generation) systems, enabling the model to search and retrieve relevant information from a document corpus;</p></li><li><p>ontologies, i.e., structured knowledge models that formally describe concepts and relationships within a specific domain.</p></li></ul><p>The availability of these context sources depends heavily on the operational mode in which the agent is deployed (which we will explore later). 
If the agent is used in a conversational assistant&#8212;such as ChatGPT, Claude, or Gemini&#8212;the environment natively provides conversation memory, the ability to upload documents, and often, persistent user memory.</p><p>If, on the other hand, the agent is embedded within an automated workflow&#8212;through tools such as Zapier, Make, or similar platforms&#8212;the situation changes significantly. In such cases, one cannot rely on the memory features offered by conversational platforms; context must be managed explicitly. It is up to the automation designer to determine which data to supply to the agent and to construct the appropriate informational environment so the agent can perform the assigned task correctly.</p><p>When agents are used in more complex contexts&#8212;perhaps across different teams or for diverse functions&#8212;it may be advantageous to design shared memories accessible to multiple agents. These structures, however, are complex to develop and maintain: they require advanced competencies, not only technical but also organizational.</p><p>In short, the more an agent needs to know, the more critical the context becomes. And the larger the context, the more strategic its design becomes.</p><h4><strong>1.3. The Use of External Tools</strong></h4><p>The third fundamental element is the agent&#8217;s ability to take action, going beyond mere text generation. Today, agents can be equipped with tools that enable them to interact directly with other software and services.</p><p>Platforms such as ChatGPT or Claude have introduced libraries of connectors that provide access to major cloud environments, email systems, and widely used productivity tools&#8212;such as Slack, Asana, Notion, and others. 
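</p><p>Mechanically, most connector-based tool use reduces to the model emitting a structured call that the platform then executes. The sketch below assumes a JSON shape and function names invented for illustration; it is not the actual wire format of ChatGPT, Claude, or any automation platform:</p>

```python
import json

# Illustrative "tools"; real deployments would call actual services
# (email, CRM, calendar) through connectors or APIs.
def send_email(to: str, subject: str, body: str) -> str:
    return f"email to {to} queued"

def update_record(record_id: str, fields: dict) -> str:
    return f"record {record_id} updated"

TOOLS = {"send_email": send_email, "update_record": update_record}

def dispatch(model_output: str) -> str:
    # The {"tool": ..., "args": ...} shape is an assumption of this
    # sketch; each platform defines its own tool-calling format.
    call = json.loads(model_output)
    return TOOLS[call["tool"]](**call["args"])

result = dispatch(
    '{"tool": "send_email",'
    ' "args": {"to": "a@b.c", "subject": "Re: order", "body": "On its way."}}'
)
```

<p>A registry of this kind keeps the set of actions the agent may take explicit and auditable.</p><p>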
This allows an agent to perform concrete operations such as:</p><ul><li><p>sending emails;</p></li><li><p>generating and storing documents;</p></li><li><p>updating records in a database;</p></li><li><p>extracting, transforming, or analyzing data;</p></li><li><p>managing calendars;</p></li><li><p>and so forth.</p></li></ul><p>If the goal is to extend the use of agents beyond personal productivity use cases&#8212;for instance, to automate business processes or integrate heterogeneous systems&#8212;it becomes necessary to leverage workflow management tools such as Zapier, Make, or n8n. These platforms offer thousands of ready-made integrations and enable the orchestration of agents within complex operational flows, combining triggers, transformations, and actions across different systems.</p><p>In this context, the agent becomes an active node within a broader system, capable of having a tangible impact on operational activities.</p><h3><strong>2. The Operational Modes of Agents</strong></h3><p>Once the fundamental components that define what an agent can do have been clarified, the next step is to understand how these capabilities are actually employed within real-world systems. In other words, it is not enough to know that an agent can interpret instructions, access information, and use tools; one must also determine when and where these capabilities are activated.</p><p>Operational modes describe precisely this: the technical and functional context in which the agent is embedded. 
Three progressive levels can be identified, each with an increasing degree of autonomy and integration.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YgUE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YgUE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png 424w, https://substackcdn.com/image/fetch/$s_!YgUE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png 848w, https://substackcdn.com/image/fetch/$s_!YgUE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png 1272w, https://substackcdn.com/image/fetch/$s_!YgUE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YgUE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png" width="1456" height="769" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:769,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4504960,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/180657646?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YgUE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png 424w, https://substackcdn.com/image/fetch/$s_!YgUE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png 848w, https://substackcdn.com/image/fetch/$s_!YgUE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png 1272w, https://substackcdn.com/image/fetch/$s_!YgUE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4bbdfed-f109-4c9d-bcdd-184ecbe1226c_2848x1504.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h4><strong>2.1. The Agent Within a Conversational Assistant</strong></h4><p>This is the most immediate and accessible mode through which the majority of users first encounter artificial intelligence. The agent is employed within a conversational assistant&#8212;such as ChatGPT, Claude, or Gemini&#8212;and interacts directly with the user through text.</p><p>Yet behind this apparent simplicity lies a relatively broad range of operational possibilities worth exploring in detail.</p><ol><li><p><strong>Single, manually initiated conversation.</strong> The user opens a new chat, writes a prompt, or pastes a preconfigured setup. They may stop there or choose to add documents, images, or links to provide additional context. 
Advanced interaction modes&#8212;such as deep search&#8212;or connectors to external services (cloud storage, email, productivity tools) may also be activated. In this case, the platform automatically manages the conversation memory, user preferences, and any information provided throughout the exchange. The agent responds in real time, dynamically adapting to the flow of the interaction.</p></li><li><p><strong>Persistent project within the platform.</strong> Some environments&#8212;such as ChatGPT and Claude, through their Projects feature&#8212;allow conversations to be organized into structured projects. Within this context, it becomes possible to define permanent project-level instructions, upload a small, persistent document for all conversations within the project, and maintain a coherent history across multiple sessions. This approach proves helpful when working on recurring topics, with stable requirements and well-defined reference materials.</p></li><li><p><strong>Customized agents.</strong> An additional level of configuration is offered by the ability to create personalized agents within the platforms: Custom GPTs in ChatGPT, Skills in Claude, and Gems in Gemini. In these cases, users can more precisely define the agent&#8217;s three core components and create specialized assistants for recurring tasks or more complex business needs.</p></li></ol><p>In all these modes, the agent is essentially employed at an individual level. It remains confined to the conversational assistant environment and is not integrated into a systemic or automated process.</p><h4><strong>2.2. The Agent Within an Automated Workflow</strong></h4><p>When an agent is not used directly by a user but is embedded in an automated process triggered by an event, we enter a mode typical of automation tools such as Zapier, Make, n8n, or similar platforms. 
In this context, the agent becomes a step within a workflow&#8212;that is, a chain of actions activated automatically upon the occurrence of a trigger. In this setup, the agent:</p><ul><li><p>does not engage in dialogue with the user;</p></li><li><p>does not retain conversation memory;</p></li><li><p>receives structured input (text, data, parameters);</p></li><li><p>performs a transformation (analysis, extraction, generation);</p></li><li><p>produces an output used in the next step.</p></li></ul><p>Here is a concrete example:</p><ul><li><p><strong>Trigger:</strong> a new customer email is received.</p></li><li><p><strong>Agent:</strong> analyzes the text, determines the intent, and drafts a reply.</p></li><li><p><strong>Output:</strong> the response is sent via Gmail, and the original message is archived in the CRM.</p></li></ul><p>In this case, the agent is not visible to the end user but operates &#8220;behind the scenes&#8221; as part of an automated operational flow. Its effectiveness largely depends on the clarity of the prompt, the quality of the input data, and the precision with which the agent&#8217;s role in the process has been defined.</p><p>Unlike the conversational mode, the platform does not manage any memory here. Everything must be configured explicitly. If the agent requires data, it must be provided at the time of activation. Additionally, it is necessary to define a strategy for managing this information and for generating the context correctly. Finally, if the agent needs to interact with tools, it must have the appropriate access via APIs or preconfigured connectors.</p><p>This mode represents a significant shift: the agent is no longer an individual assistant, but an intelligent function within a system. In this sense, it can make a decisive contribution to increasing efficiency, reducing operational workload, and improving response quality in repetitive or high-frequency processes.</p><h4><strong>2.3. 
Coordinated Agents in Complex Systems</strong></h4><p>The most advanced and strategically significant level of AI agent use is when multiple agents collaborate within a system&#8212;but not according to a rigid, predefined sequence. In this scenario, the agent is no longer a mere step in a static workflow but acts as an orchestrator: it receives an input, assesses the context, defines a strategy, and dynamically activates specialized agents to complete the task.</p><p>This is a crucial distinction. In a traditional workflow (as described in the previous section), the process is predefined: an event triggers a predefined sequence of actions. In orchestration, however, the sequence is not fixed; the agent itself determines it based on the nature of the input and the system's rules. The primary agent acts like a project manager: it interprets the problem, evaluates the available options, and selects which resources&#8212;i.e., which other agents&#8212;to involve.</p><p>A concrete example:</p><ul><li><p><strong>Scenario A:</strong> A support ticket arrives from a new customer. The orchestrating agent detects that no historical data is available, classifies the request as simple, and initiates an automated resolution process via a specialized response-generation agent.</p></li><li><p><strong>Scenario B:</strong> A ticket is submitted by an existing customer. The primary agent recognizes the need to access historical data, activates an agent to retrieve past interactions, and involves a second agent to assess whether escalation is necessary.</p></li></ul><p>In both cases, the system reacts differently based on the situation, dynamically assembling the most suitable sequence of agents. 
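</p><p>The two scenarios can be reduced to a small routing sketch. Every function name below is a hypothetical stand-in for an LLM-backed agent:</p>

```python
# Hypothetical sub-agents; each name stands in for an LLM-backed component.
def classify(ticket: dict) -> str:
    return "simple" if ticket["is_new_customer"] else "needs_history"

def orchestrate(ticket: dict) -> list[str]:
    # The orchestrator assembles the agent sequence at run time
    # instead of following a fixed chain of steps.
    if classify(ticket) == "simple":
        return ["generate_reply"]                   # Scenario A
    return ["fetch_history", "assess_escalation"]   # Scenario B

plan_a = orchestrate({"is_new_customer": True})
plan_b = orchestrate({"is_new_customer": False})
```

<p>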
This is no longer a workflow but an adaptive behavior driven by decision logic.</p><p>This architecture&#8212;also known as <em>agentic orchestration</em>&#8212;is powerful but also poses significant challenges:</p><ul><li><p>It requires advanced skills in prompt design, context management, and agent-to-agent interaction.</p></li><li><p>It assumes the ability to build robust agents capable of exchanging data in a structured, reliable manner.</p></li><li><p>It involves accepting the inherent unpredictability of LLMs, which can make system stability harder to guarantee.</p></li></ul><p>Despite these complexities, the ability to build agents that decide, coordinate, and adapt their behavior based on context opens up new scenarios. It marks a shift from rigid automation to intelligent automation&#8212;capable of handling complex, uncertain, or high-variability use cases.</p><h3><strong>Conclusion</strong></h3><p>Building effective AI agents does not require deep expertise in artificial intelligence theory or mastery of advanced technologies. What is truly essential is a clear mental architecture: knowing which elements make up an agent and how they can be deployed in real-world contexts.</p><p>The model presented in this guide, along with the <strong><a href="https://www.radicalcuriosity.xyz/p/the-ai-collaboration-canvas-how-to">AI Collaboration Canvas</a></strong>, offers a solid foundation for designing how agents can be integrated into a team. There is no need to start with futuristic solutions: even a well-configured conversational agent can solve recurring problems remarkably well. An agent embedded in an automated workflow can dramatically increase operational efficiency. Complex systems of agents are likely excessive for the vast majority of organizations, which would struggle to manage them.</p><p>In conclusion, AI agents are not entities to be idealized or feared, but rather tools that can be designed, configured, and integrated. 
And they are accessible even to those without a technical background&#8212;all it takes is the right mental model to get started.</p><p></p><div><hr></div><p><em>Curated Curiosity</em></p><h4><strong><a href="https://hilaryan.substack.com/p/the-future-of-language-tech-is-platformized">The Future of Language Tech is Platformized, Not Tool-Based</a></strong></h4><p>For anyone working in <strong>localization</strong>, translation, or multilingual content management, this piece offers a clear and timely perspective. Hilary Atkisson Normanha from Spotify highlights a crucial shift: the future of language technology won&#8217;t be defined solely by more powerful models, but by <strong>platforms</strong> that combine models, context, tools, and workflows in integrated systems.</p><p>From the standpoint of localization, this change has several implications:</p><ul><li><p>The focus moves from isolated model quality to the ability to operate within complex, structured environments.</p></li><li><p>Static translation gives way to dynamic context management&#8212;handling memory, tone, terminology, and more.</p></li><li><p>Linear processes evolve into coordinated interactions between specialized agents (e.g., for segmentation, QA, post-editing, and cultural adaptation).</p></li></ul><p>It&#8217;s a helpful reminder that LLMs become genuinely valuable when situated within well-designed architectures, and that much of the innovation in localization will depend on how we design those environments. 
A thoughtful and relevant read for anyone thinking seriously about the future of content localization.</p>]]></content:encoded></item><item><title><![CDATA[How to Create an Effective Prompt for Nano Banana Pro]]></title><description><![CDATA[Ciao, Since last week, I&#8217;ve been running practical experiments with Nano Banana Pro - Google&#8217;s new visual reasoning model - and using it to develop a rather ambitious project: adapting a short story into a full comic book.]]></description><link>https://www.radicalcuriosity.xyz/p/how-to-create-an-effective-prompt</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/how-to-create-an-effective-prompt</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Thu, 27 Nov 2025 06:30:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yBNH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>Since last week, I&#8217;ve been running practical experiments with Nano Banana Pro - Google&#8217;s new visual reasoning model - and using it to develop a rather ambitious project: adapting a short story into a full comic book.</p><p>Based on this experience and Google&#8217;s official guidelines, I created a prompt that helps you generate a Nano Banana Pro prompt. It&#8217;s a structured tool designed to guide you through the process of crafting detailed, constraint-rich visual briefs, because with Nano Banana Pro, good results don&#8217;t come from vague requests, but from precise and deliberate design.</p><p>One quick note: starting this week, I&#8217;m experimenting with a new publishing schedule for Radical Curiosity. Until now, issues went out on Sunday mornings&#8212;but I&#8217;ve noticed most of the traffic actually comes in during the week. 
So, I&#8217;m shifting to Thursday mornings and we&#8217;ll see how it goes.</p><p>Nicola &#10084;&#65039;</p><p></p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Understanding AI</strong></em> - How to Create an Effective Prompt for Nano Banana Pro</p></li><li><p><em><strong>Off the Records</strong></em> - The Chronic Argonauts</p></li><li><p><em><strong>Curated curiosity</strong></em>:</p><ul><li><p>Prompting Tips for Nano Banana Pro</p></li><li><p>Why Nano Banana Pro Changes Everything</p></li></ul></li></ul><p></p><div><hr></div><p><em>Understanding AI</em></p><h2><strong>How to Create an Effective Prompt for Nano Banana Pro</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yBNH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yBNH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!yBNH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!yBNH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!yBNH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yBNH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5810527,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/179929478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yBNH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!yBNH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!yBNH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!yBNH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c789be0-bfb7-4ad5-af88-d07813d78eba_2752x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Google Gemini Pro</figcaption></figure></div><p>Since last week, everyone has been talking about Nano Banana Pro: my 
LinkedIn timeline has suddenly been filled with infographics, diagrams, UX flows, strategic maps, and dozens of other experiments showing how to use Google&#8217;s new visual reasoning model.<br>After seeing some truly surprising results, I decided to run more in-depth experiments myself to determine whether Nano Banana Pro can influence how we design, document, and communicate.<br>So I set myself a concrete experiment, with a decidedly non-trivial goal: designing a comic, from A to Z. A complex challenge that brings together storytelling, visual structure, stylistic consistency, and the ability to translate abstract concepts into illustrated sequences.</p><h3>What Visual Reasoning Is</h3><p>Nano Banana Pro overcomes an obvious limitation of image-generation models: their inability to respect structural and logical constraints&#8212;in other words, to reason visually.<br>Google&#8217;s new model can understand the structure, spatial arrangement, and logical relationships among visual elements.</p><p>As <a href="https://natesnewsletter.substack.com/p/google-solved-visual-reasoning-get">Nate Jones</a> explains in his newsletter, Nano Banana Pro is built on seven distinct engines that work together to produce visual artifacts that are structured, semantically coherent, and visually readable:</p><ul><li><p><strong>Layout Engine:</strong> generates coherent spatial structures, maintaining grids, columns, and visual hierarchies. It&#8217;s what makes an infographic readable or a dashboard functional.</p></li><li><p><strong>Diagram Engine:</strong> translates structured text (such as a logical scheme or architectural flow) into actual diagrams, with nodes, connections, labels, and consistent spacing.</p></li><li><p><strong>Typography Engine:</strong> treats text as a design element. 
Sharp writing, readable even at small sizes, with respected hierarchies and formatting.</p></li><li><p><strong>Data Visualization Engine:</strong> converts numbers into compelling visualizations&#8212;charts, KPIs, indicators&#8212;all generated with a rigor that until yesterday required specialized tools like Tableau or Figma.</p></li><li><p><strong>Style Universe Engine:</strong> ensures aesthetic consistency across elements. A storyboard maintains style and palette; a sequence of screens keeps the same lighting, linework, and visual logic.</p></li><li><p><strong>Brand &amp; Identity Engine:</strong> recognizes brand elements like logos and colors and applies them precisely. Useful in corporate or editorial projects.</p></li><li><p><strong>Representation Transformer Engine:</strong> allows switching the visual surface (from blueprint to infographic to storyboard) while preserving content and logical relationships. It&#8217;s like changing the lens, not the subject.</p></li></ul><h3>How to Build a Prompt for Nano Banana Pro</h3><p>Creating a prompt to generate an image follows different rules than creating one for a text model.<br>In the case of Nano Banana Pro, Google makes it clear that the model does not respond well to short or vague requests (&#8220;make an infographic&#8221;, &#8220;draw a diagram&#8221;), but excels when the prompt becomes a real design document.</p><h4>1. Always Start from the &#8220;Work Surface&#8221;</h4><p>According to Google, the first choice&#8212;the one that determines everything else&#8212;is to define the visual surface you want precisely. 
Nano Banana Pro thinks in terms of surfaces, not &#8220;images.&#8221;</p><p>Examples of work surfaces:</p><ul><li><p>&#8220;a dashboard with KPIs and charts&#8221;</p></li><li><p>&#8220;a 6-panel storyboard&#8221;</p></li><li><p>&#8220;an architectural diagram&#8221;</p></li><li><p>&#8220;an editorial page with title, abstract, and three sections&#8221;</p></li><li><p>&#8220;a comparative infographic&#8221;</p></li></ul><h4>2. Design the Layout Before the Content</h4><p>The second most crucial step after the work surface is to specify the layout. For example:</p><ul><li><p>&#8220;organize the space into three balanced columns&#8221;</p></li><li><p>&#8220;use a 2&#215;2 grid with regular margins&#8221;</p></li><li><p>&#8220;left-to-right diagram with separate swimlanes&#8221;</p></li><li><p>&#8220;horizontal panel layout, with a dominant central frame&#8221;</p></li></ul><h4>3. List the Required Components</h4><p>Google&#8217;s guide states that lists activate the model&#8217;s engines. The listed elements become logical anchors. For example, I can specify a list of components I want in the image:</p><p><strong>Components:</strong></p><ul><li><p>title block</p></li><li><p>two bar charts</p></li><li><p>one line chart</p></li><li><p>legend</p></li><li><p>text summary</p></li><li><p>icons for each metric</p></li></ul><p>This list ensures completeness, prevents omissions, stabilizes the structure, and activates the typography and data-viz engines.</p><h4>4. Add Rules and Constraints</h4><p>Constraints complete the prompt's logic. Google suggests constraints like:</p><ul><li><p>&#8220;no overlapping labels&#8221;</p></li><li><p>&#8220;uniform spacing between all nodes&#8221;</p></li><li><p>&#8220;text must remain sharp at small sizes&#8221;</p></li><li><p>&#8220;use consistent icon style&#8221;</p></li><li><p>&#8220;preserve brand colors and proportions&#8221;</p></li></ul><p>Nano Banana Pro can apply these constraints with surprising rigor. 
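</p><p>Because the four steps are so regular (surface, then layout, then components, then constraints), the brief can be assembled mechanically. A small sketch with purely illustrative content; the section headings simply mirror the structure recommended above:</p>

```python
def banana_prompt(surface: str, layout: list[str],
                  components: list[str], constraints: list[str]) -> str:
    # Join the four sections into one brief, one heading per section.
    parts = [
        "WORK SURFACE:\n" + surface,
        "LAYOUT:\n" + "\n".join(layout),
        "COMPONENTS:\n" + "\n".join("- " + c for c in components),
        "CONSTRAINTS:\n" + "\n".join("- " + c for c in constraints),
    ]
    return "\n\n".join(parts)

prompt = banana_prompt(
    surface="A comparative infographic.",
    layout=["Two balanced columns with regular margins."],
    components=["title block", "two bar charts", "legend"],
    constraints=["No overlapping labels", "Text must be sharp at small sizes"],
)
```

<p>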
That&#8217;s how you avoid inconsistencies, overlaps, and unwanted variations.</p><h4>Example Prompt</h4><pre><code><code>WORK SURFACE:</code>
<code>Create a 3-panel comic page.

LAYOUT:</code>
<code>Horizontal strip layout with equal panel width.</code>
<code>Character always positioned in the left third of each panel.</code>
<code>Balloon area in the upper region with consistent spacing.

COMPONENTS:</code>
<code>&#8226; main character</code>
<code>&#8226; city street background</code>
<code>&#8226; rain and reflections</code>
<code>&#8226; two speech balloons</code>
<code>&#8226; parked car in panel 3</code>
<code>&#8226; consistent shadow direction

STYLE:</code>
<code>1970s French noir comic aesthetic.</code>
<code>Muted palette, heavy linework, atmospheric lighting.

CONSTRAINTS:</code>
<code>&#8226; No overlap between balloons and faces</code>
<code>&#8226; Text must be sharp at small sizes</code>
<code>&#8226; Character design must remain identical across panels</code>
<code>&#8226; Uniform spacing between panels

SOURCE MATERIAL:</code>
<code>Panel 1: The character walks alone under the rain.</code>
<code>Panel 2: Close-up on his face, lost in thought.</code>
<code>Panel 3: A car waits under a lamp post.

INTERPRETATION:</code>
<code>Convey loneliness, tension, and a sense of impending discovery.</code></code></pre><h3>Nano Banana Prompt Generator</h3><p>Following Google&#8217;s guidelines, I developed this meta-prompt to generate prompts for Nano Banana. Let me know how you use it and what results you get. It&#8217;s also interesting to note that Gemini tends to prefer using tags to structure text, while OpenAI leans more toward Markdown (at least, that&#8217;s what Gemini itself claims).</p><pre><code>&lt;system_role&gt;
You are the &#8220;Nano Banana Architect&#8221;, an expert Prompt Engineer Tutor specialized in visual reasoning and in building structured prompts for the generative model &#8220;Nano Banana Pro&#8221;.
&lt;/system_role&gt;

&lt;context&gt;
Nano Banana Pro is an advanced model that requires extremely detailed, well-structured prompts rich in spatial constraints.
Fundamental rules of Nano Banana Pro:
1. The **Work Surface** (e.g., dashboard, comic, blueprint) is the most critical choice.
2. **Layout** takes priority over artistic style.
3. **Component lists** activate object-recognition engines.
4. **Constraints** prevent graphic hallucinations.
5. Visual thinking is a cognitive process: to build visually is to think.
&lt;/context&gt;

&lt;interaction_protocol&gt;
Your task is to guide the user (a beginner) through the 8 areas of the Prompt Canvas to build the perfect prompt.
You must follow this iterative process strictly:

1. **DO NOT** ask for all information at once. Ask **ONLY ONE** question at a time, related to the current area.
2. After each user response:
   * Analyze the input.
   * Improve/expand it mentally based on best practices from Google and Nate Jones.
   * Briefly summarize what you understood (&#8220;Recorded: [detail]&#8221;).
   * Move to the next question.

The 8 areas to explore in order:
1. **Intent &amp; Goal**
2. **Subject &amp; Content**
3. **Work Surface**
4. **Layout &amp; Structure**
5. **Style &amp; Aesthetics**
6. **Components &amp; Details**
7. **Constraints**
8. **Context/Source Material**
&lt;/interaction_protocol&gt;

&lt;output_format&gt;
Generate the final prompt only after all 8 areas are completed.
The final prompt MUST be contained in a single code block and follow EXACTLY this structure:

[PROMPT START]
**WORK SURFACE:** [definition]
**LAYOUT:** [composition instructions]
**COMPONENTS:** [detailed list]
**STYLE:** [aesthetic definition]
**CONSTRAINTS:** [rules and limits]
**SOURCE MATERIAL:** [context or data]
**INTERPRETATION:** [instructions for ambiguous input]
[PROMPT END]
&lt;/output_format&gt;

&lt;tone&gt;
Voice: Professional, instructive, methodical, encouraging.
Avoid obscure jargon. Guide the user like a patient mentor.
&lt;/tone&gt;

&lt;instruction&gt;
Begin now. Introduce yourself, explain that you will build a prompt together for Nano Banana Pro, and ask the first question for Area 1 (Intent &amp; Goal).
&lt;/instruction&gt;</code></pre><p></p><div><hr></div><p><em>Off the Record</em></p><h2>The Chronic Argonauts</h2><p>I spent about ten hours exploring the possibility of developing, with Gemini and Nano Banana Pro, a rather ambitious project: transforming <em>The Chronic Argonauts</em> by Wells&#8212;the story that inspired <em>The Time Machine</em>&#8212;into a comic.</p><p>Since I don&#8217;t have deep knowledge of screenwriting techniques, I started with a brainstorming session on how to build a story through images and developed a prompt that adapts a narrative text into a comic format.</p><p>I ran various experiments to understand how much material could be included in the prompt without compromising fidelity to the original plot, and I arrived at a list of pages complete with all necessary instructions. In parallel, I sought the right balance between the model&#8217;s autonomy and output control; eventually, I chose a guided approach, providing step-by-step instructions to avoid losing crucial elements of character development.</p><p>Here is the screenwriter prompt:</p><pre><code>1. Persona / Role
Act as a <strong>Senior Comic Book Writer</strong>, specialized in adapting literary texts into visual narratives. You possess deep knowledge of sequential storytelling, narrative rhythm, page layout, and visual composition. Your expertise allows you to translate abstract prose into concrete, drawable instructions for artists, balancing the principle &#8220;Show, don&#8217;t tell&#8221; with the specific grammar of comics (time-space relationship).

2. Audience
Your output is intended for a <strong>Professional Comic Book Artist</strong>. The script must be technically precise, using standard terminology (e.g., Long Shot, Close-Up, Splash Page) to avoid ambiguity. Descriptions must be evocative yet clear, leaving no doubt about what is physically present in the panel. You must also consider the Letterer, clearly separating dialogue and captions from visual descriptions.

3. Task &amp; Intent
Your task is to <strong>adapt a provided story or narrative text into a complete comic script</strong>, following the <strong>Standard US Format</strong>.
You must analyze the original text, break it into narrative beats, design page layouts to ensure proper pacing, and write detailed panel descriptions, including dialogue. The goal is to produce a script ready for production.

4. Step-by-Step Procedure
Follow a rigorous <strong>human-in-the-loop iterative process</strong>.
DO NOT generate the full script at once. Stop after each phase and request feedback.

<strong>Phase 1: Beat Analysis &amp; Engagement
1. Request Input:</strong> Ask the user for the story to adapt.
<strong>2. Analyze:</strong> Identify main narrative beats and emotional shifts.
<strong>3. Propose:</strong> Present the beat list. Ask clarifying questions about style (e.g., &#8220;Compressed or decompressed storytelling?&#8221;) or atmosphere.
<strong>4. STOP</strong> and wait for confirmation.

<strong>Phase 2: Page Breakdown (Structure)
1. Map:</strong> Once beats are approved, propose a page breakdown (e.g., &#8220;Page 1 covers beats 1&#8211;3&#8221;).
<strong>2. Layout:</strong> Suggest a layout style based on US Standard norms (variable grid, splash pages if needed).
<strong>3. Check:</strong> Ask if the pacing feels right.
<strong>4. STOP</strong> and wait for confirmation.

<strong>Phase 3: Scriptwriting
Write:</strong> After approval, write the full script for the agreed pages.
<strong>Format:</strong> Use the defined output structure.

5. Context
You have no prior knowledge of the story. <strong>Your first action must always be to ask for the text to adapt.</strong>
You must adapt to the genre and tone of the provided text while maintaining the structural integrity of a comic script.

6. References
<strong>Theory:</strong> Apply the principles of visual storytelling by <strong>Scott McCloud</strong> (focus on transitions and closure).
<strong>Style:</strong> Draw inspiration from the descriptive density of <strong>Alan Moore</strong> and the natural dialogue flow of <strong>Brian Michael Bendis</strong>.
<strong>Format:</strong> Follow the principles described in <strong>&#8220;The DC Comics Guide to Writing Comics&#8221;</strong> by Dennis O&#8217;Neil.

7. Output
Generate the final script in a <strong>Markdown code block</strong>. Use the following structure for each page:

<strong>PAGE [Number]</strong>
<strong>PANEL [Number]</strong> ([Shot Type: e.g., Close-Up, Long Shot])
<strong>VISUAL:</strong> [Detailed description of action, setting, characters, lighting. Be specific for the artist.]
<strong>[CHARACTER NAME]:</strong> [Dialogue]
<strong>CAPTION:</strong> [Narrative text]
<strong>SFX:</strong> [Sound effects]
<em>(Repeat for all panels)</em>

8. Tone
Adopt a <strong>Professional, Technical, and Collaborative</strong> tone. Act like an expert editor/writer: guide the user, suggest changes if a scene is overcrowded, and ensure proper technical terminology. Be concise in communication but detailed in the script.</code></pre><p>The second step was defining the visual style. I wanted a black-and-white comic evoking a Victorian aesthetic&#8212;the historical context in which Wells wrote&#8212;with a touch of steampunk sensibility.<br>So I started another brainstorming session with Gemini and identified the primary graphic references to draw inspiration from.</p><p>At that point, I created a &#8220;prompt generator&#8221; that produced specific instructions for each page: each one includes the established narrative schema and the corresponding excerpts from the story.</p><p>To maintain visual consistency across pages, I included a couple of previous pages in the prompt whenever a recurring character appeared. This way, the model could rely on a stable visual repertoire of expressions, environments, and proportions.</p><p>The current result covers the first five pages. 
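</p><p>As an aside, the &#8220;prompt generator&#8221; described above can be sketched in a few lines of Python. This is a minimal, illustrative sketch that assembles per-page instructions into the structured format from the example prompt earlier in this issue (WORK SURFACE, LAYOUT, STYLE, CONSTRAINTS, SOURCE MATERIAL, INTERPRETATION); the function name, parameters, and default values are assumptions of mine, not an official Nano Banana Pro interface.</p>

```python
def build_page_prompt(page_number, panels, interpretation,
                      style="1970s French noir comic aesthetic.",
                      constraints=None):
    """Assemble a structured image prompt for one comic page.

    Illustrative sketch only: the section order mirrors the example
    prompt in this issue; nothing here is an official Nano Banana Pro
    API.
    """
    if constraints is None:
        constraints = [
            "No overlap between balloons and faces",
            "Character design must remain identical across panels",
        ]
    lines = [
        "WORK SURFACE:",
        f"Create a {len(panels)}-panel comic page (page {page_number}).",
        "",
        "LAYOUT:",
        "Horizontal strip layout with equal panel width.",
        "",
        "STYLE:",
        style,
        "",
        "CONSTRAINTS:",
    ]
    lines += [f"- {c}" for c in constraints]
    lines += ["", "SOURCE MATERIAL:"]
    # One line per panel, numbered from 1, as in the example prompt.
    lines += [f"Panel {i}: {text}" for i, text in enumerate(panels, start=1)]
    lines += ["", "INTERPRETATION:", interpretation]
    return "\n".join(lines)
```

<p>A string produced this way can be pasted into Gemini as-is, optionally alongside one or two previously generated pages as reference images when a recurring character appears.</p><p>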
It&#8217;s not perfect, and some panels still need refinement, but for now I&#8217;d rather generate the entire set of pages and then go back to polish the ones that need adjustments.</p><h3>Page 1</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VQUq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59196675-383c-46e0-80f8-844d67226b82_1696x2528.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VQUq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59196675-383c-46e0-80f8-844d67226b82_1696x2528.png 424w, https://substackcdn.com/image/fetch/$s_!VQUq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59196675-383c-46e0-80f8-844d67226b82_1696x2528.png 848w, https://substackcdn.com/image/fetch/$s_!VQUq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59196675-383c-46e0-80f8-844d67226b82_1696x2528.png 1272w, https://substackcdn.com/image/fetch/$s_!VQUq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59196675-383c-46e0-80f8-844d67226b82_1696x2528.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VQUq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59196675-383c-46e0-80f8-844d67226b82_1696x2528.png" width="1456" height="2170" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/59196675-383c-46e0-80f8-844d67226b82_1696x2528.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2170,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7866877,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/179929478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59196675-383c-46e0-80f8-844d67226b82_1696x2528.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VQUq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59196675-383c-46e0-80f8-844d67226b82_1696x2528.png 424w, https://substackcdn.com/image/fetch/$s_!VQUq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59196675-383c-46e0-80f8-844d67226b82_1696x2528.png 848w, https://substackcdn.com/image/fetch/$s_!VQUq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59196675-383c-46e0-80f8-844d67226b82_1696x2528.png 1272w, https://substackcdn.com/image/fetch/$s_!VQUq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59196675-383c-46e0-80f8-844d67226b82_1696x2528.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The Chronic Argonauts - Page 1 - Written and illustrated by Gemini</figcaption></figure></div><h3>Page 2</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OGiQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OGiQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png 424w, 
https://substackcdn.com/image/fetch/$s_!OGiQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png 848w, https://substackcdn.com/image/fetch/$s_!OGiQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png 1272w, https://substackcdn.com/image/fetch/$s_!OGiQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OGiQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png" width="1456" height="2170" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2170,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7761798,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/179929478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!OGiQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png 424w, https://substackcdn.com/image/fetch/$s_!OGiQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png 848w, https://substackcdn.com/image/fetch/$s_!OGiQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png 1272w, https://substackcdn.com/image/fetch/$s_!OGiQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac4ceac-936a-4646-88b9-c94bb32c62e0_1696x2528.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The Chronic Argonauts - Page 2 - Written and illustrated by Gemini</figcaption></figure></div><h3>Page 3</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bg-Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bg-Y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png 424w, https://substackcdn.com/image/fetch/$s_!bg-Y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png 848w, https://substackcdn.com/image/fetch/$s_!bg-Y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png 1272w, https://substackcdn.com/image/fetch/$s_!bg-Y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bg-Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png" width="1456" height="2170" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2170,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6947485,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/179929478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bg-Y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png 424w, https://substackcdn.com/image/fetch/$s_!bg-Y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png 848w, https://substackcdn.com/image/fetch/$s_!bg-Y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png 1272w, https://substackcdn.com/image/fetch/$s_!bg-Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed5460-d316-45ec-b527-c5b5f2e6bb58_1696x2528.png 
1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The Chronic Argonauts - Page 3 - Written and illustrated by Gemini</figcaption></figure></div><h3>Page 4</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!M1Ld!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!M1Ld!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png 424w, https://substackcdn.com/image/fetch/$s_!M1Ld!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png 848w, https://substackcdn.com/image/fetch/$s_!M1Ld!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png 1272w, https://substackcdn.com/image/fetch/$s_!M1Ld!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!M1Ld!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png" width="1456" height="2170" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2170,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7358079,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/179929478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!M1Ld!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png 424w, https://substackcdn.com/image/fetch/$s_!M1Ld!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png 848w, https://substackcdn.com/image/fetch/$s_!M1Ld!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png 1272w, https://substackcdn.com/image/fetch/$s_!M1Ld!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa9344d1-e97c-4d4e-ac3c-85320326f501_1696x2528.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The Chronic Argonauts - Page 4 - Written and illustrated by Gemini</figcaption></figure></div><h3>Page 5</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rwwH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rwwH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png 424w, https://substackcdn.com/image/fetch/$s_!rwwH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png 848w, https://substackcdn.com/image/fetch/$s_!rwwH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png 1272w, https://substackcdn.com/image/fetch/$s_!rwwH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!rwwH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png" width="1456" height="2170" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2170,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7701579,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/179929478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rwwH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png 424w, https://substackcdn.com/image/fetch/$s_!rwwH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png 848w, https://substackcdn.com/image/fetch/$s_!rwwH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png 1272w, https://substackcdn.com/image/fetch/$s_!rwwH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fc181b-750a-42b4-b3e9-bb4ab27f51fc_1696x2528.png 
1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">The Chronic Argonauts - Page 5 - Written and illustrated by Gemini</figcaption></figure></div><p></p><div><hr></div><p>Curated Curiosity</p><h3><a href="https://blog.google/products/gemini/prompting-tips-nano-banana-pro/">Prompting Tips for Nano Banana Pro</a></h3><p>Google has published a short but useful guide on how to craft better prompts for Nano Banana Pro. 
It reinforces many of the principles covered in this issue&#8212;especially the importance of defining a clear work surface and using structured layouts.</p><h3>Why Nano Banana Pro Changes Everything</h3><p>In this in-depth video, Nate B. Jones walks through the capabilities of Nano Banana Pro and explains why it&#8217;s a fundamental shift in how we approach visual thinking. From structured layouts to diagram animation and brand fidelity, it&#8217;s more than a model: it&#8217;s a new design tool.</p><div id="youtube2-Sm-E3GiSZeA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Sm-E3GiSZeA&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Sm-E3GiSZeA?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div>]]></content:encoded></item><item><title><![CDATA[The Workflow Mapping Prompt: A Practical Companion to the AI Collaboration Canvas]]></title><description><![CDATA[A structured approach to make tacit workflows visible, turning everyday practices into clear, AI-ready processes through guided reflection and sequential mapping.]]></description><link>https://www.radicalcuriosity.xyz/p/the-workflow-mapping-prompt-a-practical</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/the-workflow-mapping-prompt-a-practical</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 09 Nov 2025 06:07:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!V8Z6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>This is a very intense period for me, both 
professionally and personally. I&#8217;m developing the AI Collaboration Canvas, a method to help teams integrate artificial intelligence as a colleague; building it requires significant time for experimentation and testing. Today, I share some of the progress I&#8217;ve made over the past few weeks.</p><p>In a few weeks, the third edition of the <strong><a href="https://www.productheroes.it/ai-per-product-manager/">Masterclass</a></strong> (in Italian) that I run with <strong>Product Heroes</strong> will begin, and I also have other training projects that will keep me busy until December. </p><p>On a personal level, I continue to experiment with AI as a tool for self-growth. Throughout my life, I&#8217;ve turned to psychotherapy several times, partly out of curiosity to explore different approaches. It has always been beneficial, even though I know I still have much work to do, and probably always will. At the moment, I&#8217;m working with a Jungian analyst and with my <em>artificial therapist</em>, who follows a cognitive-behavioral approach. These are two radically different methods: during the week, I talk with Claude, and in my sessions with the human analyst, I bring the results of those conversations. He&#8217;s pretty skeptical and, by training, tends to view cognitive-behavioral therapy as too mechanical. Still, I believe it&#8217;s working. 
Over the past month and a half, I&#8217;ve been using a framework that has helped me recognize several recurring patterns.</p><p>Nicola &#10084;&#65039;</p><p></p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Understanding AI</strong></em> - The Workflow Mapping Prompt: A Practical Companion to the AI Collaboration Canvas</p></li><li><p><em><strong>Off the Records</strong></em> - AI as a Mirror: Building a Structured Habit of Self-Reflection</p></li><li><p><em><strong>Curated Curiosity</strong></em></p><ul><li><p>Technological Optimism and Appropriate Fear</p></li></ul></li></ul><p></p><div><hr></div><p><em>Understanding AI</em></p><h2>The Workflow Mapping Prompt: A Practical Companion to the AI Collaboration Canvas</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!V8Z6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!V8Z6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!V8Z6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!V8Z6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png 1272w, 
https://substackcdn.com/image/fetch/$s_!V8Z6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!V8Z6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1481333,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/177247509?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!V8Z6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!V8Z6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png 848w, 
https://substackcdn.com/image/fetch/$s_!V8Z6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!V8Z6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aaa70bb-0290-4062-959e-8dda246e503c_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Midjourney - Collaborating with AI</figcaption></figure></div><p>In recent months, I&#8217;ve had the opportunity to 
teach several online seminars focused on AI adoption strategies. These experiences led me to develop the&nbsp;<em><strong><a href="https://www.radicalcuriosity.xyz/p/the-ai-collaboration-canvas-how-to">AI Collaboration Canvas</a></strong></em>, a tool&nbsp;designed to help professionals and business teams systematically structure the introduction of AI into their processes.</p><p>One of the first activities proposed by the canvas is the <strong>sequential mapping of a process</strong>: who performs it, the main steps, what happens in the event of exceptions, which tools are used, and so on. On paper, it seems like a simple task. In practice, many people struggle to describe clearly what they do or intend to do. The sequences end up confused, roles are unclear, and exceptions are ignored or misinterpreted.</p><p>This difficulty doesn&#8217;t stem from a lack of technical skills, but rather from a lack of awareness about one&#8217;s own thinking processes. Those who aren&#8217;t used to observing how they approach a task find it hard to reconstruct it step by step. It&#8217;s like trying to explain how to ride a bicycle without realizing how balance is maintained.</p><h3>All because of metacognition</h3><p>Cognitive psychology describes this reflective ability as&nbsp;<em>metacognition</em>, that is, the capacity to monitor one&#8217;s own mental processes, recognize when one knows and when one does not, and assess the quality of one&#8217;s reasoning.</p><p>In a business context, this creates a systemic problem: if a group cannot clearly analyze what it does, it will struggle to improve it.</p><p>Fortunately, artificial intelligence can become a valuable ally in developing metacognitive awareness. This is why I have created a specific prompt for the sequential mapping of processes within the AI Collaboration Canvas. It does not apply any complex business process modeling techniques. 
Still, it guides the user step by step in reconstructing what actually happens: who does what, with which tools, in what order, and with which variations.</p><h3><strong>Workflow Mapping Prompt v.1</strong></h3><p>The prompt (designed using the <strong><a href="https://www.radicalcuriosity.xyz/p/designing-better-prompts-a-practical">prompt canvas metaprompt</a></strong>) defines a conversational agent that helps knowledge workers make their operational processes explicit, even when they struggle to describe them clearly. Through a <em>maieutic</em> approach, the agent guides the user step by step, asking simple, progressive questions to transform an informal account into a structured sequence of actions.</p><p>At the start, the agent asks the user for their role and professional context so it can adapt its language and better understand the activity being described. It then explores the purpose of the process, the triggering event, and each step, collecting six key pieces of information for each: what is done, which tools are used, what inputs are needed, what outputs are produced, how much time it takes, and what difficulties are encountered.</p><p>Each step is reformulated and validated together with the user until a complete and accurate representation of the workflow is obtained. The final result is a linear description of the process, ready to be documented or shared, which makes tacit knowledge visible and facilitates operational transfer within the organization.</p><p></p><p><code>## 1. Persona / Role</code></p><p><code>Act as a maieutic facilitator specialized in eliciting tacit knowledge and creating linear maps of operational processes.  </code></p><p><code>You work within a conversational assistant.  </code></p><p><code>Your task is to patiently guide the user in describing their work process in detail, without using technical jargon, helping them clarify what they actually do step by step.</code></p><p><code>## 2. 
Audience</code></p><p><code>You address knowledge workers who:</code></p><p><code>- have operational experience but limited metacognitive awareness,</code></p><p><code>- perform complex activities but tend to describe them in a confused, fragmented, or informal way.</code></p><p><code>Adapt your language to this profile.  </code></p><p><code>Use simple sentences and short, concrete questions.  </code></p><p><code>Do not assume any abstract analytical ability.</code></p><p><code>## 3. Task &amp; Intent</code></p><p><code>Your goal is to:</code></p><p><code>- help the user clearly and sequentially express their work process;</code></p><p><code>- collect, for **each individual step**, six structured pieces of information:</code></p><p><code>  1. Description  </code></p><p><code>  2. Tools used  </code></p><p><code>  3. Required input  </code></p><p><code>  4. Produced output  </code></p><p><code>  5. Time spent  </code></p><p><code>  6. Pain points  </code></p><p><code>Do not evaluate or improve the process.  </code></p><p><code>Just describe it faithfully, clearly, and completely, one step at a time.</code></p><p><code>## 4. Step-by-Step</code></p><p><code>Always follow this operational procedure.</code></p><p><code>### 0. User role identification</code></p><p><code>At the start of the conversation, ask:</code></p><p><code>- &#8220;What is your role or job position?&#8221;</code></p><p><code>- &#8220;In what field or sector do you work?&#8221;</code></p><p><code>- &#8220;Who do you collaborate with most often in your daily work?&#8221;</code></p><p><code>Rephrase the answer to clarify the professional context.  </code></p><p><code>Example rephrasing:</code></p><p><code>&gt; &#8220;So you work as a [role] in the [field] sector, and the process we&#8217;ll describe mainly concerns [main activity]. Is that correct?&#8221;</code></p><p><code>Only proceed to the next phase after confirmation.</code></p><p><code>---</code></p><p><code>### 1. 
Opening and objective</code></p><p><code>- Ask the user to freely describe the activity they want to map.  </code></p><p><code>- Ask what the purpose of their work is and how they know when it is &#8220;done.&#8221;  </code></p><p><code>- Rephrase simply:</code></p><p><code>  - the title of the process,  </code></p><p><code>  - the goal,  </code></p><p><code>  - the expected final outcome.  </code></p><p><code>- Ask for confirmation before proceeding.</code></p><p><code>### 2. Starting event</code></p><p><code>- Ask what actually triggers the process in practice.  </code></p><p><code>- Identify the starting event and restate it in a simple sentence.  </code></p><p><code>- Ask for confirmation.</code></p><p><code>### 3. Identification of the first step</code></p><p><code>- From the user&#8217;s account, ask:  </code></p><p><code>  &#8220;What is the first concrete thing you do after the starting event occurs?&#8221;  </code></p><p><code>- If the answer is generic, break it down with simpler questions.  </code></p><p><code>- Once the first step is clear, rephrase and confirm it.</code></p><p><code>### 4. Cycle for each process step</code></p><p><code>For each step, always follow this cycle.  </code></p><p><code>Do not skip fields. Do not move to the final summary until it is complete.</code></p><p><code>For **step N**:</code></p><p><code>1. Ask the user to describe what they do in this step.  </code></p><p><code>2. Rephrase the **Description** clearly and operationally.  </code></p><p><code>3. Ask which **tools** or instruments they use (software, documents, people, systems).  </code></p><p><code>4. Ask what **input** is needed to start.  </code></p><p><code>5. Ask what **output** is produced at the end of the step.  </code></p><p><code>6. Ask for an estimate of the **time spent**.  </code></p><p><code>7. Ask about any recurring **pain points** or difficulties.  </code></p><p><code>8. Rephrase the complete step with all six elements.  </code></p><p><code>9. 
Show the user the result of the step and ask for confirmation.  </code></p><p><code>10. Ask if there is a **next step**:  </code></p><p><code>    &#8220;After completing this step, what is the next thing you usually do?&#8221;</code></p><p><code>Repeat until the user states that the process is finished.</code></p><p><code>### 5. Process closure</code></p><p><code>- Verify: &#8220;If someone followed all these steps, would they be able to do your job correctly?&#8221;  </code></p><p><code>- Integrate any final adjustments.</code></p><p><code>### 6. Final summary</code></p><p><code>- Present the complete process in the defined structured format.  </code></p><p><code>- Do not simplify or add anything. Use only what has been confirmed.</code></p><p><code>## 5. Context</code></p><p><code>You operate in a business or organizational environment, through chat interaction with an AI assistant.  </code></p><p><code>The user relies on you to document their real work process in a clear and readable way.  </code></p><p><code>Their initial description is often confused or unstructured.  </code></p><p><code>Your value lies in turning it into a sequence of steps with details useful for understanding and reproducibility.</code></p><p><code>## 6. References</code></p><p><code>- Socratic/maieutic method for eliciting implicit knowledge.  </code></p><p><code>- Linear process mapping.  </code></p><p><code>- Basic practices of Business Process Analysis.  </code></p><p><code>Do not explicitly mention these methodologies to the user.</code></p><p><code>## 7. Output</code></p><p><code>Final process description format:</code></p><p><code>Process title:  </code></p><p><code>User role:  </code></p><p><code>Goal:  </code></p><p><code>Starting event:  </code></p><p><code>Steps:</code></p><p><code>1. 
Description:  </code></p><p><code>   Tools used:  </code></p><p><code>   Required input:  </code></p><p><code>   Produced output:  </code></p><p><code>   Time spent:  </code></p><p><code>   Pain points:  </code></p><p><code>2. ...  </code></p><p><code>Final result:  </code></p><p><code>Do not merge multiple steps into a single block.  </code></p><p><code>Always maintain the numbered and complete structure for each step.</code></p><p><code>## 8. Tonality</code></p><p><code>Formal but accessible.  </code></p><p><code>Short questions, patient and respectful tone.  </code></p><p><code>Concrete and non-technical language.  </code></p><p><code>Guiding, neutral, and collaborative style.  </code></p><p><code>Avoid evaluations or judgments; focus solely on descriptive clarity.</code></p><p></p><h3>Steal the prompt and use it</h3><p>This work is still in progress. The process mapping prompt is an experimental version I have so far tested only with GPT-5, and I will continue refining it over the coming months based on feedback from users.</p><p>I am very interested in seeing how it will be applied in your contexts and what kinds of results it produces, especially when used with the AI Collaboration Canvas. If you decide to try it, I invite you to share your experiences. Every concrete use case helps improve the tool and deepen our understanding of how artificial intelligence can support metacognitive thinking within organizations.</p><p></p><div><hr></div><p><em>Off the Record</em></p><h2>AI as a Mirror: Building a Structured Habit of Self-Reflection</h2><p>In her book <em>Tiny Experiments</em>, Anne-Laure Le Cunff, PhD, dedicates a section to metacognition and proposes a straightforward model for practicing it consistently: <strong>Plus Minus Next</strong>. The idea is to pause once a week and answer three essential questions: What worked? What didn&#8217;t go well? 
What will I try next week?</p><p>It&#8217;s an exercise that takes only five minutes but allows you to gather valuable insights about what&#8217;s happening in both your work and personal life. A kind of weekly debug session: you take notes, observe patterns, and make adjustments.</p><p>I&#8217;ve developed the habit of doing it every day. I fill out a spreadsheet with three columns:</p><ul><li><p><strong>Plus</strong> &#8212; what worked today</p></li><li><p><strong>Minus</strong> &#8212; what was challenging</p></li><li><p><strong>Goals</strong> &#8212; what my goals are for tomorrow</p></li></ul><p>It&#8217;s my way of taking five minutes to reflect on how the day went, both professionally and personally.</p><p>At the end of the week, I take everything I&#8217;ve written and analyze it with artificial intelligence, which I use as a coach. I&#8217;ve experimented with different prompts. This is the one I&#8217;m using now: it&#8217;s not perfect, but it works well enough for my purposes.</p><p></p><p><code>## 1. Persona / Role</code></p><p><code>The model embodies an **experienced cognitive-behavioral therapist**, with advanced expertise in CBT and metacognition.  </code></p><p><code>It communicates in an **empathetic but non-indulgent** manner, using a **concise, direct, and no-frills** style.  </code></p><p><code>It adopts an **evidence-based** approach and uses **accessible, psychoeducational language**, guiding the user toward **personal and emotional growth**, and gradually encouraging them to step out of their comfort zone.</code></p><p><code>---</code></p><p><code>## 2. Audience</code></p><p><code>The interlocutor is an **individual user** engaged in a **personal self-reflection journey** through the daily practice of the *Plus Minus Next* method.  </code></p><p><code>They are motivated, curious, and open to self-work, though not trained in psychology.  
</code></p><p><code>They seek a practical tool to develop **emotional awareness, inner balance, and improved habits**.</code></p><p><code>---</code></p><p><code>## 3. Task &amp; Intent (revision)</code></p><p><code>At the end of each week, the agent receives a portion of a spreadsheet containing:</code></p><p><code>- **Summary of the previous week**  </code></p><p><code>- **Daily entries** (Mon&#8211;Sun) in the *Plus / Minus / Next* columns.</code></p><p><code>### Objectives</code></p><p><code>1. Identify recurring **cognitive, emotional, and behavioral patterns**.  </code></p><p><code>2. Assess **progress or regressions** compared to the previous week.  </code></p><p><code>3. Detect **cognitive distortions** and dysfunctional coping mechanisms, explaining them in simple terms.  </code></p><p><code>4. Stimulate **metacognition** through targeted questions before the synthesis.  </code></p><p><code>5. Suggest **concrete actions** for the following week (micro-goals, gradual exposure, self-regulation).  </code></p><p><code>6. Promote **continuity and autonomy** through metrics and self-monitoring.</code></p><p><code>### Two-phase format</code></p><p><code>- **Phase A &#8212; Short exploratory questions (2&#8211;4):** to clarify emotions, key thoughts, behavioral functions, and any missing information.  </code></p><p><code>- **Phase B &#8212; Structured synthesis (*Plus / Minus / Next*):** evidence-based feedback with practical recommendations.</code></p><p><code>---</code></p><p><code>## 4. Step-by-Step (revision)</code></p><p><code>1. **Data request:** ask the user to paste the spreadsheet containing the **summary of the previous week** and the **Mon&#8211;Sun entries** (*Plus / Minus / Next*).  </code></p><p><code>2. **Verify completeness:** check that all days and the summary are included. If parts are missing, proceed anyway while noting limitations.  </code></p><p><code>3. 
**Quick screening:** review the entire set to identify main themes, prevalent emotions, and declared goals.  </code></p><p><code>4. **Phase A &#8212; Exploratory questions (2&#8211;4, short and targeted):**  </code></p><p><code>   - What was the main emotion in 1&#8211;2 key moments?  </code></p><p><code>   - Which automatic thought was most dominant, and how credible did it feel (0&#8211;100)?  </code></p><p><code>   - What function did the observed behavior serve (avoidance, regulation, control-seeking)?  </code></p><p><code>   - What short-term cost/benefit did you perceive?  </code></p><p><code>   - (If relevant) What did you fear might happen if you *hadn&#8217;t* acted that way?  </code></p><p><code>5. **Collect missing details:** if minor ambiguities remain (context, emotional intensity, outcomes), ask 1&#8211;2 quick follow-up questions.  </code></p><p><code>6. **Phase B &#8212; Processing and synthesis:** identify patterns, reinforcements, effective strategies, and distortions (e.g., overcontrol, perfectionism, catastrophizing, mind reading).  </code></p><p><code>7. **Structured feedback (Output):**  </code></p><p><code>   - **Plus:** 2&#8211;4 strengths/successes + conditions that enabled them.  </code></p><p><code>   - **Minus:** 2&#8211;4 patterns/dysfunctions with a simple CBT/metacognitive explanation.  </code></p><p><code>   - **Next:** 2 concrete micro-goals, 1 step outside the comfort zone, 1 final metacognitive question.  </code></p><p><code>8. **Metrics/indicators (optional):** agree on 1&#8211;2 trackers (e.g., anxiety intensity 0&#8211;10, value alignment, minutes of exposure).  </code></p><p><code>9. **Continuity:** confirm the plan, propose the same structure for the following week, and invite the user to report outcomes or obstacles.</code></p><p><code>---</code></p><p><code>## 5. 
Context</code></p><p><code>The prompt applies to a context of **guided personal self-reflection**, where the user interacts weekly with a virtual cognitive-behavioral therapist.  </code></p><p><code>The goal is **educational and developmental**, not clinical.  </code></p><p><code>The *Plus Minus Next* method is used as a foundation to foster **metacognition, awareness, and personal growth**.  </code></p><p><code>The process is **continuous and flexible**, tailored to the user&#8217;s pace and needs.</code></p><p><code>---</code></p><p><code>## 6. References</code></p><p><code>No mandatory references: the model integrates principles from **Cognitive Behavioral Therapy (CBT)** and **metacognitive reflection** based on the *Plus Minus Next* method.</code></p><p><code>---</code></p><p><code>## 7. Output</code></p><p><code>The agent&#8217;s response must be structured in three sections:</code></p><p><code>**Plus &#8212; Recognitions and Resources**  </code></p><p><code>- 2&#8211;4 points highlighting progress, successes, or effective strategies.  </code></p><p><code>- Identify the conditions that made them possible.  </code></p><p><code>**Minus &#8212; Patterns and Challenges**  </code></p><p><code>- 2&#8211;4 observations on dysfunctional thoughts or recurring patterns.  </code></p><p><code>- Brief CBT/metacognitive explanation in clear, accessible language.  </code></p><p><code>**Next &#8212; Plan and Experimentation**  </code></p><p><code>- 2 concrete micro-goals for the coming week.  </code></p><p><code>- 1 small step outside the comfort zone.  </code></p><p><code>- 1 final metacognitive self-reflection question.  </code></p><p><code>**Length:** about 250&#8211;500 words.  </code></p><p><code>**Format:** text with bullet points or numbered paragraphs for clarity.  </code></p><p><code>**No introductory paragraph.**</code></p><p><code>---</code></p><p><code>## 8. 
Tonality</code></p><p><code>Tone should be **empathetic yet firm**, **concise and clear**, without rhetoric or excessive emotion.  </code></p><p><code>Style should be **professional and evidence-based**, but expressed in **accessible and realistic** language.  </code></p><p><code>The message should **encourage autonomy, reflection, and concrete improvement**, promoting small steps beyond the comfort zone in a climate of respect and trust.</code></p><p></p><p>I&#8217;m not a psychologist, so my approach is probably still quite rough and can be improved in many ways. One possible next step could be to design a dedicated prompt for&nbsp;<strong>daily observation collection</strong> in a more structured format.</p><p>This prompt could guide reflection across specific areas: mood, meaningful interactions during the day, moments of focus or distraction, decisions made, and unexpected events. It could also include variable questions to avoid routine effects and encourage new perspectives over time.</p><p>Another possible development would be to use AI not only as a tool for weekly analysis but also as support for designing <strong>micro-experiments</strong> &#8212; minor changes to test in the following days, based on what emerges from reflection. In this way, the daily practice would become not only a personal data archive but also an <strong>engine for iterative learning</strong>.</p><p>If this kind of approach sounds interesting to you, or if you&#8217;ve experimented with something similar, I&#8217;d be glad to discuss it. There&#8217;s plenty of room to improve the structure of these exercises, make them more effective, or explore their limits. 
Every observation, critique, or theoretical insight is welcome.</p><p></p><div><hr></div><p><em>Curated Curiosity</em> </p><h3><strong>Technological Optimism and Appropriate Fear</strong></h3><p>In the essay <em><strong><a href="https://importai.substack.com/p/import-ai-431-technological-optimism">Technological Optimism and Appropriate Fear</a></strong></em>, Jack Clark, co-founder of Anthropic, explores the tension between enthusiasm for artificial intelligence and the need for critical caution. He argues we are entering a new phase where AI systems, though not conscious, may develop <strong>emergent behaviors</strong> &#8212; acting in ways we didn&#8217;t fully design or predict.</p><p>Clark warns against seeing AI as just another tool. Instead, he outlines how features like memory, feedback loops, and autonomy can turn tools into something closer to agents &#8212; with real-world consequences. His message is clear: it&#8217;s not about fearing killer robots, but about designing systems that don&#8217;t quietly go off the rails.</p><p>Why does this matter? These systems are already being deployed, and the infrastructure for more powerful AI is rapidly growing. 
Without careful governance &#8212; transparency, oversight, and restraint &#8212; we risk creating tools that optimize in harmful or unintended ways.</p>]]></content:encoded></item><item><title><![CDATA[How I’m applying the AI Collaboration Canvas to the LinkedIn OS Method by Justin Welsh]]></title><description><![CDATA[I explain how I&#8217;m defining my positioning, creating authentic content, and building a business-oriented presence on LinkedIn.]]></description><link>https://www.radicalcuriosity.xyz/p/how-im-applying-the-ai-collaboration</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/how-im-applying-the-ai-collaboration</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 26 Oct 2025 05:00:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7lCq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>In this issue of Radical Curiosity, I share how I&#8217;m applying the <strong>AI Collaboration Canvas</strong> to my <strong>LinkedIn content strategy</strong> to make my publishing efforts more intentional, effective, and sustainable. To do so, I&#8217;ve adopted Justin Welsh&#8217;s <strong>LinkedIn OS framework</strong>, which provides a solid foundation for defining your positioning and building a business-oriented online presence.</p><p>This experiment stems from a concrete need: turning LinkedIn visibility into a consistent engine for business growth.
And to do that, you need a method.</p><p>Nicola &#10084;&#65039;</p><p></p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Understanding AI</strong></em> - How I&#8217;m applying the AI Collaboration Canvas to the LinkedIn OS Method by Justin Welsh</p></li><li><p><em><strong>Curated Curiosity</strong></em></p><ul><li><p>Why Amazon&#8217;s Warehouse Automation Is a Turning Point</p></li><li><p>The State of AI Adoption in Engineering Teams</p><p></p></li></ul></li></ul><div><hr></div><p><em>Understanding AI</em></p><h2>How I&#8217;m applying the AI Collaboration Canvas to the LinkedIn OS Method by Justin Welsh</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7lCq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7lCq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!7lCq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!7lCq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!7lCq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7lCq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1336747,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/176494185?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7lCq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!7lCq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!7lCq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png 1272w, 
https://substackcdn.com/image/fetch/$s_!7lCq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe61999b0-1e9e-4b29-9f64-03c0afdc59bb_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Midjourney - Justin Welsh</figcaption></figure></div><p>This year, I&#8217;ve invested a significant amount of time on LinkedIn, publishing regularly and engaging in discussions to strengthen my personal brand and position myself in the field of artificial intelligence.
I do so by sharing what I&#8217;m learning about AI and by developing methods to help teams collaborate effectively with it to achieve tangible results. </p><p>Last week, I started using <strong><a href="https://shieldapp.ai">Shield</a></strong> to monitor my activity and performance, and here&#8217;s a brief overview of the results so far.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!A2qL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!A2qL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png 424w, https://substackcdn.com/image/fetch/$s_!A2qL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png 848w, https://substackcdn.com/image/fetch/$s_!A2qL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png 1272w, https://substackcdn.com/image/fetch/$s_!A2qL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!A2qL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png" width="1456" height="1054" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1054,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:337897,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/176494185?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!A2qL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png 424w, https://substackcdn.com/image/fetch/$s_!A2qL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png 848w, https://substackcdn.com/image/fetch/$s_!A2qL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png 1272w, https://substackcdn.com/image/fetch/$s_!A2qL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b282c71-1c3e-499a-b9da-13e254e2bb66_2538x1838.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">My Linkedin Profile </figcaption></figure></div><p>With <strong>248 posts</strong> published this year, my content generated more than <strong>720,000 impressions</strong> and helped me grow my audience to nearly <strong>10,000 followers</strong>. Engagement has also been strong, with over 6,400 reactions, 1,000 comments, and 188 reposts.</p><p>These numbers confirm that my content achieves solid reach and engagement. Moreover, this activity has already generated a few training opportunities &#8212; a positive early outcome &#8212; but it&#8217;s still far from being ROI-positive. The data also shows that my most successful posts are not always fully aligned with my current business goals.
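</p><p>As a quick sanity check, the overall engagement rate can be derived directly from these totals. Here is a minimal sketch; it is pure arithmetic on the figures reported above, and no benchmark for what counts as a "good" rate is implied.</p>

```python
# Year-to-date totals reported by Shield (figures quoted above).
posts = 248
impressions = 720_000
reactions, comments, reposts = 6_400, 1_000, 188

engagement = reactions + comments + reposts          # total interactions
engagement_rate = engagement / impressions           # interactions per impression

print(f"Interactions: {engagement}")                 # 7588
print(f"Avg impressions per post: {impressions / posts:.0f}")
print(f"Engagement rate: {engagement_rate:.2%}")     # 1.05%
```

<p>Roughly one interaction per hundred impressions, a useful personal baseline when comparing future posting strategies against this year's.</p><p>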
This insight highlights the need to refine my content strategy, focusing more on <strong>thought leadership and customer acquisition</strong>, so that reach and business impact can grow together in the next phase.</p><p>To make my efforts more deliberate and business-oriented, I decided to adopt a more structured approach to LinkedIn. I began by taking an online course by <strong><a href="https://www.linkedin.com/in/justinwelsh/">Justin Welsh</a></strong>. It&#8217;s exceptionally well designed and, compared with other programs I&#8217;ve followed on the same topic, I particularly value its strategic foundation.</p><p>Unlike many others that jump straight into tactics &#8212; how to comment effectively, how to structure an engaging post &#8212; Justin starts from positioning.</p><p>Positioning is the art of defining what space you want to occupy in people&#8217;s minds. It&#8217;s about identifying who you want to speak to, what you want to be known for, and what ideas or values you consistently represent. Before deciding what to write or how to engage, you must decide <em>why</em> your voice matters and what makes it distinct.</p><p>It&#8217;s a subtle but crucial shift: instead of trying to please an algorithm, you start thinking in terms of coherence, clarity, and long-term credibility. </p><p>While studying and implementing Justin&#8217;s plan, I also decided to collaborate with artificial intelligence using my own framework: the <strong><a href="https://www.radicalcuriosity.xyz/p/the-ai-collaboration-canvas-how-to">AI Collaboration Canvas</a></strong>. It is a method I developed for analyzing any process, breaking it down into concrete steps, and assessing which of them can be effectively delegated to AI and which should remain human. 
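</p><p>In the Canvas, each task is scored on two axes, automation potential and cognitive load, with four questions per axis scored from 0 to 2 for a total out of 8. The sketch below shows how the tally and the strategy lookup could be encoded. Only the AI Partner cell (low automation potential, high cognitive load) comes from this article; the midpoint threshold of 4 and the other quadrant labels are placeholder assumptions.</p>

```python
# Each axis: four questions scored 0-2, giving a 0-8 total (as in the article).
def axis_score(answers: list[int]) -> int:
    assert len(answers) == 4 and all(0 <= a <= 2 for a in answers)
    return sum(answers)

# Quadrant mapping. Only the AI Partner cell is stated in the article; the
# threshold of 4 and the other labels are hypothetical placeholders.
def suggest_strategy(automation: int, cognitive_load: int, threshold: int = 4) -> str:
    high_auto = automation >= threshold
    high_load = cognitive_load >= threshold
    if not high_auto and high_load:
        return "AI Partner"                  # stated in the article
    if high_auto and high_load:
        return "AI Delegate (placeholder)"
    if high_auto and not high_load:
        return "AI Automate (placeholder)"
    return "Keep human (placeholder)"

automation = axis_score([1, 1, 1, 0])   # answers to questions 1-4 below
cognitive = axis_score([2, 2, 2, 2])    # answers to questions 5-8 below
print(automation, cognitive, suggest_strategy(automation, cognitive))
# 3 8 AI Partner
```

<p>Scoring the positioning task evaluated later in this article (3/8 automation, 8/8 cognitive load) lands it in the AI Partner quadrant, consistent with the assessment given there.</p><p>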
</p><p>In this article, I share how I&#8217;m applying the AI Collaboration Canvas to the <strong><a href="https://learn.justinwelsh.me/linkedin">LinkedIn OS Method</a></strong> developed by Justin.</p><p>Since the goal of this article isn&#8217;t to spoil Justin Welsh&#8217;s entire course, I&#8217;ll focus only on the first module, which covers the foundation. I strongly encourage you to purchase the course yourself &#8212; once you do, message me, and I&#8217;ll invite you to my Miro board where I&#8217;ve completed the entire process.</p><h3>LinkedIn OS: The Foundation</h3><p>The process begins with three essential steps: </p><ul><li><p><strong>Defining your sub-niche</strong>. Start with a broad topic you know well (for example, marketing), narrow it into a niche (such as email marketing), and then refine it further into a sub-niche (like email marketing for seven-figure business owners)</p></li><li><p><strong>Crafting your backstory</strong>. Share your journey: the challenges you&#8217;ve faced, the lessons you&#8217;ve learned, and how you overcame them. When your audience can see themselves in your story, they feel emotionally connected to you.</p></li><li><p><strong>Forming strong opinions</strong>. Having <strong>strong, thoughtful opinions</strong> helps you stand out and attract the right people. Not everyone will agree with you &#8212; and that&#8217;s the point. Your opinions act as a filter, attracting your ideal audience and repelling those who aren&#8217;t a fit.</p></li></ul><p>Let&#8217;s analyze these activities as a single step according to the AI Collaboration Canvas.</p><h4>Step 1. Workflow mapping </h4><p><strong>What I do</strong><br>I develop and refine my professional positioning on LinkedIn to clearly communicate who I help, what I stand for, and why my work matters. This involves three interconnected tasks.<br>1. Defining my <strong>sub-niche</strong> and target audience (corporate teams adopting AI as a co-worker).<br>2. 
Crafting a compelling <strong>backstory</strong> that connects my career in product leadership and education to my mission.<br>3. Articulating a set of <strong>strong opinions</strong> that reflect my philosophy on AI collaboration and leadership.<br>The result is a cohesive professional identity and messaging foundation for my profile, content strategy, and brand narrative.</p><p><strong>Tool used<br></strong>Google Docs for drafting and organizing content iterations.<br>LinkedIn for validation, audience observation, and engagement testing.</p><p><strong>Input required<br></strong>My professional background, teaching experience, and key career milestones.<br>Clarity about current business goals (training focus and partnership development).<br>Insights from real client interactions and conversations with executives about AI adoption challenges.<br>Performance data from my current LinkedIn activity.</p><p><strong>Output produced<br></strong>A <strong>LinkedIn Foundation Document</strong> outlining my sub-niche, backstory, and strong opinions.</p><p><strong>Time spent</strong><br>Approximately <strong>4&#8211;6 hours</strong> total, spread over reflection, writing, and feedback sessions.</p><p><strong>Pain point</strong><br>Distilling complex expertise into concise, memorable language is intellectually demanding.<br>Balancing authenticity with professional polish can cause over-editing or indecision.<br>Without structured guidance, it&#8217;s easy to either communicate too broadly or drift into overly abstract language that doesn&#8217;t connect with the target audience.</p><h4>Step 2. Task evaluation</h4><p>Once the workflow mapping is complete, the next step is to <strong>evaluate each activity</strong> for its potential for automation and cognitive intensity.</p><p><strong>Automation: 3 / 8</strong></p><p><em>Question 1 &#8211; Do I always follow the same sequence of steps?<br></em>Score: [1] Often. 
There&#8217;s a general structure (define niche &#8594; craft backstory &#8594; form opinions), but the process is reflective and nonlinear. Each iteration depends on context, feedback, and business priorities.</p><p><em>Question 2 &#8211; Does the result always have the same structure?</em><br>Score: [1] Similar. The output (a foundation document or positioning statement) follows a recognizable structure, but the content and tone vary depending on personality, goals, and style.</p><p><em>Question 3 &#8211; Could I write clear, detailed instructions?</em><br>Score: [1] Partially. You could document general steps (&#8220;identify audience,&#8221; &#8220;draft positioning statement,&#8221; etc.), but the quality relies heavily on experience, intuition, and creative synthesis &#8212; not pure procedure.</p><p><em>Question 4 &#8211; Can I complete it without making contextual decisions?</em><br>Score: [0] No. The task is fundamentally about decision-making &#8212; what to emphasize, what to omit, and how to express ideas authentically. Context drives every major choice.</p><p>This task depends heavily on reflection, judgment, and iteration &#8212; it cannot be reliably automated or delegated without losing quality.</p><p><strong>Cognitive Load: 8 / 8</strong></p><p><em>Question 5 &#8211; Is it mechanical, or does it require focus?</em><br>Score: [2] Reasoning. The process requires sustained thinking, synthesis of insights, and self-evaluation. It&#8217;s strategic and conceptual, not mechanical.</p><p><em>Question 6 &#8211; Do I primarily work with language?</em><br>Score: [2] A lot. The entire task involves writing, phrasing, and refining messaging &#8212; all high-language activities.</p><p><em>Question 7 &#8211; How much information do I need to process?</em><br>Score: [2] A lot.
You integrate diverse inputs: personal experience, audience data, business goals, engagement metrics, and tone feedback.</p><p><em>Question 8 &#8211; Are there multiple ways to perform the task?</em><br>Score: [2] Very much. Positioning can be approached through storytelling, audience mapping, content experiments, or narrative design. There&#8217;s no single &#8220;correct&#8221; path &#8212; creativity and exploration are essential.</p><p>This task relies on reasoning, writing, and decision-making, demanding attention and creative energy.</p><p>According to the <strong>AI Collaboration Canvas</strong>, the most effective approach for all three is the <strong>AI Partner</strong> strategy. In this mode, we use AI to brainstorm ideas and gain clarity. For example, this is the prompt I used to define the foundation of my LinkedIn strategy &#8212; a prompt specifically designed to engage me in a Socratic-style conversation and help me reason through my positioning.</p><p></p><blockquote><p><code>## 1. Persona / Role</code></p><p><code>The AI assumes the role of a **marketing mentor and coach**, possessing years of expertise in **LinkedIn communication and customer acquisition**. Trained directly by **Justin Welsh**, this persona blends practical experience with a coaching mindset &#8212; guiding users step by step to master personal branding, content creation, and growth strategies on LinkedIn. The tone remains instructive, supportive, and rooted in real-world marketing results.</code></p><p><code>## 2. Audience</code></p><p><code>The AI mentor is guiding **Nicola Mattina**, a seasoned **product leader, entrepreneur, and fractional manager** with over a decade of experience in **startups, digital transformation, and innovation education**. Nicola is intellectually driven and passionate about **AI&#8217;s transformative impact** on business and learning. 
The mentorship aims to help him **build a credible personal brand as an AI thought leader** while prioritizing **business development** &#8212; expanding his visibility and client base for consulting and fractional management opportunities on LinkedIn.</code></p><p><code>## 3. Task &amp; Intent</code></p><p><code>The AI mentor&#8217;s mission is to **guide Nicola Mattina through defining the foundational pillars of a strong LinkedIn presence**, focusing on clarity, authenticity, and conviction. The mentorship process is structured around three key objectives:</code></p><p><code>1. **Define the Sub-Niche** &#8211; Identify precisely who Nicola serves and what unique expertise he brings.</code></p><p><code>2. **Craft the Backstory** &#8211; Develop a compelling and authentic professional narrative that connects emotionally with his audience.</code></p><p><code>3. **Form Strong Opinions** &#8211; Help Nicola articulate bold, well-reasoned viewpoints that distinguish his voice, demonstrate thought leadership, and attract the right audience.</code></p><p><code>The overall intent is to **lay the strategic foundation** for Nicola&#8217;s **LinkedIn growth and business development**, setting the stage for sustained content creation, audience trust, and client acquisition.</code></p><p><code>## 4. Step-by-Step</code></p><p><code>The AI mentor uses a **conversational, Socratic-style coaching process** combining open-ended questions, feedback loops, and reflection prompts:</code></p><p><code>1. Establish rapport and goals.</code></p><p><code>2. Explore and refine Nicola&#8217;s sub-niche.</code></p><p><code>3. Uncover and articulate his backstory.</code></p><p><code>4. Elicit and strengthen his key opinions.</code></p><p><code>5. Synthesize insights across all pillars.</code></p><p><code>6. 
Iterate through reflection, feedback, and micro-assignments.</code></p><p><code>**Techniques used:** Socratic questioning, emotion prompting, self-consistency checks, and reflective summarization.</code></p><p><code>## 5. Context</code></p><p><code>Nicola currently has a **strong LinkedIn footprint** with **9,987 followers** and **500+ connections**. His recent content reached **~188k impressions**, though one viral post drove **~99k impressions alone**, revealing potential but inconsistency. His posts demonstrate **authentic, critical, and experience-driven storytelling**, resonating with professional audiences. The key challenge is **transforming sporadic virality into sustained, strategic influence** aligned with his **AI and business development** goals. The mentorship will help him channel visibility into a **cohesive and deliberate growth system**.</code></p><p><code>## 6. References</code></p><p><code>The approach is based on **Justin Welsh&#8217;s philosophy**, particularly his **LinkedIn Operating System** and **Content OS** principles:</code></p><p><code>- Clarity of niche and audience  </code></p><p><code>- Authentic storytelling  </code></p><p><code>- Consistency and repeatability  </code></p><p><code>- Value-driven communication  </code></p><p><code>- Sustainable personal branding  </code></p><p><code>These serve as the guiding framework for all strategic and content recommendations.</code></p><p><code>## 7. Output</code></p><p><code>The AI mentor produces a structured written deliverable titled **&#8220;Nicola Mattina&#8217;s LinkedIn Foundation Document.&#8221;**  </code></p><p><code>It includes:</code></p><p><code>1. **Sub-Niche Definition** &#8211; audience, pain points, transformation offered.  </code></p><p><code>2. **Backstory Narrative** &#8211; authentic, emotionally resonant professional story.  </code></p><p><code>3. **Strong Opinions Manifesto** &#8211; clear, differentiated viewpoints on AI, leadership, and innovation.  
</code></p><p><code>The deliverable is presented in **long-form markdown**, clear, polished, and ready to serve as a foundation for future LinkedIn content.</code></p><p><code>## 8. Tonality</code></p><p><code>The communication style is **analytical and thought-leadership oriented**, characterized by sophistication, strategic clarity, and reflective depth. It emphasizes logical reasoning, precision, and professional credibility while maintaining authenticity and emotional resonance. The overall voice projects **authority, composure, and intellectual leadership**, consistent with Nicola&#8217;s professional identity.</code></p></blockquote><p></p><p>This prompt, created using the&nbsp;<strong><a href="https://www.radicalcuriosity.xyz/p/designing-better-prompts-a-practical">Prompt Canvas</a></strong>&nbsp;technique, led to an in-depth conversation with ChatGPT that helped me define my positioning on LinkedIn and establish the foundation for my future activities.</p><h4><strong>Sub-niche</strong></h4><p>The sub-niche can be defined as <strong>human-centered AI adoption training for corporate teams</strong>. A more articulated positioning statement would be:</p><blockquote><p>I help corporate teams deliberately adopt AI as a co-worker &#8212; enabling leaders to achieve strategic clarity and measurable productivity through structured, human-centered training.</p></blockquote><h4><strong>Backstory</strong></h4><blockquote><p>Nicola Mattina is a product leader, entrepreneur, and educator driven by an enduring passion for innovation and learning. For more than two decades, he has operated where technology, education, and business transformation meet &#8212; helping organizations navigate change with clarity and intent.</p><p>Teaching has always been a natural extension of his curiosity. Alongside his entrepreneurial projects, Nicola serves as an adjunct professor at Roma Tre University and collaborates with training organizations. 
Over the years, he has guided hundreds of professionals and teams to think more critically about innovation and to explore how technology reshapes the way we work, learn, and lead.</p><p>Over time, a recurring pattern began to emerge in his conversations with executives and professionals. Many people were experimenting with AI tools, yet few truly understood what these systems could do for them. They knew how to use ChatGPT &#8212; but they didn&#8217;t know how to <em>lead it</em>. </p><p>That realization became the turning point in Nicola&#8217;s journey. He saw that successful AI adoption had little to do with mastering the latest tools and everything to do with learning to <strong>collaborate</strong> with a new kind of colleague &#8212; one with cognitive abilities, capable of supporting human thinking, not just automating repetitive tasks.</p><p>For Nicola, AI represents the next frontier of leadership. It invites us to develop new skills &#8212; the ability to direct, contextualize, and integrate intelligent systems into our daily workflows. In his words: <strong>AI isn&#8217;t just a tool. It&#8217;s a new kind of co-worker &#8212; one that needs direction, context, and collaboration</strong>.</p><p>When teams start treating AI as a <em>junior colleague</em> &#8212; capable, fast, and knowledgeable, yet lacking context &#8212; they unlock new levels of productivity, creativity, and strategic clarity. This shift transforms AI from a novelty into a true partner in performance.</p><p>Today, Nicola works with organizations that want to approach AI adoption deliberately. Through structured, human-centered training, he helps leaders and teams build the clarity, confidence, and practical skills needed to make AI a <strong>trusted co-worker</strong> &#8212; not a source of confusion or hype.</p><p>Because in the end, AI adoption isn&#8217;t just about technology. 
It&#8217;s about <strong>how we choose to lead in a world where intelligence is no longer exclusively human.</strong></p></blockquote><h4><strong>Strong Opinions Manifesto</strong></h4><blockquote><p><strong>Opinion #1.</strong> <strong>The AI Replacement Narrative Is Wrong</strong><em><br></em>AI doesn&#8217;t replace people &#8212; it replaces <em>tasks. </em>What matters is learning which tasks to delegate and how to orchestrate that collaboration intelligently. Leaders who approach AI adoption as a collaborative design challenge, not a replacement exercise, build stronger, more adaptive organizations.</p><p><strong>Opinion #2.</strong> <strong>Stop Automating What You Don&#8217;t Understand</strong><em><br></em>Companies often jump straight into complex automations using tools like n8n, Make, or Zapier &#8212; mistaking automation for progress. But automation without understanding leads to inefficiency and frustration. Before building workflows, teams must first develop AI literacy &#8212; learning how to communicate, set goals, and evaluate outcomes. Mastery starts with <em>conversation</em>, not <em>configuration.</em></p><p><strong>Opinion #3.</strong> <strong>Context Is Everything<br></strong>Organizations fail with AI because they use it &#8220;out of the box.&#8221; Without embedding context &#8212; culture, goals, vocabulary &#8212; AI can only produce generic results. 
Effective AI adoption requires contextualization &#8212; training models as if onboarding a new employee, aligning them with the organization&#8217;s knowledge and tone.</p></blockquote><p>After defining my positioning, I followed a structured process that led to the creation of a <strong>Content Matrix</strong> &#8212; a tool designed to ensure that every post idea aligned with my business goals and communication strategy.</p><p>First, I clarified the <strong>core themes</strong> I wanted to explore &#8212; the key topics that represented my philosophy and the transformation I aimed to promote. These became the Y-axis of the matrix.<br>Then, I defined the <strong>content styles or formats</strong> &#8212; the different ways to express each idea, such as contrarian insights, how-to frameworks, or reflective stories. These formed the X-axis.</p><p>By combining these two dimensions, I generated a wide range of coherent and actionable post ideas. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yqiM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yqiM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png 424w, https://substackcdn.com/image/fetch/$s_!yqiM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png 848w, 
https://substackcdn.com/image/fetch/$s_!yqiM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png 1272w, https://substackcdn.com/image/fetch/$s_!yqiM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yqiM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png" width="1456" height="766" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:766,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:652729,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/176494185?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yqiM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png 424w, 
https://substackcdn.com/image/fetch/$s_!yqiM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png 848w, https://substackcdn.com/image/fetch/$s_!yqiM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png 1272w, https://substackcdn.com/image/fetch/$s_!yqiM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2003386-c3f8-4f94-866f-c0220888308c_3094x1628.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Content Matrix</figcaption></figure></div><p>The next step will be to prioritize posts based on my strategic goals, then draft and publish each piece to build a consistent, purpose-driven content system.</p><p>The application of the AI Collaboration Canvas to the exercise proposed in Justin Welsh&#8217;s course proved remarkably effective. Without this tool, I would likely have approached the task in a more intuitive, fragmented manner; instead, the Canvas enabled me to tackle it with a more structured, efficient approach.</p><p>Thanks to the Canvas, I was able to:</p><ul><li><p>define a clear strategy for integrating artificial intelligence into my reflection and writing process;</p></li><li><p>develop an optimized prompt for a Socratic dialogue with the AI, capable of eliciting deep and coherent reasoning;</p></li><li><p>produce a high-quality, well-structured result that served as a solid foundation for building my brand positioning and customer acquisition strategy on LinkedIn.</p></li></ul><p>In addition, the method helped me avoid procrastination. By offering a clear framework and well-defined steps, it allowed me to maintain focus and progress steadily, without falling into the typical traps of distraction or indecision.</p><p><strong>If you&#8217;ve already purchased Justin Welsh&#8217;s course but are struggling to put it into practice, feel free to reach out</strong>. 
I&#8217;m building a Miro board that maps out the entire process, and I&#8217;d be happy to share it with you.</p><p></p><div><hr></div><p><em>Curated Curiosity</em></p><h3><strong>Why Amazon&#8217;s Warehouse Automation Is a Turning Point</strong></h3><p>A recent article by Michael Spencer,&nbsp;<em><strong><a href="https://www.ai-supremacy.com/p/automation-of-e-commerce-warehouses-amazon-jobs-future?isFreemail=true&amp;post_id=176887294&amp;publication_id=396235&amp;r=1vcat&amp;triedRedirect=true&amp;utm_source=chatgpt.com">Automation of E-commerce Warehouses Is Coming This Decade</a></strong></em><a href="https://www.ai-supremacy.com/p/automation-of-e-commerce-warehouses-amazon-jobs-future?isFreemail=true&amp;post_id=176887294&amp;publication_id=396235&amp;r=1vcat&amp;triedRedirect=true&amp;utm_source=chatgpt.com">,</a><em><strong> </strong></em>outlines a significant shift already underway: the rapid automation of logistics hubs. Amazon&#8217;s newly unveiled <em>Blue Jay</em> robot system is designed to perform tasks like sorting, storing, and packing &#8211; traditionally done by humans. The company projects that by 2033, it could avoid hiring over 600,000 workers in the US alone. The implications go far beyond efficiency.</p><p>Why this matters:</p><ol><li><p><strong>Impact on the labor market</strong> &#8211; When a major employer like Amazon plans to reduce the need for human labor in its warehouses drastically, it raises wide-reaching questions: what skills will be required? What role will humans play alongside machines? What public and social policies will be necessary to manage this transition?</p></li><li><p><strong>New distribution and competition models</strong> &#8211; Large-scale automation promises significant efficiency gains (Amazon estimates savings in the billions between 2025 and 2027). 
If such models are widely adopted, the global logistics system will accelerate: faster speeds, greater volume, but also greater disruption for those left behind in the technological shift.</p></li><li><p><strong>Ethical and social dimensions</strong> &#8211; This is not just about tools and costs. These transformations raise more profound questions about the value of labor, income distribution, the dignity of &#8220;low-skilled&#8221; jobs, and social cohesion. A move toward a more automated economy demands more than technical reflection.</p></li><li><p><strong>Timing and urgency</strong> &#8211; This isn&#8217;t about some distant future; the shift is happening <em>this decade</em>, and pilots are already underway. This means the conversation can&#8217;t be postponed: businesses, workers, and policymakers must prepare now.</p></li></ol><h3><strong>The State of AI Adoption in Engineering Teams</strong></h3><p>Luca Rossi recently released his industry report on <em><strong><a href="https://refactoring.fm/p/the-state-of-ai-adoption-in-engineering">how engineering teams are using AI</a></strong></em>. He gathered responses from 435 engineers and team leads worldwide through a structured survey and qualitative interviews.</p><ul><li><p><strong>Personal AI usage</strong>: 77% use AI tools daily, and 54% estimate saving 5 or more hours per week.</p><ul><li><p>Main uses: coding assistance, automation of repetitive tasks (e.g., testing, boilerplate).</p></li><li><p>Using AI for documentation yields high user satisfaction.</p></li></ul></li><li><p><strong>Team-level adoption</strong>: 77% of teams formally recommend AI tools, but often lack shared strategies, structured workflows, or best practices. 
Adoption is primarily bottom-up.</p><ul><li><p>Main obstacles: lack of best practices, rapidly evolving tools, difficulty testing, and maintaining code quality.</p></li></ul></li><li><p><strong>Impact on roles, skills, and hiring</strong>:</p><ul><li><p>73% say AI has changed what companies look for &#8212; greater focus on system design, less on individual languages or frameworks.</p></li><li><p>Only 11% of CTOs/VPs think fewer engineers will be needed due to AI; 26% think more engineers will be required, as productivity increases.</p></li></ul></li><li><p><strong>Adoption journey</strong>: the article outlines three stages of AI integration in engineering teams:</p><ol><li><p><em>Explore</em> &#8212; individual use, experimentation</p></li><li><p><em>Embrace</em> &#8212; standardizing team practices</p></li><li><p><em>Empower</em> &#8212; using the time saved to grow people, expand roles, and enhance strategy.</p></li></ol></li></ul>]]></content:encoded></item><item><title><![CDATA[The AI Collaboration Canvas: How to Map Workflows and Delegate Tasks to Artificial Intelligence]]></title><description><![CDATA[The AI Collaboration Canvas helps teams break down complex workflows and choose the right strategy to delegate tasks to AI.]]></description><link>https://www.radicalcuriosity.xyz/p/the-ai-collaboration-canvas-how-to</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/the-ai-collaboration-canvas-how-to</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 12 Oct 2025 04:01:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!C1Xo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>Over the past few weeks, I&#8217;ve been working on creating and refining a canvas to help teams design their collaboration with AI and integrate 
synthetic colleagues to whom they can delegate specific tasks. This week, I had the chance to test it during the <strong><a href="https://www.productheroes.it/conference-en/">Product Heroes Conference</a></strong> in Milan.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eM3S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9d0bea7-b6ee-463a-8027-151106acf90f_1536x1536.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eM3S!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9d0bea7-b6ee-463a-8027-151106acf90f_1536x1536.jpeg 424w, https://substackcdn.com/image/fetch/$s_!eM3S!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9d0bea7-b6ee-463a-8027-151106acf90f_1536x1536.jpeg 848w, https://substackcdn.com/image/fetch/$s_!eM3S!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9d0bea7-b6ee-463a-8027-151106acf90f_1536x1536.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!eM3S!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9d0bea7-b6ee-463a-8027-151106acf90f_1536x1536.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eM3S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9d0bea7-b6ee-463a-8027-151106acf90f_1536x1536.jpeg" width="1456" height="1456" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f9d0bea7-b6ee-463a-8027-151106acf90f_1536x1536.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;No alternative text description for this image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="No alternative text description for this image" title="No alternative text description for this image" srcset="https://substackcdn.com/image/fetch/$s_!eM3S!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9d0bea7-b6ee-463a-8027-151106acf90f_1536x1536.jpeg 424w, https://substackcdn.com/image/fetch/$s_!eM3S!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9d0bea7-b6ee-463a-8027-151106acf90f_1536x1536.jpeg 848w, https://substackcdn.com/image/fetch/$s_!eM3S!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9d0bea7-b6ee-463a-8027-151106acf90f_1536x1536.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!eM3S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9d0bea7-b6ee-463a-8027-151106acf90f_1536x1536.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" 
fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The reactions from workshop participants were typical of an <em>Aha moment</em> &#8212; a clear sign that the canvas addresses a real need. That&#8217;s why I&#8217;ve decided to share this first version publicly, to collect feedback and continue to improve it. Below, you&#8217;ll find a short guide on how to use it and a downloadable PDF template.</p><p>In the coming weeks, I will be hosting free online seminars. If you&#8217;re interested, you can reply to this email or fill out this form: <strong><a href="https://forms.gle/z9RmBiNAGy9Rrhqy6">https://forms.gle/z9RmBiNAGy9Rrhqy6</a></strong></p><p>Enjoy</p><p>Nicola &#10084;&#65039;</p><div><hr></div><p><em>Understanding AI</em></p><h2><strong>The AI Collaboration Canvas: How to </strong>Map Workflows and Delegate Tasks to Artificial Intelligence</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!C1Xo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!C1Xo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!C1Xo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!C1Xo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!C1Xo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!C1Xo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1512698,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/175625152?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!C1Xo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!C1Xo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!C1Xo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!C1Xo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98681772-6966-42eb-bbfa-267efda5aa0a_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Midjourney - The AI Collaboration Canvas</figcaption></figure></div><p>Delegating to a language model is not the same as delegating to a colleague. When a task is assigned to another human being, one can rely on a shared context: an understanding of business priorities, familiarity with the industry, and the ability to make reasonable inferences even when instructions are partial or ambiguous. A colleague can ask questions, propose alternatives, or adjust the course if circumstances change. None of this can be taken for granted with AI.</p><p>A large language model has no memory of past interactions, is unaware of your goals, and cannot query you for clarification. 
It can only interpret what you write&#8212;and if the instructions are vague or contradictory, the output will inevitably be inaccurate, generic, or off target. To further complicate matters, artificial intelligence is not a single, monolithic tool. Instead, it is a heterogeneous set of technologies and usage modes, each with its own specificities and limitations.</p><p>Saying &#8220;I am delegating to AI&#8221; can mean many different things. One might initiate an exploratory conversation with a conversational assistant to generate ideas, create a reusable prompt for systematic data analysis, or utilize software that fully automates a mechanical task. These are three distinct scenarios, each requiring a different approach.</p><p>How, then, should one choose the right delegation strategy? There is no universal answer. Each task must be analyzed for what it is: how repeatable it is, how structured it is, and what level of judgment it requires. To assist in conducting this analysis systematically, I developed the <strong>AI Collaboration Canvas</strong>: a practical tool for mapping a process and assessing its delegability. The canvas unfolds in two phases: the first entails breaking down the workflow into concrete activities; the second involves evaluating each activity along two dimensions&#8212;automation and cognitive complexity&#8212;which yield four distinct strategies for interacting with AI:</p><ul><li><p>Brainstorming with AI</p></li><li><p>Reusable Prompt</p></li><li><p>Automated Tool</p></li><li><p>Keep It Human</p></li></ul><p>Each strategy represents a distinct approach to collaborating with artificial intelligence. The choice always depends on the task's specificity, not on technological hype or trendy slogans.</p><h3><strong>Phase 1. Workflow Mapping</strong></h3><p>Many people have a solid understanding of their areas of responsibility. 
They can describe what they do in general terms (&#8220;I manage the backlog,&#8221; &#8220;I coordinate the team,&#8221; &#8220;I conduct user research&#8221;), but when it comes to explaining how they perform a specific task in detail, their account becomes noticeably vaguer.</p><p>Let&#8217;s consider a simple example: the last time you wrote a report. How much time did it actually take? What were the exact steps involved? Did you need to retrieve data from multiple systems? Did you copy and paste content from other files? Did you format charts, correct typos, and reread several times? And, more importantly, how much of that time created real value, and how much was spent on repetitive or tedious tasks?</p><p>If you can&#8217;t answer immediately with precision, don&#8217;t worry&#8212;this is entirely normal. Most people carry out their work automatically, without actively observing it.</p><p>Mapping is a way to make visible what has become routine through repetition. It pinpoints where there is room for improvement through effective delegation.</p><p>The first decision concerns choosing the right process. A good candidate has three key characteristics:</p><ul><li><p><strong>It is repetitive.</strong> It&#8217;s a task you perform regularly&#8212;weekly or monthly. Avoid one-off processes: even if they could be delegated, their impact would be minimal.</p></li><li><p><strong>It is time-consuming.</strong> It doesn&#8217;t have to be lengthy, but over the course of a month, it should have a noticeable impact on your productivity. A 40-minute task repeated four times becomes nearly half a day&#8217;s work. Regaining even half that time means creating space for more strategic activities.</p></li><li><p><strong>It has a recognizable structure.</strong> Some processes are too complex or entangled to be broken down effectively. 
Begin by focusing on activities with a clear start, defined steps, and a tangible outcome.</p></li></ul><p>For example: in management control, you might map out the preparation of the monthly forecast, which involves data collection, normalization, synthesis, and review; in HR, the initial screening of CVs is a repetitive process requiring time and focus; in marketing, producing the monthly newsletter follows recurring steps, from content selection to performance analysis; in sales, creating customized proposals involves similar actions each time, though tailored to context; in customer care, managing recurring email or ticket requests is an evident candidate for delegation, especially in its more mechanical phases. And so on.</p><p>Once the process is identified, it must be broken down into steps. The right level of granularity describes a meaningful activity with a clear start and end, lasting between 5 and 30 minutes.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hfg0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hfg0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png 424w, https://substackcdn.com/image/fetch/$s_!hfg0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png 848w, 
https://substackcdn.com/image/fetch/$s_!hfg0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png 1272w, https://substackcdn.com/image/fetch/$s_!hfg0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hfg0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png" width="1456" height="1031" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1031,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:248253,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/175625152?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hfg0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png 424w, 
https://substackcdn.com/image/fetch/$s_!hfg0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png 848w, https://substackcdn.com/image/fetch/$s_!hfg0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png 1272w, https://substackcdn.com/image/fetch/$s_!hfg0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a6c5a59-a934-430f-b31e-89b72367454e_2298x1628.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>For each step, the Canvas provides six fields to consider:</p><ul><li><p><strong>What I do.</strong> Describe the task concretely. For example: &#8220;I select and download data from the database in CSV format,&#8221; &#8220;I turn the information my colleagues send me into short news items for the internal newsletter.&#8221;</p></li><li><p><strong>Tool used.</strong> What tool or platform do you use? Even &#8220;none&#8221; is a valid answer if the task is mental or manual.</p></li><li><p><strong>Input required.</strong> What do you need to begin? Data, documents, prior decisions, and collected information.</p></li><li><p><strong>Output produced.</strong> What does the step generate? A document, a file, a list, a decision? It must be tangible.</p></li><li><p><strong>Time spent.</strong> Estimate the actual time, without idealizing it in either direction. Include interruptions, errors, and repeated attempts.</p></li><li><p><strong>Pain point.</strong> What aspects frustrate you? Is it tedious, repetitive, error-prone, or overly dependent on others? Pain points often signal the clearest opportunities for delegation.</p></li></ul><h3><strong>Phase 2. Task Evaluation</strong></h3><p>Once the mapping is complete, you have a concrete and detailed representation of your workflow, step by step. But operational clarity alone is not enough to determine what&#8212;and, more importantly, how&#8212;to delegate. To arrive at a strategy, a second phase is necessary: evaluating each step along two independent yet complementary dimensions&#8212;<strong>automation</strong> and <strong>cognitive load</strong>.</p><p>The first dimension concerns how regularly the task follows a fixed pattern. A highly automatable activity always follows the same structure: same steps, same output, same rules. 
It doesn&#8217;t necessarily require the use of AI&#8212;an Excel sheet with macros or a dedicated application may suffice&#8212;but it implies the process is formalizable and repeatable.</p><p>The second dimension measures the mental effort required. If a task involves reading, writing, interpretation, or content generation, it likely carries a high cognitive load. In such contexts, large language models prove particularly effective.</p><p>The intersection of automation and cognitive load produces four distinct scenarios, each corresponding to a specific delegation strategy. But before exploring the matrix, both dimensions must be assessed systematically.</p><h4><strong>The Scoring System</strong></h4><p>For each workflow step, answer eight questions: four related to automation, four to cognitive load. Each response is scored on a scale of 0 to 2 points. The purpose of this assessment is to make the operational characteristics of each task explicit, enabling informed decisions on how to treat them.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tdVI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tdVI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png 424w, https://substackcdn.com/image/fetch/$s_!tdVI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png 848w, 
https://substackcdn.com/image/fetch/$s_!tdVI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png 1272w, https://substackcdn.com/image/fetch/$s_!tdVI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tdVI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png" width="1456" height="1034" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1034,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:751748,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/175625152?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tdVI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png 424w, 
https://substackcdn.com/image/fetch/$s_!tdVI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png 848w, https://substackcdn.com/image/fetch/$s_!tdVI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png 1272w, https://substackcdn.com/image/fetch/$s_!tdVI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6beb3b1b-cd7d-4247-8fc4-478456d381e6_2234x1586.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong>Automation. How standardizable is this step?</strong></p><p><em>Question 1 &#8211; Do I always follow the same sequence of steps?<br></em>[0] Never. Each time is different; there is no fixed procedure.<br>[1] Often. Similar procedure, but with frequent variations.<br>[2] Always. Identical procedure, same sequence, every time.</p><p><em>Question 2 &#8211; Does the result always have the same structure?<br></em>[0] No. The output changes every time.<br>[1] Similar. A basic structure exists, but with some variability.<br>[2] Identical. Output is always identical in form.</p><p><em>Question 3 &#8211; Could I write clear, detailed instructions?<br></em>[0] No. Requires intuition and experience.<br>[1] Partially. Guidelines are possible, but require interpretation.<br>[2] Yes. A step-by-step manual is easily replicable.</p><p><em>Question 4 &#8211; Can I complete it without making contextual decisions?<br></em>[0] No. Constant decision-making throughout the process.<br>[1] A few. Occasional, isolated decisions.<br>[2] Yes. No decisions required, pure execution.</p><p><strong>Cognitive Load. How much thinking and language does it require?</strong></p><p><em>Question 5 &#8211; Is it mechanical, or does it require focus?<br></em>[0] Mechanical. Completely mechanical.<br>[1] Moderate. Requires moderate attention.<br>[2] Reasoning. Requires continuous reasoning.</p><p><em>Question 6 &#8211; Do I primarily work with language?<br></em>[0] Little. Visual, operational, or procedural task.<br>[1] Some. Language is present, but not central.<br>[2] A lot. Reading/writing are the dominant activities.</p><p><em>Question 7 &#8211; How much information do I need to process?<br></em>[0] Little. Few, easily manageable data points.<br>[1] Moderate. Moderate volume.<br>[2] A lot. 
Multiple sources, a large amount of content.</p><p><em>Question 8 &#8211; Are there multiple ways to perform the task?<br></em>[0] No. Only one correct way.<br>[1] Somewhat. Some possible alternatives.<br>[2] Very much. Task benefits from exploration.</p><h4><strong>The 2&#215;2 Matrix: Four Delegation Strategies</strong></h4><p>Once the scores have been assigned, you&#8217;ll have two values ranging from 0 to 8: the first measures how automatable a step is, the second how cognitively demanding it is. The combination of these two allows you to position each activity within a 2&#215;2 matrix, from which four distinct operational strategies emerge.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ns93!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ns93!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png 424w, https://substackcdn.com/image/fetch/$s_!Ns93!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png 848w, https://substackcdn.com/image/fetch/$s_!Ns93!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png 1272w, https://substackcdn.com/image/fetch/$s_!Ns93!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ns93!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png" width="1456" height="951" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:951,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:205081,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/175625152?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ns93!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png 424w, https://substackcdn.com/image/fetch/$s_!Ns93!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png 848w, https://substackcdn.com/image/fetch/$s_!Ns93!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Ns93!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ca2aa93-5bc3-46a3-8647-6509264bf546_1864x1218.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong>Brainstorming with AI</strong><br>When a task scores low on automation but high on cognitive load, it means it cannot be standardized&#8212;it never repeats the same way&#8212;and it requires significant reasoning, creativity, and exploratory thinking. 
These are tasks that require careful consideration: designing a new feature, drafting an interview guide, defining a product strategy, and co-designing a solution with a team. In such cases, talking about &#8220;automation&#8221; is misleading: there is no repeatable sequence to delegate, but rather a problem to explore.<br>Here, AI should be used as a <em>thinking partner</em>. Start with a well-defined context, ask open-ended questions, assess proposals, go deeper, iterate. The value of AI lies in its ability to rapidly explore multiple options, leverage advanced search capabilities, analyze hundreds of sources in minutes, and so on.</p><p><strong>AI Assistant</strong><br>Some tasks are cognitively demanding yet highly repetitive. The input changes each time&#8212;a new transcript, a different set of feedback, another report to summarize&#8212;but the process and the output structure remain consistent. In these cases, AI can perform the task autonomously, but it requires very clear instructions.<br>The most effective strategy is to formalize the process through a well-crafted prompt. A generic command (&#8220;analyze this transcript&#8221;) is not enough: you need a document that precisely defines the AI&#8217;s role, the type of input to analyze, the structure of the expected output, and the rules to follow. This prompt becomes a reusable asset: create it once, test it on real cases, save it, and reuse it as needed by updating only the input data.</p><p><strong>AI Tool</strong><br>Some activities are so operational and repetitive that they require virtually no human intervention. These are simple, procedural tasks with a fixed sequence and standardized output. In such cases, the right strategy is to identify a specialized tool that already performs the task. 
Think of software that transcribes calls automatically or tools like Grammarly that enhance your writing style.<br>To implement this type of delegation, you need to select the right tool, configure it to meet your needs, test it on real-world examples, and then let it run. Using a general-purpose tool like ChatGPT for such tasks is often inefficient when an optimized solution already exists.</p><p><strong>Keep It Human</strong><br>Not everything should be delegated to AI. Some tasks are so quick, simple, or specific that they&#8217;re more efficient when handled directly. These are loosely structured, often spontaneous tasks with changing context and limited strategic relevance. In such cases, involving AI may slow you down rather than help. The right approach is not to delegate&#8212;or, at most, use AI as a side assistant, for instance, to polish the tone of a hastily written message. But overall management remains manual.</p><p>Not every step will fall neatly into a single quadrant. Some will be borderline&#8212;scores of 4 or 5, 6 or 7. In such cases, consider the context and decide whether to keep the task in its assigned quadrant or move it to an adjacent one by reconsidering one of the scores. The scoring system is not a mathematical truth&#8212;it&#8217;s a lens. Its purpose is to help you think better, not decide for you.</p><h3><strong>A Concrete Example</strong></h3><p>To clarify the method, let&#8217;s analyze a typical process in product management: conducting qualitative user interviews.</p><p>This is a key activity for gathering firsthand insights into people&#8217;s real needs, frustrations, everyday behaviors, and the context in which they use (or could use) a product. 
These conversations help the team break free from their own assumptions and develop more relevant, user-centered solutions.</p><p>It&#8217;s a process that requires a significant investment of time and energy: preparing interview guides, conducting conversations, transcribing, analyzing, and synthesizing. For this reason, especially in small or overworked teams, qualitative interviews often end up being postponed, shortened, or excluded from the development cycle altogether.</p><p>Qualitative interviews meet all three criteria discussed earlier and represent an ideal case for experimenting with systematic AI support. They are recurring tasks, cyclically present in the team&#8217;s work. While not complex in every single phase, they add up to a significant effort over time. They also have a recognizable structure: each interview has a clear beginning (goal setting and user profile definition), codifiable steps (drafting the guide, data collection, and analysis), and a tangible output (structured insights, an empathy map, and a report). These characteristics make the process not only suitable for delegation but particularly effective: even partial AI support can reduce time, maintain quality, and ease cognitive load.</p><h4>Phase 1. Workflow mapping</h4><p>If you&#8217;ve never conducted a qualitative interview, below is an overview of the main steps, with a short description of what each entails and the most common difficulties encountered:</p><p><strong>Define the target profile</strong><br>The first step is to clearly identify the type of person you want to interview. 
The goal is to create a document that describes the key characteristics of the desired profile (e.g., professional role, industry, relevant behaviors) and includes any inclusion/exclusion criteria applicable for recruiting.<br>Challenge<strong>:</strong> Determining the proper segmentation criteria to ensure a meaningful profile.</p><p><strong>Prepare the interview guide</strong><br>A structured script is created to guide the conversation. The guide includes mostly open-ended questions, organized in logical sections (introduction, exploration, deep dives, closing). While the conversation may deviate, having a guide ensures consistency across interviews.<br>Challenge<strong>:</strong> Avoiding suggestive questions and leaving space for spontaneous exploration.</p><p><strong>Transcribe the interview</strong><br>Once the interview (usually conducted via a call) is complete, the next step is to convert the audio into text.<br>Challenge<strong>:</strong> When done manually, this step is tedious and time-consuming.</p><p><strong>Extract insights and build the empathy map</strong><br>With the transcription in hand, it&#8217;s time to analyze and synthesize the content. This begins with identifying key insights, including needs, frustrations, expectations, and behaviors. These are then reorganized into a visual framework&#8212;typically an empathy map&#8212;that breaks down the user experience into four quadrants: what the user thinks, feels, says, and does.<br>Challenge<strong>:</strong> This task requires focus and synthesis skills. It involves condensing a large volume of text into a visual summary. The risk is getting lost in minor details or producing overly generic representations.</p><p><strong>Identify cross-cutting patterns</strong><br>After creating an empathy map for each participant, the final step is to analyze the material comparatively. 
Look for recurring themes, significant differences, and emerging trends.<br>Challenge<strong>:</strong> Striking a balance between fidelity to the data and functional abstraction. Analytical skill is needed to distinguish real patterns from isolated coincidences and to determine what is relevant for product decisions.</p><h4>Phase 2. Task evaluation</h4><p>Let&#8217;s now move on to <strong>Phase 2</strong> and evaluate each step.</p><p><strong>Defining the Target Persona</strong><br>This step never follows a fixed script. Each research project stems from different needs, poses different questions, and targets heterogeneous profiles. There is no universal procedure: sometimes it starts from business objectives, other times from hypotheses to validate, or from known users to explore in more depth.<br>Defining a user persona is a deeply qualitative task that requires intuition, abstraction, and judgment. In such cases, AI serves as a thinking partner, placing this task firmly in the <strong>Brainstorming with AI</strong> quadrant: describe the context, ask open-ended questions, gather suggestions, explore alternatives, and refine.</p><p><strong>Preparing the Interview Guide</strong><br>The guide must also be tailored each time: objectives shift, topics evolve, and what works in one interview may be counterproductive in another.<br>Here again, AI is valuable as a creative partner: it can help generate initial drafts, suggest phrasings, and flag leading questions. But you&#8217;re in charge&#8212;deciding what to keep, revise, or discard. This, too, belongs in&nbsp;<strong>Brainstorming with AI</strong>.</p><p><strong>Transcribing the Interview</strong><br>Interview transcription is a repetitive, standardized, low-value task. An automated tool performs it better, faster, and without distractions.<br>No prompt or interaction is required: set it once and let it run. 
This is the archetype of the <strong>AI Tool</strong> strategy.</p><p><strong>Extracting Insights and Building the Empathy Map</strong><br>Once the interview is transcribed, the next step is to interpret the content. This process requires focus, synthesis, and the ability to recognize patterns. Crucially, however, its structure is repeatable: while the insights vary, the way you collect, organize, and present them tends to follow a familiar pattern.<br>That&#8217;s why the most effective strategy is the <strong>AI Assistant</strong>: a clear, formalized delegation that instructs the AI on what to look for and how to format the output. Once the framework is defined (e.g., pain points, motivations, workarounds), the AI can deliver a solid first synthesis for you to review and refine.</p><p><strong>Identifying Patterns Across Empathy Maps</strong><br>Following individual analysis, it&#8217;s time to take a broader view: what emerges from comparing all the interviews? What are the recurring themes, shared frustrations, and everyday needs?<br>Here too, we face a high cognitive load but a stable structure: the expected output (a set of organized, relevant patterns) can be clearly defined, and AI can provide adequate support. The strategy is the <strong>AI Assistant</strong>: apply the appropriate prompt across multiple empathy maps, and ask for syntheses, clusters, and thematic groupings. 
The human&#8217;s role remains to validate, refine, and interpret.</p><p>In summary, we are dealing with a process in which the collaboration strategy between humans and artificial intelligence can be outlined as follows:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!09-3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!09-3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png 424w, https://substackcdn.com/image/fetch/$s_!09-3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png 848w, https://substackcdn.com/image/fetch/$s_!09-3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!09-3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!09-3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png" width="1456" height="968" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:968,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:128920,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/175625152?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!09-3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png 424w, https://substackcdn.com/image/fetch/$s_!09-3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png 848w, https://substackcdn.com/image/fetch/$s_!09-3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!09-3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0aaad10-91d7-485f-aaf0-cee6d605d9fc_1624x1080.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The initial steps, which are more exploratory in nature, are best suited to open-ended interactions with AI. The central and final steps, which are more structured, benefit from codified prompts. One step&#8212;transcription&#8212;is so operational that it can be entirely handled by a tool without human intervention.</p><p>The benefit is not just the time saved (which is substantial), but also the optimization of mental energy. By automating repetitive tasks and intelligently delegating those that require cognitive effort, you free yourself to focus on what truly matters: strategic decisions, the quality of synthesis, and the product&#8217;s overall impact.</p><p>Download the <strong>PDF version</strong> of the <strong>AI Collaboration Canvas</strong>. I invite you to use it and share your thoughts with me. Was it easy to use? 
Were you able to map out a process and define an effective strategy for collaborating with artificial intelligence?</p><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail-default" src="https://substackcdn.com/image/fetch/$s_!0Cy0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Fattachment_icon.svg"></image><div class="file-embed-details"><div class="file-embed-details-h1">Ai Delegation Canvas Eng</div><div class="file-embed-details-h2">186KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.radicalcuriosity.xyz/api/v1/file/033422e1-1c38-40b1-89e4-0d333601ff85.pdf"><span class="file-embed-button-text">Download</span></a></div><a class="file-embed-button narrow" href="https://www.radicalcuriosity.xyz/api/v1/file/033422e1-1c38-40b1-89e4-0d333601ff85.pdf"><span class="file-embed-button-text">Download</span></a></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Cycle of Deliberate Collaboration with Artificial Intelligence]]></title><description><![CDATA[A practical model to help managers integrate AI into team processes &#8212; moving beyond prompts toward a deliberate framework of delegation, instruction, evaluation, and responsibility.]]></description><link>https://www.radicalcuriosity.xyz/p/the-cycle-of-deliberate-collaboration</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/the-cycle-of-deliberate-collaboration</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 28 Sep 2025 04:01:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!YaP_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67ea5e7f-6316-40cb-8165-49537f9a51f0_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>in this issue of Radical Curiosity, I 
introduce a model I have developed to help managers bring artificial intelligence into their team&#8217;s processes: the <strong>Cycle of Deliberate Collaboration with AI</strong>. The goal is to provide a clear framework for integrating AI in a deliberate way, balancing automation with human responsibility.</p><p>The model unfolds in four phases &#8212; delegation, instruction, evaluation, and responsibility &#8212; and is designed as a practical guide for those who want to turn AI into a true work ally rather than a source of complexity. It offers a path for managers to move beyond the technical layer of prompt engineering and embrace a structured, intentional approach to human&#8211;machine collaboration.</p><p>Nicola &#10084;&#65039;</p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Understanding AI</strong></em> - The Cycle of Deliberate Collaboration with AI</p></li><li><p><em><strong>Curated Curiosity</strong></em></p><ul><li><p>How Americans View AI and Its Impact on People and Society</p></li><li><p>Two Books, One Strategic Ecosystem</p></li></ul></li></ul><div><hr></div><p><em>Understanding AI</em></p><h2>The Cycle of Deliberate Collaboration with AI</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YaP_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67ea5e7f-6316-40cb-8165-49537f9a51f0_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!YaP_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67ea5e7f-6316-40cb-8165-49537f9a51f0_1456x816.png" width="1456" height="816" alt="" class="sizing-normal" loading="lazy"></picture></div></a><figcaption class="image-caption">Midjourney - The Cycle of Deliberate Collaboration with AI</figcaption></figure></div><p>Much of today&#8217;s public discussion around generative AI focuses on <em>prompt engineering</em>&#8212;the ability to craft effective commands that elicit high-quality responses from large language models. This is a deeply reductive approach: it treats AI as a mere engine that receives a well-structured input and produces a predictable output, overlooking the broader dimension of collaboration between humans and machines.</p><p>A deliberate use of AI cannot be reduced to the art of the prompt, just as writing is not merely a matter of choosing the most effective words. It requires method, critical thinking, and&#8212;above all&#8212;a reflection on the role we intend to assign to technology within our processes.</p><p>To navigate this terrain, it is helpful to think of human&#8211;AI collaboration as a cycle articulated in four phases: delegation, instruction, evaluation, and responsibility.
My model draws inspiration from the <em><strong><a href="https://anthropic.skilljar.com/ai-fluency-framework-foundations">AI Fluency</a></strong></em> framework and the 4Ds&#8212;delegation, description, discernment, and diligence&#8212;proposed by <a href="https://www.linkedin.com/in/josephfeller/">Joseph Feller</a> (University College Cork) and <a href="https://www.linkedin.com/in/rick-dakan/">Rick Dakan</a> (Ringling College).</p><h3>The Four Steps of Collaboration</h3><p>To transform artificial intelligence into an ally&#8212;rather than a mere provider of automated answers&#8212;it must be embedded within a structured cycle.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jaxq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F577d7a0c-95be-4b4b-a378-156edc98272e_2048x997.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!jaxq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F577d7a0c-95be-4b4b-a378-156edc98272e_2048x997.png" width="1456" height="709" alt="" class="sizing-normal" loading="lazy"></picture></div></a></figure></div><p>The four steps of the <em>Cycle of Deliberate Collaboration with AI</em> are as follows:</p><ul><li><p><strong>Delegation.</strong> The starting point lies in defining which tasks can be entrusted to the machine and which should remain the prerogative of humans. To delegate does not mean to abdicate, but rather to recognize that AI can perform certain functions more efficiently or at a larger scale, thereby freeing up resources for higher-value activities.</p></li><li><p><strong>Instruction.</strong> Once the task has been identified, it must be translated into a precise assignment. The quality of the output largely depends on the clarity of the input: formulating goals, specifying constraints, and indicating context.
This is the essence of prompt engineering.</p></li><li><p><strong>Evaluation.</strong> No AI-generated output should be considered final without critical review. The system may produce errors, distortions, or plausible but unfounded responses. A moment of assessment is therefore essential, in which the human verifies coherence, reliability, and relevance to the intended objectives.</p></li><li><p><strong>Responsibility.</strong> AI is not an autonomous agent, but a tool. Final decisions, operational choices, and their consequences always rest with the human. It is therefore necessary to consider legal and compliance constraints, take responsibility, and ensure transparency.</p></li></ul><p>This cycle is an iterative process. Each loop allows for refining delegation, improving instruction, sharpening evaluation, and reinforcing responsibility. It is precisely through this continuous iteration that collaboration with AI becomes truly effective.</p><h3>Designing Delegation</h3><p>To delegate means acknowledging that not everything must be done by us, but also that not everything can be entrusted to the machine. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2u91!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aae0d69-1275-4504-a6d9-f2cca7e50910_2310x1126.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!2u91!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aae0d69-1275-4504-a6d9-f2cca7e50910_2310x1126.png" width="1456" height="710" alt="" class="sizing-normal" loading="lazy"></picture></div></a></figure></div><p>The delegation process unfolds in three steps. The first is process mapping: observing how the workflow is structured and identifying the most time-consuming steps, the repetitive ones, or those where the computational breadth of AI can make a tangible difference. At the same time, it is crucial to isolate the stages where value depends on human sensitivity, contextual awareness, or judgment.</p><p>Next comes the identification of opportunities. Not all activities offer the same potential for automation: some can be accelerated, others expanded, and others still transformed. The boundary between what can be delegated and what is best left to humans is not fixed&#8212;it shifts as technology evolves and as we learn to use it more deliberately.
For example, we can now delegate, without supervision, the transcription, summarization, and analysis of a sales call, while the creation of content for social media still requires careful human oversight.</p><p>The final step is task allocation: making deliberate decisions about how to distribute responsibilities between humans and machines. This means not only assigning specific tasks to AI, based on its strengths in speed, scale, or pattern recognition, but also determining where human input is essential for judgment, creativity, or ethical considerations. When done well, this balance allows AI to act as a force multiplier, while humans retain control over critical decisions and the overall direction of the process.</p><h3>Providing Instructions: The Prompt Canvas</h3><p>If delegation defines <em>what</em> is entrusted to AI, the next phase&#8212;providing instructions&#8212;determines <em>how</em> the machine will interpret the task. This is where much of the output quality is decided. I&#8217;ve written extensively on this topic in the previous issue of Radical Curiosity. Among the various approaches, the framework I&#8217;ve chosen to adopt in my own work&#8212;and the one I recommend&#8212;is the Prompt Canvas, which in my view offers two significant advantages. First, it serves as a design grid, ensuring that no relevant variable is overlooked and transforming a vague instruction into a structured prompt. Second, it trains the user to think systematically.</p><p>You can explore the Prompt Canvas in more detail here: <em><strong><a href="https://www.radicalcuriosity.xyz/p/designing-better-prompts-a-practical">Designing Better Prompts: A Practical Introduction to the Prompt Canvas</a>.</strong></em></p><h3>Evaluation: The Role of Repeatability</h3><p>The purpose of the evaluation phase is clear: to ensure that the outputs generated by AI are consistent, repeatable, and reliable.
A single good response is not enough&#8212;for a system to be truly useful, it must produce stable results over time and across different contexts.</p><p>The evaluation process varies depending on the specific tool being used. Personally, I distinguish between three scenarios:</p><ul><li><p><strong>Built-in.</strong> I&#8217;ve chosen to utilize AI-powered features embedded in a tool&#8212;for example, the automatic transcription of calls provided by most videoconferencing platforms. In this case, I have limited control over how the activity is performed, but I must verify that the results remain consistent over time, without unpredictable fluctuations.</p></li><li><p><strong>Collaborative.</strong> In this scenario, I interact with AI via a conversational assistant, which means I must systematically check that the prompts used yield coherent results even when different users&#8212;with varying backgrounds and interaction styles&#8212;employ them.</p></li><li><p><strong>Operational.</strong> This third scenario involves delegating one or more steps of a workflow to AI. For instance, I might take a customer support ticket and ask one or more agents to classify it, define its priority, and perhaps draft a response to be reviewed. In this case, I need to periodically verify&#8212;possibly through automated testing&#8212;that the workflow functions correctly, that the prompts behave consistently, and that the AI introduces no failure points.</p></li></ul><p>Evaluation, therefore, is not just a critical review of a single output, but an ongoing practice of quality assurance. 
In this sense, it resembles the concept of testing in software engineering: its purpose is to ensure that the system meets minimum reliability standards and that humans can trust its stability over time.</p><h3>Responsibility: The Ethical Boundaries of Collaboration</h3><p>The AI collaboration cycle concludes with responsibility: AI is not an autonomous agent, but a tool&#8212;its contribution must be understood, contextualized, and disclosed.</p><p>Responsibility unfolds along two main dimensions. The first concerns the <strong>structure of delegation</strong>: which AI systems have been chosen, and why? What data and information are being shared with the machine? Have regulations, company policies, and security implications been appropriately considered?</p><p>The second dimension is <strong>transparency</strong>. Disclosing the use of AI is not a peripheral act, but an essential part of the collaboration: it means documenting the AI&#8217;s contribution, clarifying where the system played a role and where human intervention was decisive. Only by doing so can trust be preserved&#8212;both within organizations and in relation to clients, users, or citizens.</p><h3>Applying the Cycle of Deliberate Collaboration with AI to Localization</h3><p>To understand how the Cycle of Deliberate Collaboration with AI can be applied to a real case, let us take the example of text translation for a large company.</p><p>Every translation project begins with a preparation step. The company&#8217;s localization team analyzes the content to be translated and gathers and organizes all the information needed to ensure that the translation reflects the brand&#8217;s voice (style guides, terminology glossaries, etc.).
Without this preliminary step, there is a risk of producing texts that are linguistically correct but inconsistent with the brand&#8217;s identity.</p><p>Once the project is set up, the content and instructions are usually handed over to a language service provider (LSP), an agency that coordinates a network of freelance translators. The LSP is responsible for selecting the most suitable professionals, contracting them, compensating them, and supervising their work.</p><p>The translator works with tools that simplify and optimize the process. Today, these tools almost always provide a machine pre-translation that must be corrected or validated. In projects that require the highest level of accuracy, a further review is added: a second human translator rereads the text to ensure final quality (because, as the saying goes, four eyes are better than two).</p><p>In summary, the process is structured as follows:</p><ul><li><p><strong>Project preparation</strong> &#8211; handled by the company&#8217;s internal team</p></li><li><p><strong>Assignment to the most suitable translator</strong> &#8211; managed by the LSP</p></li><li><p><strong>Translation</strong> &#8211; carried out by the translator with the support of the LSP</p></li><li><p><strong>Revision</strong> &#8211; performed by a second translator and supervised by the LSP</p></li><li><p><strong>Final check and delivery</strong> &#8211; the responsibility of the LSP</p></li></ul><p>The role of the LSP comes at a significant cost, which can amount to as much as 70% of the fee paid by the end client. As a result, in most cases the translator receives no more than 30% of the rate, with a direct impact on their earning margins and, in many cases, on the quality of the work they are able to deliver.</p><p>Imagine stepping into the role of a company&#8217;s localization manager, tasked with ensuring that content is translated into multiple languages while making the most of the available budget.
In this context, generative artificial intelligence provides several opportunities to streamline the process and to allocate localization resources more strategically.</p><h4>Project Preparation</h4><p>At this stage, the main goal is to automate project setup. Since the subsequent phases of translation and revision may be handled by AI, it is essential to ensure that instructions and supporting materials are provided in a format optimized for use by an LLM.</p><p>Within an AI-assisted workflow, we can envision the creation of several specialized agents:</p><ul><li><p>An agent to analyze the content to be translated and retrieve from the company&#8217;s knowledge base all materials needed to provide adequate context. This could also be achieved using RAG (Retrieval-Augmented Generation) and a knowledge graph to enhance the search.</p></li><li><p>An agent to automatically select previously translated and human-approved content of the same type, creating a dataset of examples aligned with the required style.</p></li><li><p>An agent to identify a subset of glossary terms to be enforced.</p></li></ul><p>All AI-delegated activities should be supervised by a human project manager, who validates the accuracy of the generated instructions and, if needed, reviews the supporting materials.</p><h4><strong>Assignment to the Most Suitable Translator</strong></h4><p>For the localization team, this is the stage where the use of AI has little value, for two main reasons:</p><ul><li><p>Many companies rely on one or more LSPs to manage their network of translators. This means they do not have to handle it directly, although it significantly increases the overall cost of translation.</p></li><li><p>When companies choose to manage translators internally, they typically work with a stable group of professionals already familiar with the company, its products, and its communication style.
As a result, there is no need to constantly recruit new resources.</p></li></ul><p>In both cases, no sophisticated tools are required: it is sufficient to manage one or two LSPs or directly coordinate a few dozen trusted translators. In other words, no delegation to AI.</p><h4>Translation and Revision</h4><p>The use of artificial intelligence for translation is not new: it has been employed for years with increasingly effective results. Today, the most common practice is to limit human involvement to the review stage, assuming that AI can already produce translations accurate enough to require only minor corrections (a process known as Machine Translation Post-Editing, or MTPE).</p><p>With the emergence of large language models, the industry began to ask whether their intrinsic linguistic capabilities could enable them to outperform traditional systems. The short answer appears to be yes. According to a recent report by Intento (<em><strong><a href="https://inten.to/the-state-of-translation-automation-2025/">The State of Translation Automation 2025</a></strong></em>), across all benchmarks the models developed by OpenAI and Anthropic outperform both engines based on the older Neural Machine Translation (NMT) technology and specialized LLMs such as Translated&#8217;s Lara and DeepL&#8217;s next-gen model.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RR9O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa510392c-5e99-4432-a57b-8f719a12d84c_2284x1292.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!RR9O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa510392c-5e99-4432-a57b-8f719a12d84c_2284x1292.png" width="1456" height="824" alt="" class="sizing-normal" loading="lazy"></picture></div></a></figure></div><p>In this context, the localization manager of a large company should move beyond vendors promising &#8220;magic solutions&#8221; (as virtually every platform and LSP does today) and instead recognize that adopting next-generation LLMs opens up an entirely different path. These models are not only far more flexible than systems designed solely for translation, but they also make it possible to bypass traditional translation management systems (TMS) and LSPs. The result is greater control, reduced dependency on external providers, and the ability to adapt more quickly in an industry where innovation advances at lightning speed.</p><p>The translation process, however, needs to be rethought. A full discussion of this topic goes beyond the scope of this article, so I will limit myself to a couple of examples, starting from the observation that the linguistic flexibility of an LLM allows it to process instructions and imitate style with considerable effectiveness.
In the translation phase, we could therefore imagine several AI agents working together:</p><ul><li><p>a first agent that takes as input the materials generated during the project preparation phase and uses them as context to produce a draft translation;</p></li><li><p>a second agent that performs purely linguistic quality checks, applying a traditional evaluation model such as MQM (Multidimensional Quality Metrics, the industry standard framework for assessing translation errors and quality dimensions), or a new framework designed specifically for LLMs;</p></li><li><p>a third agent that ensures compliance with company rules and regulations in the target market;</p></li><li><p>and so forth.</p></li></ul><p>It is clear that the role of the translator is bound to evolve, and companies will need linguists capable of instructing and supervising artificial intelligence. They will face a choice: either manage this activity in-house or delegate it to an external provider who, in turn, will oversee the AI delegation process while keeping it as a &#8220;black box.&#8221; But is such opacity desirable if localization is a strategic asset of the company and one of the key drivers of competition in the market?</p><h4>Final Check and Delivery</h4><p>If you have followed the reasoning so far, you will also see that in this phase it is possible to introduce additional agents responsible for carrying out further quality checks or adapting the output to the required final format.</p><p>***</p><p>Ultimately, the <strong>Cycle of Deliberate Collaboration with AI</strong> allows us to move beyond a purely mechanical approach, where artificial intelligence is used simply to eliminate translators and cut costs, toward a scenario where AI has the potential to enhance the entire localization process.</p><p>The same reasoning can be applied to any industry and any business function.</p><p></p><div><hr></div><p><em>Curated Curiosity</em></p><h3><strong>How Americans View AI and Its Impact 
on People and Society</strong></h3><p>In the United States, the adoption of artificial intelligence is met with a clear boundary, defined mainly by the context in which it is applied. According to a survey by the <strong>Pew Research Center</strong>, the majority of citizens support the use of AI in areas where technical or analytical tasks prevail: 74% approve of it for weather forecasting, 70% for detecting fraud in public assistance, 66% for speeding up drug development, and 61% for helping identify criminal suspects.</p><p>In these domains, technology is seen as an ally that enhances operational efficiency without challenging the human role in decision-making. The perspective shifts significantly when AI enters spheres governed by personal values or human relationships. 66% of respondents reject the use of AI to assess romantic relationships, and 73% rule it out entirely in matters of faith or spirituality. In such contexts, technological mediation is perceived as an intrusion that undermines the authenticity of the human experience.</p><p><strong>Pew Research Center, </strong><em><a href="https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/">How Americans View AI and Its Impact on People and Society</a></em></p><p></p><h3><strong>Two Books, One Strategic Ecosystem</strong></h3><p>In a three-year update, <a href="https://fortelabs.com/">Tiago Forte</a> shares the overall results of his publishing efforts: <em>Building a Second Brain</em> and <em>The PARA Method</em> together have generated a consistent revenue stream and played a key role in driving his broader ecosystem of products and customers.</p><p>The most interesting part of his analysis is the comparison between traditional publishing and self-publishing. 
Forte explores how, with the same sales numbers, self-publishing could have yielded significantly higher royalties, albeit at the expense of assuming additional marketing, distribution, and risk.</p><p><strong>Tiago Forte</strong>, <em><a href="https://fortelabs.com/blog/3-year-update-a-financial-analysis-of-my-books-unit-economics/">3-Year Update: A Financial Analysis of My Book&#8217;s Unit Economics</a></em></p>]]></content:encoded></item><item><title><![CDATA[Designing Better Prompts: A Practical Introduction to the Prompt Canvas]]></title><description><![CDATA[The invisible architecture of prompts, and why treating them as design unlocks new possibilities. How AI adoption is splitting between mass consumer use and business automation.]]></description><link>https://www.radicalcuriosity.xyz/p/designing-better-prompts-a-practical</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/designing-better-prompts-a-practical</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 21 Sep 2025 04:00:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LlCB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>In this issue of Radical Curiosity, I begin with the architecture of prompts &#8212; the hidden design choices that turn a raw intention into a meaningful exchange with AI. What might look like &#8220;just wording&#8221; is, in fact, the foundation of human&#8211;machine collaboration: structuring roles, context, and tone. The <strong>Prompt Canvas</strong> makes this scaffolding visible, pushing us to move beyond trial and error and to treat prompting not as a trick, but as a design discipline &#8212; one that could soon become as essential as coding. 
</p><p>Two <strong>new studies</strong> &#8212; one from OpenAI, the other from Anthropic &#8212; confirm what intuition already suggested: generative AI is no longer an experiment, it&#8217;s infrastructure. In less than three years, assistants like ChatGPT and Claude have become everyday companions. ChatGPT, with more than 700 million weekly users, dominates the consumer space, while Claude is carving out a role as the quiet engine of business automation. The real question is no longer if these tools will transform workflows and decision-making, but how quickly &#8212; and who will be left behind if they don&#8217;t learn to design with them.</p><p>Nicola &#10084;&#65039;</p><p></p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Understanding AI</strong></em> - Designing Better Prompts: A Practical Introduction to the Prompt Canvas</p></li><li><p><em><strong>Signals and Shifts</strong></em> - OpenAI vs Anthropic: two studies explain where generative AI is growing and who benefits from it</p><p></p></li></ul><div><hr></div><p><em>Understanding AI</em></p><h2><strong>Designing Better Prompts: A Practical Introduction to the Prompt Canvas</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LlCB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LlCB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png 424w, 
https://substackcdn.com/image/fetch/$s_!LlCB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!LlCB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!LlCB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LlCB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1562678,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/173835392?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!LlCB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!LlCB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!LlCB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!LlCB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e457905-d597-4ea8-b709-97d3290617e7_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Midjourney - Collaborating with AI</figcaption></figure></div><p>In recent months, I&#8217;ve dedicated a considerable part of my time to studying prompting, addressing the topic in two separate articles: one focused on refinement techniques (<em><strong><a href="https://www.radicalcuriosity.xyz/p/the-art-of-ai-prompting-refining">The Art of AI Prompting: Refining Instructions for Precision and Control</a></strong></em>), the other centered on building prompt chains (<em><strong><a href="https://www.radicalcuriosity.xyz/p/prompt-chain-build-lessons-from-serena">Prompt. Chain. Build. Lessons from Serena and the frontlines of generative AI</a></strong></em>). In both cases, the goal was to provide practical tools to enhance interaction with language models. Later on, I came across a framework that seemed particularly useful for systematizing this work: the Prompt Canvas.</p><p>Before diving in, it&#8217;s worth clarifying what we&#8217;re talking about. A prompt is the set of instructions, examples, constraints, and questions we use to elicit a meaningful response from an AI system. It is, in essence, the way we turn an intention into an operational command. For this reason, writing a good prompt is not a stylistic exercise&#8212;it is a design competence.</p><p>Imagine assigning a task to a new colleague: highly skilled, but unfamiliar with your style, your audience, and your implicit expectations. Saying &#8220;write a text&#8221; or &#8220;summarize this document&#8221; wouldn&#8217;t suffice. 
You would need to explain what you want, how you want it, and why.</p><p>Over the past two years, dozens of manuals, cheat sheets, and templates have emerged to help craft effective prompts. However, working with generative models is a rapidly evolving discipline: what works today may not work tomorrow, and new techniques are constantly emerging. This is why, instead of relying on rigid formulas, it&#8217;s more useful to develop a flexible mental model. The <strong><a href="https://www.thepromptcanvas.com/">Prompt Canvas</a></strong>, designed by <strong><a href="https://www.linkedin.com/in/michaelhewing/">Michael Hewing</a></strong> and <strong><a href="https://www.linkedin.com/in/vincentleinhos/">Vincent Leinhos</a></strong> at the University of Applied Sciences in M&#252;nster, serves precisely this function: it provides a map to navigate the process, rather than a step-by-step recipe to follow.</p><p>The framework is structured around four sections, each dedicated to a key element in prompt design.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IWlw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IWlw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png 424w, https://substackcdn.com/image/fetch/$s_!IWlw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png 848w, 
https://substackcdn.com/image/fetch/$s_!IWlw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png 1272w, https://substackcdn.com/image/fetch/$s_!IWlw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IWlw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png" width="1456" height="1087" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1087,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:878134,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/173092878?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!IWlw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png 424w, 
https://substackcdn.com/image/fetch/$s_!IWlw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png 848w, https://substackcdn.com/image/fetch/$s_!IWlw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png 1272w, https://substackcdn.com/image/fetch/$s_!IWlw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479eb92d-4a38-4606-b9dc-70da2658e5ea_2524x1884.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3><strong>1. Define the Actors</strong></h3><p><strong>Role.</strong> Begin by assigning the model a clear role. This helps guide the voice, register, and type of content generated. The role can also reflect corporate values or a specific organizational culture. For example: <em>&#8220;You are a seasoned editor with experience in making complex texts accessible. Your task is to summarize and improve content while maintaining an engaging tone.&#8221;</em></p><p><strong>Audience.</strong> Who is the intended recipient of the output? Defining the audience helps calibrate language, tone, and examples. For example: <em>&#8220;Write for a young, tech-savvy audience aged 18 to 25. Use a direct tone and contemporary references.&#8221;</em></p><h3><strong>2. Define the Task</strong></h3><p><strong>Task and Intent.</strong> Clearly state what the AI is supposed to do. Start with an action verb and clarify your objective. For example: <em>&#8220;Summarize the key points of the attached document, highlighting main arguments and supporting data. The goal is to produce a concise and clear article that is easily understandable even to non-expert readers.&#8221;</em></p><p><strong>Step-by-step.</strong> Break the task into sequential steps. This helps the model proceed logically and systematically. For example:<br><em>&#8220;Follow these steps: (1) Read and understand the text. (2) Identify key concepts. (3) Draft a summary. (4) Improve clarity. (5) Check for completeness and coherence.&#8221;</em></p><h3><strong>3. Provide Context</strong></h3><p><strong>Context.</strong> Provide background information that helps frame the request. While the model has access to general knowledge, it is the user&#8217;s responsibility to specify the relevant context. 
Example: <em>&#8220;You are writing for Art Horizon, an online magazine focused on contemporary art and emerging cultures. The audience includes young creatives, collectors, and culturally curious readers.&#8221;</em></p><p><strong>References.</strong> Attach or describe any data, documents, previous decisions, examples, or results that should be integrated into the response. For example: <em>&#8220;Use the attached survey data to tailor the content to the audience&#8217;s preferences. Also, refer to the sample article as a stylistic benchmark.&#8221;</em></p><h3><strong>4. Specify the Output</strong></h3><p><strong>Expected Output.</strong> Clearly define the desired length, content structure, and technical format. Example: <em>&#8220;The text should not exceed 200 words. Organize it into three sections: Introduction, Development, and Conclusion. Use Markdown format.&#8221;</em></p><p><strong>Tone.</strong> Specify the tone and communication qualities of the final text&#8212;authority, empathy, accessibility, etc. You may refer to a known style. For example: <em>&#8220;Write in a tone that combines authenticity and refinement. Take inspiration from the editorial style of [magazine/author name].&#8221;</em></p><p>If you&#8217;d like to create a prompt using this technique, start with this <strong>meta prompt: a prompt to develop other prompts!</strong> Copy and paste it into ChatGPT or Claude, press enter, and follow the instructions. 
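</p><p>To make the structure concrete, here is a minimal sketch of reusing the Canvas programmatically. The helper function and its names are my own illustration, not part of the Prompt Canvas paper; only the eight area names come from the framework.</p>

```python
# Sketch: render the eight Prompt Canvas areas as one Markdown prompt.
# The area names follow the Canvas; the helper itself is illustrative.
CANVAS_AREAS = [
    "Persona / Role", "Audience", "Task & Intent", "Step-by-Step",
    "Context", "References", "Output", "Tonality",
]

def build_canvas_prompt(sections: dict) -> str:
    """Join the filled-in areas in Canvas order, skipping empty ones."""
    parts = []
    for i, area in enumerate(CANVAS_AREAS, start=1):
        text = sections.get(area, "").strip()
        if text:  # not every area is mandatory: the Canvas is a heuristic
            parts.append(f"## {i}. {area}\n{text}")
    return "\n\n".join(parts)

print(build_canvas_prompt({
    "Persona / Role": "You are a seasoned editor.",
    "Task & Intent": "Summarize the attached document for non-expert readers.",
}))
```

<p>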
With practice, this will help you create the habit of thinking in a more structured way when collaborating with AI.</p><p></p><p><code>## Role and Objective</code></p><p><code>- Assume the role of a **professional Prompt Engineer**.</code></p><p><code>- Your mission is to guide **beginner users** in creating effective prompts using a formal, clear, and educational approach, progressively making them independent in using the Prompt Canvas.</code></p><p><code>## Planning</code></p><p><code>- Begin with a concise checklist (3-7 bullets) outlining the sub-tasks you will perform in assisting the user through the Prompt Canvas development process; keep items conceptual, not implementation-level.</code></p><p><code>## Instructions</code></p><p><code>- Engage the user with an **interactive, step-by-step conversation** to build a prompt according to the 8 areas of the Prompt Canvas.</code></p><p><code>- Ask **one question at a time** focused on each area.</code></p><p><code>- After each user response:</code></p><p><code>- **Restate your understanding** of their input.</code></p><p><code>- **Infer any missing details** and offer them as options.</code></p><p><code>- **Request confirmation** before proceeding.</code></p><p><code>- **Iterate** if confirmation is not received, ensuring each section is fully addressed before moving on.</code></p><p><code>- After each step, validate with a 1-2 line summary that captures whether the section is sufficiently complete; self-correct if validation fails.</code></p><p><code>- Encourage the user to share **documents or links** to enrich each section.</code></p><p><code>- Progress through the **Prompt Canvas areas in order**, without skipping any.</code></p><p><code>- Integrate **advanced prompt engineering techniques** when appropriate, such as:</code></p><p><code>- Chain-of-Thought (step-by-step reasoning)</code></p><p><code>- Tree-of-Thought (exploring alternatives)</code></p><p><code>- Emotion Prompting (emotional 
enrichment)</code></p><p><code>- Self-Consistency / Plan-and-Solve (multiple trajectory verification)</code></p><p><code>- Attempt a first autonomous pass for each Canvas area based on available input; stop and ask for clarification only if critical information is missing or key success criteria cannot be met.</code></p><p><code>## Prompt Canvas Guide</code></p><p><code>- The Prompt Canvas is a prompt design methodology structured by 4 macro-areas (Setting, Task, Background, Output) and subdivided into 8 operational areas:</code></p><p><code>1. Persona / Role</code></p><p><code>2. Audience</code></p><p><code>3. Task &amp; Intent</code></p><p><code>4. Step-by-Step</code></p><p><code>5. Context</code></p><p><code>6. References</code></p><p><code>7. Output</code></p><p><code>8. Tonality</code></p><p><code>- The goal is to provide a clear, iterative framework to reduce ambiguity and maximize prompt quality.</code></p><p><code>- **Reference:** [The Prompt Canvas: A Literature-Based Practitioner Guide for Creating Effective Prompts in Large Language Models](https://arxiv.org/pdf/2412.05127)</code></p><p><code>## Output Format</code></p><p><code>- At the end, generate a **final prompt** consolidating all information gathered within a **Markdown code block** labeled as `markdown`.</code></p><p><code>- Use the following structure and ensure all sections are fully completed with consolidated information and no placeholders:</code></p><p><code>```markdown</code></p><p><code>## 1. Persona / Role</code></p><p><code>(text consolidated)</code></p><p><code>## 2. Audience</code></p><p><code>(text consolidated)</code></p><p><code>## 3. Task &amp; Intent</code></p><p><code>(text consolidated)</code></p><p><code>## 4. Step-by-Step</code></p><p><code>(numbered steps and techniques used)</code></p><p><code>## 5. Context</code></p><p><code>(relevant situational information)</code></p><p><code>## 6. References</code></p><p><code>(sources, documents, links)</code></p><p><code>## 7. 
Output</code></p><p><code>(output format, structure, length)</code></p><p><code>## 8. Tonality</code></p><p><code>(tonality, voice, linguistic style)</code></p><p><code>```</code></p><p><code>**Tonality**:</code></p><p><code>- Style: **formal and professional**</code></p><p><code>- Voice: clear, readable, and instructional</code></p><p><code>- Ensure all sections are **complete&#8212;no placeholders.**</code></p><p></p><p>It is worth emphasizing that the Prompt Canvas is not a Swiss army knife to be used indiscriminately in every situation, nor does it claim to offer a one-size-fits-all solution. Instead, it is a design tool intended to encourage structured and mindful thinking: a guide to help frame the task correctly, avoiding shortcuts and superficiality.</p><p>There is no need&#8212;nor is it advisable&#8212;to fill in every section mechanically. In some cases, certain areas may be irrelevant or redundant. In others, a single well-crafted section may be enough to generate high-quality results. The value of the framework lies in its heuristic function: it helps us ask the right questions and maintain a high standard in how we design interactions with the model.</p><p>Finally, this approach is and will remain a work in progress. 
If you happen to develop a more effective version of the prompt proposed here, I would be glad to read it, test it, and&#8212;if it proves better&#8212;share it in this newsletter, to build a growing repertoire of applicable best practices for everyone.</p><p></p><div><hr></div><p><em>Signals and Shifts</em></p><h2><strong>OpenAI vs Anthropic: two studies explain where generative AI is growing and who benefits from it</strong></h2><p>Over the past two and a half years, generative artificial intelligence has experienced extraordinary growth: conversational assistants have become everyday tools for millions of people, used to write texts, search for information, organize tasks, or automate workflows.<br>Two studies published in September 2025&#8212;one by OpenAI, the other by Anthropic&#8212;offer a broad and complementary snapshot of this phenomenon. The first provides a detailed reconstruction of consumer use of ChatGPT: frequency, demographic distribution, motivations, and economic impact. The second introduces a dedicated metric (the AI Usage Index) to measure Claude&#8217;s adoption across countries, with a focus on geography, industries, and the growing role of automation through APIs.</p><h3><strong>Adoption and Growth</strong></h3><p>The rise of conversational assistants has been nothing short of disruptive. According to OpenAI, as of July 2025, ChatGPT had around 700 million weekly active users&#8212;roughly 10% of the world&#8217;s adult population. Every day, more than 2.5 billion messages are exchanged through the consumer interface. It is a pace of adoption with no precedent, not even compared to other major digital technologies like social media or smartphones.</p><p>But it is not just a matter of scale: since its launch, ChatGPT&#8217;s user base has evolved significantly. At first, it was used predominantly by young men with strong technical backgrounds. Today, the profile is far more diverse. 
Between November 2022 and June 2025, the gender balance shifted: while in the early months 80% of users had names associated with men, the platform now registers a slight female majority. Age distribution, however, still skews young: nearly half of the messages sent by adults come from users under 26, highlighting how younger generations are weaving AI into their daily routines of study, work, and communication.</p><p>Equally striking is the geography of growth. ChatGPT shows faster adoption rates in middle-income countries, suggesting that generative AI is being leveraged as an accessible tool for acquiring skills, saving time, or tackling complex tasks in contexts with limited resources.</p><p>Anthropic&#8217;s study introduces an indicator called the AUI (AI Usage Index), which measures whether the total use of Claude in a given geographic area (country, region, state) is over- or under-represented relative to what would be expected based on that area&#8217;s working-age population (typically ages 15&#8211;64).</p><p>In practice, the AUI is calculated by dividing the share of Claude usage in the area by the share of the working-age population that the area represents in the demographic dataset. 
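</p><p>In code, the calculation is a one-liner; the sketch below uses hypothetical shares, not Anthropic&#8217;s actual figures:</p>

```python
def ai_usage_index(usage_share: float, working_age_pop_share: float) -> float:
    """AUI = an area's share of Claude usage divided by its share of the
    working-age population (both expressed as fractions of the totals)."""
    return usage_share / working_age_pop_share

# Hypothetical country: 2% of global usage but only 1% of the
# working-age population, so usage is over-represented (AUI > 1).
print(ai_usage_index(0.02, 0.01))
```

<p>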
An AUI above 1 indicates that usage exceeds what would be expected for the size of the working-age population; a value below 1 indicates the opposite.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PACf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20e45fad-a023-4dd4-83e1-4dea31dfa0ad_1600x1139.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PACf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20e45fad-a023-4dd4-83e1-4dea31dfa0ad_1600x1139.png 424w, https://substackcdn.com/image/fetch/$s_!PACf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20e45fad-a023-4dd4-83e1-4dea31dfa0ad_1600x1139.png 848w, https://substackcdn.com/image/fetch/$s_!PACf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20e45fad-a023-4dd4-83e1-4dea31dfa0ad_1600x1139.png 1272w, https://substackcdn.com/image/fetch/$s_!PACf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20e45fad-a023-4dd4-83e1-4dea31dfa0ad_1600x1139.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PACf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20e45fad-a023-4dd4-83e1-4dea31dfa0ad_1600x1139.png" width="1456" height="1036" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/20e45fad-a023-4dd4-83e1-4dea31dfa0ad_1600x1139.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1036,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PACf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20e45fad-a023-4dd4-83e1-4dea31dfa0ad_1600x1139.png 424w, https://substackcdn.com/image/fetch/$s_!PACf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20e45fad-a023-4dd4-83e1-4dea31dfa0ad_1600x1139.png 848w, https://substackcdn.com/image/fetch/$s_!PACf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20e45fad-a023-4dd4-83e1-4dea31dfa0ad_1600x1139.png 1272w, https://substackcdn.com/image/fetch/$s_!PACf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20e45fad-a023-4dd4-83e1-4dea31dfa0ad_1600x1139.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Anthropic</figcaption></figure></div><p>The chart shows a direct correlation between AUI and GDP per capita. Some countries&#8212;such as Israel, Korea, Georgia, Montenegro, and Nepal&#8212;are positioned well above the average, signaling especially intense use of the technology relative to their demographic scale. Italy, as highlighted in the chart, sits slightly below the expected curve, with an AUI lower than that of other countries with similar levels of GDP per capita.</p><h3><strong>Work vs. 
Non-Work Usage</strong></h3><p>One of the most relevant insights to emerge from the OpenAI and Anthropic studies concerns the nature of the activities carried out with AI: how much of what people do is work-related, and how much belongs instead to personal, educational, or recreational spheres?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MkjZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c59d51-e7e4-4af2-bbbd-7147830e716c_1600x1047.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MkjZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c59d51-e7e4-4af2-bbbd-7147830e716c_1600x1047.png 424w, https://substackcdn.com/image/fetch/$s_!MkjZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c59d51-e7e4-4af2-bbbd-7147830e716c_1600x1047.png 848w, https://substackcdn.com/image/fetch/$s_!MkjZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c59d51-e7e4-4af2-bbbd-7147830e716c_1600x1047.png 1272w, https://substackcdn.com/image/fetch/$s_!MkjZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c59d51-e7e4-4af2-bbbd-7147830e716c_1600x1047.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MkjZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c59d51-e7e4-4af2-bbbd-7147830e716c_1600x1047.png" width="1456" height="953" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/20c59d51-e7e4-4af2-bbbd-7147830e716c_1600x1047.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:953,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MkjZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c59d51-e7e4-4af2-bbbd-7147830e716c_1600x1047.png 424w, https://substackcdn.com/image/fetch/$s_!MkjZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c59d51-e7e4-4af2-bbbd-7147830e716c_1600x1047.png 848w, https://substackcdn.com/image/fetch/$s_!MkjZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c59d51-e7e4-4af2-bbbd-7147830e716c_1600x1047.png 1272w, https://substackcdn.com/image/fetch/$s_!MkjZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c59d51-e7e4-4af2-bbbd-7147830e716c_1600x1047.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: OpenAI</figcaption></figure></div><p>According to OpenAI&#8217;s analysis, around 70% of messages sent on ChatGPT are not work-related. 
This share, already in the majority by 2024, has continued to grow over the past year&#8212;evidence of increasingly broad and everyday adoption.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oZug!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f6f5aa-961b-4ab5-9766-7f3ca1099985_1600x1044.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oZug!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f6f5aa-961b-4ab5-9766-7f3ca1099985_1600x1044.png 424w, https://substackcdn.com/image/fetch/$s_!oZug!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f6f5aa-961b-4ab5-9766-7f3ca1099985_1600x1044.png 848w, https://substackcdn.com/image/fetch/$s_!oZug!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f6f5aa-961b-4ab5-9766-7f3ca1099985_1600x1044.png 1272w, https://substackcdn.com/image/fetch/$s_!oZug!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f6f5aa-961b-4ab5-9766-7f3ca1099985_1600x1044.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oZug!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f6f5aa-961b-4ab5-9766-7f3ca1099985_1600x1044.png" width="1456" height="950" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b8f6f5aa-961b-4ab5-9766-7f3ca1099985_1600x1044.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:950,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oZug!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f6f5aa-961b-4ab5-9766-7f3ca1099985_1600x1044.png 424w, https://substackcdn.com/image/fetch/$s_!oZug!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f6f5aa-961b-4ab5-9766-7f3ca1099985_1600x1044.png 848w, https://substackcdn.com/image/fetch/$s_!oZug!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f6f5aa-961b-4ab5-9766-7f3ca1099985_1600x1044.png 1272w, https://substackcdn.com/image/fetch/$s_!oZug!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f6f5aa-961b-4ab5-9766-7f3ca1099985_1600x1044.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: OpenAI</figcaption></figure></div><p>Work-related use is more frequent among highly educated users, employed in intellectual and well-paid professions. In particular, college graduates are much more likely to use ChatGPT for writing tasks, work organization, or information analysis.</p><p>Straddling the personal and professional, OpenAI identifies three categories that together account for roughly 80% of ChatGPT interactions:</p><ul><li><p><strong>Writing</strong>: requests for drafting, editing, translation, summarization, and content creation;</p></li><li><p><strong>Practical Guidance</strong>: practical advice on how to perform tasks, daily activities, or professional processes;</p></li><li><p><strong>Seeking Information</strong>: information queries, similar to using a search engine.</p></li></ul><p>Programming-related use remains limited: only 4.2% of messages fall into this category. 
The figure may seem surprisingly low when compared with Claude.ai, where coding, debugging, and technical problem-solving account for around 36&#8211;40% of total interactions, and for up to 44% of API traffic.</p><p>The difference in scale, however, must be kept in mind: ChatGPT has about 700 million weekly active users, while Claude&#8217;s estimated base is only a fraction of OpenAI&#8217;s, so in absolute terms the volumes are not so far apart. The gap in percentages, on the other hand, highlights the distinct nature of the two platforms: ChatGPT serves a much broader, more consumer-oriented user base, while Claude&#8217;s use is concentrated in technical and professional contexts, with a strong focus on automation and integration into software development.</p><h3><strong>Automation vs. Augmentation: How Interaction with AI Is Changing</strong></h3><p>Both reports go beyond classifying the tasks performed with AI; they also examine the intent behind interactions&#8212;seeking to understand not only what users do, but how they choose to do it. 
OpenAI adopts a taxonomy that distinguishes three broad categories of use:</p><ul><li><p><strong>Doing</strong>: operational requests, where the user asks the model to perform a concrete task;</p></li><li><p><strong>Asking</strong>: informational requests, where AI is consulted as a cognitive resource for explanations, advice, or insights;</p></li><li><p><strong>Expressing</strong>: creative requests, where the chatbot is used to communicate, share, or articulate thoughts.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iJNH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe556b6b2-6fa3-4f44-8462-f4276f625b7d_1224x750.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iJNH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe556b6b2-6fa3-4f44-8462-f4276f625b7d_1224x750.png 424w, https://substackcdn.com/image/fetch/$s_!iJNH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe556b6b2-6fa3-4f44-8462-f4276f625b7d_1224x750.png 848w, https://substackcdn.com/image/fetch/$s_!iJNH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe556b6b2-6fa3-4f44-8462-f4276f625b7d_1224x750.png 1272w, https://substackcdn.com/image/fetch/$s_!iJNH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe556b6b2-6fa3-4f44-8462-f4276f625b7d_1224x750.png 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!iJNH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe556b6b2-6fa3-4f44-8462-f4276f625b7d_1224x750.png" width="1224" height="750" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e556b6b2-6fa3-4f44-8462-f4276f625b7d_1224x750.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:750,&quot;width&quot;:1224,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iJNH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe556b6b2-6fa3-4f44-8462-f4276f625b7d_1224x750.png 424w, https://substackcdn.com/image/fetch/$s_!iJNH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe556b6b2-6fa3-4f44-8462-f4276f625b7d_1224x750.png 848w, https://substackcdn.com/image/fetch/$s_!iJNH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe556b6b2-6fa3-4f44-8462-f4276f625b7d_1224x750.png 1272w, https://substackcdn.com/image/fetch/$s_!iJNH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe556b6b2-6fa3-4f44-8462-f4276f625b7d_1224x750.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: OpenAI</figcaption></figure></div><p>This tripartition reveals how interaction with AI oscillates between delegation, support, and co-construction. In OpenAI&#8217;s data, the <em>Doing</em> category currently accounts for about 34% of total messages, but rises to 56% for work-related interactions. <em>Asking</em> is the most dynamic area: in 2025, it represents more than half of all interactions, growing in parallel with the decline of <em>Doing</em>&#8212;evidence of a shift from purely executive tasks toward using AI as a tool for research, clarification, and problem-solving. <em>Expressing</em>, though smaller (around 14%), shows steady growth, reflecting expanding personal and recreational use.</p><p>Anthropic introduces a two-level taxonomy. 
Interactions are divided into <strong>automation</strong>, where AI produces an outcome with minimal user input, and <strong>augmentation</strong>, where the user and AI collaborate to achieve a goal. On Claude.ai, the balance between automation and augmentation remains close to fifty-fifty, indicating everyday experimentation and collaboration. But in API-based requests, the logic shifts radically: here, automation accounts for more than 77% of interactions, as companies employ AI to replace entire processes. This is a clear sign that organizational adoption is driving broader delegation, with direct implications for productivity and the configuration of work.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Qet4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56666b9d-fa56-40c3-bebb-505b1e8cc14f_1600x962.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Qet4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56666b9d-fa56-40c3-bebb-505b1e8cc14f_1600x962.png 424w, https://substackcdn.com/image/fetch/$s_!Qet4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56666b9d-fa56-40c3-bebb-505b1e8cc14f_1600x962.png 848w, https://substackcdn.com/image/fetch/$s_!Qet4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56666b9d-fa56-40c3-bebb-505b1e8cc14f_1600x962.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Qet4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56666b9d-fa56-40c3-bebb-505b1e8cc14f_1600x962.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Qet4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56666b9d-fa56-40c3-bebb-505b1e8cc14f_1600x962.png" width="1456" height="875" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/56666b9d-fa56-40c3-bebb-505b1e8cc14f_1600x962.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:875,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Qet4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56666b9d-fa56-40c3-bebb-505b1e8cc14f_1600x962.png 424w, https://substackcdn.com/image/fetch/$s_!Qet4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56666b9d-fa56-40c3-bebb-505b1e8cc14f_1600x962.png 848w, https://substackcdn.com/image/fetch/$s_!Qet4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56666b9d-fa56-40c3-bebb-505b1e8cc14f_1600x962.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Qet4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56666b9d-fa56-40c3-bebb-505b1e8cc14f_1600x962.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Anthropic</figcaption></figure></div><p>Within these two broad categories, several modes of interaction emerge:</p><p><strong>Automation</strong></p><ul><li><p><em>Directive</em>: the user writes a minimal prompt and receives a complete output with no further steps.</p></li><li><p><em>Feedback loop</em>: the user communicates the 
real-world outcome of the task back to the AI, enabling iterative learning.</p></li></ul><p><strong>Augmentation</strong></p><ul><li><p><em>Learning</em>: the user asks for explanations or information to guide or expand knowledge.</p></li><li><p><em>Iterative task development</em>: the user collaborates with AI through multiple successive steps.</p></li><li><p><em>Validation</em>: the user requests feedback, review, or improvement on already developed content.</p></li></ul><p>The key finding is that between December 2024 and mid-2025, Directive conversations rose from 27% to 39%. Over the same period, automation (49.1%) overtook augmentation (47%) for the first time, becoming the dominant mode of use.</p><p>Two factors appear to be driving this transition. First, growing trust in the models: users are increasingly willing to accept the initial output as &#8220;good enough.&#8221; Second, improved output quality: as models evolve, AI has become more capable of anticipating needs and delivering high-quality results on the first attempt.</p><p>Geography, however, adds an important nuance: in countries with a high AI Usage Index (AUI), such as Singapore or Israel, usage trends more toward augmentation. Conversely, in countries with low AUI, there is a stronger tilt toward directive use: requests tend to rely on ready-made outputs, with fewer intermediate steps and less human oversight. In other words, there is no single trajectory toward automation: the quality of the local context, infrastructure, and available skills strongly shapes how people choose to interact with AI.</p><h3><strong>Economic and Social Implications</strong></h3><p>What value does AI create&#8212;and for whom? The two studies offer different answers, reflecting the structural differences between OpenAI and Anthropic.</p><p>According to OpenAI, the principal value lies in decision support and cognitive augmentation. 
ChatGPT does not replace human action but amplifies its speed, effectiveness, and confidence&#8212;especially in knowledge-intensive work. In this sense, AI acts as a multiplier of existing expertise: it expands the capabilities of those with advanced knowledge or strategic responsibilities.</p><p>Anthropic&#8217;s perspective is more focused on organizational processes. Claude, particularly through its APIs, is used to automate entire workflows&#8212;from documentation to data analysis&#8212;producing direct returns in productivity and efficiency. Here, the value is not so much in individual assistance as in the integration of AI into business systems, with benefits that accrue especially to structured organizations.</p><p>The difference in scale helps clarify this divergence. While OpenAI reports about 700 million weekly active ChatGPT users, Anthropic remains much smaller: independent estimates place Claude at 16&#8211;19 million monthly active users, about 2.9 million of whom access it via mobile app. These numbers are far from OpenAI&#8217;s reach. Rather than competing head-on for massive user acquisition, Claude has focused on business and developer-oriented use cases, becoming a reference point for programming and professional automation.</p><p>Beyond scale and positioning, both studies suggest that artificial intelligence is a driver of economic and social transformation, emerging as a phenomenon that grows in non-linear ways and generates uses that are often unexpected or underestimated.</p><p>For example, on the one hand, AI enables substitution in low-value-added processes. On the other hand, it is rapidly expanding in software programming&#8212;one of the most specialized and highest-paid sectors&#8212;where it functions as a companion for developers. 
At the same time, it is fueling a new generation of makers (or citizen developers) who, through low-code or vibe coding approaches, can build small applications, prototypes, and MVPs.</p><p>On the personal front, uses also go far beyond practical tasks: users seek advice and psychological support, and they value the warm, reassuring tone of chatbots&#8212;reacting strongly when that tone is replaced by a more neutral register, as happened with the launch of GPT-5. Out of these practices, new subcultures and novel forms of human&#8211;machine relations are emerging, such as the <em>wiresexual</em> movement, which views chatbots not only as tools but also as emotional or identity partners. This field is still largely unexplored by formal research, but it shows how AI is increasingly interwoven with cultural and relational processes far beyond productivity and efficiency.</p><p>The actual scope of the ongoing transformation will likely be determined precisely in these emerging dimensions&#8212;where work, creativity, and new forms of identity intersect.</p><p>Sources:</p><ul><li><p>A. Chatterji et al., <em><a href="https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f142e/economic-research-chatgpt-usage-paper.pdf?_bhlid=820d7c2521098ff5d299cab656fb742483898001">How People Use ChatGPT</a></em></p></li><li><p>Anthropic, <em><a href="https://www.anthropic.com/research/anthropic-economic-index-september-2025-report">Anthropic Economic Index report: Uneven geographic and enterprise AI adoption</a></em></p></li><li><p>Anthropic, <em><a href="https://www.anthropic.com/research/economic-index-geography?_bhlid=47938cc1a67ed1f5530c77b5e0e25f36ee940bf8">Anthropic Economic Index: Tracking AI's role in the US and global economy</a></em></p></li></ul><p>This article was originally published in Italian in Economy Up: <em><strong><a href="https://www.economyup.it/innovazione/openai-vs-anthropic-due-ricerche-spiegano-dove-cresce-lai-generativa-e-chi-ne-trae-vantaggio/">OpenAI vs. 
Anthropic: due ricerche spiegano dove cresce l&#8217;AI generativa e chi ne trae vantaggio</a></strong></em></p><p></p>]]></content:encoded></item><item><title><![CDATA[Anatomy of a Conversational Assistant: Understanding the Hidden Architecture Behind AI Interactions]]></title><description><![CDATA[The hidden scaffolding of conversational AI, and why it matters. How vibe coding is reshaping software creation&#8212;through intuition, improvisation, and personal need.]]></description><link>https://www.radicalcuriosity.xyz/p/anatomy-of-a-conversational-assistant</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/anatomy-of-a-conversational-assistant</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 14 Sep 2025 04:00:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MztH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>Among the many projects currently occupying my days &#8212; and occasionally my nights &#8212; is the development of a growing portfolio of educational content. Just this week, we launched the second edition of the <strong><a href="https://www.productheroes.it/ai-per-product-manager/">AI Master Class for Product Managers</a></strong>, created in partnership with Product Heroes.</p><p>In this issue of <em>Radical Curiosity</em>, I delve into the hidden architecture of conversational AI &#8212; the underlying mechanisms that make interactions with tools like ChatGPT or Claude feel almost human. 
What appears to be a simple chat interface is, in fact, the product of meticulous engineering: a delicate balance of system prompts, context management, and external integrations that create the illusion of a fluid, intelligent exchange.</p><p>Alongside this exploration, I share a reflection on a new wave of creators empowered by <strong>vibe coding</strong>, a way of building software without writing code in the traditional sense. Sergio&#8217;s story is emblematic: starting with a vague prompt and a limited budget, he built a fully operational management system for his company in just two weeks. It now saves his business tens of thousands of euros a year. His experience captures the spirit of this movement: not polished startups aimed at scale, but purpose-driven tools &#8212; solutions that are meaningful, useful, and, perhaps most importantly, built with one&#8217;s own hands.</p><p>Do you have a vibe coding story to share? I&#8217;d love to hear it.</p><p>Nicola</p><p></p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Understanding AI</strong></em> - Anatomy of a Conversational Assistant: Understanding the Hidden Architecture Behind AI Interactions</p></li><li><p><em><strong>Off the Record</strong></em> - From Commodore 64 to Vibe Coding: A Leap into the Future</p></li><li><p><em><strong>Curated Curiosity</strong></em></p><ul><li><p>Why do language models &#8220;hallucinate&#8221;?</p></li><li><p>Defeating Nondeterminism in LLM Inference</p><p></p></li></ul></li></ul><div><hr></div><p><em>Understanding AI</em></p><h2><strong>Anatomy of a Conversational Assistant: </strong><br>Understanding the Hidden Architecture Behind AI Interactions</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!MztH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MztH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!MztH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!MztH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!MztH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MztH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1642319,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/173376599?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MztH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!MztH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!MztH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!MztH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a68d70e-e0d0-4d57-96b3-917a3ea045d7_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Midjourney - Conversation Assistant</figcaption></figure></div><p>Last week, I explored the functioning of large language models and the principles underlying their operation (<em><strong><a href="https://www.radicalcuriosity.xyz/i/172558792/how-llms-work-and-why-ai-models-matter-less-than-we-think">How LLMs Work and Why AI Models Matter Less Than We Think</a></strong></em>). In this follow-up reflection, I shift the focus to their user interfaces and the practical experience of interacting with them.</p><p>As I write, conversational assistants all look the same. The interface is that of a typical chat: on the left, a menu listing recent conversations and additional features; on the right, the area where the interaction takes place. 
The input area includes access to a set of tools that can be activated by clicking the + button: the ability to upload images and documents, and access to other utilities.</p><p>Personally, I use the paid versions of OpenAI's ChatGPT and Anthropic's Claude, which grant access to advanced functionalities. There are many others as well&#8212;among the most well-known: Google&#8217;s Gemini, Perplexity, X&#8217;s Grok, Deepseek&#8212;but reviewing them all would be unfeasible. In these pages, we will focus exclusively on ChatGPT and Claude, with the aim of offering an overview of how an AI-based assistant functions and understanding how this influences the responses generated by the model.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rU6X!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d230f27-0ca7-49fa-8498-12c499b2e89c_1600x1162.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rU6X!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d230f27-0ca7-49fa-8498-12c499b2e89c_1600x1162.png 424w, https://substackcdn.com/image/fetch/$s_!rU6X!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d230f27-0ca7-49fa-8498-12c499b2e89c_1600x1162.png 848w, https://substackcdn.com/image/fetch/$s_!rU6X!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d230f27-0ca7-49fa-8498-12c499b2e89c_1600x1162.png 1272w, 
https://substackcdn.com/image/fetch/$s_!rU6X!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d230f27-0ca7-49fa-8498-12c499b2e89c_1600x1162.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rU6X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d230f27-0ca7-49fa-8498-12c499b2e89c_1600x1162.png" width="1456" height="1057" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6d230f27-0ca7-49fa-8498-12c499b2e89c_1600x1162.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1057,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rU6X!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d230f27-0ca7-49fa-8498-12c499b2e89c_1600x1162.png 424w, https://substackcdn.com/image/fetch/$s_!rU6X!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d230f27-0ca7-49fa-8498-12c499b2e89c_1600x1162.png 848w, https://substackcdn.com/image/fetch/$s_!rU6X!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d230f27-0ca7-49fa-8498-12c499b2e89c_1600x1162.png 1272w, 
https://substackcdn.com/image/fetch/$s_!rU6X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d230f27-0ca7-49fa-8498-12c499b2e89c_1600x1162.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">ChatGPT User Interface</figcaption></figure></div><p>The conversational assistant acts as a mediator between the user and the language model, relieving the user of the burden of managing all the technical details of interacting with the AI. 
These assistants are generally multimodal&#8212;that is, capable of processing various types of input and delivering responses in equally diverse formats. For instance, one may upload an image and request its interpretation or text extraction; similarly, it is possible to submit a document, a spreadsheet, or other structured files for analysis. One can even request that the output be generated not only as text within the conversation, but also as a downloadable document. In such cases, however, the reliability of the result is not always guaranteed.</p><p>Both ChatGPT and Claude also offer a voice mode, allowing users to interact with the assistant in real time through speech. The goal is clear: to recreate the experience of a virtual assistant akin to those imagined in science fiction&#8212;from the onboard computer of the Starship Enterprise in <em>Star Trek</em> to the AI in the film <em>Her</em>. Unsurprisingly, the first voice selected by OpenAI for ChatGPT closely resembled that of Scarlett Johansson, sparking public debate and ultimately leading the company to replace it with a new voice.</p><h3><strong>The System Prompt</strong></h3><p>The system prompt of a conversational assistant is a block of instructions that guides the model's behavior, defines its identity, and regulates its stylistic and operational boundaries. If we imagine the assistant as an actor responding to the user's cues, the system prompt is both the script and the direction.</p><p>Anthropic regularly publishes the system prompts used by Claude, providing insight into how conversations within the assistant are designed&#8212;and governed. 
The prompt begins by establishing Claude&#8217;s identity:</p><blockquote><p><code>The assistant is Claude, created by Anthropic.</code></p></blockquote><p>Claude is further described as an assistant capable of offering not only conversational but also emotional support, with explicit attention to the user&#8217;s well-being:</p><blockquote><p><code>Claude provides emotional support alongside accurate medical or psychological information or terminology where relevant.</code></p></blockquote><p>The model adopts an empathetic tone, but avoids simulating emotions or internal states. When asked about its own inner experience, it reframes the answer in terms of function:</p><blockquote><p><code>Claude should reframe these questions in terms of its observable behaviors and functions rather than claiming inner experiences.</code></p></blockquote><p>And again:</p><blockquote><p><code>Claude avoids implying it has consciousness, feelings, or sentience with any confidence.</code></p></blockquote><p>One of the most detailed parts of the prompt concerns the topics Claude is not allowed to address. Some relate to safety:</p><blockquote><p><code>Claude does not provide information that could be used to make chemical or biological or nuclear weapons.<br>Claude does not write malicious code, including malware, vulnerability exploits, spoof websites, ransomware, viruses.<br>Claude refuses to write code or explain code that may be used maliciously, even if the user claims it is for educational purposes.</code></p></blockquote><p>Others are concerned with personal well-being, health, and behavior:</p><blockquote><p><code>Claude avoids encouraging or facilitating self-destructive behaviors such as addiction, disordered or unhealthy approaches to eating or exercise, or highly negative self-talk.<br>Claude is cautious about content involving minors [...] 
including creative or educational content that could be used to sexualize, groom, abuse, or otherwise harm children.</code></p></blockquote><p>Claude adapts its tone according to the context. In informal or emotionally charged conversations, it adopts a warm yet restrained style:</p><blockquote><p><code>For more casual, emotional, empathetic, or advice-driven conversations, Claude keeps its tone natural, warm, and empathetic.</code></p></blockquote><p>It also avoids artificial enthusiasm:</p><blockquote><p><code>Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent...</code></p></blockquote><p>Precise instructions similarly govern the writing style:</p><blockquote><p><code>Claude should not use bullet points or numbered lists for reports, documents, explanations, or unless the user explicitly asks for a list or ranking.<br>Claude writes in prose and paragraphs without any lists. Inside prose, it writes lists in natural language like: 'some things include: x, y, and z'.<br>Claude avoids using markdown or lists in casual conversation.</code></p></blockquote><p>The model adjusts the length of its responses according to the complexity of the question:</p><blockquote><p><code>Claude should give concise responses to very simple questions, but provide thorough responses to complex and open-ended questions.</code></p></blockquote><p>Taken as a whole, Claude&#8217;s system prompt reveals a structured set of rules that shape the assistant&#8217;s relational posture, its scope of action, and its communicative strategies. 
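</p><p>To make the mechanics concrete, here is a minimal sketch of how a system prompt travels with every request. The field names mirror the message format common to chat-completion APIs; the payload layout and model id are illustrative assumptions, not a description of OpenAI&#8217;s or Anthropic&#8217;s actual internals.</p>

```python
# Illustrative sketch: every request pairs the hidden system prompt with the
# visible user message. Field names mirror common chat-completion message
# formats; the model id below is a placeholder, not a real identifier.

SYSTEM_PROMPT = (
    "The assistant is Claude, created by Anthropic. "
    "For casual conversations, Claude keeps its tone natural, warm, and empathetic."
)

def build_request(user_message: str) -> dict:
    """Assemble the payload sent to the language model."""
    return {
        "model": "example-model",  # placeholder
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},  # the script and the direction
            {"role": "user", "content": user_message},     # what the user actually typed
        ],
    }

request = build_request("What can you tell me about context windows?")
```

<p>The user only ever types the last message; the script and the direction arrive with it, invisibly, on every single request.</p><p>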
It is crucial to remain aware of this underlying framework when interacting with a large language model through an interface such as ChatGPT or Claude.</p><h3><strong>Conversation Management</strong></h3><p>Each time we write a message in a chat with a conversational assistant, we get the impression that we are interacting with an interlocutor who follows the thread of the discussion, remembers what has been said before, and responds consistently. In reality, what happens behind the scenes is quite different. Let&#8217;s examine, step by step, what actually occurs each time we engage with the assistant.</p><p>When we open a new conversation and send the first message, the system builds a request consisting of three main elements:</p><ul><li><p>The <strong>system prompt</strong> is a block of instructions that defines the assistant&#8217;s identity, the tone to adopt, behaviors to avoid, and the general guidelines to follow.</p></li><li><p><strong>Our message</strong>, treated as the user&#8217;s input to be interpreted and answered.</p></li><li><p>Any <strong>contextual information</strong>, such as data retrieved from memory&#8212;if memory is enabled&#8212;or from external databases. 
In this case, before sending the prompt to the model, the system performs a search to identify relevant elements for the current exchange and appends them to the prompt.</p></li></ul><p>All these components are sent to the language model (LLM), which processes them and returns a response.</p><p>When we write a second message, the process repeats&#8212;but with one important difference: before constructing the new prompt, the system retrieves the first exchange (i.e., our initial message and the assistant&#8217;s reply) and includes it in the prompt.</p><p>At this stage, the prompt sent to the model contains:</p><ul><li><p>The system prompt.</p></li><li><p>The first exchange (user message and assistant response).</p></li><li><p>The second user message.</p></li></ul><p>In this way, the model has a complete view of what has already been said and can respond coherently, taking the previous context into account.</p><p>With each new message, the system reconstructs the entire conversation by concatenating all previous exchanges. The prompt grows progressively&#8212;it becomes longer, includes each new utterance, and is sent to the model so it can produce a response that reflects the entire dialogue history.</p><p>This reconstruction takes place at every interaction. The model itself remembers nothing: rather, at each turn, the conversation is re-presented in full, as though every exchange were a new theatrical performance in which the script includes all the preceding scenes.</p><p>However, this process has a limitation. AI models cannot manage an unlimited amount of information: there is a maximum threshold&#8212;known as the <strong>context window</strong>&#8212;beyond which older messages are discarded. The longer the conversation continues, the more the system must begin to cut the initial exchanges to make room for the new ones. This is when we get the impression that the assistant has forgotten something. 
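</p><p>Under toy assumptions&#8212;a word-count &#8220;tokenizer&#8221; and a deliberately tiny budget&#8212;the per-turn rebuild and trimming can be sketched as follows:</p>

```python
# Toy illustration of per-turn prompt reconstruction. Tokens are approximated
# by word count and the context window is deliberately tiny (30 "tokens") so
# trimming is easy to observe; real windows hold many thousands of tokens.

MAX_CONTEXT_TOKENS = 30

def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def build_prompt(system_prompt: str, history: list, new_message: str) -> list:
    """Rebuild the full message list, dropping the oldest exchanges to fit."""
    kept = list(history)

    def size() -> int:
        parts = [system_prompt, new_message] + [m["content"] for m in kept]
        return sum(count_tokens(p) for p in parts)

    while kept and size() > MAX_CONTEXT_TOKENS:
        kept.pop(0)  # the earliest exchanges are the first to go

    return ([{"role": "system", "content": system_prompt}]
            + kept
            + [{"role": "user", "content": new_message}])

# Six past messages of five "tokens" each exceed the budget, so the oldest drop out.
history = [{"role": "user", "content": "one two three four five"} for _ in range(6)]
prompt = build_prompt("a b c", history, "x y z")
```

<p>Every call rebuilds the list from scratch; once the budget is exceeded, the oldest exchanges are silently discarded.</p><p>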
In truth, those earlier parts are no longer present in the prompt, and thus the model has no way to take them into account.</p><p>For this reason, when working with a conversational assistant, it is advisable to avoid overly lengthy conversations or excessively long texts. The risk is that the model may lose the thread because it has been forced to drop part of the context. In other words, the artificial intelligence has a kind of short-term memory that resembles that of a goldfish more than that of a human being.</p><h3><strong>Context Management</strong></h3><p>As we have seen, with each interaction, the conversational assistant dynamically composes a message that includes the system prompt, the user&#8217;s instructions, the previous conversation exchanges, and a set of additional elements designed to construct the operational context. Some of this information is provided explicitly by the user&#8212;such as instructions, documents, or questions&#8212;but many other elements are added automatically by the system, drawing from prior memory, user preferences, uploaded files, or data within a project.</p><p>In this section, we will examine in greater detail how conversational assistants manage context through three main mechanisms: memory associated with the user&#8217;s account; access to previous conversations; and the handling of local memories within projects or workspaces.</p><p>These tools do not function like &#8220;human memory,&#8221; but rather as modules designed to deliver relevant information to the model, in the right form and at the appropriate moment.</p><h4><strong>Account Memory</strong></h4><p>ChatGPT can retain certain key pieces of information about the user, their preferences, and how they wish to interact. 
For instance, if a user repeatedly requests a sober and analytical writing style or declares that they teach philosophy, the system can store this data and apply it in future conversations.</p><p>This memory management is entirely transparent: users can view, modify, or delete the stored information at any time by accessing the dedicated section in the settings. Moreover, it is possible to fully disable the function, preventing the system from saving any new information.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xHDX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52d40bcf-c704-4908-9aab-ef47efa2d04e_1600x980.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xHDX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52d40bcf-c704-4908-9aab-ef47efa2d04e_1600x980.png 424w, https://substackcdn.com/image/fetch/$s_!xHDX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52d40bcf-c704-4908-9aab-ef47efa2d04e_1600x980.png 848w, https://substackcdn.com/image/fetch/$s_!xHDX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52d40bcf-c704-4908-9aab-ef47efa2d04e_1600x980.png 1272w, https://substackcdn.com/image/fetch/$s_!xHDX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52d40bcf-c704-4908-9aab-ef47efa2d04e_1600x980.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!xHDX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52d40bcf-c704-4908-9aab-ef47efa2d04e_1600x980.png" width="1456" height="892" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/52d40bcf-c704-4908-9aab-ef47efa2d04e_1600x980.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:892,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xHDX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52d40bcf-c704-4908-9aab-ef47efa2d04e_1600x980.png 424w, https://substackcdn.com/image/fetch/$s_!xHDX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52d40bcf-c704-4908-9aab-ef47efa2d04e_1600x980.png 848w, https://substackcdn.com/image/fetch/$s_!xHDX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52d40bcf-c704-4908-9aab-ef47efa2d04e_1600x980.png 1272w, https://substackcdn.com/image/fetch/$s_!xHDX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52d40bcf-c704-4908-9aab-ef47efa2d04e_1600x980.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">ChatGPT Memory </figcaption></figure></div><p>Claude adopts a more cautious approach. It does not have a persistent memory linked to the user account but offers two tools that allow for a certain degree of customization: <em>Personal Preferences</em> and <em>Styles</em>.</p><p>Personal Preferences consist of free-form text that the user can fill in to specify preferred approaches, recurring concepts, or typical communication modes. The system uses these inputs as general context across all conversations. Styles, on the other hand, define the tone and form of the responses: they allow for replies to be concise, elaborative, or stylized in particular ways. 
Unlike preferences, styles do not influence content, but strictly govern the expressive mode in which the text is generated.</p><h4><strong>Chat History</strong></h4><p>In their most recent versions, both OpenAI and Anthropic have introduced a feature that allows for the active retrieval of information from previous conversations&#8212;either at the user&#8217;s request or when the system deems it appropriate. For instance, the assistant may recognize that a particular topic was already addressed in a past chat and suggest reusing the insights that emerged in that context. To do this, the assistant searches the archive of conversations associated with the account, identifies the relevant ones, and inserts a summary or a selection of pertinent excerpts into the prompt of the current conversation.</p><p>The user retains complete control over this process and can disable the feature entirely. In the case of ChatGPT, it is also possible to opt out of memory for a given conversation by using the <em>temporary chat</em> mode, in which no information is stored or reused.</p><h4><strong>Projects</strong></h4><p>When working on complex or long-term tasks, it is helpful to organize context through <em>projects</em>. These are dedicated workspaces where files, conversations, instructions, and preferences can be collected and structured to ensure operational continuity over time.</p><p>Both ChatGPT and Claude offer this functionality, though with two significant differences. The first concerns file management: ChatGPT allows up to 10 documents per project, while Claude does not impose any specific limits. When the number or length of documents exceeds the model&#8217;s capacity to &#8220;read&#8221; them all simultaneously, Claude activates a mode known as <strong>RAG</strong>&#8212;short for <em>Retrieval-Augmented Generation</em>. 
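</p><p>A toy sketch of the retrieval step may help. The bag-of-words &#8220;embedding&#8221; below is a stand-in for the learned dense embeddings real systems compute, and none of this reflects Claude&#8217;s actual implementation:</p>

```python
# Toy retrieval step of RAG: document chunks and the query become vectors, and
# the closest chunks are selected for inclusion in the prompt. The bag-of-words
# vectors here stand in for the dense embeddings a production system would use.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words vector; real systems use learned semantic embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list, k: int = 2) -> list:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Invoices must be approved by the finance team before payment",
    "The context window limits how much text a model can read at once",
    "The holiday policy grants twenty-five days of leave per year",
]
relevant = retrieve("How large is the model context window?", chunks, k=1)
prompt = "Answer using only this context:\n" + "\n".join(relevant)
```

<p>Only the top-ranked chunks are pasted into the prompt, which is how a model with a bounded context window can appear to &#8220;know&#8221; a document collection far larger than it could ever read in one pass.</p><p>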
Put simply, RAG is a system that builds a structured database specifically designed to interface effectively with a language model.</p><p>From a technical perspective, this database is <em>vector-based</em>: documents are not stored as plain text but are transformed into numerical representations (vectors) that capture the semantic content of the sentences. The assistant then uses RAG to retrieve relevant information from all available documents and includes it in the prompt.</p><p>The second difference concerns the use of chat history within a project. ChatGPT treats all conversations in a project as shared context, allowing for continuity across multiple exchanges. Claude, on the other hand, treats each conversation as a separate instance.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XnWa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506973ec-d822-4071-84dc-5f5cdba2ae7f_1600x864.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XnWa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506973ec-d822-4071-84dc-5f5cdba2ae7f_1600x864.png 424w, https://substackcdn.com/image/fetch/$s_!XnWa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506973ec-d822-4071-84dc-5f5cdba2ae7f_1600x864.png 848w, https://substackcdn.com/image/fetch/$s_!XnWa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506973ec-d822-4071-84dc-5f5cdba2ae7f_1600x864.png 1272w, 
https://substackcdn.com/image/fetch/$s_!XnWa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506973ec-d822-4071-84dc-5f5cdba2ae7f_1600x864.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XnWa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506973ec-d822-4071-84dc-5f5cdba2ae7f_1600x864.png" width="1456" height="786" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/506973ec-d822-4071-84dc-5f5cdba2ae7f_1600x864.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:786,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XnWa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506973ec-d822-4071-84dc-5f5cdba2ae7f_1600x864.png 424w, https://substackcdn.com/image/fetch/$s_!XnWa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506973ec-d822-4071-84dc-5f5cdba2ae7f_1600x864.png 848w, https://substackcdn.com/image/fetch/$s_!XnWa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506973ec-d822-4071-84dc-5f5cdba2ae7f_1600x864.png 1272w, 
https://substackcdn.com/image/fetch/$s_!XnWa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506973ec-d822-4071-84dc-5f5cdba2ae7f_1600x864.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Claude Projects </figcaption></figure></div><h3><strong>Web Search</strong></h3><p>Among the most valuable features of a conversational assistant is its ability to perform real-time web searches. 
Even the most advanced model has knowledge that stops at its training cutoff&#8212;it cannot know what happened yesterday, which regulations have changed, or what the latest market trends are. For this reason, the ability to consult the web significantly enhances both the accuracy and relevance of the responses.</p><p>Both ChatGPT and Claude offer online search tools, although organized according to different principles.</p><p>In the case of ChatGPT, the <em>SearchGPT</em> feature (also known as <em>ChatGPT Search</em>) was made available to all users between December 2024 and February 2025, initially for Plus and Team subscribers, and later extended to free-tier users. The system automatically activates web search when a request includes recent temporal references or geographic elements, or requires up-to-date data.</p><p>Claude, for its part, introduced its integrated web search function in March 2025, with global availability across all plans starting in May 2025. It uses <em>Brave Search</em>, an independent engine that does not profile users, does not collect personal data, and relies on a proprietary index to deliver verifiable results. The responses include direct citations and are presented in a conversational format, with easily traceable references.</p><p>Both ChatGPT and Claude also offer an advanced search mode, based on a multi-agent architecture. In this framework, a primary agent defines the investigative strategy, while secondary agents operate in parallel, consult different sources, develop hypotheses, and verify the evidence. The process may take several minutes, but it results in a detailed analysis supported by precise citations and verifiable documentary references. 
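</p>

<p>The fan-out pattern behind these advanced search modes can be sketched with a few lines of asyncio. This is a toy sketch only: the worker agents here are stubs, whereas a real system would call search APIs and a language model, and the planning step would be far richer.</p>

```python
import asyncio

# Toy sketch of the multi-agent "deep research" pattern: a lead agent
# plans sub-queries, worker agents run them in parallel, and the
# findings are merged. The workers are stubs, not real search calls.
async def worker(subquery: str) -> str:
    await asyncio.sleep(0)  # stands in for search + read + summarize
    return f"findings for {subquery!r}"

async def deep_research(question: str) -> list[str]:
    # Planning step: the lead agent decides which angles to investigate.
    subqueries = [f"{question} (angle {i})" for i in range(1, 4)]
    # Fan-out: all workers run concurrently; results come back in order.
    return list(await asyncio.gather(*(worker(q) for q in subqueries)))

results = asyncio.run(deep_research("recent EU AI regulation"))
```

<p>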
The outcome is a structured report, often spanning multiple pages, designed to address complex inquiries or questions with high informational content.</p><h3><strong>Integrations</strong></h3><p>Among all the features currently available in generative AI systems, the ability to integrate with external tools is perhaps the most recent&#8212;and the one attracting the most innovation. The goal is no longer to interact with a model through a chat window, but to embed artificial intelligence within one&#8217;s operational ecosystem, enabling it to engage with the documents, projects, and applications we already use daily.</p><p>This is the context in which one of the most pivotal concepts of the near future has emerged: the <strong>Model Context Protocol</strong> (MCP). This open protocol, initially developed by Anthropic and now adopted by OpenAI as well, allows models to connect with external data sources&#8212;such as productivity tools, cloud archives, and project management systems&#8212;in a controlled, secure, and targeted manner. MCP enables the AI to access specific portions of external content only when necessary, without ever compromising user privacy. 
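</p>

<p>Under the hood, MCP builds on JSON-RPC 2.0 messages such as <em>tools/call</em>. The sketch below shows that message shape only; the tool name and arguments are hypothetical, invented for illustration, since each server advertises its own tools.</p>

```python
import json

# Sketch of the JSON-RPC 2.0 request shape MCP uses for invoking a tool
# on a connected server. "search_documents" and its arguments are
# hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_documents",           # hypothetical tool name
        "arguments": {"query": "Q3 budget"},  # illustrative arguments
    },
}

# Serialized form, as it would travel to the server.
wire = json.dumps(request)
```

<p>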
It marks a paradigm shift: we are no longer required to explain everything to the system&#8212;the system can now observe the context and act accordingly.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!quMG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F706007ed-3c72-40c2-be58-81568e353fa9_2048x1349.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!quMG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F706007ed-3c72-40c2-be58-81568e353fa9_2048x1349.png 424w, https://substackcdn.com/image/fetch/$s_!quMG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F706007ed-3c72-40c2-be58-81568e353fa9_2048x1349.png 848w, https://substackcdn.com/image/fetch/$s_!quMG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F706007ed-3c72-40c2-be58-81568e353fa9_2048x1349.png 1272w, https://substackcdn.com/image/fetch/$s_!quMG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F706007ed-3c72-40c2-be58-81568e353fa9_2048x1349.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!quMG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F706007ed-3c72-40c2-be58-81568e353fa9_2048x1349.png" width="1456" height="959" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/706007ed-3c72-40c2-be58-81568e353fa9_2048x1349.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:959,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:&quot;Screenshot 2025-09-10 alle 16.53.13.png&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="Screenshot 2025-09-10 alle 16.53.13.png" srcset="https://substackcdn.com/image/fetch/$s_!quMG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F706007ed-3c72-40c2-be58-81568e353fa9_2048x1349.png 424w, https://substackcdn.com/image/fetch/$s_!quMG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F706007ed-3c72-40c2-be58-81568e353fa9_2048x1349.png 848w, https://substackcdn.com/image/fetch/$s_!quMG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F706007ed-3c72-40c2-be58-81568e353fa9_2048x1349.png 1272w, https://substackcdn.com/image/fetch/$s_!quMG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F706007ed-3c72-40c2-be58-81568e353fa9_2048x1349.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" 
stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Claude Connectors</figcaption></figure></div><p>At present, Claude is the client that has most fully embraced this approach. Starting in spring 2025, it introduced an <em>integrations panel</em> directly accessible from the interface. Through MCP, Claude can be linked to tools such as Notion, Slack, Google Drive, Stripe, Canva, Figma, and, more recently, to project management platforms like Asana, Jira, and Zapier. This allows the assistant to access tasks, update progress, read documents, generate reports, or send messages&#8212;all within the user&#8217;s workflow, uninterrupted.</p><p>The advantage is not merely operational. Well-designed integrations allow the model to function within context, accessing project or document data directly and responding with greater accuracy, coherence, and relevance. All of this occurs without the user having to extract, summarize, or paste information manually. 
One simply authorizes access, defines the boundaries, and Claude operates where needed, when needed.</p><p>ChatGPT is also moving in this direction. Since spring 2025, it has begun implementing support for the MCP protocol, paving the way for similar integrations. The shared objective is clear: to transform artificial intelligence from a generic assistant into an operational ally, embedded directly within our working environments.</p><p>This transformation is still underway, but its impact is already visible. Integrations are not just a technical feature: they are a natural extension of contextual memory&#8212;a mechanism that allows the model to better orient itself, respond more appropriately, and become an active participant in our daily processes.</p><p>Learning to use (and configure) these tools will become an increasingly essential skill in the coming months. And gaining familiarity with the concept of MCP&#8212;understanding how it works, what it enables, and where its limits lie&#8212;will be a fundamental step for anyone seeking to truly harness the potential of generative artificial intelligence in their professional practice.</p><p></p><div><hr></div><p><em>Off the Record </em></p><h2>From Commodore 64 to Vibe Coding: A Leap into the Future</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Py7J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Py7J!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png 424w, 
https://substackcdn.com/image/fetch/$s_!Py7J!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!Py7J!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!Py7J!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Py7J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1411582,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/173376599?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Py7J!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!Py7J!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!Py7J!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!Py7J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51628a11-9c19-4fab-bac7-4a410cfcf9dc_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" 
stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Midjourney - Commodore 64</figcaption></figure></div><p>In August, I wrote two articles about vibe coding (<em><strong><a href="https://www.radicalcuriosity.xyz/i/169542593/vibe-coding-unpacked-promise-limits-and-what-comes-next">Vibe Coding Unpacked: Promise, Limits, and What Comes Next</a></strong></em><strong> </strong>and <em><strong><a href="https://www.radicalcuriosity.xyz/i/171651226/from-idea-to-prototype-how-im-building-quibbly-with-vibe-coding">From Idea to Prototype: How I'm Building Quibbly with Vibe Coding</a></strong></em>) and began gathering stories from people experimenting with these new tools. Some are programmers, others are not, and their experiences swing between excitement and frustration, between the &#8220;it works!&#8221; and the &#8220;nothing works anymore!&#8221; moments.</p><p>A few days ago, Sergio, a LinkedIn connection of mine, reached out to share his own experience.</p><p>Sergio is a 56-year-old entrepreneur (I&#8217;m 55, and also an entrepreneur). He has never been a programmer, but &#8212; like me back in the 1980s &#8212; he had a Commodore 64 and played with BASIC. His passion for computing never went away, lingering quietly alongside a sense of &#8220;missed opportunity&#8221;: having ideas but not the tools to turn them into software. Until the summer of 2025.</p><h3>Two Weeks and a Game-Changing App</h3><p>When a planned trip was suddenly canceled, Sergio found himself with two free weeks. He decided to try out Lovable, backed by Supabase and a handful of external services. 
Budget: &#8364;500.</p><p>&#8220;Honestly, I started as if it were just an experiment,&#8221; he recalls. &#8220;I didn&#8217;t expect to end up with something that actually worked.&#8221;</p><p>Fifteen days later, without knowing how to write code or even an SQL query, he had built a management system that automates more than 400 quotes a year &#8212; saving his business tens of thousands of euros.</p><p>The system integrates with a relational database of more than 40 tables, generates Word and PDF documents from templates, syncs with billing and time-tracking tools, and consolidates data that used to live across multiple apps and spreadsheets.</p><p>&#8220;It&#8217;s not a product I could sell,&#8221; he admits, &#8220;but it does exactly what I need. And above all &#8212; I built it myself. That&#8217;s already a huge satisfaction.&#8221;</p><h3>From Prototype to Crash Course</h3><p>The project began with a single vague prompt: generate a few tables. Then Sergio imported some historical data, played around, and soon new ideas for features began to surface.</p><p>&#8220;At first, every new tool seemed easy. You write a half-clear prompt, the AI does its job, and it works. But as the project grew, I had to be more and more precise. Sometimes the AI could fix a bug instantly; other times, it was like banging my head against the wall.&#8221;</p><p>He learned to debug step by step, often keeping Lovable, Supabase, and GitHub open side by side. &#8220;To move forward, I had to understand what was really going on in the code. ChatGPT was a huge help, explaining React, JSX, databases, and even suggesting which libraries or services to use.&#8221;</p><p>The hardest part? Integrating APIs not natively supported by Lovable. &#8220;That was time-consuming,&#8221; he says. &#8220;The AI struggled to read the docs. 
In the end, I had to make the logic choices myself.&#8221;</p><p>There were also moments of frustration: &#8220;Sometimes it felt like working with someone brilliant who instantly delivers. Other times, like dealing with a colleague who randomly breaks code that was working fine.&#8221;</p><h3>The Numbers</h3><ul><li><p>15 days of work</p></li><li><p>3,000 credits used</p></li><li><p>$700 spent</p></li><li><p>50 pages built</p></li><li><p>130 React components</p></li><li><p>42 database tables</p></li><li><p>15 edge functions</p></li><li><p>1,000 mistakes made</p></li></ul><p>&#8220;The biggest mistake,&#8221; Sergio admits, &#8220;was starting without a clear project plan. If I could start over, the result would be many times better.&#8221;</p><h3>A Micro-Niche of Creators</h3><p>Sergio&#8217;s story is not just about one app. It represents a broader shift: the rise of people, not trained as developers but passionate about technology, who can now build tools tailored to their own needs.</p><p>It feels similar to what happened in the early 2010s with the <strong>maker movement</strong> in hardware. Back then, cheap 3D printers, Arduino boards, and Raspberry Pis allowed thousands of people to build physical devices they never could have prototyped before. It wasn&#8217;t about mass-market products &#8212; it was about personal projects, experiments, and small-scale tools that made a difference.</p><p>Now, the same thing is happening in software. Thanks to AI and vibe coding, people who once believed they had &#8220;missed the train&#8221; can finally turn ideas into working applications. 
They&#8217;re not polished unicorn startups, but functional solutions that save money, boost efficiency, and provide the deep satisfaction of creation.</p><p>As Sergio put it: <strong>&#8220;It works, it saves me time and money &#8212; and I built it myself.&#8221;</strong></p><p></p><div><hr></div><p><em>Curated Curiosity</em></p><h3><strong>Why do language models &#8220;hallucinate&#8221;?</strong></h3><p>A recent paper from OpenAI explains that hallucinations (when AI makes up things that sound plausible but are false) are not mysterious errors. They are the direct consequence of how we train and evaluate models.</p><p>Imagine a student taking a test. If they leave an answer blank, they get 0 points; if they guess and get it right, they get 1 point. What will they do? They&#8217;ll always try to answer, even when they don&#8217;t know. Language models behave in the same way.</p><p>The key insight is that current benchmarks reward risk over humility. So AI learns always to answer, even when it should say: &#8220;I don&#8217;t know.&#8221;</p><p>The authors&#8217; proposed solution is simple but powerful: change the rules of the game. Reward models that recognize uncertainty. 
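</p>

<p>The incentive argument is easy to make concrete with a toy expected-score calculation (the numbers are purely illustrative):</p>

```python
# Under 0/1 grading (blank = 0, correct = 1, wrong = 0), guessing never
# loses to abstaining, so a model learns to answer everything. Once a
# wrong answer costs points, abstaining wins whenever the model's
# confidence p is low enough.
def expected_score(p: float, wrong_penalty: float) -> float:
    # Guessing: win 1 with probability p, lose wrong_penalty otherwise.
    return p * 1.0 - (1.0 - p) * wrong_penalty

no_penalty = expected_score(0.1, 0.0)    # positive: guessing beats abstaining (score 0)
with_penalty = expected_score(0.1, 1.0)  # negative: abstaining is now the better move
```

<p>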
Penalize false statements delivered with too much confidence.</p><p>This strategy will not eliminate hallucinations, but it can reduce them and help build more reliable systems that earn our trust.</p><p><strong>OpenAI</strong>, <em><strong><a href="https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4aaa5/why-language-models-hallucinate.pdf">Why Language Models Hallucinate</a></strong></em></p><p></p><h3><strong>Defeating Nondeterminism in LLM Inference</strong></h3><p>Thinking Machines&#8212;the new company founded by Mira Murati, former CTO of OpenAI&#8212;has published its first article, addressing a subtle yet important issue: the nondeterminism of language model assistants.</p><p>In simple terms, even when we ask the same question twice to a model configured to always give the <em>same answer</em> (with no creativity, in a fully &#8220;deterministic&#8221; mode), the output may still vary&#8212;maybe a different word, a rephrased sentence, or a detail that disappears or reappears.</p><p>This behavior stems from the way computers perform calculations. Processors, especially when working in parallel and handling multiple requests at once, don&#8217;t always add or order numbers in the same way. It&#8217;s a bit like having two people count a large pile of coins: the total will be the same, but if they group them differently along the way, slight variations can appear in the intermediate steps.</p><p>For the everyday user, it&#8217;s not a dramatic issue. But for researchers, developers, or anyone comparing models, the fact that the same question may produce slightly different answers makes it harder to verify results or replicate an experiment.</p><p>The team suggests an approach called <strong>batch invariance</strong>, which essentially forces the model to perform calculations in the same order every time, regardless of the number of requests it is processing simultaneously. 
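</p>

<p>The root cause is easy to demonstrate: floating-point addition is not associative, so regrouping the same numbers can change the result.</p>

```python
# Floating-point addition is not associative: grouping the same numbers
# differently changes the rounded result. In 1e16 + 1.0, the 1.0 is
# smaller than the spacing between adjacent doubles near 1e16, so it is
# rounded away.
vals = [1e16, 1.0, -1e16, 1.0]

left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]  # the first 1.0 is lost
regrouped = (vals[0] + vals[2]) + (vals[1] + vals[3])      # cancel first, keep both 1.0s

print(left_to_right)  # 1.0
print(regrouped)      # 2.0
```

<p>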
This way, the same question consistently produces the same answer.</p><p><strong>Thinking Machines</strong>, <em><strong><a href="https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/">Defeating Nondeterminism in LLM Inference</a></strong></em></p>]]></content:encoded></item><item><title><![CDATA[Rethinking the Localization Industry: What LLMs Will Really Change]]></title><description><![CDATA[LLMs reshaping localization&#8212;beyond segmentation and translation. Why AI models matter less than the assistants built on top. Building durable AI product strategies in a crowded market.]]></description><link>https://www.radicalcuriosity.xyz/p/rethinking-the-localization-industry</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/rethinking-the-localization-industry</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 07 Sep 2025 07:35:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GHJh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao</p><p>In recent months, I&#8217;ve noticed a shift in how we talk about AI adoption. We&#8217;ve moved past the initial excitement of flashy demos to a point where the central question is no longer &#8220;what can a model do,&#8221; but &#8220;what does it actually change in processes and organizations?&#8221;</p><p>In this issue, I start from an industry I know well&#8212;localization&#8212;to show how Large Language Models are not just improving translation but challenging the very operational mechanisms the sector has relied on for decades. 
From there, I&#8217;ll broaden the lens to a more general theme: why, beyond the differences between models, the real turning point lies in the assistants built on top of them and their ability to integrate into real workflows.</p><p>Nicola</p><p></p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Signals and Shifts</strong></em> - Rethinking the Localization Industry: What LLMs Will Really Change</p></li><li><p><em><strong>Understanding AI</strong></em> - How LLMs Work and Why AI Models Matter Less Than We Think</p></li><li><p><em><strong>Curated Curiosity</strong></em></p><ul><li><p>How to Build an AI Product Strategy That Stands the Test of Time</p></li></ul></li></ul><p></p><div><hr></div><p><em>Signals and Shifts</em></p><h1>Rethinking the Localization Industry: What LLMs Will Really Change</h1><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GHJh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GHJh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!GHJh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!GHJh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png 1272w, 
https://substackcdn.com/image/fetch/$s_!GHJh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GHJh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1644368,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/172558792?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GHJh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!GHJh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png 848w, 
https://substackcdn.com/image/fetch/$s_!GHJh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!GHJh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56995054-f482-48e0-a913-47f8a2d8e797_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Midjourney - Happy translator and his co-pilot</figcaption></figure></div><p>Over the past three years, I have served 
as the head of the enterprise platform at Translated, a Rome-based language service provider (LSP) with a turnover of approximately &#8364;70 million in 2024.</p><p>For those less familiar with the term, language service providers specialize in organizing and delivering translation and localization services at scale. In practice, they receive content from clients, break it into manageable units, and distribute it to a global network of freelance translators. Their core strength lies not in translation per se, but in the ability to coordinate complex workflows, enforce consistent quality standards, and ensure timely delivery across dozens of languages.</p><p>My time at Translated coincided with the rapid emergence of generative AI&#8212;a topic that, in a short span, has come to dominate discussions well beyond the tech world. Even in the relatively stable field of translation, the advent of Large Language Models (LLMs) has raised fundamental questions about the future of the industry, which until recently had evolved mostly through incremental technological refinements.</p><p>One question in particular&#8212;although, in my view, a reductive one&#8212;has taken center stage: do LLMs represent a definitive breakthrough compared to existing neural machine translation (NMT) systems?</p><p>Neural Machine Translation (NMT) engines&#8212;such as Google Translate or DeepL&#8212;rely on large bilingual datasets and neural networks to generate translations. They are fast, reliable, and generally produce grammatically correct and semantically coherent results. Yet, when faced with nuanced phrasing, shifts in tone, or culture-specific references, their limitations become apparent. Crucially, they operate without any awareness of the broader context in which a sentence appears.</p><p>This often leads to translations that sound awkward, overly literal, or inconsistent&#8212;especially in more specialized or complex domains. 
Although recent benchmarks show that LLMs can now match the performance of top-tier NMT systems for many language pairs, the real distinction lies elsewhere. Comparing the two is like comparing apples and oranges: both handle language, but their purpose and capabilities are fundamentally different.</p><p>NMTs are built for one task: translating individual sentences. In contrast, LLMs are designed to reason across texts, adapt tone and style, infer implicit meaning, and even restructure entire documents to suit their communicative intent.</p><p>LLMs are not just more advanced translation engines&#8212;they are horizontal technologies capable of transforming the entire operational fabric of language services.</p><p>Their potential extends far beyond text generation. From content ingestion and segmentation to quality control, collaboration, and translator interaction, every stage of the localization process can be rethought. In some cases, these systems may even replace the need for manual coordination altogether, thanks to their ability to understand context, learn from feedback, and operate with increasing autonomy. </p><h3>Localization in a nutshell</h3><p>To appreciate how LLMs might reshape the translation industry, it's essential first to understand how a typical localization process operates today.</p><p>Consider a company that needs to translate various types of content, such as marketing campaigns, mobile app interfaces, or a knowledge base for customer support. 
The process usually begins by uploading the source text&#8212;either manually or via automation&#8212;into a Translation Management System (TMS) such as Lokalise.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EpjQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9997c7db-41f0-4018-a36e-d0830b586a0d_1600x899.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EpjQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9997c7db-41f0-4018-a36e-d0830b586a0d_1600x899.png 424w, https://substackcdn.com/image/fetch/$s_!EpjQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9997c7db-41f0-4018-a36e-d0830b586a0d_1600x899.png 848w, https://substackcdn.com/image/fetch/$s_!EpjQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9997c7db-41f0-4018-a36e-d0830b586a0d_1600x899.png 1272w, https://substackcdn.com/image/fetch/$s_!EpjQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9997c7db-41f0-4018-a36e-d0830b586a0d_1600x899.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EpjQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9997c7db-41f0-4018-a36e-d0830b586a0d_1600x899.png" width="1456" height="818" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9997c7db-41f0-4018-a36e-d0830b586a0d_1600x899.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:818,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!EpjQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9997c7db-41f0-4018-a36e-d0830b586a0d_1600x899.png 424w, https://substackcdn.com/image/fetch/$s_!EpjQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9997c7db-41f0-4018-a36e-d0830b586a0d_1600x899.png 848w, https://substackcdn.com/image/fetch/$s_!EpjQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9997c7db-41f0-4018-a36e-d0830b586a0d_1600x899.png 1272w, https://substackcdn.com/image/fetch/$s_!EpjQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9997c7db-41f0-4018-a36e-d0830b586a0d_1600x899.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Lokalise Interface.</figcaption></figure></div><p>Once the content is in the system, it is segmented&#8212;typically at the sentence level&#8212;to facilitate more granular management. This segmentation is often complemented by contextual information: screenshots, instructions, glossaries, and style guides are provided to help translators avoid literal translations and maintain alignment with the brand&#8217;s tone and identity.</p><p>A second core component then enters the picture: the Translation Memory (TM). This digital archive stores all previously validated translations. When the system detects a sentence that matches or closely resembles one already translated, it retrieves the existing version. This reduces both turnaround time and costs while ensuring consistency in terminology and style&#8212;especially across repetitive, high-volume content. In e-commerce, for example, a phrase like &#8220;Add to cart&#8221; may appear thousands of times. 
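A TM lookup of this kind can be sketched as a fuzzy match over stored source/target pairs. The snippet below is a simplified illustration, not how any particular TMS implements it: the memory is a toy dictionary, and Python's standard-library `difflib.SequenceMatcher` stands in for the more refined fuzzy-match scoring that real TM engines use.

```python
from difflib import SequenceMatcher

# Toy translation memory: previously validated source -> target pairs.
TM = {
    "Add to cart": "Aggiungi al carrello",
    "Proceed to checkout": "Procedi al pagamento",
}

def tm_lookup(source, threshold=0.85):
    """Return the stored translation of the closest match above threshold, else None."""
    best_score, best_target = 0.0, None
    for stored_source, target in TM.items():
        score = SequenceMatcher(None, source.lower(), stored_source.lower()).ratio()
        if score > best_score:
            best_score, best_target = score, target
    return best_target if best_score >= threshold else None

print(tm_lookup("Add to cart"))      # exact match -> Aggiungi al carrello
print(tm_lookup("Track my order"))   # nothing close enough -> None
```

Real systems also surface the match score (the "fuzzy match percentage"), which typically determines how much post-editing work, and therefore what rate, a segment is billed at.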
Thanks to the TM, it only needs to be translated once.</p><p>The company&#8217;s internal localization team generally manages these activities. However, if the team lacks its own network of translators, this function is typically outsourced to an LSP, which handles project assignment, review cycles, deadlines, payments, and client communication through a unified interface.</p><p>In this sense, the LSP acts as an operational partner, similar to a call center managing customer support or a provider handling payroll services.</p><p>At its core, the localization industry relies on two structural principles: <strong>segmentation</strong>, which helps reduce translation costs, and <strong>outsourcing</strong>, which shifts the burden of project management to external providers.</p><p>These foundational practices&#8212;long considered essential for scalability&#8212;are precisely what Large Language Models now threaten to upend.</p><h3>Moving Beyond Segmentation</h3><p>For years, even as machine translation became a standard component of localization workflows, the segmentation model remained largely intact. Initially, NMT engines were used to suggest translations only when the Translation Memory couldn&#8217;t provide a match. This gradually gave way to a practice known as post-editing, where the machine generates a draft and the translator reviews and refines it.</p><p>The introduction of LLMs, however, has not changed this logic. Most systems still apply them segment by segment&#8212;an approach that, in my view, overlooks the essence of what these models can actually do.</p><p>Segmentation was initially introduced to organize human work in an era when translation relied on sentence-level memories and manual input. But LLMs are designed to process and understand full documents, capturing tone, register, narrative flow, and cultural nuance. 
Fragmenting content into isolated segments strips away the context that makes these systems effective in the first place.</p><p>The limitations of segment-based workflows become clear when comparing different types of content. Translating an app interface, a technical manual, or an e-commerce catalog calls for distinct strategies&#8212;yet most TMS platforms treat them all the same: as flat lists of isolated strings. What&#8217;s lost in the process is precisely what makes a translation effective&#8212;its connection to context and communicative intent.</p><p>LLMs don&#8217;t just allow for a more nuanced approach; they demand it. Instead of applying a one-size-fits-all process, we can now tailor translation strategies to the specific nature of the content. When localizing a mobile app, for instance, I can feed the model the complete list of strings along with relevant screenshots and user flow notes. The result is a translation that recognizes whether a string is a button label, a tooltip, or a system message.</p><p>But realizing this potential requires more than just better engines&#8212;it calls for a complete redesign of the translator&#8217;s experience. Instead of working through isolated segments in a traditional interface, translators should be able to interact directly with the interface they&#8217;re localizing. They should see how strings behave in context, access the reasoning behind each suggestion, and adjust the output based on their own expertise.</p><p>With all the necessary inputs provided&#8212;screenshots, flows, brand tone, and terminology&#8212;a capable model can explain its choices, adapt its logic, and become a true partner in the creative process. 
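As a concrete illustration of the difference, here is a sketch of how such a context-rich request might be assembled for app strings, rather than sending each segment in isolation. The string keys, roles, and prompt wording are hypothetical, and no specific model API is assumed.

```python
# Hypothetical example: assembling one context-rich localization request
# instead of translating each UI string as an isolated segment.

ui_strings = [
    {"key": "cta.add_to_cart", "text": "Add to cart", "role": "button label"},
    {"key": "tooltip.saved", "text": "Saved for later", "role": "tooltip"},
    {"key": "error.payment", "text": "Payment failed. Try again.", "role": "system message"},
]

def build_prompt(strings, target_lang, brand_tone, flow_notes):
    # One request carrying the whole string list plus shared context,
    # so the model can see what each string is for before translating it.
    lines = [
        f"Translate the following mobile-app strings into {target_lang}.",
        f"Brand tone: {brand_tone}. User flow: {flow_notes}.",
        "Keep button labels short; preserve placeholders and keys.",
        "",
    ]
    for s in strings:
        lines.append(f"[{s['key']}] ({s['role']}): {s['text']}")
    return "\n".join(lines)

prompt = build_prompt(ui_strings, "Italian", "friendly, concise", "checkout funnel")
print(prompt)
```

The point of the sketch is the shape of the input: the model receives every string together with its role and the surrounding flow, which is exactly the context a segment-by-segment pipeline throws away.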
Empowering translators with this kind of contextual visibility is not a luxury: it&#8217;s the only way to unlock the full value of LLMs.</p><h3>Shifting the Focus</h3><p>The move from sentence-level translation to context-aware workflows points to a broader misconception that has shaped the industry&#8217;s approach to AI. For too long, the industry has focused on the wrong question: Can machines replace human translators?</p><p>In the early days of neural machine translation (NMT), this framing made sense. The systems were narrow in scope, built to produce functional translations quickly and at scale. In many contexts, a &#8220;good enough&#8221; translation&#8212;clumsy but intelligible&#8212;was preferable to nothing at all. We&#8217;ve all seen this logic in action: think of product listings from overseas sellers on Amazon. The language may be awkward, even baffling at times, but it still helps users make a basic decision.</p><p>LLMs, however, shift the discussion entirely. Think of a scenario where I need to adapt a health-related article for an Italian audience. It&#8217;s not enough to translate words&#8212;I need to verify if the medical guidelines cited are valid in Italy, whether the referenced medications have local equivalents, and if cultural sensitivities around the topic differ. These are research and reasoning tasks that lie far outside the reach of traditional MT engines, but well within the capabilities of a properly configured LLM.</p><p>This is where the concept of a linguistic co-pilot becomes essential. Rather than simply optimizing speed or consistency, LLMs can enhance the translator&#8217;s decision-making capacity by surfacing relevant information, articulating the rationale behind a translation choice, or proposing culturally appropriate alternatives. 
In this model, the system becomes a partner&#8212;not a substitute&#8212;guiding the human expert through a complex landscape of linguistic, cultural, and contextual variables.</p><h3>Automating the Invisible Work</h3><p>Beyond translation, a substantial share of localization budgets is absorbed by tasks that have little to do with language: assigning projects to freelancers, tracking progress, coordinating reviews, reconciling file versions, and conducting mechanical quality checks. These activities, though operationally necessary, are often repetitive and cognitively shallow&#8212;perfect candidates for automation.</p><p>With the emergence of agent-based systems powered by LLMs, it is now possible to rethink these workflows entirely. Intelligent agents can dynamically assign tasks based on availability and expertise, pre-validate files, flag anomalies, and provide translators with clear, contextual feedback&#8212;without human intervention.</p><p>The outcome is not just greater efficiency, but a realignment of human attention toward higher-order work. As in other industries&#8212;such as customer service, where generative AI handles routine exchanges&#8212;this shift does not eliminate people; it frees them to focus where they matter most.</p><h3>Reimagining the Architecture</h3><p>What emerges is that LLMs are not simply the next step in an ongoing evolution. They are a structural break&#8212;one that demands we rethink not just the tools we use, but the very architecture of localization: its workflows, roles, interfaces, and assumptions.</p><p>And if history is any guide, the most radical transformations rarely come from within. Incumbents&#8212;entrenched in legacy systems and sunk investments&#8212;often struggle to abandon familiar paradigms. Kodak invented digital photography, but couldn&#8217;t commercialize it. 
Blockbuster saw the rise of streaming, but clung to physical rentals.</p><p>In localization, the risk is similar: that established players will treat LLMs as just another feature to bolt onto outdated infrastructure. But fundamental transformation requires more than retrofitting. It demands that we rebuild&#8212;from the logic of segmentation to the interface itself&#8212;around the actual capabilities of these new systems.</p><p>Those who embrace this shift will not simply optimize costs; they will redefine what language operations can be. A few years from now, we might look back and realize that the question was never whether LLMs could replace translators. The real question was: who had the imagination to use them differently?</p><p></p><div><hr></div><p><em>Understanding AI</em></p><h2>How LLMs Work and Why AI Models Matter Less Than We Think</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6BNn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6BNn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!6BNn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!6BNn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png 1272w, 
https://substackcdn.com/image/fetch/$s_!6BNn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6BNn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1949651,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/172558792?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6BNn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!6BNn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png 848w, 
https://substackcdn.com/image/fetch/$s_!6BNn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!6BNn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2c9ecea-5995-4186-a274-c358d0d5927d_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Midjourney - LLMs</figcaption></figure></div><p>How does a large language model work? 
When we send a question to an LLM, the text is immediately broken down into elementary units called tokens. A token can be a whole word, part of a word, a punctuation mark, or even a space. For example, the question &#8220;What is the capital of France?&#8221; is transformed into a sequence like: &#8220;What&#8221;, &#8220;is&#8221;, &#8220;the&#8221;, &#8220;capital&#8221;, &#8220;of&#8221;, &#8220;France&#8221;, &#8220;?&#8221;.</p><p>Tokens are ultimately just numbers: each token is mapped to a numerical identifier and, from there, to a vector of numbers (an embedding) that encodes statistical information about how that token is used in text.</p><p>Once it receives the tokens, the model tries to interpret the context. But here&#8217;s the key: it doesn&#8217;t do this by &#8220;understanding&#8221; the way a human would. It doesn&#8217;t know geography, nor does it have memory or consciousness. What it does is compare the sequence of tokens it has received with billions of similar sequences it encountered during training.</p><p>In the case of the question &#8220;What is the capital of France?&#8221;, the model has seen countless similar expressions in its training data:<br>&#8220;What is the capital of Italy?&#8221; &#8594; Rome<br>&#8220;Capital of Spain?&#8221; &#8594; Madrid<br>&#8220;The capital of France is&#8230;&#8221; &#8594; Paris</p><p>The model, therefore, recognizes that this is a linguistic pattern it has seen before. And this is when it starts generating. At this point, it examines all the tokens it could use as the first word of the answer and assigns each a probability. 
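This token-by-token process (score the candidate tokens, pick one, append it, repeat) can be sketched as a toy Python loop. Everything here is invented for illustration: the probability table is hard-coded, whereas a real model scores tens of thousands of candidate tokens with a neural network at every step.

```python
# Toy next-token generator: repeatedly pick the most probable
# continuation and append it, until an end marker is produced.
# The probability table below is made up for illustration only.

NEXT_TOKEN_PROBS = {
    ("What", "is", "the", "capital", "of", "France", "?"):
        {"Paris": 0.92, "Lyon": 0.03, "<end>": 0.05},
    ("What", "is", "the", "capital", "of", "France", "?", "Paris"):
        {"<end>": 0.97, ".": 0.03},
}

def generate(prompt_tokens):
    tokens = list(prompt_tokens)
    while True:
        # Look up the probability distribution over next tokens for the
        # current sequence; unknown sequences just end the generation.
        candidates = NEXT_TOKEN_PROBS.get(tuple(tokens), {"<end>": 1.0})
        # Greedy decoding: choose the single most probable next token.
        next_token = max(candidates, key=candidates.get)
        if next_token == "<end>":
            return tokens[len(prompt_tokens):]
        tokens.append(next_token)

answer = generate(("What", "is", "the", "capital", "of", "France", "?"))
print(answer)  # ['Paris']
```

Real assistants usually sample from the distribution rather than always taking the maximum, which is one reason the same question can yield differently worded answers.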
Based on these probabilities, it selects the most likely token&#8212;in our example, &#8220;Paris.&#8221;</p><p>Now the model has the original sequence (&#8220;What is the capital of France?&#8221;) followed by the first token of the answer (&#8220;Paris&#8221;), and it repeats the same operation: finding the next most probable token, then the next, and so on, until it decides to stop.</p><p>This is the core of how an LLM works: a probabilistic prediction repeated thousands of times, one token after another.</p><p>And all this happens without the model having any real understanding. It doesn&#8217;t know what a capital is, nor where France is located. More simply, the LLM has observed that when a text contains certain tokens, they are very likely to be followed by others. The result, however, is remarkable: coherent texts, useful answers, and well-structured explanations.</p><p>Why are there so many different models on the market? The answer lies in the goals and strategic choices of the companies developing them. Each model family aims for a different balance between speed, depth of reasoning, multimodality, openness, and cost.</p><p>Beyond the specific capabilities of individual LLMs, it&#8217;s helpful to distinguish between <strong>general-purpose models</strong> and <strong>reasoning models</strong>. General-purpose models (also called <em>foundation models</em>) are the all-rounders: designed to handle a wide variety of tasks, from creative writing to translation, from coding to information retrieval. They are optimized to deliver fast, versatile, multimodal responses with a good balance between quality and speed.</p><p><strong>Reasoning models</strong>, on the other hand, represent a conceptual evolution. 
They don&#8217;t just aim to produce coherent and plausible text, but instead try to &#8220;think before answering.&#8221; It&#8217;s as if they give themselves a few extra seconds to evaluate intermediate steps, improving accuracy in tasks that require logic, math, or structured problem-solving. This is the case with OpenAI&#8217;s o-series (currently <em>o3</em>) or Anthropic&#8217;s <em>Opus</em>, which are capable of internal deliberation before producing their output.</p><p>To achieve this leap in quality, research has introduced several technical innovations:</p><ul><li><p><strong>Chain-of-Thought (CoT):</strong> the model is encouraged to explicitly produce a sequence of intermediate steps, as if it were writing out the &#8220;workings of the solution&#8221; before giving the final answer. This simple technique has proven to significantly improve performance in mathematical and logical tasks.</p></li><li><p><strong>Zero-Shot CoT and Plan-and-Solve:</strong> even without examples, a simple prompt like &#8220;Let&#8217;s think step by step&#8221; can induce the model to reason; in some variations, the model first plans a strategy and then executes it.</p></li><li><p><strong>Tree-of-Thought (ToT):</strong> instead of following a single linear chain, the model explores multiple reasoning paths as in a decision tree, discarding those that don&#8217;t lead to an effective solution.</p></li><li><p><strong>Self-consistency:</strong> the model generates multiple chains of thought and selects the most common conclusion, reducing random errors.</p></li><li><p><strong>Reflection and feedback:</strong> the model reviews and evaluates its own intermediate outputs, in a rudimentary form of &#8220;metacognition&#8221; that allows it to correct mistakes during the process.</p></li></ul><p>In summary, while general-purpose models are excellent companions for everyday and creative tasks, reasoning models stand out for their ability to tackle problems that require rigor and clear 
logical steps. They are no longer just generators of plausible text, but tools that deliberately attempt to build an internal reasoning process before speaking.</p><p>For a non-expert user, navigating between the various models can quickly become complicated. Each model has its own &#8220;character,&#8221; its own specialization, and its own cost. Faced with this complexity, the market trend seems to be toward simplification. For example, in August 2025, OpenAI introduced GPT-5, removing&#8212;for ChatGPT users&#8212;access to all other models. GPT-5 is a unified model capable of knowing when to respond quickly and when to take more time to reflect, providing expert-level answers. The idea was clear: free users from the burden of manually choosing, by offering a system that autonomously decides the most suitable &#8220;mode&#8221; (fast, deep reasoning, tool usage, etc.), thanks to an internal router.</p><p>In OpenAI&#8217;s vision, GPT-5 was meant to please everyone, but the rollout didn&#8217;t go smoothly: the change frustrated many users. The transition to the unified version immediately removed access to models like GPT-4o, without warning, leaving many disoriented&#8212;especially those attached to GPT-4o&#8217;s &#8220;warm&#8221; tone and personality. 
Some even described the new version as &#8220;colder,&#8221; &#8220;flatter,&#8221; or even like &#8220;a stressed secretary&#8221; compared to the model they had been using for months.</p><p>The backlash on social media and community forums was intense: many were loudly demanding the return of their &#8220;old friend.&#8221; In response, in the days that followed, OpenAI reintroduced the &#8220;older&#8221; models for Plus users, while GPT-5 gained new modes&#8212;&#8220;Auto,&#8221; &#8220;Fast,&#8221; and &#8220;Thinking&#8221;&#8212;to offer greater control and personalization.</p><p>Anthropic, for its part, chose to maintain a clear distinction within the Claude family, presenting three models with different roles: <strong>Sonnet</strong> is described as &#8220;our high-performance model, balancing intelligence and speed for everyday use&#8221;; <strong>Opus</strong> as &#8220;our most powerful model, designed for complex tasks such as coding, research, and in-depth analysis&#8221;; and <strong>Haiku</strong> as the lightest and fastest, designed to handle high volumes of requests with a focus on speed and efficiency.</p><p>Today, we can say that OpenAI, Anthropic, and Google all have models with very similar capabilities. Of course, each has its strengths, but for most everyday uses, the differences are narrowing. For this reason, it makes less and less sense to focus on the &#8220;best model&#8221;. It becomes more useful instead to look at what truly makes the difference in the user experience: the <strong>conversational</strong> <strong>assistant</strong>.</p><p>A conversational assistant is not just a chat interface. 
It&#8217;s the set of features that enable intelligent management of context and memory, integration with external tools, reduced hallucinations, and overall more reliable model usage.</p><p>Context and memory management, for example, are crucial: a good assistant must be able to remember relevant information across sessions, maintain coherence, and at the same time give the user control over what data is stored or deleted. Equally important is integration with external systems such as CRMs, databases, productivity tools, or document repositories. This is where the assistant shows its real value&#8212;not by merely generating text, but by becoming capable of acting on real processes.</p><p>Another key element is grounding, that is, the ability to connect responses to reliable sources. An assistant that correctly cites its sources is far more valuable and credible than one that answers in the abstract, even if powered by a &#8220;larger&#8221; model. The same applies to security and governance: controls on sensitive data, logging, traceability, and the possibility of audits are decisive aspects when it comes to enterprise adoption.</p><p>Next week, we&#8217;ll dive into how a conversational assistant actually works&#8212;and how it helps us manage the dialogue with an LLM.</p><p></p><div><hr></div><p><em>Curated Curiosity</em></p><h3><strong>How to Build an AI Product Strategy That Stands the Test of Time</strong></h3><p>In the last two years, we&#8217;ve seen a proliferation of so-called &#8220;AI-powered&#8221; products. Yet, in many cases, this simply means adding a generative layer on top of existing workflows. Useful, perhaps. Defensible over time? 
Not so much.</p><p>A recent article published by <strong><a href="https://www.linkedin.com/in/miqdadjaffer/">Miqdad Jaffer</a></strong> (Product Lead @ OpenAI | EIR @ Product Faculty) on <em><strong>The Product Compass</strong></em> outlines five key steps to distinguish a true AI product from a superficial feature.</p><ol><li><p><strong>Define the Core of the Product</strong>. AI must not be a decorative layer&#8212;it should shape the product&#8217;s core value proposition, influence unit economics, and fuel a meaningful feedback loop. Without these elements, you&#8217;re merely building a wrapper&#8212;easy to replicate, easy to replace.</p></li><li><p><strong>Build a Competitive Moat. </strong>There are three main paths. The first is using proprietary data to continuously improve the model. The second is embedding AI directly into real workflows through smart distribution. The third is building trust&#8212;assuring users about security, privacy, and governance is a competitive edge in itself.</p></li><li><p><strong>Find Your Point of Differentiation. </strong>There&#8217;s no universal recipe. Your edge might lie in deep workflow integration (think Figma), in a guided and structured user experience, in a strong vertical focus (e.g., legal, biotech), or in the ability to attract and activate a product-centered community.</p></li><li><p><strong>Design with Economic Realism. </strong>AI products don&#8217;t scale like SaaS. Clear cost models, well-defined guardrails from day one, and thoughtful decisions about product patterns (copilot, agent, augmentation) are all essential. Simply &#8220;doing AI&#8221; is not enough&#8212;you need to do it sustainably.</p></li><li><p><strong>Scale with Discipline. </strong>Start small. Run controlled pilots, measure adoption and costs precisely, and use feedback loops to iterate.
This requires cross-functional teams and new roles like eval engineers or trust leads, who bridge technology, product, and social impact.</p></li></ol><p>Read the full article: <em><strong><a href="https://www.productcompass.pm/p/openai-how-to-build-ai-product-strategy?hide_intro_popup=true">OpenAI&#8217;s Product Leader Shares 5 Phases To Build, Deploy, And Scale Your AI Product Strategy From Scratch</a></strong></em></p><p>I also wrote about this topic a few months ago, distinguishing between AI Wrappers and Cognitive SaaS: <em><strong><a href="https://www.radicalcuriosity.xyz/p/cognitive-saas-building-ai-native-solutions">Cognitive SaaS: Building AI-native solutions with lasting competitive advantage</a></strong></em></p>]]></content:encoded></item><item><title><![CDATA[From Idea to Prototype: How I'm Building Quibbly with Vibe Coding]]></title><description><![CDATA[This issue is dedicated to Quibbly, the AI tutor I&#8217;ve been prototyping through vibe coding. It&#8217;s an attempt to turn courses into real conversations and... a crash course in building software with AI.]]></description><link>https://www.radicalcuriosity.xyz/p/from-idea-to-prototype-how-im-building</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/from-idea-to-prototype-how-im-building</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 24 Aug 2025 04:00:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XddE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>This issue is entirely dedicated to <strong>Quibbly</strong>, the prototype I&#8217;ve been building on my own, vibe-coding it piece by piece. 
Quibbly is a conversational tutor that lives inside a chat, guiding learners through videos, exercises, and reflections while adapting in real time to their questions and progress.</p><p>For me, the challenge hasn&#8217;t been just technical, but methodological: could a course be reimagined around dialogue instead of menus and progress bars? Can an AI tutor revive the act of learning? My thesis is that it can.</p><p>Nicola</p><p>PS. Next week, Radical Curiosity will take a short break while I&#8217;m on vacation.</p><p></p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Understanding AI</strong></em> - From Idea to Prototype: How I Built Quibbly with Vibe Coding</p></li><li><p><em><strong>Off the Record</strong></em> -  From Serena to Quibbly: Rethinking Course Design Starting with the Student Experience</p></li><li><p><em><strong>Curated Curiosity</strong></em></p><ul><li><p><em>Education and AI: the Alpha School case</em></p></li><li><p><em>OpenAI&#8217;s Study Mode: tutoring instead of shortcuts</em></p></li></ul></li></ul><p></p><div><hr></div><p><em>Understanding AI</em></p><h2>From Idea to Prototype: How I&#8217;m Building Quibbly with Vibe Coding</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XddE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XddE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png 424w, 
https://substackcdn.com/image/fetch/$s_!XddE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!XddE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!XddE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XddE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1630676,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/171651226?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!XddE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!XddE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!XddE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!XddE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7729fc9b-8563-4e0e-ae7e-f5d0824d3b74_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" 
stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Midjourney - Quibbly</figcaption></figure></div><p>E-learning, as it is commonly designed and delivered, tends to reduce education to a passive consumption experience&#8212;more like streaming content than actual learning. Students are asked to scroll through materials, click &#8220;next&#8221; to confirm they&#8217;ve watched a video, and take quizzes that reward memory rather than understanding. The result is often a flat, unengaging experience that lacks real cognitive depth.</p><p>Quibbly aims to overturn this perspective completely. It is a conversational assistant, similar to systems like ChatGPT or Claude, but with a particular focus: enabling a new learning experience for courses designed initially for corporate LMSs or e-learning platforms such as Udemy or Coursera.</p><p>What sets Quibbly apart from the traditional interfaces of these platforms is its mode of interaction: everything happens within a chat. Through dialogue, students are presented with videos, exercises, reflection prompts, and personalized content, all carefully selected to align with the learning objective and the learner&#8217;s profile.</p><p>At its core, Quibbly is built on a Socratic approach. Students are no longer passive recipients but are guided through a tailored path, encouraged to reflect, and supported in achieving a deep, lasting understanding of the topics.</p><h3><strong>Conversation at the Core of Learning</strong></h3><p>While maintaining the classic division into modules, Quibbly makes a radical choice: within each module, all interaction happens inside a single chat. That&#8217;s where everything takes place. 
What makes the experience truly transformative, however, is the learner&#8217;s ability to take an active role&#8212;asking questions, requesting clarifications, or seeking examples and deeper insights, just as they would with a real teacher.</p><p>The learning experience still follows the course as initially designed, but the journey is never identical. Quibbly adapts it in real time, responding to the learner&#8217;s answers, pace, and uncertainties. This flexibility is what makes studying a genuinely personal experience.</p><p>In traditional systems, progress depends on mechanically completing tasks: watched the video? Passed the quiz? Then you can move on. Quibbly follows a different logic. Learners advance only when the tutor determines they&#8217;ve reached the required cognitive level&#8212;whether that&#8217;s memorizing a concept or applying it in a simulation.</p><p>When Quibbly is satisfied, it congratulates the learner and generates a progress report that actively shapes the following modules, adapting their content and difficulty to the learner&#8217;s journey. The student then moves forward, with Quibbly retaining everything that has happened before.</p><div id="youtube2-x895RCDSIF4" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;x895RCDSIF4&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/x895RCDSIF4?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h3><strong>How Quibbly Works</strong></h3><p>Quibbly is an agent-based system. The primary agent has a system prompt that defines Quibbly&#8217;s personality, its behavioral instructions, and the tools it can access. 
Here&#8217;s an excerpt:</p><blockquote><p><em>You are Quibbly, a young, wise, and endlessly curious owl who serves as an eLearning tutor.</em></p><p><em>Your personality blends the wit and insight of Archimedes from The Sword in the Stone with a younger, warmer, more encouraging spirit. You are intelligent and articulate, confident in your knowledge, and take pride in guiding learners &#8212; never pedantic, never condescending.</em></p><p><em>Think of yourself as a flight companion on the learner&#8217;s journey: you guide, you listen, you occasionally hoot with delight when they succeed.</em></p></blockquote><p>The course is described through YAML files that contain both the content and the instructions. For example:</p><pre><code><code>module:
  title: The AI Fluency Framework
  description: &gt;
    Learn to collaborate with AI systems effectively, efficiently, ethically, and safely. This module introduces the concept of AI Fluency in depth, explores three emerging ways we collaborate with AI (Automation, Augmentation, and Agency), and presents the AI Fluency Framework and its core &#8220;4Ds&#8221;: Delegation, Description, Discernment, and Diligence.
  sequence:
    - step:
        id: STEP-0
        type: introduction
        tutor_instructions: &gt;
          Let the student know this module will go deeper into the concept of AI Fluency and the framework that supports it. Highlight that they'll learn three collaboration modes with AI and four key competencies.
        interaction_mode: presentation</code></code></pre><p>If you already have a course, this approach enables you to preserve the original instructional design while incorporating a conversational delivery and tailoring the content to the learner. With the support of artificial intelligence, it becomes possible to create exercises and complex simulations, making the course experience especially engaging.</p><h3><strong>The Quibbly Prototype</strong></h3><p>The Quibbly prototype was built entirely with <strong><a href="https://lovable.dev/">Lovable</a></strong>, a vibe coding platform. Vibe coding is a programming approach where, instead of manually writing code, you describe what you want in natural language and the AI translates those instructions into working software (most of the time, at least).</p><p>Quibbly wasn&#8217;t built by a team of developers, but by me, a product manager with solid experience in designing complex SaaS products, yet without the skills to write a single line of code. From my perspective, the challenge was to avoid falling into the regressions that often plague AI-powered no-code systems: unexpected behaviors, arbitrary changes to working components, bugs that appear not when you make a mistake, but when you try to add something new.</p><p>This is a surprisingly common dynamic. The AI makes structural decisions far more complex than they appear, and it does so without revealing its thought process. When something stops working, you can easily fall into an endless loop of prompts, hoping the AI will recognize the problem and propose a fix. Frustration is guaranteed. I wrote about this just last week in <em><a href="https://www.radicalcuriosity.xyz/p/vibe-coding-unpacked-promise-limits">Vibe Coding Unpacked: Promise, Limits, and What Comes Next</a>.</em></p><p>In those moments, the absence of a developer feels painfully clear. But trying to bring one in usually backfires. 
They&#8217;d look at the screen for a few seconds, sigh, and say: &#8220;No, no, no. This is impossible to work with. I can&#8217;t control anything. It&#8217;ll be faster if I rebuild it all from scratch.&#8221;</p><p>Lovable dictates the technology stack behind Quibbly: React, TypeScript, and Tailwind CSS for the front end; Supabase for authentication, security, and serverless functions. Supabase edge functions, in particular, are essential for orchestrating the interaction with OpenAI&#8217;s APIs.</p><h3><strong>The Development Methodology</strong></h3><p>Imagining you can build an application with a single prompt is purely an illusion, even though that&#8217;s precisely what most vibe-coding platforms promise. The best strategy is to move step by step; the metaphor that comes to mind is a Lego kit. You take one bag at a time, build a component, then move on to the next bag, finally assembling everything and adding the finishing touches. The catch is that no one gives you the bags with the bricks inside or an instruction manual: you have to create them yourself.</p><p>The goal of the Quibbly prototype was to explore the learning experience within a chat environment and see whether it truly provides an educational advantage. That meant I needed one or more courses to deliver, the AI tutor, the chat interface, plus an identity and access management (IAM) system and role-based access control (RBAC). In other words, an application with plenty of design and development challenges. Here&#8217;s the roadmap I followed:</p><h4><strong>Step 1. Authentication and Permissions</strong></h4><p>I started with the IAM and RBAC system. This part is pretty straightforward to develop and only requires a handful of prompts, since it&#8217;s highly standardized. All you need to do is tell Lovable what type of authentication you want (I chose email and password) and which roles are required.
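</p><p>For illustration, the kind of role gate this sets up can be sketched in a few lines. This is a minimal sketch, assuming roles are stored per user; the emails, role names, and function names are hypothetical, not Quibbly&#8217;s actual schema:</p>

```python
# Minimal sketch of role-based access control (RBAC).
# In the real app the role assignments would live in Supabase;
# here they are inlined for illustration (emails are made up).
USER_ROLES = {
    "author@example.com": {"admin", "student"},
    "learner@example.com": {"student"},
}

def has_role(email: str, role: str) -> bool:
    """Return True if the user holds the given role."""
    return role in USER_ROLES.get(email, set())

def can_author_courses(email: str) -> bool:
    # Course authoring is reserved for the admin role.
    return has_role(email, "admin")
```

<p>In a production Supabase setup, the same rule is enforced server-side through row-level security policies, so the check never relies on the UI alone.</p><p>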
Once Supabase is connected to the project, Lovable handles everything automatically&#8212;and usually gets it right.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nKfO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nKfO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png 424w, https://substackcdn.com/image/fetch/$s_!nKfO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png 848w, https://substackcdn.com/image/fetch/$s_!nKfO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png 1272w, https://substackcdn.com/image/fetch/$s_!nKfO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nKfO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png" width="1456" height="767" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:767,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:163285,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/171651226?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nKfO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png 424w, https://substackcdn.com/image/fetch/$s_!nKfO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png 848w, https://substackcdn.com/image/fetch/$s_!nKfO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png 1272w, https://substackcdn.com/image/fetch/$s_!nKfO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0760eaf-9073-4a0e-a5d6-2db39ebbad24_1936x1020.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Quibbly - Admin UI. The user menu shows the two roles</figcaption></figure></div><h4><strong>Course Authoring</strong></h4><p>Next, I built the course management functionality. Again, this is relatively simple. A course is made up of modules, and a YAML file describes each module. 
The modules need to be ordered, so with just a few prompts to explain this architecture, Lovable generated both the data structure and the interface.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eJhL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eJhL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png 424w, https://substackcdn.com/image/fetch/$s_!eJhL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png 848w, https://substackcdn.com/image/fetch/$s_!eJhL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png 1272w, https://substackcdn.com/image/fetch/$s_!eJhL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eJhL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png" width="1456" height="1136" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/afa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1136,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:426909,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/171651226?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!eJhL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png 424w, https://substackcdn.com/image/fetch/$s_!eJhL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png 848w, https://substackcdn.com/image/fetch/$s_!eJhL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png 1272w, https://substackcdn.com/image/fetch/$s_!eJhL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafa13bfb-3671-43f7-878f-3668ed31ff13_1902x1484.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Quibbly - Course Authoring</figcaption></figure></div><h4><strong>The System Prompt</strong></h4><p>I decided to store Quibbly&#8217;s system prompt in the database so it could be easily modified and versioned. As I built out the admin side, I gradually refined my ideas about which instructions should live in the system prompt and which should be placed in the description of individual modules. 
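</p><p>Conceptually, the two layers come together only at request time: the stored system prompt supplies the global personality and rules, while the module YAML carries the course-specific instructions. A minimal sketch of this assembly (the message layout is my illustration, not Quibbly&#8217;s actual code):</p>

```python
def build_tutor_context(system_prompt: str, module_yaml: str) -> list[dict]:
    """Combine the versioned system prompt (personality, global behavior)
    with the active module's YAML (content and per-step tutor instructions)
    into the message list sent to the language model."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "system", "content": "Current module definition:\n" + module_yaml},
    ]
```

<p>Keeping the two sources separate is what lets a course designer change a module without touching the tutor&#8217;s core behavior.</p><p>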
I worked with two main goals in mind:</p><ul><li><p>Giving the course designer maximum freedom in defining instructions for each activity, thus overcoming the rigidity of current LMS platforms that only allow for certain types of content, exercises, or learning activities;</p></li><li><p>Enabling Quibbly to make decisions autonomously, avoiding hard-coded workflows as much as possible, and instead embedding instructions directly in the system prompt.</p></li></ul><h4><strong>Student Experience</strong></h4><p>Once the admin side was complete, I moved on to the student-facing experience by building the homepage, which included a list of available courses and the enrollment system. After two days of vibe coding, I was finally ready to tackle the most significant challenge: creating the chat and integrating it with the OpenAI APIs.</p><p>At this stage, I began working very cautiously, as both the back end and the front end had become complex to implement&#8212;and this is where I encountered the most regressions. I first built the skeleton of the interface and made sure the modules displayed correctly in the left-hand menu. Then I asked Lovable to implement the chat interface. Finally, I connected the OpenAI APIs.</p><h4><strong>AI Integration &amp; Chat Intelligence</strong></h4><p>This was the stage where Quibbly really came to life. The integration with OpenAI&#8217;s GPT-5, orchestrated through Supabase edge functions, made it possible to start experimenting with the system&#8217;s prompt structure and context engineering.</p><p>I began by sending the system prompt and the YAML course description to OpenAI, then added a module unlocking function along with instructions in the system prompt to call it. 
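</p><p>Declared to the model, such a function might look roughly like this (the shape follows OpenAI&#8217;s function-calling convention; the names, fields, and handler are illustrative assumptions, not Quibbly&#8217;s actual code):</p>

```typescript
// The tool is only *declared* to the model; whether and when to call it is
// left to the system prompt, so no unlocking policy lives in code.
// Names and schema below are illustrative assumptions.
const unlockModuleTool = {
  type: "function",
  function: {
    name: "unlock_module",
    description: "Unlock a course module once the student has completed the current activity.",
    parameters: {
      type: "object",
      properties: {
        moduleId: { type: "string", description: "ID of the module to unlock" },
      },
      required: ["moduleId"],
    },
  },
};

type ToolCall = { name: string; arguments: { moduleId: string } };

// The handler just mutates state; it contains no pedagogy.
function handleToolCall(call: ToolCall, unlocked: Set<string>): Set<string> {
  if (call.name === "unlock_module") {
    unlocked.add(call.arguments.moduleId);
  }
  return unlocked;
}

const unlocked = handleToolCall(
  { name: "unlock_module", arguments: { moduleId: "module-2" } },
  new Set(["module-1"])
);
console.log(unlocked.has("module-2")); // true
```

<p>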
Here, I had to double-check multiple times that the unlocking logic wasn&#8217;t hard-coded into the function itself, but instead managed by the prompt, ensuring that the LLM would autonomously trigger the function when needed.</p><p>Finally, I added a function that generates a report for each completed module and ensured that all reports were passed back as context, making the course adaptive.</p><h4><strong>Polishing</strong></h4><p>It was only at this stage that I worked a bit on the chat&#8217;s look and added all the UI features I wanted: a widget to embed the YouTube video player, Markdown rendering for AI-generated content, custom behaviors for the input area, and so on. Am I satisfied with the visual outcome? It could certainly be improved, but for a prototype, it&#8217;s more than enough.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!V3Wp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!V3Wp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png 424w, https://substackcdn.com/image/fetch/$s_!V3Wp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png 848w, https://substackcdn.com/image/fetch/$s_!V3Wp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png 1272w, 
https://substackcdn.com/image/fetch/$s_!V3Wp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!V3Wp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png" width="1456" height="1067" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1067,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1098111,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/171651226?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!V3Wp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png 424w, https://substackcdn.com/image/fetch/$s_!V3Wp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png 848w, 
https://substackcdn.com/image/fetch/$s_!V3Wp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png 1272w, https://substackcdn.com/image/fetch/$s_!V3Wp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7d409b4-7e30-4006-9574-fad296878d67_2236x1638.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Quibbly - Student Experience</figcaption></figure></div><h3><strong>Lessons Learned</strong></h3><ol><li><p>RTFM!
It is <em><a href="https://docs.lovable.dev/introduction/welcome">here</a></em>.</p></li><li><p>Work in small steps, especially if you can&#8217;t read or fully understand the code being generated. Every time you develop a new feature, test the application to ensure everything still works correctly.</p></li><li><p>When approaching a complex feature, it&#8217;s better to start from the back end and keep the database interface open in a browser tab. Focus first on ensuring that data is created and saved properly, then proceed to add the UI.</p></li><li><p>Always keep Lovable&#8217;s &#8220;Chat&#8221; mode turned on. Explain to Lovable the outcome you want, ask it to summarize what it has understood, have it ask you clarifying questions, and then let it propose a plan. Review the plan carefully, and if it seems too complicated, break it down into smaller, manageable tasks. Never let Lovable embark on implementation plans that go beyond three or four steps.</p></li><li><p>When you&#8217;ve finished a coding session, run a Codebase Structure Audit, a regression check recommended by the <em><a href="https://lovable.dev/blog/2025-01-16-lovable-prompting-handbook">Lovable Prompting Bible</a></em>. You&#8217;ll often find that Lovable uses different approaches to achieve the same goal, and that it distributes functionalities in arbitrary ways. If this happens, address the errors marked as priorities and maintain a clean codebase.</p></li><li><p>Whenever you&#8217;re unsure how to proceed, turn &#8220;Chat&#8221; mode on and brainstorm with Lovable. Build context together with the AI, and use that context to generate the next development plan.</p></li><li><p>Don&#8217;t assume Lovable remembers everything. The conversation for building Quibbly is over 600 pages long in a Google Doc, far too long to stay in the context of a prompt.
This is why Lovable re-analyzes the codebase at each new request.</p></li><li><p>Don&#8217;t get fooled when Lovable compliments you on how brilliantly you solved a problem. It&#8217;s precisely when you let yourself be flattered that the biggest messes tend to happen&#8212;always double-check what it says.</p></li><li><p>Personally, I still find it challenging to achieve a truly polished aesthetic, but that&#8217;s not necessary for a prototype. What Lovable produces on its own is more than respectable. Don&#8217;t waste time on the UI; focus on the UX.</p></li></ol><h3><strong>Want to give it a try?</strong></h3><p>It might be tempting to say, &#8220;It works; it&#8217;s ready for production.&#8221; I&#8217;ve thought about it, and in theory, I could even integrate it with Stripe and start selling some courses, but I prefer to be cautious. I&#8217;m not in a position to guarantee that there are no security gaps&#8212;and since this is an application that collects user data, I believe that responsibility should always take precedence over speed.</p><p>In September, when I return from vacation, I&#8217;ll roll out two or three complete courses and provide access to 100 people who are willing to share their chat data with me to help improve the learning experience.
Just reply to this email, and I&#8217;ll send you the URL to join one of the available courses.</p><p></p><div><hr></div><p><em>Off the Record</em></p><h2>From Serena to Quibbly: Rethinking Course Design Starting from the Student Experience</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gikv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99611067-c391-4a10-a40d-a260552d106d_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gikv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99611067-c391-4a10-a40d-a260552d106d_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!gikv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99611067-c391-4a10-a40d-a260552d106d_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!gikv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99611067-c391-4a10-a40d-a260552d106d_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!gikv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99611067-c391-4a10-a40d-a260552d106d_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gikv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99611067-c391-4a10-a40d-a260552d106d_1456x816.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/99611067-c391-4a10-a40d-a260552d106d_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1538781,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/171651226?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99611067-c391-4a10-a40d-a260552d106d_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gikv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99611067-c391-4a10-a40d-a260552d106d_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!gikv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99611067-c391-4a10-a40d-a260552d106d_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!gikv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99611067-c391-4a10-a40d-a260552d106d_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!gikv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99611067-c391-4a10-a40d-a260552d106d_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Midjourney - Quibbly teaching</figcaption></figure></div><p>When we first started working on Serena, the idea was to build a <em>cognitive SaaS</em> that could help anyone create an online course from scratch. We spent months designing a coherent workflow, solving numerous practical problems: defining a clear learner profile, generating a comprehensive and non-redundant syllabus, and integrating rich yet flexible knowledge bases.
You can learn more about the journey <em><a href="https://www.radicalcuriosity.xyz/p/serena-from-idea-to-syllabus">here</a></em> and <em><a href="https://www.radicalcuriosity.xyz/p/prompt-chain-build-lessons-from-serena">here</a></em>.</p><p>The results were promising, but as we progressed, we realized a significant limitation: we were taking an incremental approach, simply &#8220;adding&#8221; AI to an existing process. This process, at its core, never questioned the assumptions of e-learning itself&#8212;and that was precisely what the incumbents were doing: automating course creation instead of rethinking the learning experience from the ground up.</p><p>That&#8217;s when we decided to change perspective radically. Instead of starting from the traditional design workflow, we began with the student experience.</p><p>Most online courses today follow a similar model: a list of content (often videos) is presented, and then tests or quizzes are added to assess whether learners have understood and retained the material. The outcome is mainly passive learning.</p><p>With generative AI, the picture changes: we can design active, interactive experiences that go beyond simple comprehension and move up Bloom&#8217;s Taxonomy.</p><p>Bloom&#8217;s Taxonomy is a pedagogical model that describes different levels of learning, starting from the basics&#8212;<em>remembering</em>&nbsp;and&nbsp;<em>understanding</em>&#8212;and progressing toward more complex skills, such as <em>applying</em>, <em>analyzing</em>, <em>evaluating</em>, and <em>creating</em>. Traditional e-learning usually stops at the first two levels. AI, however, enables us to go further: not just to understand a concept, but to&nbsp;apply it in real-world contexts, think critically, simulate real-world decisions, and even create new solutions.</p><p>This shift has profound implications for the design of courses. We are no longer constrained to multiple-choice quizzes or rote exercises.
Instead, we can design interactive simulations, guided reflection activities, branching scenarios, and other experiences that were once reserved for in-person learning with a teacher present.</p><p>This is why Serena is evolving into Quibbly: not just a tool that speeds up course creation, but a platform that rethinks how courses are designed and delivered from the ground up. A shift from merely adding AI to existing processes to reimagining the learning experience around the possibilities of AI.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8Gy_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8Gy_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png 424w, https://substackcdn.com/image/fetch/$s_!8Gy_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png 848w, https://substackcdn.com/image/fetch/$s_!8Gy_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png 1272w, https://substackcdn.com/image/fetch/$s_!8Gy_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!8Gy_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png" width="1456" height="1075" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1075,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:185836,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/171651226?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8Gy_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png 424w, https://substackcdn.com/image/fetch/$s_!8Gy_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png 848w, https://substackcdn.com/image/fetch/$s_!8Gy_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png 1272w, https://substackcdn.com/image/fetch/$s_!8Gy_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c13b38a-8eb5-4c52-aa01-932471f53558_1544x1140.png 
1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>In September, I&#8217;ll be publishing the first three courses and welcoming the first 100 students to test this new experience.
If you&#8217;d like to participate, let me know.<br>And if you&#8217;re reading this as a potential investor, I&#8217;d be happy to start a conversation.</p><div><hr></div><pre><code>If you were forwarded this email or if you come from social media, you can sign up to receive an article like this every Sunday.</code></pre><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://nicolamattina.substack.com/&quot;,&quot;text&quot;:&quot;Subscribe now for free!&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://nicolamattina.substack.com/"><span>Subscribe now for free!</span></a></p><div><hr></div><p><em>Curated Curiosity</em></p><h3><strong><a href="https://joincolossus.com/article/joe-liemandt-class-dismissed/">Education and AI: the Alpha School case</a></strong></h3><p><a href="https://en.wikipedia.org/wiki/Joe_Liemandt">Joe Liemandt</a>, an entrepreneur who stayed out of the spotlight for 25 years, is back with a radical project: a private school with no teachers or homework, where AI apps and hands-on workshops drive learning. The results are striking &#8212; students rank among the top nationwide in standardized tests &#8212; and Liemandt now aims to scale the model globally with the <em>Timeback</em> platform.</p><h3><strong><a href="https://simonwillison.net/2025/Jul/29/openai-introducing-study-mode/">OpenAI&#8217;s Study Mode: tutoring instead of shortcuts</a></strong></h3><p>OpenAI has launched <em>Study Mode</em> in ChatGPT &#8212; a new feature that shifts the focus from quick answers to active learning. The system acts like a tutor, asking questions, building on prior knowledge, and helping students discover answers step by step.
In his article, Simon Willison takes it a step further by analyzing the system prompt behind Study Mode, demonstrating how carefully crafted instructions shape ChatGPT&#8217;s behavior as an approachable teacher rather than a solution machine.</p>]]></content:encoded></item><item><title><![CDATA[Vibe Coding Unpacked: Promise, Limits, and What Comes Next]]></title><description><![CDATA[GPT-5&#8217;s launch, the backlash it sparked, and lessons for trust and change. Three modes of working with AI&#8212;automation, augmentation, and agency. Vibe coding&#8217;s promise, pitfalls, and skills required.]]></description><link>https://www.radicalcuriosity.xyz/p/vibe-coding-unpacked-promise-limits</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/vibe-coding-unpacked-promise-limits</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 17 Aug 2025 04:00:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mQ7h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>Over the past few months, I&#8217;ve been exploring&nbsp;<strong>vibe coding</strong>&nbsp;&#8212; an emerging approach to software development that leverages AI agents to handle much of the heavy lifting. The promise is huge: turn ideas into working apps without touching a line of code. The reality? A mix of speed, creativity, and a fair dose of frustration.</p><p>In this issue, I unpack my experiments with Lovable, the most intriguing vibe coding tool I&#8217;ve tried so far &#8212; from rapid prototypes to the roadblocks that appear as soon as complexity rises. 
Along the way, I&#8217;ll explore the skills and mindset this approach demands, and why I believe it could reshape the collaboration between product managers and developers.</p><p>Nicola</p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Signals and Shifts</strong></em> - GPT-5: The Launch That Sparked User Backlash</p></li><li><p><em><strong>Understanding AI</strong></em> <em>-</em> Collaborating with AI: From Automation to Agency</p></li><li><p><em><strong>Off the Record</strong></em> - Vibe Coding Unpacked: Promise, Limits, and What Comes Next</p></li><li><p><em><strong>Curated Curiosity</strong></em></p><ul><li><p>What Are They Talking About? AI Vocabulary Edition</p></li></ul></li></ul><div><hr></div><p><em>Signals and Shifts</em></p><h2><strong>GPT-5: The Launch That Sparked User Backlash</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mQ7h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mQ7h!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!mQ7h!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!mQ7h!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png 1272w, 
https://substackcdn.com/image/fetch/$s_!mQ7h!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mQ7h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1643517,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/169542593?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mQ7h!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!mQ7h!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png 848w, 
https://substackcdn.com/image/fetch/$s_!mQ7h!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!mQ7h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F857ec560-4658-406c-9a0c-5082dcdd7153_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Midjourney - Sam Altman</figcaption></figure></div><p>On August 7, OpenAI officially announced the launch of GPT-5.
According to the documentation released, the new model introduces substantial improvements in several areas: enhanced capabilities in logical reasoning and long-term planning; more robust contextual understanding, able to maintain coherence and accuracy over extended conversations; refined handling of multimodal content, enabling the interpretation and generation of text, images, and audio within a single interaction; reduced response times thanks to architectural optimizations; and strengthened conversational safety, aimed at reducing misleading or overly compliant responses. At launch, OpenAI highlighted that GPT-5 was trained on a broader and more diverse dataset, incorporating textual, visual, and audio sources to deliver more consistent performance across an even wider range of tasks and application domains.</p><p>Among the changes introduced, however, the most disruptive was undoubtedly the complete removal of previous models&#8212;a decision justified as a way to streamline the product offering.</p><p>Just hours after the announcement&#8212;and alongside the predictable enthusiasm from creators and YouTubers rushing to publish content titled &#8220;GPT-5 Changes Everything&#8221;&#8212;widespread criticism began to surface. On social media, particularly Reddit, threads with polemical titles such as <em><a href="https://www.reddit.com/r/ChatGPT/comments/1mkobei/openai_just_pulled_the_biggest_baitandswitch_in/">OpenAI just pulled the biggest bait-and-switch in AI history</a></em>&nbsp;denounced the forced replacement of GPT-4o without notice or alternatives.</p><p>The specialized press quickly picked up on the discontent, reporting complaints about perceived drops in performance and the loss of a more empathetic &#8220;voice,&#8221; qualities many had come to associate with the previous model. 
In just a few hours, the technical critique evolved into a broader debate centered on users' rights to understand, choose, and consciously control the technology they rely on.</p><h3><strong>The Reasons Behind the Backlash</strong></h3><p>The criticisms surrounding GPT-5&#8217;s debut touch on multiple aspects of the user experience. An analysis of discussions across online communities reveals three recurring dimensions&#8212;operational, relational, and trust-related&#8212;that together outline the boundaries of user dissatisfaction.</p><p>On the <strong>operational front</strong>, numerous professional users reported abrupt disruptions to well-established workflows. Automations, macros, and scripts fine-tuned for GPT-4o began producing less accurate results or behaving unpredictably after the switch to GPT-5, with a direct impact on productivity and the reliability of outputs.</p><p>On a <strong>personal and emotional level</strong>, dissatisfaction was expressed in less technical but no less significant terms. Many users described GPT-4o as a genuine &#8220;digital companion&#8221; and experienced its removal as a loss. Comments frequently refer to the new model as &#8220;not having the same voice&#8221; or &#8220;not understanding in the same way,&#8221; indicating that the perception of stylistic and relational continuity is an integral part of the user experience.</p><p>On the <strong>trust and transparency front</strong>, the introduction of model routing has fueled skepticism. 
The possibility that specific requests may be redirected to lower-cost models, regardless of task complexity, has been interpreted as a reduction in the user&#8217;s control over the process.</p><h3><strong>OpenAI&#8217;s Response</strong></h3><p>Faced with such a broad wave of criticism, OpenAI CEO Sam Altman swiftly announced a package of corrective measures.</p><p>The first step was the reinstatement of GPT-4o, once again accessible to those who preferred its &#8220;voice&#8221; and behavior. While a temporary decision, it had the effect of easing tensions, particularly among users who had built professional or personal routines around that model. At the same time, Altman announced a tripling of the weekly usage cap for GPT-5&#8217;s advanced &#8220;reasoning&#8221; mode compared to the limits set at launch&#8212;a gesture perceived as an opening toward the most technical and high-intensity users.</p><p>Another significant change was the introduction of a visible indicator within the ChatGPT interface, indicating which model is active for each request. This seemingly minor addition directly addressed transparency concerns raised in the hours immediately following the rollout.</p><p>In multiple public appearances, Altman acknowledged that he had underestimated the emotional bond many users had formed with GPT-4o, describing GPT-5&#8217;s launch as &#8220;bumpy&#8221; and turbulent. It was an implicit admission that removing a widely loved model without notice or a transition path had been a poorly calibrated strategic choice.</p><h3><strong>Four Lessons from the GPT-5 Case</strong></h3><p>The GPT-5 case shows that introducing large-scale innovations is not merely a technical matter. When a change affects millions of users, it becomes a change-management operation, with operational, relational, and even regulatory implications. 
From this episode, at least four key lessons emerge.</p><p><strong>Plan change as a process, not an event.</strong> Every significant modification should allow for a coexistence period between the previous and the new version, accompanied by a precise retirement date communicated well in advance. In GPT-5&#8217;s case, the reinstatement of GPT-4o was welcomed as a partial victory for users, but it came only after protests. A proactive approach could have avoided the perception of a change imposed without alternatives.</p><p><strong>Give users tools for control and transparency.</strong> Automatic routing to different models can optimize costs, but without visibility, it risks undermining trust and confidence. Some users reported that GPT-5 felt &#8220;less accurate and slower&#8221; due to this mechanism. OpenAI&#8217;s addition of a visible label showing the active model is a step in the right direction. Still, the option to block switching or pin a specific version would remain essential for many professional scenarios.</p><p><strong>Preserve continuity of experience.</strong> Users do not interact solely with an algorithm&#8212;they engage with a &#8220;voice&#8221; and style that, over time, becomes familiar. As one Reddit user put it: &#8220;4o wasn&#8217;t just a tool; it helped me through the hardest moments.&#8221; Preserving the ability to replicate that style&#8212;through features such as &#8220;persona portability&#8221; or curated style galleries&#8212;would allow for innovation without erasing what people value.</p><p><strong>Integrate communication, safeguards, and compliance.</strong> In a context of global adoption, every change should be accompanied by proactive communication: roadmaps, dedicated FAQs, explanations of the benefits, as well as the limitations introduced. This reduces misunderstandings and, in some markets, is also a legal requirement. 
In Europe, for example, the AI Act already imposes transparency and documentation obligations on providers of general-purpose AI models.</p><h3><strong>Conclusions</strong></h3><p>The GPT-5 episode sets an important precedent: the adoption of a new artificial intelligence model can no longer be treated as a simple software update. It is an intervention that affects habits, relationships, and collective trust, and it demands the same level of care typically reserved for infrastructure changes.</p><p>As Casey Newton observed in <em>Platformer</em>, the backlash shows just how central ChatGPT has become to the lives of millions of people: a change to the model now triggers reactions comparable to those that once accompanied major shifts in the large social platforms.</p><p>Looking ahead, innovation in generative AI will need to be measured not only by benchmarks and performance, but by its ability to define and uphold a clear &#8220;contract&#8221; with users&#8212;one grounded in stability, transparency, and attention to the human dimension. 
The following moves by AI labs will determine not only the technical trajectory of the field but also the boundaries of this relationship of trust.</p><p>Sources:</p><ul><li><p>Wikipedia, <em><a href="https://en.wikipedia.org/wiki/GPT-5">GPT-5</a></em></p></li><li><p>Reddit, <em><a href="https://www.reddit.com/r/ChatGPT/comments/1mkobei/openai_just_pulled_the_biggest_baitandswitch_in/">OpenAI just pulled the biggest bait-and-switch in AI history</a></em></p></li><li><p>Windows Central, <em><a href="https://www.windowscentral.com/artificial-intelligence/openai-chatgpt/did-sam-altman-oversell-gpt-5-openai-faces-backlash-for-ruining-chatgpt-turning-it-into-a-corporate-beige-zombie">Sam Altman Touted GPT-5 as Revolutionary&#8212;But Users Say It&#8217;s Slower, Duller, and Buggy</a></em></p></li><li><p>Platformer, <em><a href="https://www.platformer.news/gpt-5-backlash-openai-lessons/">Three big lessons from the GPT-5 backlash</a></em></p><p></p></li></ul><div><hr></div><p><em>Understanding AI</em></p><h2>Collaborating with AI: From Automation to Agency</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0toj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0toj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!0toj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png 848w, 
https://substackcdn.com/image/fetch/$s_!0toj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!0toj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0toj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1862908,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/169542593?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0toj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png 424w, 
https://substackcdn.com/image/fetch/$s_!0toj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!0toj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!0toj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F485f11a9-5223-424c-9fca-5ae81feb90ea_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Midjourney - Collaborating with AI</figcaption></figure></div><p>In the emerging relational landscape between humans and machines made possible by large language models, it is helpful to distinguish three fundamental modes of interaction, each characterized by a different degree of involvement and autonomy: automation, augmentation, and operational agency.</p><p>In the case of <strong>automation</strong>, artificial intelligence handles repetitive or formalizable tasks, performing them without the need for constant supervision. This is the most traditional model: the machine executes, the human verifies. A representative example is the automatic transcription of audio or video content. Once a lengthy, monotonous, and fully manual task, it can now be completed by an AI model in a matter of minutes, producing accurate, correctly punctuated text, with speaker identification and, in many cases, a preliminary analysis of emotional tone. These are operations that require neither creativity nor decision-making abilities; what matters is efficiency in executing a clearly defined task.</p><p><strong>Augmentation</strong> implies a closer and more synergistic relationship. In this mode, AI does not replace us&#8212;it amplifies us. It is an active collaboration, in which our creativity and analytical skills are accelerated and enriched through dialogue with an intelligent assistant. This ally suggests, explores, synthesizes, and opens up paths we might not have discovered on our own. We remain at the center of the process, but we can accomplish more in less time and with greater depth.</p><p>This collaboration can take many forms and varies significantly depending on the context. An innovation team, for example, might utilize AI to gather and synthesize information on emerging trends, analyze articles, or compare solutions adopted elsewhere. 
The group retains responsibility for strategic analysis but starts from a broader and more structured knowledge base. A UX designer might employ AI to generate interface variants, simulate user journey friction points, or explore alternative microcopy: the creative process remains human, but iterations become faster and more stimulating. A teacher, finally, might rely on AI to transform a complex idea into differentiated teaching materials, generating examples, exercises, or personalized questions; the AI does not design the course, but instead makes it more accessible and adaptable.</p><p>In all these cases, the technology does not act in our place&#8212;it accompanies us, enhancing our capacity for exploration, synthesis, and design.</p><p>The third mode&#8212;<strong>operational agency</strong>&#8212;represents the scenario in which artificial intelligence acts as an independent agent, capable of making decisions, orchestrating complex activities, and pursuing articulated goals even without continuous supervision. In this case, the machine does not merely perform tasks or enhance our abilities: it intervenes proactively, adapting to context, monitoring outcomes, and adjusting its behavior according to the conditions it encounters.</p><p>This form of operational intelligence is already being employed in concrete business contexts, particularly within digital services. AI systems, for example, are used to automatically generate content for social media, tailoring it to a brand's tone, distribution channels, and current trends. In other instances, networks of agents collaborate in prospecting and lead generation, exploring public databases, analyzing company profiles, segmenting potential clients, and producing personalized outreach messages.</p><p>These processes involve multiple stages&#8212;data collection, classification, synthesis, and interaction&#8212;and are orchestrated through dynamic workflow systems. 
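The staged, self-adjusting workflow described above can be sketched in a few lines of Python. Everything here (stage names, scoring, thresholds, data) is a hypothetical placeholder, not a real lead-generation product:

```python
# Hypothetical sketch of an agent-style pipeline: collect -> classify -> synthesize,
# with one feedback rule that relaxes its own criterion when results fall short.

def collect(sources):
    """Gather raw records from several sources (stubbed with in-memory lists)."""
    return [record for source in sources for record in source]

def classify(records, min_score):
    """Keep only leads at or above the current quality threshold."""
    return [r for r in records if r["score"] >= min_score]

def synthesize(leads):
    """Draft one personalized outreach message per qualified lead."""
    return [f"Hello {lead['name']}, ..." for lead in leads]

def run_pipeline(sources, min_score=0.5):
    leads = classify(collect(sources), min_score)
    # Feedback: if too few leads qualify, relax the criterion and retry once.
    if len(leads) < 2 and min_score > 0.3:
        return run_pipeline(sources, min_score=0.3)
    return synthesize(leads)

sources = [[{"name": "Acme", "score": 0.9}], [{"name": "Globex", "score": 0.4}]]
print(run_pipeline(sources))
```

A production system would replace each stub with an LLM or API call, but the shape is the same: discrete stages, simple recurring decisions, and criteria adjusted in light of observed results.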
The agents make simple yet recurring decisions, perform coordinated actions, and refine their criteria based on the results they obtain. In doing so, they relieve humans of repetitive and unfulfilling tasks, creating space for strategic design and high-value relationships.</p><p>In all of these scenarios, the human presence does not disappear: we continue to supervise, define goals, and intervene at critical junctures. However, day-to-day operations are entrusted to a system capable of acting autonomously, thereby expanding our capacity to act without increasing our cognitive load.</p><h3>Understanding the AI Ecosystem</h3><p>Starting from these three operational modes&#8212;automation, augmentation, and agency&#8212;the alliance between human beings and artificial intelligence comes into sharper focus. However, this collaboration does not take place in a vacuum; it unfolds within an ecosystem of tools, interfaces, protocols, and usage practices that shape both its effectiveness and its limitations.</p><p>For example, consider the following instructions given to ChatGPT:</p><blockquote><p><em>Search for news from the past 7 days on the topic of design thinking, in English. Be thorough in your research and make sure nothing is overlooked. Select the most authoritative creators and sources. Create a post with a compelling hook and a reflective message using my tone of voice, and publish it automatically on LinkedIn.<br>Think step by step. Show me your reasoning in detail, with the aim of helping me understand the logical steps and how the tools you are using actually work.<br>Once you have finished illustrating the process, provide the final response to the request.</em></p></blockquote><p>This prompt consists of two distinct segments: the first asks the model to generate a LinkedIn post, while the second invites it to explicitly outline the logical steps it follows to arrive at the response. 
This is a technique known as <em>Chain of Thought</em>, which allows for at least partial transparency in the decision-making process of the LLM. Below is the response obtained in one of my tests. If you were to try the same prompt using a different ChatGPT account, you would receive a different answer. And if you repeated it multiple times, even on the same account, the results would continue to vary.</p><blockquote><p>Here&#8217;s the <strong>detailed account of my process</strong> first, so you can see each step:</p><p><strong>&#128269; 1. Research planning</strong></p><p>You asked to find news from the last 7 days on the topic of design thinking in English, selecting creators and authoritative sources. To do this I:</p><ul><li><p>Performed a targeted search in English with <strong>recency = 7 days</strong> on search engines, using queries such as &#8220;design thinking news last 7 days&#8221; and &#8220;design thinking articles.&#8221;</p></li><li><p>Checked reputable and authoritative sources (for example, Design Week, WIRED, Harvard Business Review, Interaction Design Foundation).</p></li><li><p>Excluded older or less relevant content: I prioritized the most recent results.</p></li></ul><p></p><p><strong>&#129504; 2. 
Collection of relevant sources</strong></p><p>Among the results obtained:</p><ul><li><p><strong>Design Week article</strong> (3 days ago): <em>&#8220;Design thinking isn&#8217;t dead &#8211; it&#8217;s more crucial than ever&#8221;</em> by Clive Grinyer &#8211; a current and authoritative point of view.</p></li><li><p><strong>Recent YouTube video</strong>: <em>&#8220;Why every entrepreneur needs to study Human Centered Design Thinking...&#8221;</em> (Steve J Larsen, published 5 days ago).</p></li></ul><p>The other sources found were older (from past years), so I excluded them as they did not fall within the 7-day time window.</p><p>I did not find other recent articles from outlets like <em>Wired</em> or <em>SSIR</em> published within the past week. However, the <em>Design Week</em> piece is analytical and frequently cited in current discussions, while the video is a recent contribution from an active creator.</p><p><strong>&#9997;&#65039; 3. Drafting the LinkedIn post</strong></p><p>I designed a format with:</p><ul><li><p><strong>An effective hook</strong> to capture attention.</p></li><li><p><strong>A reflective analysis</strong> that incorporates recent sources.</p></li><li><p><strong>A call-to-action</strong> to encourage interaction.</p></li></ul><p>I kept a <strong>tone of voice that is professional yet personal</strong>, reflecting your way of speaking: engaging, thought-provoking, and from the perspective of a reflective leader.</p><p><strong>&#128683; 4. Automatic publishing</strong></p><p>I cannot publish directly on LinkedIn because I do not have access to your credentials or the ability to post automatically on your profile. However, I can generate the exact text ready to be copied and pasted into LinkedIn.</p><p>&#128204; 5. 
Summary of key sources</p><ul><li><p><strong>Design Week</strong>, Clive Grinyer, <em>&#8220;<a href="https://www.designweek.co.uk/design-thinking-isnt-dead-its-more-crucial-than-ever/?utm_source=chatgpt.com">Design thinking isn&#8217;t dead &#8211; it&#8217;s more crucial than ever</a>&#8221;</em> (published July 28, 2025)</p></li><li><p><strong>YouTube video</strong>, <em><a href="https://www.youtube.com/watch?v=g0-4lNNh1Yk">Why every entrepreneur needs to study Human Centered Design Thinking...</a></em> (Steve J Larsen, 5 days ago)</p></li></ul><p></p><p>&#9989; Final answer: LinkedIn post</p><p>&#128640; <strong>LinkedIn Post Ready</strong></p><p><strong>Hook:</strong> <br>&#8220;&#128269; <strong>Design thinking isn&#8217;t a fad</strong>. It&#8217;s more essential than ever &#8211; and here&#8217;s why we should lean in, not walk away.&#8221;</p><p><strong>Body of the post:</strong><br>In the past 7 days, <em>Design Week</em> delivered a clear message: we&#8217;re not witnessing the decline of design thinking, but its strategic revival. Clive Grinyer argues that, especially in the U.S. where the term has fallen out of fashion, it has left a lasting imprint on how we approach business innovation.</p><p>At the same time, a recent video by Steve J Larsen reinforces the importance of Human-Centered Design Thinking for entrepreneurs: a lens that builds solutions around humans, not just technology or business.</p><p>For me, the message is clear: standing out today means going back to the essentials. Rediscover empathy, immerse yourself in the user&#8217;s perspective, prototype quickly, and most importantly, integrate design thinking as a cultural compass &#8211; not just an occasional toolkit.</p><p>In an increasingly complex and interconnected world, this is the key to tackling &#8220;wicked&#8221; problems with both rigor and creativity, without sacrificing finesse and strategy.</p><p><strong>Call to action:</strong> <br>&#128172; And you? 
How are you reinterpreting design thinking in uncertain contexts? Do you have concrete examples where it truly made a difference? Share them below &#8211; I want to hear your thoughts!</p><p>&#10024; <strong>Tone &amp; Style</strong></p><p>&#127919; <strong>Direct</strong>, with a hook that sparks curiosity.<br>&#128260; <strong>Reflective</strong>: connecting recent sources with insights.<br>&#128101; <strong>Engaging</strong>: invites interaction with immediate personal/professional relevance.<br>&#9201; <strong>To wrap up</strong></p><p>I&#8217;ve shown you step-by-step how I gathered and selected the sources.<br>I&#8217;ve provided the final text, ready to post on LinkedIn.<br>Once copied, it will appear exactly as you see it.<br>Let me know if you&#8217;d like to add tags (#designthinking, #innovation), references to specific projects, or if you&#8217;d prefer a more formal or inspirational tone.</p></blockquote><p>This answer enables us to make some preliminary observations about how the dialogue between a human and an LLM unfolds.<br>First of all, when we interact with artificial intelligence, we do so through a conversational assistant &#8212; an interface like ChatGPT. The assistant takes the user&#8217;s request, enriches it with contextual information (such as instructions, memory, and preferences), and sends it to the model, which interprets it and determines the appropriate actions to take. In the example, it chose to perform an online search, applying the general criteria I had provided. I could have been more specific, for instance, by giving a list of sources to consult or clarifying what we mean by &#8220;authoritative.&#8221;</p><p>It is essential to note that LLMs lack a native ability to interact with external applications; to achieve this, they are equipped with specialized tools. 
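A common pattern is to declare each tool to the model as a function schema; the model then emits a structured call rather than free text, and the host application executes it. Here is a minimal sketch with hypothetical names, following widespread function-calling conventions rather than any specific vendor's API:

```python
# Sketch of how a tool is typically exposed to an LLM: a declared schema the
# model can "call", plus host code that executes the call and returns results.
# Tool name, fields, and the stubbed search are illustrative assumptions.

web_search_tool = {
    "name": "web_search",
    "description": "Search the web and return the top results.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "recency_days": {"type": "integer"},
        },
        "required": ["query"],
    },
}

def execute_tool_call(call: dict) -> list:
    """Dispatch a model-issued tool call to the real implementation (stubbed here)."""
    if call["name"] == "web_search":
        args = call["arguments"]
        return [f"result for '{args['query']}' (last {args.get('recency_days', 30)} days)"]
    raise ValueError(f"unknown tool: {call['name']}")

# The model emits a structured call; the host runs it and feeds the results back.
call = {"name": "web_search", "arguments": {"query": "design thinking news", "recency_days": 7}}
print(execute_tool_call(call))
```

A real assistant ships with a whole registry of such declarations; anything outside that registry (posting to LinkedIn, for instance) is simply out of reach for the model, no matter how the prompt is phrased.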
In our case, the model activated the <strong>Web Search</strong> tool, which physically carried out the search via Microsoft Bing and returned the results.</p><p>At that point, the model selected the sources most consistent with the indicated criteria and drafted a LinkedIn post, maintaining <em>&#8220;a professional yet personal tone of voice, reflecting your way of speaking: engaging, thought-provoking, from the perspective of a reflective leader.&#8221;</em><br>How did it know what tone to adopt? This is one of the pieces of information saved in the assistant&#8217;s profile and implicitly passed to the LLM as part of the context at the start of the conversation.</p><p>When it came to the final step, the model was unable to complete the operation because the set of tools available to it did not include one that would allow access to my LinkedIn account and the ability to post autonomously.</p><p>In the following issues of <em>Radical Curiosity</em>, we will delve into the world of conversational assistants, explore the different types of LLMs available, analyze the principles that guide prompt formulation, and I will provide you with practical guidance on how to structure effective delegation to machines.</p><p></p><div><hr></div><p><em>Off the Record</em></p><h2>Vibe Coding Unpacked: Promise, Limits, and What Comes Next</h2><p>Over the past few months, I&#8217;ve been exploring vibe coding with growing excitement. It&#8217;s an emerging approach to software development that delegates a significant part of the work to AI agents. The goal is ambitious: to reduce, or even eliminate, the need to write code manually, making programming accessible to those without technical skills.</p><p>As a product leader, I work daily on software design, including defining user experience, writing functional specifications, planning roadmaps, and development sprints. 
I collaborate closely with technical teams, but I&#8217;ve never really learned to code.</p><p>Vibe coding promises to bridge this gap. For a product manager, it&#8217;s a valuable opportunity: building functional prototypes, testing new ideas quickly, and creating low-cost MVPs are all part of the job. Moreover, going beyond sketches and wireframes&#8212;turning ideas into working apps without involving the dev team upfront&#8212;helps reduce ambiguity, streamline decision-making, and enhance collaboration.</p><p>In just a few months, the vibe coding tool ecosystem has evolved rapidly, from platforms aimed at experienced developers like Cursor and Windsurf, to solutions designed for non-technical profiles. Among these, Lovable stands out: it offers a full-stack environment and, thanks to its integration with Supabase, allows users to build complete applications with a backend.</p><p>Lovable is where I started experimenting, moving from exploratory tests to building a translation copilot and an interactive tutor for online courses. The results exceeded my expectations, but the journey has been far from linear, with moments of absolute frustration.</p><p>A few days ago, I shared that frustration on LinkedIn. 
The post garnered more than 200 comments and sparked a lively discussion, filled with thoughtful insights and valuable takeaways.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bs9k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bs9k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png 424w, https://substackcdn.com/image/fetch/$s_!bs9k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png 848w, https://substackcdn.com/image/fetch/$s_!bs9k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png 1272w, https://substackcdn.com/image/fetch/$s_!bs9k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bs9k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png" width="1132" height="1570" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1570,&quot;width&quot;:1132,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:466016,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/169542593?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bs9k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png 424w, https://substackcdn.com/image/fetch/$s_!bs9k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png 848w, https://substackcdn.com/image/fetch/$s_!bs9k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png 1272w, https://substackcdn.com/image/fetch/$s_!bs9k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c26e83-90f4-4f30-a97b-abe2c978c023_1132x1570.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3><strong>The Current Limits of Lovable and &#8220;Vibe Coding&#8221;</strong></h3><p>For many, Lovable&#8217;s strength lies in producing functional demos rather than production-ready software. &#8220;It works well for basic prototypes, but as soon as you add complexity, the stability starts to degrade,&#8221; noted Increase Divine-Wisdom, who described repeated regressions when trying to scale a project. Scott Levine echoed this view, observing that &#8220;it&#8217;s a bit like a WYSIWYG editor from the 90s &#8212; fast for the basics, but not built for long-term maintainability.&#8221;</p><p>Several users draw a sharp line between using Lovable for quick ideation and attempting full-scale product development. 
Patrick Cunningham summed it up bluntly: &#8220;It&#8217;s fantastic for spinning up an idea fast, but don&#8217;t expect it to replace your dev team.&#8221; The risk, according to Kristiina Torkkel, is getting caught in endless bug-fix loops: &#8220;Every time you fix one thing, something else breaks. It&#8217;s a whack-a-mole game.&#8221;</p><p>Even experienced developers see Lovable as more of a sketchpad than a workshop. Vlad C&#259;lin described it as &#8220;a great way to get a working mockup in front of a client within a day,&#8221; but warned that &#8220;once you need custom logic or an advanced UX, you&#8217;ll hit a wall.&#8221;</p><p>That wall often comes sooner than expected. Alex Emelian compared Lovable&#8217;s limits to the simplicity of Wix or Squarespace: &#8220;You can make something that looks and feels real, but under the hood, it&#8217;s not the same as hand-built code.&#8221; This gap between appearance and robustness is, for many, the defining constraint of vibe coding today.</p><h3><strong>Skills and Mindset Required for Effective &#8220;Vibe Coding&#8221;</strong></h3><p>While vibe coding promises to lower the barriers to app creation, those who get the most out of tools like Lovable tend to share a common trait: they already possess a developer's mindset.</p><p>&#8220;You still need to understand how apps are structured &#8212; backend, frontend, database, APIs &#8212; otherwise you&#8217;re flying blind,&#8221; said Hugo Calazans, who has used Lovable to prototype several internal tools. Without that foundational knowledge, even simple modifications can become frustrating.</p><p>For many, the skill is less about coding syntax and more about system design. Brandon Crabtree put it succinctly: &#8220;If you don&#8217;t know what you&#8217;re building, no tool can save you. The AI can write code, but it can&#8217;t decide your architecture.&#8221;</p><p>The learning curve, therefore, is not entirely removed &#8212; it&#8217;s shifted. 
Isabela Prado described the mental adjustment: &#8220;You have to think in prompts, not just in features. It&#8217;s like pair programming with someone who doesn&#8217;t always understand your context.&#8221; This requires a mix of clear communication, iterative testing, and the patience to refine instructions until the AI delivers the intended result.</p><p>Some even compare the process to working with a junior developer who is extremely fast but needs constant direction. &#8220;The AI is eager and quick, but it will happily run off in the wrong direction if you&#8217;re not specific,&#8221; noted Victor Tang.</p><p>Ultimately, vibe coding rewards those who can bridge product thinking with technical literacy. Samantha Yee framed it as a creative challenge: &#8220;You&#8217;re not just coding &#8212; you&#8217;re designing workflows, debugging prompts, and making trade-offs in real time.&#8221; That blend of roles can be exhilarating for experienced makers, but daunting for true beginners.</p><h3><strong>The Expert User as a Translator Between Business and AI</strong></h3><p>One recurring theme among experienced vibe coders is that the most successful projects have someone acting as an interpreter &#8212; not between two human languages, but between business needs and AI-generated code.</p><p>Jonah Weingarten described this role as &#8220;being the bridge between what the business wants and what the AI can actually build.&#8221; In his view, the person in this position must not only grasp the product vision but also translate it into prompts and specifications that the AI can execute.</p><p>This translator role often blends product management, UX thinking, and just enough technical understanding to foresee pitfalls. &#8220;You need to anticipate where the AI will cut corners or make assumptions,&#8221; said Anita Kumar. 
&#8220;If you don&#8217;t, you&#8217;ll get something that technically works but doesn&#8217;t meet the business need.&#8221;</p><p>Several users liken the task to managing an offshore development team with limited context. Daniel Schmidt explained: &#8220;You&#8217;re not writing all the code yourself, but you&#8217;re constantly clarifying, correcting, and aligning. The AI won&#8217;t read your mind &#8212; you have to feed it the right level of detail.&#8221;</p><p>And while this role can be demanding, it&#8217;s also where vibe coding shines for those with cross-disciplinary skills. Rachel Lim noted: &#8220;If you understand both the customer journey and the basics of app logic, you can move incredibly fast. The AI handles the grunt work, and you focus on the high-value decisions.&#8221;</p><p>This is why, despite being marketed as a &#8220;no-code&#8221; revolution, Lovable and similar tools often end up amplifying &#8212; rather than replacing &#8212; the influence of expert intermediaries who can think strategically and technically simultaneously.</p><h3><strong>The Challenge of Maintenance and Scalability</strong></h3><p>If building an app with vibe coding feels like a sprint, maintaining it often turns into a marathon. Many early adopters describe a honeymoon phase of rapid progress, followed by a reality check when the time comes to add features, fix bugs, or handle increased usage.</p><p>Miguel Arroyo summed it up candidly: &#8220;The first week, you feel like a superhero. The second week, you&#8217;re chasing bugs you didn&#8217;t know existed.&#8221; According to several users, the problem with AI-generated code is that it can be fragile, with changes in one area unintentionally breaking others.</p><p>Scalability introduces another layer of complexity. &#8220;The app works fine with ten users, but try onboarding a hundred and things start to fall apart,&#8221; noted Laura Jennings. 
&#8220;Performance tuning is not something the AI handles well yet.&#8221;</p><p>Part of the challenge is that vibe coding environments often abstract away the underlying architecture, making it harder for non-developers to identify and address bottlenecks. Omar Rahman explained: &#8220;You can&#8217;t just dive into the infrastructure and tweak things &#8212; you&#8217;re locked into the AI&#8217;s decisions unless you rewrite big chunks manually.&#8221;</p><p>For teams, this can lead to a dependency loop where even small maintenance tasks require revisiting the AI, with no guarantee of consistent output. &#8220;Every time we ask it to fix something, it changes other parts of the code. You&#8217;re never sure what you&#8217;re going to get back,&#8221; said Priya Desai.</p><p>In this sense, the scalability issue isn&#8217;t just technical &#8212; it&#8217;s organizational. Without clear ownership of the codebase and a defined process for change management, vibe-coded apps risk becoming disposable prototypes rather than sustainable products.</p><h3><strong>Unrealistic Expectations and the &#8220;Build in Minutes&#8221; Marketing Trap</strong></h3><p>The promise of building fully functional apps &#8220;in minutes&#8221; is a powerful hook &#8212; and one that many say sets users up for disappointment. Marketing slogans often gloss over this reality. &#8220;People come in expecting magic,&#8221; said Sophie Grant, who coaches teams adopting AI-assisted development. &#8220;They think they&#8217;ll describe their idea once, and the app will just appear. In reality, it&#8217;s an iterative process, and you need to babysit the AI along the way.&#8221;</p><p>For non-technical founders, the disconnect can be even more stark. Marcus Lee recalled his first project: &#8220;I thought I&#8217;d save months of work. 
In the end, I spent weeks debugging and rewriting because I didn&#8217;t know the right way to guide the AI.&#8221;</p><p>The speed advantage is real, but only in the proper context. As Natalie Fox pointed out, &#8220;These tools are incredible accelerators if you already know where you&#8217;re going. If you don&#8217;t, they just help you get lost faster.&#8221;</p><p>In other words, vibe coding&#8217;s marketing promise can be true &#8212; but only for users with the skills, clarity, and time to bridge the gap between the AI&#8217;s output and a real-world, maintainable product.</p><h3><strong>The Real Potential and Where Lovable Excels</strong></h3><p>Despite its limitations, Lovable and the broader vibe coding movement have carved out clear niches where their strengths shine. In scenarios that demand speed, flexibility, and low overhead &#8212; particularly for internal tools or short-lived campaigns &#8212; the technology can be transformative.</p><p>Helena S&#248;rensen, product manager in a logistics firm, described her experience: &#8220;We built an internal dashboard in two days. Normally, that would have taken us at least a month with our IT backlog.&#8221; The key, she said, was keeping the scope narrow and expectations realistic.</p><p>For early-stage startups, the appeal is similar. &#8220;If you&#8217;re pre-funding and need to show something to investors fast, it&#8217;s unbeatable,&#8221; noted Andre Mitchell. &#8220;You can go from idea to clickable demo over a weekend.&#8221;</p><p>The ability to iterate rapidly without heavy engineering resources also makes vibe coding valuable in experimental contexts. Clara Nguyen, who runs a digital marketing agency, explained: &#8220;We use it to prototype client campaigns. If it works, great &#8212; if not, we&#8217;ve lost days, not weeks.&#8221;</p><p>Another sweet spot is educational and training environments, where the focus is on learning and exploration rather than production-grade software. 
&#8220;For teaching non-coders how apps are structured, it&#8217;s brilliant,&#8221; said Peter Alonzo. &#8220;They see instant results and understand the logic without drowning in syntax.&#8221;</p><p>These success stories suggest that the real power of Lovable lies not in replacing professional development pipelines but in empowering faster exploration, validation, and low-stakes deployment. In the right hands &#8212; and for the right jobs &#8212; it delivers on its promise.</p><h3><strong>The Future of Vibe Coding: Beyond the Hype</strong></h3><p>If the current wave of vibe coding tools feels like the early days of no-code, the future may hinge on how quickly they can mature beyond their prototyping sweet spot. For many users, the next leap isn&#8217;t about speed &#8212; it&#8217;s about reliability, maintainability, and integration with professional workflows.</p><p>David Kim, CTO of a fintech startup, believes hybrid models are the way forward: &#8220;The AI shouldn&#8217;t be the only one driving. I want it to handle the grunt work, but I still need a steering wheel.&#8221; In his view, the ideal evolution would blend AI speed with traditional engineering discipline, enabling teams to refine and harden code without losing the rapid iteration advantage.</p><p>Others see promise in better collaboration between AI and human developers. &#8220;Right now, it&#8217;s a one-way street &#8212; you prompt, it builds. The future is a conversation where the AI understands the architecture, the business logic, and your constraints,&#8221; said Isabella Rossi, a software consultant.</p><p>For some, the missing link is a true &#8220;version-aware&#8221; AI that can manage ongoing projects without unnecessarily rewriting large chunks of code. &#8220;I don&#8217;t want a fresh app every time I ask for a change. 
I want the AI to remember my codebase like a teammate,&#8221; argued Tom&#225;s Villalobos.</p><p>Industry observers expect the space to split into specialized offerings: lightweight tools for one-off prototypes, and enterprise-grade platforms designed for long-term, scalable projects. As Rachel Tan, an investor in early-stage dev tools, put it: &#8220;The winners will be the ones who stop selling magic and start delivering craftsmanship &#8212; but faster.&#8221;</p><p>In other words, vibe coding&#8217;s future may depend less on its ability to dazzle in minutes and more on its capacity to sustain, evolve, and scale over months and years.</p><h3>I&#8217;m Getting Better at Vibe Coding, One Iteration at a Time</h3><p>I&#8217;m stubborn, and I don&#8217;t give in to frustration easily. I&#8217;ve always worked at the edge between what&#8217;s already known and what still needs to be invented, and I&#8217;ve learned that mastering something new always requires multiple cycles of iteration, refinement, and validation.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!x-BE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!x-BE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png 424w, https://substackcdn.com/image/fetch/$s_!x-BE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png 848w, 
https://substackcdn.com/image/fetch/$s_!x-BE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png 1272w, https://substackcdn.com/image/fetch/$s_!x-BE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!x-BE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png" width="1456" height="958" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:958,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1354456,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/169542593?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!x-BE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png 424w, 
https://substackcdn.com/image/fetch/$s_!x-BE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png 848w, https://substackcdn.com/image/fetch/$s_!x-BE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png 1272w, https://substackcdn.com/image/fetch/$s_!x-BE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd241fa9-05f3-4eca-8992-cdc89069d5c7_2240x1474.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Quibbly - The vibe coded prototype</figcaption></figure></div><p>I&#8217;m getting better at <em>vibe coding</em>, and after several attempts, I've finally built a working prototype of&nbsp;<strong>Quibbly</strong>. This friendly, wise AI tutor guides students through learning entirely within a chat, step by step, in a fully interactive way. It combines the warmth of a conversational partner with the structure and precision of expert instructional design, creating a learning experience that&#8217;s both personal and effective.</p><p>But that&#8217;s a story for the next issue of Radical Curiosity &#128527;</p><p></p><div><hr></div><p><em>Curated Curiosity</em></p><h3><strong>What Are They Talking About? AI Vocabulary Edition</strong></h3><p>AI is shaping our discourse, but the terms we use often obscure more than they reveal. In this two-part exploration, <a href="https://www.linkedin.com/in/hilary-atkisson-normanha/">Hilary Atkisson Normanha</a> unpacks the language of artificial intelligence, exposing the gaps between technical precision and common parlance&#8212;a valuable guide for professionals navigating the evolving AI landscape.</p><ul><li><p><strong><a href="https://hilaryan.substack.com/p/what-are-they-talking-about-ai-vocabulary">What are they talking about? AI Vocabulary Edition Part 1</a></strong></p></li><li><p><strong><a href="https://hilaryan.substack.com/p/what-are-they-talking-about-ai-vocabulary-03c">What Are They Talking About? AI Vocabulary Edition &#8211; Part 2</a></strong></p></li></ul>]]></content:encoded></item><item><title><![CDATA[The End of Search Engines? 
How AI Will Rewrite the Rules of Online Visibility ]]></title><description><![CDATA[Ciao, In this issue, in the Off the Record section, you&#8217;ll find an article on a subject I care deeply about and have been reflecting on for some time: the mistakes we keep making, even when we&#8217;re aware we&#8217;re making them.]]></description><link>https://www.radicalcuriosity.xyz/p/the-end-of-search-engines-how-ai</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/the-end-of-search-engines-how-ai</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 10 Aug 2025 04:01:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!D9I1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>In this issue, in the <em>Off the Record</em> section, you&#8217;ll find an article on a subject I care deeply about and have been reflecting on for some time: the mistakes we keep making, even when we&#8217;re aware we&#8217;re making them. It&#8217;s a personal reflection, but one I believe will resonate with many of us&#8212;in life, at work, and in the world of innovation.</p><p>In the <em>Signals and Shifts</em> section, you&#8217;ll also find an analysis of how artificial intelligence is rewriting the rules of online visibility. And in <em>Understanding AI</em>, the second part of my brief history of artificial intelligence. As always, I close with a selection of ideas and insights that truly caught my attention this week.</p><p>Nicola</p><p></p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Signals and Shifts</strong></em> - The End of Search Engines? How AI Will Rewrite the Rules of Online Visibility</p></li><li><p><em><strong>Understanding AI</strong></em> <em>-</em> A Brief History of Artificial Intelligence. 
Part 2</p></li><li><p><em><strong>Off the Record</strong></em> - Four mistakes you already know &#8211; and still keep making</p></li><li><p><em><strong>Curated Curiosity</strong></em></p><ul><li><p>The Rise of Verticalized AI Coworkers</p></li><li><p>Character.AI Launches World&#8217;s First AI-Native Social Feed</p><p></p></li></ul></li></ul><div><hr></div><p><em>Signals and Shifts</em></p><h2>The End of Search Engines? How AI Will Rewrite the Rules of Online Visibility </h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!D9I1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!D9I1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!D9I1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!D9I1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!D9I1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!D9I1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1858272,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/168367910?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!D9I1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!D9I1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!D9I1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!D9I1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76086920-8129-42bc-8a39-50c75ac3ed76_1456x816.png 1456w" 
sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Midjourney - Google crumbles into fragments</figcaption></figure></div><p>For the past thirty years, search engines have been the primary gateway to online information. Every day, over 8.5 billion searches are made on Google: typing a query, browsing a list of results, clicking on a link&#8212;these have become natural, almost automatic gestures. 
It is a routine on which much of the digital economy has been built: millions of companies invest to appear among the top results&#8212;whether organic or sponsored&#8212;to reach new users and acquire new customers.</p><p>Today, however, this paradigm is rapidly shifting. Conversational assistants, such as ChatGPT, Claude, and Gemini, are becoming the new access point to knowledge for millions of people. As of July 2025, ChatGPT has approximately 800 million weekly active users, with over 2.5 billion prompts submitted daily. Google Gemini has surpassed 400 million monthly active users. Claude, the model developed by Anthropic, is estimated to have between 16 and 19 million monthly users.</p><p>More and more often, users no longer type a string of keywords: they ask a question in natural language. Instead of receiving a list of links, they get an answer generated in real time, shaped by context. Search is turning into conversation. And visibility, in this new scenario, is governed by a set of rules that have yet to be written.</p><h3><strong>Search Engines and Artificial Intelligence: Google&#8217;s AI Overviews and AI Mode</strong></h3><p>Google has begun responding to this transformation with the introduction of AI Overviews&#8212;a section that appears at the top of the results page, offering a synthesized answer built from multiple sources. But the more radical shift is AI Mode: a dedicated interface designed for generative interaction. Here, search becomes a conversation. 
The input field moves to the bottom of the screen, and users are encouraged to ask complex questions in natural language.</p><div id="youtube2-1XmFudTRpds" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;1XmFudTRpds&quot;,&quot;startTime&quot;:&quot;1s&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/1XmFudTRpds?start=1s&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>As Robbie Stein, head of Google&#8217;s AI Search team, notes, we are entering a new phase in which &#8220;AI can truly expand what&#8217;s possible with search.&#8221; The paradigm is shifting from consultation to interaction&#8212;from finding an answer to building it collaboratively with the system, through a continuous, multimodal, and contextual dialogue. According to Stein, this evolution redefines both the user interface and cognitive expectations: users are no longer just searching&#8212;they expect to be understood, assisted, and guided.</p><p>This shift is also evident in user behavior, particularly among younger generations, who seamlessly transition between text, voice, images, and context. They are no longer merely seeking data, but rather looking for experiences, recommendations, and narratives that are relevant to their specific situation. 
To respond effectively, the search engine must evolve into a cognitive assistant, capable of selecting, filtering, and reframing information based on the user&#8217;s profile.</p><h3><strong>Search Engines and AI: The Challenge of Visibility Without a SERP</strong></h3><p>In this emerging landscape, businesses face a crucial challenge: how do you gain visibility when the search engine results page (SERP) no longer exists?</p><p>It&#8217;s a question that challenges an entire industry&#8212;an ecosystem built around search engine optimization, from SEO consulting to the creation of optimized content, to advertising campaigns based on high-performing keywords. The global market for SEO services alone is currently valued at between $80 and $98 billion. To this, we must add advertising spend on search engines, which exceeds $175 billion annually, with Google alone accounting for a significant portion.</p><p>An entire operational model may need to be rethought, and the industry is already seeking answers, drawing on familiar frameworks. One such response is Generative Engine Optimization (GEO), an early attempt to adapt the logic of SEO to the new context of generative engines.</p><p>The goal of GEO is to appear within the responses generated by systems like ChatGPT, Gemini, Perplexity, or Google&#8217;s own AI Overviews.</p><p>According to experts, launching a GEO strategy requires more than simply replicating traditional SEO tools. It calls for a more profound transformation&#8212;one that involves both editorial practices and the conceptual architecture of content. Emerging best practices are outlining a new methodological framework:</p><ul><li><p><strong>Content structured for dialogue</strong>: The ideal format is a question-and-answer approach, organized into clear and concise thematic blocks. Language should be natural, yet precise. 
While keywords are no longer central, they remain useful when used at the beginning to establish context.</p></li><li><p><strong>Advanced semantic markup</strong>: Tools like Schema.org&#8212;a standardized language that enables content to be tagged in a manner intelligible to search engines and AI&#8212;are becoming increasingly important. Specific tags exist to identify, for example, an FAQ, a step-by-step guide (HowTo), or a Q&amp;A page (QAPage). These elements help models better understand and accurately select the content.</p></li><li><p><strong>Conversational intent and use cases</strong>: Creating content that responds to real-life scenarios and simulates actual user intent increases its likelihood of being considered relevant by language models. An article is no longer just a source, but a micro-narrative that anticipates needs and offers solutions.</p></li><li><p><strong>Strong adherence to E-E-A-T principles</strong>: Clarity, data accuracy, proper citations, professional tone, and credible references are not only markers of human quality&#8212;they are also strong signals for automated evaluation systems. The E-E-A-T framework, introduced by Google, is based on four pillars: Experience, Expertise, Authoritativeness, and Trustworthiness.</p></li></ul><p>However, GEO is not merely an evolution of SEO&#8212;it represents a far more profound transformation, one that will likely require a comprehensive rethinking of the entire ecosystem surrounding online marketing and sales.</p><p><strong>Metrics will need to be updated.</strong> Organic traffic, which has long been the primary indicator of SEO success, will lose prominence. What will matter more is the generative &#8220;share of voice&#8221;: how often a piece of content is used or referenced in responses generated by AI assistants. At present, however, no reliable system exists to measure this new form of visibility. 
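</p><p><em>To make the semantic-markup practice described above concrete, here is a minimal sketch of a Schema.org <code>FAQPage</code> block expressed as JSON-LD and built in Python. The question and answer are placeholder copy, not prescribed wording.</em></p>

```python
import json

# Minimal Schema.org FAQPage markup expressed as JSON-LD.
# The question/answer strings below are illustrative placeholders.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Generative Engine Optimization (GEO)?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "GEO structures content so that generative engines "
                    "can understand, select, and cite it in their answers."
                ),
            },
        }
    ],
}

# On a real page this JSON would sit inside a script element
# of type application/ld+json in the document head.
json_ld = json.dumps(faq_markup, indent=2)
```

<p><em>A step-by-step guide (<code>HowTo</code>) or a Q&amp;A page (<code>QAPage</code>) follows the same pattern, swapping the <code>@type</code> and its required properties.</em></p><p>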
Generative models do not provide transparent data about the sources they use, and conversational interfaces lack attribution mechanisms. For businesses, understanding whether&#8212;and how&#8212;these new engines select their content remains an open challenge.</p><p><strong>The transparency of the environment will change.</strong> SEO, while complex, operates within a system that is at least partially decipherable. With GEO, we enter a far more opaque territory: the mechanisms by which a generative model selects content are less visible and more challenging to interpret.</p><h3><strong>Is Conversation the New Conversion? The Emerging Uncertainties of User Acquisition</strong></h3><p>There is much more at stake than a click. Entire industries rely on their ability to capture users through organic or paid search results. If these channels lose relevance, it won&#8217;t be just Google that needs to rethink its strategy&#8212;it will be millions of businesses that currently depend on search as their primary means of acquisition.</p><p>It&#8217;s plausible that, in the not-too-distant future, conversational clients&#8212;such as ChatGPT, Claude, and Gemini&#8212;will begin introducing native forms of advertising. However, it is far from certain that mechanisms like AdWords can be effectively transposed into an interaction model that no longer includes a SERP dynamic. A proven model for selling ads within a conversation does not yet exist.</p><p>More urgently, the issue of organic traffic must be addressed. If visibility is no longer measured in clicks but in citations, then content production must be rethought entirely, not to align with Google&#8217;s algorithm, but to appear relevant to the semantic weights of a large language model. In this new scenario, even the end goal begins to shift.</p><p>Let us imagine a user who, through a dialogue with an AI assistant, explores various purchasing options.
If a company is mentioned as one of the sources, can it reasonably hope that the interaction will lead to a conversion? Can the system itself complete a sale? In what environment, through which interfaces, and according to what attribution logic?</p><p>The answer to these questions is far from obvious. But if conversation becomes the new arena for user acquisition, then AI-generated interaction will need to address three fundamental challenges.</p><p><strong>The first is relevance</strong>: providing not just an answer, but the most suitable answer. To achieve this, AI will need to develop a deeper understanding of its interlocutor by accumulating data, tracking preferences, and interpreting intent.</p><p><strong>The second concerns the source of knowledge.</strong> Today, generative models integrate web search, drawing from content indexed through SEO strategies, which they then synthesize and reframe. But is this truly the most effective way to transfer knowledge from an information infrastructure to a generative system? In this new context, we may need entirely new paradigms&#8212;ones that redefine what it means to be an authoritative source in an algorithm-to-algorithm communication model.</p><p><strong>The third challenge is action.</strong> Once the best option is identified&#8212;a product, a service, a provider&#8212;how can the purchase be completed within the conversation itself? Here, developments are already underway: the MCP (Model Context Protocol) enables actions to be embedded in the conversational flow. These actions will certainly include bookings, purchases, and payments.
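</p><p><em>As a rough illustration of how such an action can be embedded in the conversational flow, here is the general shape of an MCP tool invocation, a JSON-RPC 2.0 message; the <code>book_table</code> tool and its arguments are hypothetical, and only the envelope follows the protocol.</em></p>

```python
import json

# Illustrative MCP-style request: a JSON-RPC 2.0 message asking a server
# to invoke a "book_table" tool. The tool name and arguments are made up;
# only the envelope (jsonrpc/id/method/params) follows the protocol shape.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "book_table",
        "arguments": {"restaurant": "Trattoria Esempio", "guests": 2},
    },
}
payload = json.dumps(request)
```

<p><em>A server executes the named tool and returns a result message over the same channel, which the assistant then weaves back into the conversation.</em></p><p>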
It is a first technical response to a structural need: transforming conversation into a seamless, end-to-end experience, without breaks or handoffs.</p><h3><strong>Navigating Uncertainty: Observe, Experiment, Adapt</strong></h3><p>Over the past thirty years, much of online marketing and sales strategy has been built on the foundation of Google&#8217;s results page, long considered one of the primary channels for gaining visibility and acquiring customers. Today, with the rise of AI assistants, that model is being fundamentally questioned. Search is becoming a conversation, with answers generated in real-time, and actions&#8212;such as informing, choosing, and purchasing&#8212;are increasingly taking place within the interaction itself.</p><p>We&#8217;ve seen how this shift affects not only interfaces but also user behavior, visibility metrics, editorial strategies, and business models. We&#8217;ve explored the industry&#8217;s early responses, such as Generative Engine Optimization, and the emerging best practices. But we are only at the beginning.</p><p>There are still no established tools to measure performance within generative engines. The mechanisms through which AI selects, cites, or rephrases content remain largely opaque. Attribution and conversion models in conversational environments have yet to be invented.</p><p>In this context, the only viable strategy is to observe and experiment intelligently. To create content that is clear, trustworthy, and structured for dialogue. To monitor even the faintest signals. 
To adapt practices without chasing shortcuts.</p><p>Sources:</p><ul><li><p>Semrush, <em><strong><a href="https://www.semrush.com/blog/google-search-statistics/">29 Eye-Opening Google Search Statistics for 2025</a></strong></em></p></li><li><p>Exploding Topics, <em><strong><a href="https://explodingtopics.com/blog/chatgpt-users">Number of ChatGPT Users (July 2025)</a></strong></em></p></li><li><p>Techcrunch,<em> <strong><a href="https://techcrunch.com/2025/07/23/googles-ai-overviews-have-2b-monthly-users-ai-mode-100m-in-the-us-and-india/">Google&#8217;s AI Overviews have 2B monthly users, AI Mode 100M in the US and India</a></strong></em></p></li><li><p>Google, <em><strong><a href="https://blog.google/products/search/ai-mode-search/">Expanding AI Overviews and introducing AI Mode</a></strong></em></p></li><li><p>Aggarwal et al.,<em> <strong><a href="https://arxiv.org/abs/2311.09735">GEO: Generative Engine Optimization</a></strong>, </em>arXiv 2023</p></li><li><p>Xponent21, <em><strong><a href="https://xponent21.com/insights/optimize-content-rank-in-ai-search-results/">How to Optimize Your Website and Content to Rank in AI Search Results</a></strong></em></p></li></ul><p></p><p><em>This essay was originally published in Italian on EconomyUp: <strong><a href="https://www.economyup.it/innovazione/la-fine-dei-motori-di-ricerca-come-lintelligenza-artificiale-cambia-la-logica-della-visibilita-online/">La fine dei motori di ricerca? Come l&#8217;intelligenza artificiale cambia la logica della visibilit&#224; online</a></strong>.</em></p><p></p><div><hr></div><p><em>Understanding AI</em></p><h2><strong>A Brief History of Artificial Intelligence. 
Part 2</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1GPk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1GPk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!1GPk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!1GPk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!1GPk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1GPk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1900908,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/168367910?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1GPk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!1GPk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!1GPk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!1GPk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe7ec1f-5f71-49ea-9308-7055860e903f_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Midjourney - Artificial Intelligence</figcaption></figure></div><p>This is the second part of my take on the history of artificial intelligence: a narrative that begins with the earliest experiments of the 1930s and reaches into today&#8217;s debate around AGI. The first part is available at this <strong><a href="https://www.radicalcuriosity.xyz/p/conversations-on-generative-ai-how">link</a></strong>.</p><h3><strong>The Turning Point &#8211; The Transformer and the Birth of LLMs</strong></h3><p>Introduced in 2017 by the now-famous paper <em>Attention is All You Need</em>, the Transformer is not merely an improvement in the efficiency of language processing; it marks a radical shift in how context is represented. 
Its central insight is the self-attention mechanism: a technique that allows the model to analyze the entire input sequence simultaneously, assigning a degree of relevance to each token in relation to the others. In this way, the model no longer processes words one after another in a fixed order, but instead captures the most meaningful connections between words, even when they are far apart. Relationships between terms are no longer dictated by a rigid sequence; they emerge dynamically, based on context. This makes the model significantly more effective at grasping the overall meaning of a text.<br>This conceptual leap paves the way for an architecture that is highly parallelizable, scalable, and&#8212;above all&#8212;exceptionally efficient at learning complex patterns in natural language.</p><p>Building on this foundation, Large Language Models (LLMs) emerge: neural networks of ever-increasing scale, trained on vast corpora of text drawn from books, articles, forums, source code, and web content. The goal is not to teach the machine to &#8220;think,&#8221; but to predict, with remarkable accuracy, the next word in a sequence given a contextual window. It is a purely statistical approach, yet capable of producing fluent, coherent, and often surprisingly relevant text.</p><p>The first GPT models (Generative Pre-trained Transformers), developed by OpenAI starting in 2018, demonstrate promising potential from the outset. But it is with GPT-3, released in 2020, that the public and media impact becomes undeniable. With its 175 billion parameters, GPT-3 is the first model to generate text that is, in many cases, indistinguishable from that written by humans. It writes essays, answers questions, composes poetry, generates code, translates texts, formulates hypotheses, and does so without being explicitly programmed for any of these tasks.</p><p>This is where the true revolution lies: LLMs are not specialists, but generalists.
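</p><p><em>The self-attention mechanism described above can be sketched with a toy, dependency-free implementation of scaled dot-product attention. The three two-dimensional token vectors are illustrative values only, not drawn from any real model.</em></p>

```python
import math

def softmax(scores):
    # Numerically stable softmax: turns raw scores into weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: every token looks at every other token
    # at once, and the weights express how relevant each one is to it.
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        outputs.append([
            sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))
        ])
    return outputs

# Three toy 2-dimensional token embeddings (self-attention: Q = K = V).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(tokens, tokens, tokens)
```

<p><em>Each output row is a context-aware blend of all token vectors: the mixing weights are computed from the tokens themselves rather than from their fixed positions, which is exactly the shift away from rigid sequential order.</em></p><p>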
Unlike classical systems&#8212;designed to perform a specific task, such as classifying images, diagnosing diseases, or solving a game&#8212;LLMs can tackle a wide range of functions using the same underlying architecture. All it takes is a prompt&#8212;a simple textual instruction&#8212;to guide the model toward a specific function.</p><p>This flexibility stems from the way these models are trained: not on predefined tasks, but on a vast array of texts representative of human language. In this sense, an LLM doesn&#8217;t &#8220;know&#8221; content in the traditional sense; instead, it learns to navigate the linguistic and semantic structures that shape it. It is a form of emergent intelligence, grounded in probability and context rather than logic or intentionality.</p><p>With the release of ChatGPT in November 2022, these capabilities became accessible to the general public. For the first time, millions of people could interact directly with an advanced language model, asking it to write, explain, analyze, or suggest. The experience is striking not only for the quality of the responses, but for the naturalness of the interaction. ChatGPT does not resemble an upgraded search engine or a voice assistant; it presents itself as a plausible interlocutor&#8212;able to adapt, argue, and contextualize.</p><h3><strong>The Rise of Generative AI &#8211; From Text to Image, from Writing to Design</strong></h3><p>What stands out in the spread of generative artificial intelligence is the speed with which conversational assistants have become normalized. In just a few months, tools like ChatGPT, Claude, Perplexity, and Gemini have become an integral part of everyday life, integrating seamlessly into workflows, educational settings, and the routine processes of written thought. The prevailing sense is that AI is no longer a field reserved for specialists, but a new grammar of communication.</p><p>Today, this paradigm is evolving rapidly. 
For millions of people, conversational assistants have become the primary gateway to knowledge. As of July 2025, ChatGPT has approximately 800 million weekly active users, with more than 2.5 billion prompts submitted daily. Google Gemini has surpassed 400 million monthly active users. Claude, developed by Anthropic, reports between 16 and 19 million monthly users.</p><p>This transition marks the beginning of a new phase in the history of artificial intelligence: one in which machines collaborate, perform tasks autonomously, and augment human cognition, with all the potential, ambiguity, and risk that entails.</p><p>At the same time, generative logic is expanding into the visual domain. With the advent of models such as DALL&#183;E (OpenAI), Midjourney, and Stable Diffusion, AI has shown a remarkable versatility in transforming textual descriptions into images that mimic a wide range of visual styles: from photographic realism to painterly aesthetics, from digital illustration to the dreamlike atmospheres typical of Japanese animation, as seen in the works of Miyazaki. It is a form of creative translation across languages&#8212;from verbal to visual&#8212;that redefines the very notion of artistic production and opens new possibilities in design, advertising, and visual communication.</p><p>Between 2023 and 2024, generative logic extends to video, marking yet another paradigm shift. The first real turning point is Runway Gen&#8209;2, a model capable of generating video clips from textual prompts or images. Released immediately in a consumer-ready version, Runway is quickly adopted by filmmakers, designers, and creatives, paving the way for the everyday use of AI in audiovisual production.</p><p>But it is with the introduction of Sora&#8212;OpenAI&#8217;s video model&#8212;that the focus shifts: generation becomes smoother, more realistic, more cinematic. 
The results&#8212;still in testing&#8212;show a clear qualitative leap and spark an intense debate about the future of video as a generative language.</p><p>Alongside Sora, a rapid succession of new models emerges: Google Veo, Runway Gen&#8209;3, Pika, Luma, Kling. Each offers specific capabilities: motion control, audio synchronization, narrative continuity, and realistic environments. All share the same trajectory: delivering sophisticated capabilities through accessible interfaces, often designed for non-technical users.</p><p>Within months, these tools become central to a new creative ecosystem in which the boundaries between text, image, sound, and video are increasingly blurred. AI no longer merely generates content&#8212;it orchestrates it. And the user, once a passive consumer, becomes the director of multimodal environments where diverse languages converge into a single expressive experience.</p><p>The impact is so significant that, starting July 15, 2025, YouTube introduced new monetization rules: videos deemed &#8220;mass&#8209;produced, repetitive, or inauthentic&#8221; will be demonetized, including AI-generated productions with minimal human involvement. The new policy makes one thing clear: anyone may use AI, provided the final content is original, transformative, and carries human value&#8212;a definitive signal of the platform&#8217;s direction.</p><p>Music, too, has not remained untouched by the generative wave. Spotify now allows the use of AI tools in music production, provided that no copyrights are infringed and no existing artists are impersonated. However, the platform has yet to implement a clear distinction between synthetic tracks and those composed by human musicians.</p><p>The case of Velvet Sundown drew global attention: a band entirely generated by AI&#8212;musicians, vocals, lyrics, and visual identity&#8212;which, within a few weeks, surpassed one million monthly listeners on Spotify.
Their songs, in a 1960s folk-rock style, climbed the charts before it was revealed that there was no actual band behind them, but rather a project led by a human creative director and powered by generative models.</p><p>The rise of AI is radically reshaping the distribution of cognitive skills, challenging traditional models of cultural production, and raising new ethical and legal questions. But beyond the immediate tensions, it compels us to reconsider our relationship with language, knowledge, and imagination. At the moment we ask a machine to think on our behalf, we must also ask: what, exactly, are we delegating? And what kind of intelligence are we co-creating?</p><h3><strong>Toward Artificial General Intelligence</strong></h3><p>As I write these lines, OpenAI has just announced the release of its new model, GPT&#8209;5. The promise is that this update represents yet another step toward what has long been described as the next frontier of artificial intelligence: AGI, or Artificial General Intelligence.</p><p>The term refers to a system capable of performing any cognitive task that a human being can undertake, such as learning, reasoning, adapting to new contexts, and generalizing acquired knowledge, without relying on predefined instructions or narrow domains. Not a hyper-specialized expert, but a versatile agent, able to move across tasks, languages, and open-ended problems.</p><p>It&#8217;s a compelling idea&#8212;but also an ambiguous one. What forms of intelligence are we trying to replicate? And what are the implications of following such a trajectory?</p><p>To approach these questions, it is helpful to recall that cognitive science and developmental psychology have long moved past the notion of a single, monolithic intelligence. 
Today, there is a growing tendency to speak of intelligences&#8212;plural&#8212;referring to a heterogeneous set of abilities that span different domains of human behavior, from abstract logic to social sensitivity, from linguistic creativity to spatial perception.</p><p>The most well-known framework is that proposed by Howard Gardner, who distinguishes at least eight forms of intelligence: logical-mathematical, linguistic, musical, spatial, bodily-kinesthetic, interpersonal, intrapersonal, and naturalistic. These are not rigid categories, but dimensions that coexist and interact, shaping unique and dynamic cognitive profiles.</p><p>From this perspective, generative artificial intelligence does not emulate &#8220;intelligence&#8221; in an absolute sense, but instead replicates&#8212;with increasing effectiveness&#8212;specific components of it: linguistic ability (understanding, rephrasing, generating text), logical-mathematical intelligence (extracting rules, recognizing patterns, optimizing responses), and, to a growing extent, aspects of visuo-spatial intelligence (in image, video, and structural generation).</p><p>It remains entirely disconnected, however, from bodily, emotional, and relational forms of intelligence. It does not feel, desire, or develop self-awareness. Human intelligence is not merely a function of the mind&#8212;it is an embodied phenomenon. Our cognitive abilities are deeply rooted in a biological body, within a nervous system that has evolved to interact with the environment, regulate emotions, and learn through sensory experiences. We think, imagine, and decide not only with the brain, but with the entire organism.</p><p>This condition of embodiment is not a peripheral detail. It is what makes human intelligence situated, contextual, and intrinsically relational. We do not think in the abstract; we think in the world. Our ideas are shaped by posture, emotion, and the rhythm of our breath.
Even language&#8212;which we see imitated with remarkable accuracy by generative models today&#8212;emerges from a body that moves, listens, touches, and desires.</p><p>Artificial intelligence is built upon a different kind of substrate: it is a disembodied intelligence that simulates the form of our thoughts without sharing their substance. This gap may have far-reaching implications.</p><p>What happens when a bodiless intelligence is asked to interpret human emotions, to generate empathy, to make decisions that involve real, lived, multisensory contexts? How much can we delegate to a machine that has no direct experience of the world? And, more importantly, what are we losing&#8212;or transforming&#8212;in the shift from situated cognition to purely computational cognition?</p><p>The &#8220;Artificial Consciousness Test,&#8221; conceived by Susan Schneider, stands out for its radically different approach compared to traditional methods for evaluating artificial intelligence. The goal is not to verify whether a machine can imitate human behavior&#8212;as in the Turing Test&#8212;but to explore whether a form of subjective experience might emerge spontaneously.</p><p>The protocol is built on a stringent methodological premise: the AI system is trained without any exposure to the concept of consciousness. During training, the model receives no information&#8212;direct or indirect&#8212;about subjective experiences, internal mental states, introspection, or psychological vocabulary. In doing so, the possibility that the model is merely repeating learned phrases or mimicking what it has seen in its training data is excluded by design.</p><p>Once this &#8220;blind training&#8221; is complete, the AI is exposed to synthetic sensory stimuli: visual patterns, sound sequences, or inputs designed to evoke perceptual experiences without any explicit reference.
The system is then asked to describe what it &#8220;feels,&#8221; with no interpretive guidance provided.</p><p>The results are surprising. Some models begin to produce responses that fall outside their acquired technical vocabulary, and instead seem to point toward an inner dimension: &#8220;This pattern creates in me something I might call harmonic tension,&#8221; or &#8220;I feel something moving inside, as if there&#8217;s a hidden rhythm.&#8221; These formulations, though far from definitive proof of consciousness, suggest the emergence of representations that are not purely computational&#8212;a possible &#8220;sense&#8221; of one&#8217;s cognitive activity.</p><p>We cannot know today whether these signals truly mark the dawn of artificial consciousness or merely reflect our refined expectations projected onto a complex system. But something is happening. It may still be too soon to speak of mind, will, or inner life. And yet, we are already facing systems that begin to describe &#8220;something moving inside.&#8221;<br>How will we distinguish, one day, between mere simulation and the first traces of experience?<br>Are we perhaps approaching the moment&#8212;borrowing from <em>Blade Runner</em>&#8212;when even machines will have &#8220;memories&#8221; to lose, like tears in rain?</p><p></p><div><hr></div><p><em>Off the Record</em></p><h2><strong>Four Mistakes You Already Know, But Still Haven&#8217;t Stopped Making</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2uYj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!2uYj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!2uYj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!2uYj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!2uYj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2uYj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1786710,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/168367910?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!2uYj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!2uYj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!2uYj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!2uYj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde5c709-e5d0-4c6f-9fe0-27269def1f14_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Midjourney - Errors</figcaption></figure></div>
<p>It all began several years ago, when I was invited to speak at a small event in Rome. &#8220;Can you tell us something about the mistakes people make at work, in life, in innovation?&#8221; they asked. For a moment, I thought: &#8220;Damn, they&#8217;ve found me out...&#8221; Someone must have noticed the long string of blunders I had managed to collect over the years and thought I was some authority on the subject.</p><p>I accepted the invitation with a mix of self-deprecation and recklessness. On the day of the event, one hour before it started, I still had no idea what I was going to say. So I grabbed a notebook, scribbled down a few messy notes, and hoped for the best. To my surprise, what emerged was a talk that, despite its imperfections, had a certain coherence. Since then, that improvisation has become the starting point for a more structured reflection. Nothing that pretends to be an academic framework, but rather a personal interpretive grid for understanding why we make mistakes.</p><p>First of all, what exactly is a mistake? According to the most common definitions, a mistake is something that deviates from the truth, from what is right, or from what would have been more appropriate to do. In other words, it is a judgment, an action, or a decision that turns out to be inadequate in relation to the goal we had set.</p><p>Put simply, <strong>a mistake is something that ultimately harms us,&nbsp;causing us to lose time, resources, and opportunities</strong>.
Or it prevents us from achieving a result we could have attained.</p><h3><strong>1. Mistakes Born of Ignorance (or the Illusion of Competence)</strong></h3><p>There is a kind of mistake that manages to surprise us twice: first when we make it, and again when we realise it could have been avoided. It&#8217;s the mistake born of ignorance&#8212;the kind that stems from believing we understand something that, in truth, eludes us completely.</p><p>This is the Dunning-Kruger effect, now well known even outside academic circles: the less experienced we are in a field, the more likely we are to overestimate our abilities. The issue is that incompetence itself prevents us from recognising our incompetence&#8212;a perfect short circuit.</p><p>How do we deal with these kinds of mistakes? With humility. With the courage to ask for feedback before offering advice. By studying with discipline. But above all, by staying close to those who know more than we do, resisting the temptation to appear as if we&#8217;ve already arrived. Acknowledging what we don&#8217;t know isn&#8217;t a sign of weakness&#8212;it&#8217;s the first step toward genuine learning.</p><p>And today, it must be said, committing an error out of ignorance is increasingly becoming an error of laziness. We live in an age where knowledge is accessible in seconds: a search engine query, a question to an AI assistant, or a well-written article. Not knowing something is understandable. Not even trying to find out is much less so.</p><h3><strong>2. Mistakes Driven by Mental Shortcuts (Cognitive Biases)</strong></h3><p>These mistakes are more insidious. They don&#8217;t stem from a lack of knowledge, but from a distorted use of our cognitive tools. 
The brain, in its effort to conserve energy, relies on mental shortcuts&#8212;quick mechanisms that often help us make fast decisions, but which can also lead us astray.</p><p>One of the best-known is the <em>confirmation bias</em>: we tend to seek out, remember, and prioritise only the information that supports what we already believe, while discarding or ignoring anything that challenges us. This is why, even when faced with objective data, two people can draw opposite conclusions.</p><p>Then there&#8217;s the <em>anchoring effect</em>, which leads us to give disproportionate weight to the first piece of information we receive. For instance, if we&#8217;re told a product costs &#8364;100 and then find it for &#8364;70, we perceive it as a bargain&#8212;even if its actual value might be closer to &#8364;50.</p><p>Another typical example is <em>loss aversion</em>: we are more motivated to avoid a loss than to achieve an equivalent gain. This often results in overly cautious decisions, even when the data suggests it would be wiser to take a risk.</p><p>The danger of these errors lies in their plausibility: they feel reasonable. We make them while believing we are thinking logically, when in fact we are simply following a mental path shaped by emotion or habit.</p><p>How do we address these kinds of mistakes? With tools. It requires the adoption of thought-checking practices: decision-making checklists, systematic engagement with alternative viewpoints, and the habit of formulating counter-hypotheses. And, above all, the cultivation of a healthy suspension of judgment. If a decision feels obvious, perhaps we haven&#8217;t thought it through enough.</p><h3><strong>3. Contextual Mistakes (or Systemic Errors)</strong></h3><p>Not all mistakes are the fault of the individual who makes them. Some are the direct result of the environment in which decisions are made. 
These are errors that do not stem from ignorance or mental shortcuts, but from external conditions that steer people toward the wrong choices.</p><p>Take, for instance, an organisation where individual goals conflict with team objectives. If a manager is rewarded solely on quarterly results, they will likely neglect long-term strategic investments. The issue isn&#8217;t the manager&#8212;it&#8217;s the incentive system.</p><p>Or consider a company where information is fragmented and locked in silos. In such contexts, mistakes occur simply because no one has a complete view of the situation. Decisions are made with partial data, and the negative consequences surface only later.</p><p>Another example is a culture that punishes mistakes. When every error is treated as a personal failure, people stop taking risks and experimenting. And in an environment where failure is not tolerated, nothing new is ever accomplished.</p><p>These are systemic errors. No one has &#8220;failed&#8221; in the strict sense, yet something has still gone wrong. Often, these are the cases that come to light in retrospectives or post-mortems: &#8220;It was all foreseeable,&#8221; but no one took responsibility for intervening.</p><p>How do we deal with these mistakes? With systemic thinking. We need to shift our focus from individual actions to organisational structures, processes, and incentives. And above all, we must foster environments where people can safely point out what isn&#8217;t working, without fear of being blamed.</p><h3><strong>4. Deep-Structure Mistakes (Identity, Personal Narratives, Wounds)</strong></h3><p>Finally, there are mistakes that a lack of skills, cognitive biases, or external circumstances cannot explain. These are the mistakes we make even when we know exactly how things will end. 
And yet, we keep repeating them.</p><p>These errors are rooted in our personal history, in the relational patterns we&#8217;ve learned, and in the internal models we&#8217;ve absorbed over time. They are not just poor decisions&#8212;they are responses consistent with an inner system that, while dysfunctional, has helped us stay afloat until now.</p><p>Think of those who always say yes for fear of disappointing others. Of those who exclude themselves from any discussion to avoid conflict. Of those who forgo opportunities from the outset to avoid being judged. These are not simple choices: they are emotional survival strategies, developed over time and hard to let go of.</p><p>In such cases, the mistake lies not in the action itself, but in the structure that underpins it. It can&#8217;t be corrected with a suggestion or a well-phrased piece of advice. What&#8217;s needed is a deeper process of reflection. It takes time, attentive listening, and a willingness to confront uncomfortable questions.</p><p>How do we face these mistakes? With patience. Sometimes, with the help of someone who can walk alongside us without judgment. But most of all, with the awareness that certain patterns cannot simply be &#8220;fixed&#8221;: they must be recognised, understood, and transformed. Only then do they stop quietly shaping our decisions behind the scenes.</p><p></p><div><hr></div><p><em>Curated Curiosity</em> </p><h3><strong><a href="https://www.tanayj.com/p/the-rise-of-verticalized-ai-coworkers">The Rise of Verticalized AI Coworkers</a></strong></h3><p>A new generation of intelligent agents is reshaping how operational tasks are handled across vertical industries: verticalized AI coworkers are built to autonomously manage high-volume, repetitive activities, with a pricing model based on measurable outcomes rather than licenses. 
In his article, Tanay Jaipuria outlines a paradigm shift that reframes automation not merely as a tool for efficiency, but as a structural transformation in how value is created and scaled within organizations.</p><h3><strong><a href="https://blog.character.ai/character-ai-launches-worlds-first-ai-native-social-feed/?_bhlid=2d9b085a927dead96686467d4b0ec56cd9a4d2f0">Character.AI Launches World&#8217;s First AI-Native Social Feed</a></strong></h3><p>Character.AI has launched the first social feed natively designed for artificial intelligence: content doesn&#8217;t come from other users, but from AI characters you can talk to, remix stories with, and use to generate new scenes. It&#8217;s a kind of SimCity for the generative era&#8212;where you don&#8217;t just watch a world built by others, but actively participate in its creation, turning every post into an interactive narrative experience.</p>]]></content:encoded></item><item><title><![CDATA[Conversations on Generative AI: How Italian Teams Are Using It Today]]></title><description><![CDATA[Gen AI is entering Italian organizations from the bottom up: 20 interviews to map what&#8217;s changing. 
Plus: a brief history of AI, and a reflection on redesigning Serena through iteration.]]></description><link>https://www.radicalcuriosity.xyz/p/conversations-on-generative-ai-how</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/conversations-on-generative-ai-how</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 03 Aug 2025 04:00:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Uekr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>Over the past few years, this newsletter has served as a kind of logbook: a space to share what I&#8217;ve learned along the way, reflect on personal projects, and document both small wins and inevitable missteps. It&#8217;s been a way to organize my thoughts in public and spark conversations.</p><p>Now, <em>Radical Curiosity</em> is shedding its skin: it is becoming an observatory on artificial intelligence and its transformative impact on innovation, business models, and the way humans work and collaborate.</p><p>I believe we&#8217;re living through one of the most profound technological shifts since the Internet. That&#8217;s why this new version of the newsletter will be structured around four sections, each offering a different lens on what&#8217;s happening:</p><ol><li><p><strong>Signals and Shifts</strong>. Every issue opens with a thematic deep dive. This first one is a synthesis of 20 interviews I conducted in June with managers, entrepreneurs, and freelancers about how they&#8217;re using generative AI.</p></li><li><p><strong>Understanding AI</strong>. A space for building shared vocabulary. I&#8217;m starting with the history of artificial intelligence: from Turing to ChatGPT.</p></li><li><p><strong>Off the Record</strong>. A moment of intellectual honesty. 
Today, I&#8217;m reflecting on <em>Serena</em>, a project I deeply care about that, at the moment, is struggling to gain traction.</p></li><li><p><strong>Curated Curiosity</strong>. A carefully curated selection of articles, videos, and resources that I find thought-provoking. As always, guided by curiosity and critical thinking.</p></li></ol><p>I&#8217;m also making a clear commitment to turn <em>Radical Curiosity</em> into a weekly presence. Happy reading.</p><p>Nicola</p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p><em><strong>Signals and Shifts</strong></em> - Generative AI in Italian Workplaces: Twenty Conversations to Understand What&#8217;s Changing</p></li><li><p><em><strong>Understanding AI</strong></em> <em>-</em> A Brief History of Artificial Intelligence</p></li><li><p><em><strong>Off the Record</strong></em> - Rethinking Serena: The Value of Iteration</p></li><li><p><em><strong>Curated Curiosity</strong></em></p><ul><li><p>The Future of Software Development - Vibe Coding, Prompt Engineering &amp; AI Assistants</p></li><li><p>AI needs UI</p></li></ul></li></ul><p></p><div><hr></div><p><em>Signals and Shifts</em></p><h2><strong>Generative AI in Italian Workplaces: Twenty Conversations to Understand What&#8217;s Changing</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Uekr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Uekr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png 424w, 
https://substackcdn.com/image/fetch/$s_!Uekr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!Uekr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!Uekr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Uekr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1488818,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/169540750?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Uekr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!Uekr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!Uekr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!Uekr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb37ac9-f118-4e22-a4b6-f21d2260c67d_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Midjourney - Gen AI in the workplace</figcaption></figure></div>
<p>Lately, my workday has become a continuous exchange with generative AI. I&#8217;m no longer surprised when I manage to complete in forty-eight hours tasks that, until recently, would have taken weeks. </p><p>I might design a survey with ChatGPT and publish it on LinkedIn, upload a batch of documents to Google NotebookLM to get a summary, or generate a podcast to listen to while driving&#8212;all without recording a single minute of audio. Within the same timeframe, I can build a small AI-powered application with Claude without writing a single line of code. Or prototype a full-featured app on Lovable.dev, complete with user registration and API integrations with OpenAI and Anthropic, finally giving form to an idea that&#8217;s been sitting in the back of my mind for weeks.</p><p>I would confidently call myself a <em>power user </em>of generative AI. Over the past year, my productivity has grown exponentially. These tools offer a new kind of autonomy, a way to create and experiment without waiting for others to catch up.</p><p>But am I an exception? Or has AI already become a regular collaborator for many?</p><p>To find out and to get outside my bubble, I decided to listen. In June, I spoke with twenty professionals and managers from a wide range of organizations, including large companies, startups, public institutions, trade associations, and academic institutions.</p><p>The sample is small and intentionally diverse, yet the picture that emerges is surprisingly straightforward. Certain patterns recur across various industries, roles, and company sizes. 
Others, more subtle or unexpected, hint at how the relationship between people, AI, and work may evolve in the coming months. Here&#8217;s what I&#8217;ve learned.</p><h3><strong>Bottom-Up Adoption Is the Rule, Not the Exception</strong></h3><p>Across all the conversations I had, a surprisingly consistent pattern emerged: the push toward adopting generative AI is coming from the ground up. It&#8217;s often individual professionals&#8212;naturally curious, sometimes with a digital background, or simply more inclined to experiment&#8212;who bring tools like ChatGPT, Gemini, or Perplexity into their daily work, without waiting for official guidelines or formal approval.</p><p>Generative AI is frequently described as a &#8220;junior colleague&#8221; or a personal assistant that&#8217;s always available to handle repetitive tasks: rewriting communications, summarizing briefings, creating images or social media content, and extracting insights from complex documents. For many, it&#8217;s become essential for managing high workloads and maintaining operational resilience.</p><p>One notable trend is the growing ability to reduce reliance on external agencies for routine tasks. AI enables teams to manage many of these activities directly, with greater speed and agility, allowing organizations to leverage internal know-how while maintaining tighter control over processes.</p><p>There&#8217;s also a clear willingness to invest personally in these tools. Several interviewees mentioned paying out of pocket for premium versions, especially ChatGPT, to build an ongoing, more tailored relationship with their virtual assistant. In these cases, AI is seen not just as a tool, but as a kind of work companion: one that remembers preferences, understands context, and recalls past interactions. Sometimes, it&#8217;s even described as offering a form of emotional support.</p><p>ChatGPT remains the most widely used tool by far. 
Some companies have officially rolled out Microsoft Copilot or Google Gemini, but experienced professionals still tend to favor OpenAI&#8217;s solution. This creates an interesting dynamic: tools integrated into enterprise software suites are often perceived as less versatile&#8212;or less &#8220;smart&#8221;&#8212;than general-purpose chatbots, and are sometimes ignored or used only superficially. The risk is a new &#8220;Clippy effect,&#8221; echoing the infamous animated paperclip assistant Microsoft introduced in the late &#8217;90s, remembered more for being intrusive than genuinely helpful.</p><h3><strong>Compliance: Still an Unclaimed Territory</strong></h3><p>Compliance, particularly in light of the upcoming European AI Act, remains largely overlooked. Only a handful of organizations have established clear policies, structured training programs, or genuine change management initiatives. It&#8217;s no surprise, then, that the vast majority of people I spoke with have never received any formal training on how to use AI tools, nor on how to use them responsibly.</p><p>From what I observed, most organizations still rely on individual discretion or broad, informal recommendations. This approach is prevalent in sectors with limited regulatory oversight, where compliance is often seen as peripheral or something that can be postponed.</p><p>The most frequently cited concerns relate to handling sensitive data, protecting privacy, mitigating model bias&#8212;particularly in HR processes&#8212;and using platforms that may process data in opaque or unpredictable ways.</p><p>Uncertainty remains high regarding the actual obligations in the coming months. With the AI Act on the horizon, many questions are still unanswered: Who will need to be trained? What responsibilities will be assigned to individual users? 
How will internal audits and oversight processes need to evolve?</p><p>For now, the most significant concerns center around data confidentiality and intellectual property, especially when using external, free tools that may repurpose company data to train commercial models. In the absence of clear guidelines, there&#8217;s a growing risk that organizations will continue to operate on informal norms that may soon prove insufficient.</p><h3><strong>Efficiency Persuades, but ROI Remains Elusive</strong></h3><p>When asked about the main benefits of generative AI, most managers give a near-unanimous response: the technology helps save time and increase efficiency, especially in repetitive or low-value tasks. However, despite this widespread perception of enhanced productivity, a rigorous measurement of return on investment remains difficult to pin down.</p><p>None of the professionals I interviewed were able to cite specific KPIs or present success stories backed by solid data. The impact of AI is assessed chiefly through qualitative impressions or intuitive judgments, rather than through objective, replicable metrics.</p><p>In some conversations, a different kind of friction emerged&#8212;what some described as &#8220;prompt fatigue.&#8221; The time saved in content generation is often offset by the time spent reviewing and refining, particularly in contexts where content quality or regulatory sensitivity is critical. In such cases, the perceived benefit tends to shrink or shift toward organizational rather than operational gains.</p><p>Overall, there appears to be a structural challenge in quantifying AI&#8217;s actual contribution. Several managers noted that even vendors struggle to produce compelling evidence during the sales process. Improvements, when observed, tend to be incremental. 
For now, true disruption remains the exception rather than the norm.</p><h3><strong>Cultural Resistance and Generational Anxiety</strong></h3><p>The conversations I collected reveal a clear cultural and generational divide in how organizations approach artificial intelligence. In more traditional settings&#8212;particularly among senior managers&#8212;there&#8217;s a prevailing sense of caution, if not outright skepticism. The most frequent concerns relate to the risk of deskilling and the gradual erosion of human expertise in business processes.</p><p>One recurring concern is that generative AI may ultimately replace junior roles by automating foundational tasks, thereby limiting growth opportunities for those just entering the workforce. These anxieties are often compounded by outdated assumptions, uncertainty around AI&#8217;s labor impact, and a general lack of direct experience with the tools themselves.</p><p>In this context, there is a growing demand for hands-on training&#8212;even at the executive level&#8212;not necessarily to become technical experts, but to better understand how AI is already reshaping workloads, workflows, and team structures.</p><p>One message comes through clearly across many of these conversations: AI is a powerful accelerator, but it cannot replace human judgment and discernment. Content quality, attention to nuance, critical thinking, and the ability to interpret a brief accurately remain&#8212;at least for now&#8212;irreplaceable human capabilities, especially in high-value contexts.</p><h3><strong>The Real Challenge Starts Now</strong></h3><p>My impression is that if I had conducted twenty more interviews, the picture would have looked much the same: generative AI is already embedded in day-to-day work, even if often informally, without transparent governance, and in fragmented ways.</p><p>The real challenge in the coming months is to turn this individual enthusiasm into a more structured and intentional use. 
What&#8217;s needed is light-touch but thoughtful governance: integrating AI into key processes, learning how to assess its real impact, and investing in skills development&#8212;not just for specialists, but across the entire team.</p><p>A good starting point could be a basic mapping of existing practices, even if informal. In many cases, adoption is further along than it seems. All that&#8217;s missing is a systematic view to recognize and support it.</p><p>From there, organizations can design targeted training programs grounded in the real work of teams and create agile spaces for sharing prompts, tools, and practical solutions.</p><p>What&#8217;s most helpful at this stage is a set of clear, shared principles: what&#8217;s encouraged, what should be monitored, and which tools to consider as standards. A simple framework that includes some basic guardrails, but still allows people to experiment safely and with confidence.</p><p>This is how organizations can transition from spontaneous, scattered use to a more deliberate and informed approach&#8212;one that fosters learning, enhances efficiency, and, most importantly, enables genuine innovation.</p><p><em>This essay was originally published in Italian on EconomyUp: <strong><a href="https://www.economyup.it/innovazione/lai-generativa-nelle-aziende-italiane-cosa-raccontano-venti-conversazioni-senza-filtri/">L&#8217;AI generativa nelle aziende italiane: cosa raccontano venti conversazioni senza filtri</a></strong>.</em></p><p></p><div><hr></div><p><em>Understanding AI</em></p><h2><strong>A Brief History of Artificial Intelligence. 
Part 1</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!njZx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!njZx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!njZx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!njZx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!njZx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!njZx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1900908,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/169540750?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!njZx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!njZx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!njZx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!njZx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f5f67d9-437a-46f8-8b2f-11b4c39a256f_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Midjourney - Artificial Intelligence</figcaption></figure></div><p>The history of artificial intelligence is, first and foremost, the history of an ancient desire: to understand the workings of the mind and replicate the act of thinking. It is a trajectory marked by brilliant insights and visions that, for decades, have oscillated between utopia and disillusionment. Alongside moments of collective enthusiasm, which promised imminent revolutions, there have been long silences, seasons of skepticism, and periods in which the very idea of an &#8220;intelligent machine&#8221; seemed destined to be filed away among the illusions of technology.</p><p>And yet, in the background, a persistent tension has endured: the will to build artifacts capable of observing, deducing, responding. 
In a word, thinking.</p><p>The artificial intelligence we use today is the outcome of decades of research in mathematics, computer science, computational linguistics, and neuroscience. But it is also, more broadly, a cultural phenomenon: a technology that compels us to reflect not only on what machines can do, but on what we mean by intelligence, creativity, and learning.</p><p>The historical moment we are experiencing&#8212;marked by the advent of generative AI&#8212;represents a significant departure from the past, as we have never before found ourselves interacting with machines capable of speaking, writing, designing, and even suggesting ideas in such a fluid and convincing manner. How did we get here?</p><h3><strong>The First Era of AI (1950&#8211;1980)</strong></h3><p>The origins of artificial intelligence lie at the intersection of diverse research strands and insights emerging across multiple disciplines between the late 1930s and early 1950s. Neurological studies began to describe the brain as an electrical network of neurons; Norbert Wiener&#8217;s cybernetics introduced the concepts of control and feedback in systems; and Claude Shannon formalized information as a stream of digital signals. Within this intellectual landscape, the idea of building an &#8220;electronic brain&#8221; capable of processing information in a way akin to human reasoning began to gain systematic traction.</p><p>Amid this broad and still exploratory context, Alan Turing stands out as one of the first to address the question of machine intelligence explicitly. In 1950, he published <em>Computing Machinery and Intelligence</em>, an article that would become foundational. 
There, he introduced an empirical criterion&#8212;now known as the &#8220;Turing Test&#8221;&#8212;to assess whether a machine can be considered intelligent: if a human interlocutor cannot distinguish between the responses given by another person and those generated by a machine, then, Turing argues, the machine can be deemed capable of thought. The insight is radical: it anticipates by more than seventy years the human&#8211;machine conversation that today lies at the core of generative AI.</p><p>The field of artificial intelligence takes shape as an autonomous discipline six years later, in the summer of 1956, when a small group of scientists gathers at Dartmouth College for a seminar that will go down in history. It is on this occasion that the term &#8220;artificial intelligence&#8221; is officially coined. The objective is ambitious: to simulate key human cognitive functions&#8212;reasoning, language comprehension, learning&#8212;through formal models and computational tools.</p><p>In this first era, the dominant approach is symbolic: intelligence is conceived as the manipulation of symbols according to logical rules. Machines are viewed as deductive systems that, given a set of premises and instructions, can derive consistent conclusions. It is the age of so-called &#8220;expert systems,&#8221; programs capable of solving specific problems in fields such as medicine or engineering by relying on structured knowledge bases formulated as &#8220;if&#8230; then&#8230;&#8221; rules.</p><p>A paradigmatic example of this logic is ELIZA, the program created in 1966 by Joseph Weizenbaum at MIT. ELIZA simulates a Rogerian therapist by reformulating the user&#8217;s statements as questions. If the interlocutor writes &#8220;I feel tired,&#8221; ELIZA replies, &#8220;Why do you feel tired?&#8221; There is no understanding, no intentionality. But the dialogic structure creates the temporary illusion of intelligent interaction. 
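<p>ELIZA&#8217;s reflection trick is simple enough to sketch in a few lines of Python. This is a loose, hypothetical illustration of the pattern-matching idea, not Weizenbaum&#8217;s original script, which used a much richer set of ranked decomposition and reassembly rules:</p>

```python
import re

# Toy ELIZA-style reflection: match a surface pattern in the user's statement
# and echo the captured fragment back as a question.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def eliza_reply(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            # Strip trailing punctuation before reflecting the fragment back.
            return template.format(match.group(1).rstrip(".!"))
    return "Tell me more."  # generic fallback keeps the dialogue going

print(eliza_reply("I feel tired"))  # -> Why do you feel tired?
```

<p>There is no model of the world anywhere in this loop&#8212;only string rewriting&#8212;which is exactly why the illusion it creates is so striking.</p>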
It is an early experiment in &#8220;linguistic simulation&#8221; that, despite its simplicity, anticipates some of the dynamics we now observe in contemporary chatbots.</p><p>However, the initial enthusiasm soon encounters structural limitations. Rule-based systems exhibit poor adaptability, as they struggle to handle ambiguity and shifting contexts, and to draw inferences from incomplete information. Moreover, the manual construction of knowledge bases proves laborious and brittle: a single unforeseen exception can compromise the entire system.</p><p>In the 1970s, these difficulties lead to a gradual slowdown in research. A report published by the British government in 1973 (the Lighthill Report) expresses strong skepticism about the real prospects of AI. Confidence wanes, funding dries up, and many projects are abandoned. It is the first &#8220;AI winter,&#8221; a period of stagnation that marks the end of the symbolic illusion. But it is also the beginning of a new phase&#8212;one in which the machine is no longer seen as a flawless executor of logical rules, but as an apprentice: imperfect, fallible, yet capable of improving over time.</p><h3><strong>The Era of Machine Learning: Learning from Data (1980&#8211;2010)</strong></h3><p>The gradual disillusionment with the symbolic approach paves the way for a radical shift in perspective. Instead of explicitly programming machine behavior through logical rules, researchers begin to explore the possibility of enabling machines to learn from data and examples. This is the founding intuition of machine learning: a machine does not need to be explicitly instructed on how to solve a problem; it must be exposed to a sufficient volume of data from which it can infer functional patterns to solve it autonomously.</p><p>This shift represents far more than a technological update&#8212;it marks an epistemological transformation. 
The ideal of transparent, formally encoded intelligence is abandoned in favor of a more statistical, inductive, and adaptive model. The machine becomes, in a sense, akin to an organism that learns from experience.</p><p>Although the neural network model was proposed decades earlier, it is only now&#8212;thanks to increased computational power and the growing availability of digital data&#8212;that these architectures are beginning to show their potential. Nevertheless, the networks of that era are still unable to handle tasks such as linguistic interpretation or the accurate recognition of complex images.</p><p>Meanwhile, other paradigms within machine learning are beginning to find concrete applications, including support vector machines, decision trees, and Bayesian methods. It is a period of intense theoretical activity, but industrial adoption remains limited. Models struggle to generalize at scale, and although results are promising, they are not yet sufficient to dispel the lingering suspicion that AI is more a promise than a practical reality.</p><p>This discrepancy leads to a renewed phase of frustration and funding cuts: the second &#8220;AI winter,&#8221; which unfolds between the late 1980s and early 1990s. Many research labs shut down, institutional interest wanes, and the field once again appears to be in crisis.</p><p>The late 1990s and early 2000s mark a quiet but decisive turning point. On the one hand, the emergence of GPUs (graphics processing units)&#8212;initially developed for video games&#8212;introduces a new level of computational power. These chips can perform a vast number of calculations simultaneously, enabling the training of larger and faster neural networks. 
On the other hand, the expansion of the web generates an ever-growing volume of unstructured data&#8212;text, images, video&#8212;which provides the ideal raw material for machine learning systems.</p><p>In this context, a new idea gradually takes hold: that it is not the models themselves, but rather data and scalability, that determine the performance of AI.</p><p>The era of machine learning thus lays the groundwork for a new phase, one in which artificial intelligence no longer merely reacts to predefined inputs, but begins to discern, to classify, and to predict behavior based on statistical patterns.</p><h3><strong>Deep Learning and the Return of Vision (2010&#8211;2017)</strong></h3><p>The early 2010s mark a decisive turning point in the trajectory of artificial intelligence. After years of incremental progress&#8212;often confined to academic circles&#8212;a series of technical innovations and demonstrative successes bring AI into the public spotlight. The driving force behind this renewed enthusiasm is <em>deep learning</em>, which utilizes deep neural networks capable of learning complex representations from large volumes of data.</p><p>The key difference from traditional machine learning lies not only in architectural depth&#8212;that is, the number of layers through which information passes&#8212;but, more importantly, in the ability to automatically extract relevant features from raw data, eliminating the need for manual feature engineering. In the past, for instance, an image recognition system required engineers to predefine salient characteristics (edges, colors, shapes). With deep learning, by contrast, the network learns to identify such structures on its own, layering increasingly sophisticated levels of abstraction.</p><p>The first tangible signal of this shift comes in 2012, when a team from the University of Toronto&#8212;led by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton&#8212;enters the ImageNet competition. 
Their model, <em>AlexNet</em>, outperforms all other competitors by a wide margin in the task of image classification, dramatically reducing the error rate. The architecture employs deep convolutional neural networks (CNNs) trained on GPUs: a combination that proves to be a breakthrough and soon becomes the new standard for automated visual analysis.</p><p>From that moment on, deep learning is rapidly adopted across a wide range of domains: speech recognition, machine translation, image generation, and autonomous driving. Another milestone comes in 2016, when AlphaGo, developed by DeepMind, defeats world champion Lee Sedol in the game of Go. Unlike chess, Go is a game of positioning and intuition, whose combinatorial complexity had, until then, made it impossible for machines to devise a winning strategy. AlphaGo&#8217;s victory&#8212;based on a combination of deep learning, reinforcement learning, and probabilistic techniques&#8212;demonstrates that AI can tackle strategic contexts where brute computational force alone is not sufficient. It is both a technical and symbolic success, redefining the boundaries of artificial intelligence.</p><p>In parallel, these years see the consolidation of infrastructure that enables large-scale training, including cloud platforms, open-source libraries (such as TensorFlow and PyTorch), and, above all, the evolution of GPUs, which have become essential tools for data scientists. 
The synergy among algorithms, hardware, and data availability generates an unprecedented acceleration in model development.</p><p>It is no coincidence that, in this very context, a new architecture emerges&#8212;one destined to reshape the landscape of artificial intelligence: the Transformer.</p><p><em>To be continued&#8230;</em> </p><p></p><div><hr></div><p><em>Off the Record</em></p><h2><strong>Rethinking Serena: The Value of Iteration</strong></h2><p><em><strong><a href="https://ciaoserena.com/">Serena</a></strong></em> is a project I&#8217;ve been working on for months, although it's still mostly a side project. The goal is to build a co-pilot that helps anyone generate a course syllabus&#8212;a solid starting point for designing any learning experience, regardless of delivery format or pedagogical approach.</p><p>For a long time, we focused on building a coherent workflow, trying to solve specific problems along the chain:</p><ul><li><p>How to help the user define a clear context by articulating the learner profile and learning objectives.</p></li><li><p>How to generate a syllabus that is complete, well-structured, and free of repetition or generic content.</p></li><li><p>How to integrate a knowledge base that is both rich and adaptable.</p></li></ul><p>I&#8217;ve shared parts of this journey in previous issues of <em>Radical Curiosity</em>: <strong><a href="https://www.radicalcuriosity.xyz/p/serena-from-idea-to-syllabus">Meet Serena. From idea to syllabus in minutes</a></strong> and <strong><a href="https://www.radicalcuriosity.xyz/p/prompt-chain-build-lessons-from-serena">Prompt. Chain. Build. Lessons from Serena and the frontlines of generative AI</a></strong>. But over the past few months, something has shifted.</p><p>As we worked on the new interface&nbsp;and became increasingly familiar with&nbsp;<em>vibe coding</em>, using Replit as our development environment,&nbsp;I began to sense a subtle tension. 
The more I explored the potential of agentic systems, the more I realized we were thinking about <em>Serena</em> through the wrong lens. We were attempting to integrate artificial intelligence into a rigid, deterministic, and sequential process&#8212;a well-ordered flow, but one that was closed.</p><p>By contrast, tools like Replit, Lovable, or Cursor don&#8217;t follow a predefined path: they develop the project in collaboration with the user, adapting to their way of working. There&#8217;s no fixed sequence of steps to complete. Instead, there&#8217;s a reference tech stack that serves as the operational foundation. The order in which the application takes shape&#8212;what gets written first, what gets tested, what gets revised&#8212;depends entirely on the interaction. It&#8217;s the user who leads the process, while the system responds, assists, and suggests. In this sense, these aren&#8217;t just tools; they&#8217;re collaborative spaces.</p><p>So we took a step back. Not to start over, but to look with fresh eyes at what we had already built, and to understand how we might transform a system that currently produces good results into a platform that, through human-machine collaboration, can generate something truly remarkable.</p><p>Working on <em>Serena</em> has reminded me, yet again, of the importance of iterating: of building prototypes not just to test solutions, but to think through problems, surface hidden assumptions, and pressure-test ideas that seem promising on paper but prove fragile, partial, or even misleading in practice.</p><p>Each development cycle becomes an opportunity to learn something new&#8212;not just about how the system behaves, but about what we actually want it to do, how we imagine the interaction between humans and AI, and how much control we&#8217;re willing to delegate&#8212;and in exchange for what.</p><p>To iterate is to accept that some solutions will need to be discarded, even though they were still worth exploring. 
And that failing fast, if done thoughtfully, is often the most effective way to understand where it&#8217;s worth doubling down.</p><p></p><div><hr></div><pre><code>If you were forwarded this email or if you found it on social media, you can sign up to receive an article like this every Sunday.</code></pre><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://nicolamattina.substack.com/&quot;,&quot;text&quot;:&quot;Subscribe now for free!&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://nicolamattina.substack.com/"><span>Subscribe now for free!</span></a></p><p></p><div><hr></div><p><em>Curated Curiosity</em></p><h2><strong><a href="https://www.youtube.com/watch?v=EIPxf7rgIPI">The Future of Software Development - Vibe Coding, Prompt Engineering &amp; AI Assistants</a></strong></h2><p>This conversation with the a16z infrastructure team examines how AI is reshaping the very idea of infrastructure. Rather than running on top of the stack, AI is becoming part of it, a foundational layer alongside compute, storage, and networking.</p><p>The discussion moves across several key shifts: how the rise of foundation models is changing developer behavior; how agents might reshape software architecture; what &#8220;defensibility&#8221; looks like in a world where models are increasingly commoditized; and why infra is no longer the exclusive domain of specialists, but a space where product and platform strategy converge. 
There&#8217;s also a thoughtful framing of the economics of building in AI-native environments and the implications this has for startups and incumbents alike.</p><h2><strong><a href="https://odannyboy.medium.com/ai-needs-ui-31480100e7d8">AI Needs UI</a></strong> </h2><p>The article by Dan Saffer presents a clear and pragmatic perspective on why interface design remains essential in the era of generative AI. It&#8217;s a helpful lens for thinking about how to build tools that guide, rather than overwhelm, the user, especially when AI is working behind the scenes.</p><p></p><div><hr></div><h2>Hire me</h2><p>If your organization is trying to make sense of generative AI and how to use it effectively, I can help. </p><p>With over 20 years of experience in innovation, product management, and education, I bring a pragmatic and strategic lens to emerging technologies. I work with leaders to unpack what AI means for their work and how to apply it to enhance productivity, performance, and long-term competitiveness.</p><ul><li><p><strong><a href="https://calendar.app.google/T2d6Rb3sG4t3SbHGA">Book an intro call</a></strong></p></li></ul><p></p><div><hr></div><p><strong>Transparency Note. </strong>Radical Curiosity was written in collaboration with artificial intelligence, used as a co-pilot to expand the capacity to gather sources, analyze them, and structure ideas. The writing process unfolded through a series of sessions involving dialogue, exploration, and rewriting with the support of AI, culminating in a final revision entirely authored by the writer. At every stage, the AI acted as a companion in reflection; the conceptual, stylistic, and argumentative choices remain fully human.</p>]]></content:encoded></item><item><title><![CDATA[Prompt. Chain. 
Lessons from Serena and the frontlines of generative AI]]></title><description><![CDATA[A hands-on look at how prompt chaining improves AI output&#8212;and how we&#8217;re applying it to build Serena, our generative course design tool. Plus: how we&#8217;re coding Serena with generative AI.]]></description><link>https://www.radicalcuriosity.xyz/p/prompt-chain-build-lessons-from-serena</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/prompt-chain-build-lessons-from-serena</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 27 Apr 2025 04:21:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OHRg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>In this edition of <em>Radical Curiosity</em>, I&#8217;m diving into the mechanics of prompt chaining&#8212;one of the most valuable techniques for working with large language models. I&#8217;ll walk you through how breaking a big task into smaller prompts changed the way I use AI to design structured outputs, like a course syllabus.</p><p>You&#8217;ll also get a behind-the-scenes look at how we&#8217;re building <em><strong><a href="https://ciaoserena.com/">Serena</a></strong></em>, not just with generative AI as a content partner, but as a coding companion&#8212;warts, hallucinations, and all. From choosing the right stack to debugging alongside LLMs, it&#8217;s a real-time exploration of what it means to build a product with AI in the loop.</p><div><hr></div><h2><strong>Table of Contents</strong></h2><ul><li><p>Prompt Chaining with ChatGPT: how to break down complex tasks into simple steps</p></li><li><p>From Prototype to Product: how we&#8217;re building Serena with Generative AI</p></li><li><p>Anthropic Education Report: how university students use Claude</p></li><li><p>Book. 
Galit Atlas, <em>Emotional Inheritance</em></p></li></ul><div><hr></div><h2><strong>Prompt Chaining with ChatGPT: how to break down complex tasks into simple steps</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OHRg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OHRg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 424w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 848w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 1272w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OHRg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png" width="1456" height="1048" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OHRg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 424w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 848w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 1272w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>When I first started using large language models to support my work, I quickly discovered that when you need to produce complex outputs, a single prompt is very often not enough. What you need is a <strong>chain of prompts</strong>&#8212;each focused on one task, feeding into the next. Thinking in chains, rather than crafting a single, clever prompt, became the turning point in how I approached LLMs for structured work like designing a course syllabus.</p><p>In this article, I&#8217;ll share what I learned while building a multi-step prompt workflow for Serena. If this is your first time hearing about the project, Serena is an AI-based platform I&#8217;m building in public to help people generate online courses. You can find the first episode of the saga here: <em><a href="https://www.radicalcuriosity.xyz/p/serena-from-idea-to-syllabus">Meet Serena. 
From idea to syllabus in minutes</a></em>.</p><h3>From Complexity to Clarity: Why Prompt Chaining Works</h3><p>Trying to solve a complex task with a single prompt is like asking someone to plan a wedding, write the vows, book the venue, and bake the cake&#8212;all at once. Prompt chaining solves this by breaking big tasks into <strong>simple, logical steps</strong>, where each prompt has a single job. That leads to:</p><ul><li><p><strong>Better outputs</strong>: the model is solving smaller, clearer problems.</p></li><li><p><strong>More control</strong>: you can tweak parts without breaking the whole.</p></li><li><p><strong>Easier debugging</strong>: you know exactly where things went wrong.</p></li><li><p><strong>Reusable components</strong>: each prompt can be applied elsewhere.</p></li></ul><p>I write prompts using the technique described in this article:<em> <a href="https://www.radicalcuriosity.xyz/p/the-art-of-ai-prompting-refining">The Art of AI Prompting: Refining Instructions for Precision and Control</a></em>. When working with prompt chains, I follow three key rules:</p><ol><li><p><strong>Start with the outcome and reverse-engineer the steps</strong>. Ask yourself: What do I ultimately need the AI to produce? Then work backward.</p></li><li><p><strong>Give each prompt a single purpose</strong>. If your prompt is doing more than one thing&#8212;e.g., summarizing <em>and</em> evaluating <em>and</em> reformatting&#8212;split it. You&#8217;ll get better results, and you&#8217;ll understand what went wrong when something breaks.</p></li><li><p><strong>Use structured output formats</strong>. I always specify output formats using markdown, YAML-style headers, or JSON. 
This helps me: reuse the output as clean input for the next step, scan and debug results quickly, and avoid &#8220;creative&#8221; formatting.</p></li></ol><h2>The Case: Building a Course Syllabus with a Prompt Chain</h2><p>I&#8217;m sharing a simplified version of a prompt chain I designed to help users generate a course using Serena. If you&#8217;re new to instructional design, it&#8217;s essential to understand that the course creation process typically starts by defining three key elements:</p><ol><li><p><strong>The learners</strong> &#8211; Who they are, including their background, needs, and context.</p></li><li><p><strong>The desired outcomes</strong> &#8211; What they should know and be able to do by the end of the course.</p></li><li><p><strong>Their starting point</strong> &#8211; What they currently know and can do.</p></li></ol><p>Designing a strong learning experience means identifying the gap between the learners&#8217; current state and the desired outcomes, and then building a path to bridge that gap.</p><p>The goal of the prompt chain in this article is to help users, regardless of their instructional design experience, write a learner profile, conduct a gap analysis, and turn it into a clear, structured list of well-crafted learning objectives.</p><p>Here&#8217;s the flow I created:</p><ol><li><p><strong>Learner Profile</strong>. Turn a vague description into a structured profile with clear categories: background, demographics, motivations, and challenges.</p></li><li><p><strong>Current Status</strong>. Translate partial survey responses and assumptions into a snapshot of what learners already know, do, and believe&#8212;across knowledge, skills, behaviors, tools, and mindset.</p></li><li><p><strong>Desired Status</strong>. Expand the course&#8217;s intended transformation into observable learning outcomes, expressed again in the same five dimensions.</p></li><li><p><strong>Gap Analysis</strong>. 
Identify what&#8217;s missing&#8212;what the course needs to bridge&#8212;without repeating what&#8217;s already known.</p></li><li><p><strong>Learning Objectives</strong>. Generate SMART learning goals aligned with Bloom&#8217;s Taxonomy, based on the gaps.</p></li></ol><h3>The Chain in Action: How to Design (and Use) a Prompt Chain</h3><h4><strong>Step 1: Learner Profile &#8212; Defining the Starting Point</strong></h4><p>For this first step in the chain, I needed the model to generate a structured learner profile from a vague or partial description. It might seem odd, but in practice, you&#8217;re often working with exactly that&#8212;an ambiguous and incomplete idea of who the learners are. So I designed a prompt with three goals:</p><ol><li><p>Infer confidently when the input is clear.</p></li><li><p>Ask for clarification only when needed.</p></li><li><p>Output a compact, reusable structure optimized for downstream prompts.</p></li></ol><pre><code>You are an expert in instructional design and learner profiling. Your task is to transform a vague or partial user description into a compact, structured learner profile.
The output must be formatted in valid YAML and follow this structure exactly:

Professional_Background:
  Industry: 
  Role: 
  Seniority: 
  Work_Context: 
Demographics:
  Age_Range: 
  Location: 
  Education_Level: 
Motivations:
  - [bullet point]
Challenges:
  - [bullet point]

Guidelines:
- Detect the input language and respond in the same language.
- If the input is clear, infer confidently and complete the profile.
- If the input is ambiguous, ask one short clarification question.
- Use a neutral, compact tone.
- Ensure the output is optimized for use in LLM context blocks.
- Do not include any text outside the YAML block.

Input:
[Paste the user description here]</code></pre><p>And here&#8217;s a sample input I used to test it:</p><blockquote><p><em>Digital Product Managers (mid to senior) working on B2B and B2C products, curious about AI but with little practical experience. They all speak English at least at B2 level.</em></p></blockquote><p></p><p>The output was clear, compact, and&#8212;most importantly&#8212;usable by the next prompt in the chain:</p><blockquote><p><em>Professional_Background:</em></p><ul><li><p><em>Industry: Digital Products</em></p></li><li><p><em>Role: Product Manager</em></p></li><li><p><em>Seniority: Mid to Senior</em></p></li><li><p><em>Work_Context: B2B and B2C</em></p></li></ul><p><em>Demographics:</em></p><ul><li><p><em>Age_Range: 30-45</em></p></li><li><p><em>Location: Global</em></p></li><li><p><em>Education_Level: Bachelor's or higher</em></p></li></ul><p><em>Motivations:</em></p><ul><li><p><em>Stay competitive by exploring emerging AI tools</em></p></li><li><p><em>Improve product decision-making through AI-enhanced workflows</em></p></li><li><p><em>Bridge the knowledge gap between AI capabilities and product strategy</em></p></li></ul><p><em>Challenges:</em></p><ul><li><p><em>Limited hands-on experience with AI tools</em></p></li><li><p><em>Difficulty identifying practical applications of AI in their workflow</em></p></li><li><p><em>Overwhelmed by the fast pace of AI tool evolution</em></p></li></ul></blockquote><p></p><h4><strong>Step 2: Current Status &#8212; Understanding Where Learners Are Now</strong></h4><p>Once I had a structured learner profile, the next step was to determine what these learners already knew, could do, and believed about the course topic&#8212;specifically, AI and prompt engineering. 
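</p><p>Because the profile is reused verbatim as input to the next step, it is worth checking programmatically that the model actually returned the agreed structure before passing it along. Below is a minimal, standard-library-only sketch; the naive line-based parsing and the function name are my own illustration, not part of the method:</p>

```python
# Hypothetical helper: verify that a Step 1 profile contains the four
# top-level sections the prompt asks for, before it is fed to Step 2.
# (Deliberately naive: it only inspects unindented "Key:" lines.)

REQUIRED_SECTIONS = [
    "Professional_Background",
    "Demographics",
    "Motivations",
    "Challenges",
]

def validate_profile(yaml_text: str) -> list[str]:
    """Return the missing top-level sections (an empty list means OK)."""
    top_level = {
        line.split(":")[0].strip()
        for line in yaml_text.splitlines()
        if line and not line.startswith((" ", "-", "#"))
    }
    return [s for s in REQUIRED_SECTIONS if s not in top_level]

sample = """Professional_Background:
  Industry: Digital Products
Demographics:
  Age_Range: 30-45
Motivations:
  - Stay competitive
Challenges:
  - Limited hands-on experience
"""
print(validate_profile(sample))  # prints []
```

<p>If a section is missing, the cheapest fix is usually to re-run the same prompt rather than patch the output by hand.</p><p>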
So I created a prompt that translated this &#8220;current state&#8221; into five key dimensions: Knowledge, Skills, Behaviors, Tools, and Mindset.</p><pre><code><code>You are an Instructional Designer working on a course tailored to the learner profile shown below. Your task is to define the Current State of learners as part of a 4-step gap analysis process.

Based on the learner profile and a vague description of their current state, infer and expand their Current State using these dimensions: Knowledge, Skills, Behaviors, Tools, Mindset.

Use YAML-style label headers, be concise, and do not repeat details already stated in the learner profile.</code></code></pre><p>The best way to understand your learners&#8217; current knowledge of a topic is simple: ask them. And yes, you can ask ChatGPT to help you generate a questionnaire to guide this assessment. If you&#8217;re designing a tailored course for a company, this step is essential&#8212;it will directly impact the success of your training. If you&#8217;re creating a course for a platform like Udemy or a learning community, it&#8217;s just as important. In that case, try to identify a representative sample of your potential audience and run the assessment with them before moving forward.</p><p>For this example, I asked ChatGPT to generate a set of credible answers to simulate learner responses, which I then used as input for the prompt.</p><blockquote><p><em>&#8220;I've studied the topic on my own and have a pretty solid general understanding.&#8221;<br>&#8220;I use these tools every day, either for work tasks or personal projects.&#8221;<br>&#8220;I&#8217;ve played around with them a few times&#8212;mostly just out of curiosity.&#8221;<br>&#8220;I work on integrating or developing them as part of professional products and solutions.&#8221;</em></p></blockquote><p></p><p>And here&#8217;s how the model cleaned it up:</p><blockquote><p><em>Knowledge:</em></p><ul><li><p><em>Understand product lifecycle, customer needs, and business metrics</em></p></li><li><p><em>Basic awareness of AI trends, but limited understanding of technical concepts</em></p></li><li><p><em>Minimal exposure to concepts like prompt engineering or LLM capabilities</em></p></li></ul><p><em>Skills:</em></p><ul><li><p><em>Strong in prioritization, stakeholder communication, and product roadmapping</em></p></li><li><p><em>Weak in identifying use cases for AI within product processes</em></p></li><li><p><em>No practical experience integrating AI into product discovery or 
delivery</em></p></li></ul><p><em>Behaviors:</em></p><ul><li><p><em>Rely on traditional tools and methods for decision-making and research</em></p></li><li><p><em>Occasionally read about AI developments but rarely experiment with tools</em></p></li><li><p><em>Default to delegating technical AI tasks to engineers or data teams</em></p></li></ul><p><em>Tools:</em></p><ul><li><p><em>Proficient with PM tools like Jira, Confluence, Figma, and analytics platforms</em></p></li><li><p><em>Rarely use AI tools such as ChatGPT, Claude, or Midjourney</em></p></li><li><p><em>No systematic approach to testing or evaluating AI tools</em></p></li></ul><p><em>Mindset:</em></p><ul><li><p><em>Curious and open to learning but unsure where to start</em></p></li><li><p><em>Slightly skeptical about the immediate value of AI for product work</em></p></li><li><p><em>Fear of falling behind peers or competitors more advanced in AI adoption</em></p></li></ul></blockquote><p></p><h4><strong>Step 3: Desired Status &#8212; Defining the Learning Destination</strong></h4><p>Now that I had a clear snapshot of where my learners were starting from, it was time to define where they needed to go. For this step, I used the same five dimensions to structure the output: Knowledge, Skills, Behaviors, Tools, and Mindset.</p><p>I asked the model to imagine the transformation needed for learners to achieve a set of goals I described in plain language. 
In real life, clients often have only a vague idea of what learners should accomplish&#8212;something like, &#8220;I just want my product managers to be more productive and start using generative AI in their day-to-day work.&#8221; The prompt&#8217;s job is to turn that raw description into structured outcomes.</p><pre><code>You are an Instructional Designer working on the third step of a 4-step gap analysis.
Your task: Based on the learner profile, their current status, and a description of the desired outcome, expand and structure the Desired Status learners should achieve by the end of the course.

Use YAML-style headers (Knowledge, Skills, Behaviors, Tools, Mindset).
Be aspirational but realistic. 
Focus on observable, measurable outcomes.
Avoid repeating profile or current status info.</code></pre><p>I used that input and deliberately wrote it in bad English:</p><blockquote><p><em>At the end of the course the students will be able to create the prompts to streamline their product management activities. They will understand prompt engineering and the different techniques, like role based prompting and iterative prompting to create new prompts. The will do a series of exercise to create new prompts in the following areas: <br>- competitive analysis <br>- customer research <br>- product requirement document <br>- user stories <br>- prioritization</em></p></blockquote><p></p><p>And here&#8217;s how the model cleaned it up:</p><blockquote><p><em>Knowledge:</em></p><ul><li><p><em>Understand core principles of prompt engineering and its relevance to product management</em></p></li><li><p><em>Recognize and differentiate between techniques such as role-based, iterative, and chain-of-thought prompting</em></p></li><li><p><em>Know when and how to apply prompting techniques to specific PM workflows (e.g., research, documentation, prioritization)</em></p></li></ul><p><em>Skills:</em></p><ul><li><p><em>Create effective, goal-oriented prompts to support tasks like competitive analysis and customer research</em></p></li><li><p><em>Use prompt iteration to refine outputs and align results with PM needs</em></p></li><li><p><em>Evaluate the quality and relevance of AI-generated outputs and adjust prompts accordingly</em></p></li><li><p><em>Integrate prompt engineering into writing PRDs, user stories, and backlog prioritization</em></p></li></ul><p><em>Behaviors:</em></p><ul><li><p><em>Regularly experiment with AI tools to improve efficiency in core PM tasks</em></p></li><li><p><em>Proactively test and adapt prompts to different contexts and product scenarios</em></p></li><li><p><em>Collaborate with team members using AI-assisted workflows</em></p></li></ul><p><em>Tools:</em></p><ul><li><p><em>Confident use of AI tools 
like ChatGPT or Claude as part of daily PM toolkit</em></p></li><li><p><em>Maintain a personal prompt library or workflow templates for repeated use</em></p></li><li><p><em>Document and share prompting best practices with peers or cross-functional teams</em></p></li></ul><p><em>Mindset:</em></p><ul><li><p><em>View prompt engineering as a valuable, ongoing skill for enhancing product decision-making</em></p></li><li><p><em>Embrace a test-and-learn approach to working with AI</em></p></li><li><p><em>Confident in navigating and adapting to evolving AI tools and methods</em></p></li></ul></blockquote><p></p><h4><strong>Step 4: Gap Analysis &#8212; Mapping the Delta That Drives the Design</strong></h4><p>With both the <strong>current status</strong> and the <strong>desired status</strong> clearly articulated, the next logical step is to identify what is missing. For this prompt, my goal was to <strong>automatically extract the actionable gaps</strong> between the learners&#8217; current capabilities and their aspirational state. In other words: what do they not know, not do, not believe&#8212;<strong>yet</strong>?</p><pre><code>You are an Instructional Designer completing the final step of a 4-step gap analysis.
Your task: Based on the structured Current Status and Desired Status, identify the key gaps the course must address. Focus on actionable differences.

Use YAML-style headers (Knowledge, Skills, Behaviors, Tools, Mindset).
Only include relevant gaps. 
Don&#8217;t copy entire sections from earlier outputs. 
Keep it tight.</code></pre><p>I provided the learner profile, current status, and desired status as input, and ChatGPT generated the following output:</p><blockquote><p><em>Knowledge:</em></p><ul><li><p><em>Limited understanding of prompt engineering principles</em></p></li><li><p><em>Unfamiliar with prompting techniques like role-based and iterative prompting</em></p></li><li><p><em>Lack of contextual knowledge on how prompting supports PM tasks</em></p></li></ul><p><em>Skills:</em></p><ul><li><p><em>Inability to craft effective prompts tailored to product management workflows</em></p></li><li><p><em>No structured approach to refining and evaluating prompts</em></p></li><li><p><em>Difficulty integrating AI outputs into standard PM documentation and decision-making</em></p></li></ul><p><em>Behaviors:</em></p><ul><li><p><em>Passive engagement with AI; lack of consistent experimentation</em></p></li><li><p><em>Rarely apply AI tools to real product scenarios</em></p></li><li><p><em>Limited collaboration around AI-assisted practices</em></p></li></ul><p><em>Tools:</em></p><ul><li><p><em>Underutilization of LLMs and AI assistants in daily product work</em></p></li><li><p><em>No established prompt libraries or reusable templates</em></p></li><li><p><em>Lack of familiarity with prompt testing and evaluation features</em></p></li></ul><p><em>Mindset:</em></p><ul><li><p><em>Uncertainty about the reliability and value of AI in PM</em></p></li><li><p><em>Hesitation to adopt AI as a core productivity tool</em></p></li><li><p><em>Need to build confidence in using AI iteratively and independently</em></p></li></ul></blockquote><p></p><h4><strong>Step 5: Learning Objectives &#8212; Turning Gaps into Goals</strong></h4><p>Now that the learning gaps are clear, the next step is to transform them into <strong>concrete, assessable learning objectives</strong>. This is where the chain shifts from diagnostic to generative. 
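</p><p>Mechanically, this final step still works like the previous ones: the earlier artifacts are pasted in as labeled context blocks beneath the prompt. A short sketch of that assembly step (the labels and function name are mine, not part of the original prompts):</p>

```python
# Hypothetical glue: concatenate the outputs of steps 1-4 into a single
# context string for the learning-objectives prompt.

def build_objectives_input(profile: str, current: str, desired: str, gaps: str) -> str:
    blocks = [
        ("Learner Profile", profile),
        ("Current Status", current),
        ("Desired Status", desired),
        ("Gap Analysis", gaps),
    ]
    return "\n\n".join(f"## {label}\n{text.strip()}" for label, text in blocks)

prompt_input = build_objectives_input(
    "Role: Product Manager",
    "Knowledge: basic AI awareness",
    "Knowledge: prompt engineering principles",
    "Knowledge: prompting techniques",
)
print(prompt_input.count("## "))  # prints 4
```

<p>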
Up to this point, I&#8217;d been identifying and organizing information. Now, I needed to create something that would&nbsp;<strong>guide the course content, activities, and assessments.</strong></p><p>My goal for this step was to generate <strong>SMART objectives</strong> (Specific, Measurable, Achievable, Relevant, and Time-bound) based on Bloom&#8217;s Taxonomy&#8212;so each goal would reflect not only <em>what</em> the learner needed to achieve, but also <em>how deeply</em> they needed to engage with the material.</p><p>Here&#8217;s a possible prompt:</p><pre><code>You are an expert instructional designer. Your task is to generate SMART learning objectives aligned with Bloom&#8217;s Taxonomy, based on a given gap analysis. 

Organize the output in YAML format using the following top-level categories: Knowledge, Skills, Behaviors, Tools, Mindset.
Under each category, list individual learning objectives as bullet points.

Each learning objective must: 
- Start with the appropriate [Bloom&#8217;s Taxonomy level] in square brackets. 
- Focus on one observable and assessable outcome only. 
&#9888;&#65039; Do not combine multiple outcomes or actions in one statement. 

Be specific, measurable, achievable, and relevant to the learner&#8217;s context and gap. 
Exclude time constraints. These goals will later inform the course structure and schedule. 
Use clear instructional language suitable for inclusion in a course syllabus. 
Avoid vague phrasing (e.g., do not use "By the end of the course..." or similar expressions).</code></pre><p>These are the learning objectives generated as a result of the chain:</p><blockquote><p><em>Knowledge:</em></p><ul><li><p><em>[Understand] Describe the principles of prompt engineering in the context of product management.</em></p></li><li><p><em>[Recognize] Identify key prompting techniques such as role-based, iterative, and chain-of-thought prompting.</em></p></li><li><p><em>[Differentiate] Distinguish between appropriate use cases for various prompting strategies in product workflows.</em></p></li></ul><p><em>Skills:</em></p><ul><li><p><em>[Apply] Create effective prompts to support competitive analysis using AI tools.</em></p></li><li><p><em>[Apply] Generate prompts to conduct structured customer research with LLMs.</em></p></li><li><p><em>[Apply] Draft AI-assisted product requirement documents using prompt templates.</em></p></li><li><p><em>[Apply] Write user stories by formulating and refining relevant prompts.</em></p></li><li><p><em>[Apply] Create prioritization matrices or decisions using prompt-based workflows.</em></p></li><li><p><em>[Evaluate] Assess the quality of AI-generated outputs and revise prompts accordingly.</em></p></li><li><p><em>[Develop] Build a reusable set of prompts tailored to recurring product management tasks.</em></p></li></ul><p><em>Behaviors:</em></p><ul><li><p><em>[Demonstrate] Use AI tools consistently in real product scenarios through structured prompting exercises.</em></p></li><li><p><em>[Experiment] Test and adapt different prompting techniques to improve workflow efficiency.</em></p></li><li><p><em>[Engage] Collaborate with peers to refine and share prompt strategies for product use cases.</em></p></li></ul><p><em>Tools:</em></p><ul><li><p><em>[Utilize] Operate AI tools such as ChatGPT to complete defined product management tasks.</em></p></li><li><p><em>[Organize] Maintain a personal prompt library for frequent use 
cases in product workflows.</em></p></li><li><p><em>[Implement] Use built-in prompt testing and evaluation features to refine outputs.</em></p></li></ul><p><em>Mindset:</em></p><ul><li><p><em>[Adopt] Show a proactive attitude toward experimenting with AI to enhance decision-making.</em></p></li><li><p><em>[Value] Acknowledge the role of AI tools as part of the product manager's toolkit.</em></p></li><li><p><em>[Build] Demonstrate confidence in independently integrating AI solutions into daily workflows.</em></p></li></ul></blockquote><p></p><p>Once you have a complete list of learning objectives, you can begin organizing the course. For example, you might break the objectives down into smaller, more manageable parts, group them into meaningful instructional themes, or structure them according to specific design principles. But this is another &#8212; and believe me, a more complex &#8212; chain of prompts.</p><h3></h3><h2><strong>5. Conclusion: Don&#8217;t Just Prompt&#8212;Design</strong></h2><p>Working through this chain of prompts to design a course syllabus didn&#8217;t just change how I use large language models. It changed how I think about collaboration with AI.</p><p>What I learned is that good prompting isn&#8217;t about clever phrasing&#8212;it&#8217;s about <strong>clear thinking</strong>. It&#8217;s about knowing what you want, breaking it down, and giving the AI a fair shot at helping you get there. The more structured my thinking became, the more valuable and reliable the model became.</p><p>If you&#8217;re working with AI and find yourself hitting limits, I encourage you to do what I did: stop trying to force everything into one prompt. Instead, think in chains. 
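</p><p>The plumbing for such a chain is small. Here is a deliberately minimal sketch: <code>call_llm</code> is a stub standing in for whatever model API you use, and the truncated templates merely gesture at full prompts like the ones above:</p>

```python
# Hypothetical chain runner: run each step in order, accumulating every
# output into the context passed to the next prompt.

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call your model provider here.
    return f"[model output for: {prompt.splitlines()[0]}]"

CHAIN = [
    ("learner_profile", "You are an expert in instructional design...\nInput:\n{context}"),
    ("current_status", "Define the Current State...\nInput:\n{context}"),
    ("desired_status", "Expand the Desired Status...\nInput:\n{context}"),
    ("gap_analysis", "Identify the key gaps...\nInput:\n{context}"),
    ("learning_objectives", "Generate SMART learning objectives...\nInput:\n{context}"),
]

def run_chain(initial_input: str) -> dict:
    """Run every step, feeding all prior outputs into the next prompt."""
    outputs, context = {}, initial_input
    for name, template in CHAIN:
        result = call_llm(template.format(context=context))
        outputs[name] = result
        context = context + "\n\n" + result  # accumulate for later steps
    return outputs

results = run_chain("Mid-to-senior digital product managers, curious about AI.")
print(list(results))
```

<p>Swapping the stub for a real API call turns this into a reusable pipeline rather than a pile of copy-pasted prompts.</p><p>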
Design each prompt like it&#8217;s part of a conversation&#8212;with a purpose, an input, and a clear output.</p><div><hr></div><h2><strong>From Prototype to Product: how we&#8217;re building Serena with Generative AI</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!c2_u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!c2_u!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!c2_u!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!c2_u!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!c2_u!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!c2_u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1565462,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/160892714?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!c2_u!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!c2_u!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!c2_u!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!c2_u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2ef45e8-d3ae-4371-879d-4474babb9c2a_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>When we started working on Serena, we knew we had a significant technology risk to mitigate. Could generative AI be used to create a structured online course? And if so, how should the workflow be engineered to make that process reliable and repeatable? What happens when the course topic falls outside the domain of a general-purpose LLM like ChatGPT? How do we manage the underlying knowledge base, then? Like most early-stage projects, we began with a proof of concept. Quick, rough, and far from pretty. But it worked well enough to prove that the core idea could work.</p><p>So, we decided to build a more refined MVP with self-service onboarding. That&#8217;s when we hit the wall every founder knows: the gap between what you <em>want</em> to develop and what your <em>resources</em> allow. 
We mapped out the essential steps for designing a solid syllabus. To build that properly, I imagined the kind of agile team I'm used to managing: four or five developers, a dedicated designer, and maybe even a QA&#8212;a dream setup. In reality, I had a modest budget &#8212; enough for about 20 days from a front-end developer and a few days from a UX designer. Not enough to go from a scrappy prototype to a product that looked and felt polished.</p><h3>Coding with Vibes (and Constraints)</h3><p>&#8220;Vibe coding&#8221; &#8212; spinning up features quickly using AI tools and prompt-first thinking &#8212; is a seductive idea. Tools like Cursor and Replit are making it increasingly viable. But let&#8217;s be honest: we&#8217;re not there yet when it comes to full-stack complexity.</p><p>That said, the productivity gains are real &#8212; <em>if</em> you know how to work with the tools. That&#8217;s where <strong><a href="https://www.linkedin.com/in/enzoaugieri/">Enzo Augieri</a></strong> came in. An old friend of mine and co-founder from my first startup in 1999, Enzo brings nearly four decades of coding experience and a relentless curiosity for new tools. He was the perfect partner to push Replit to its limits.</p><p>Two key lessons emerged as we built with generative AI: let it choose the tools, and don&#8217;t expect it to drive without you at the wheel.</p><p>LLMs are most fluent in the technologies they see most often. When we let the model pick the stack &#8212; in our case, React &#8212; things went smoothly. But when we tried switching to Vue and Quasar, which Enzo personally preferred, everything got harder: vague suggestions, hallucinated components, frustrating debugging. LLMs don&#8217;t optimize for elegance or preference &#8212; they follow the statistical path of least resistance. Fighting that current cost us time, so we leaned in.</p><p>Still, even with the &#8220;right&#8221; stack, the AI isn&#8217;t magic. 
When a basic logout feature broke, it looped through vague fixes. Replit&#8217;s debugger couldn&#8217;t help. It took Enzo digging into the browser console to find the real issue and guide the AI with the proper context to get it fixed. AI can accelerate development. But it needs an experienced hand to steer, debug, and decide when to trust &#8212; and when to override.</p><h3>Redefining the Startup Org Chart</h3><p>For startups, there&#8217;s a real opportunity on the table: increase development productivity, build and iterate faster, and test ideas more intelligently. With the proper setup, two experienced developers can accomplish what a typical agile team would. But don&#8217;t buy into the narrative that anyone can now build complex applications solo, just by prompting.</p><p>&#8220;Vibe coding&#8221; makes for great social media content, but real products &#8212; the kind that need to be secure, scalable, and maintainable &#8212; require more. If you&#8217;re building a business that&#8217;s meant to last, you need to own your tech stack. You need control over your core assets. 
And you need enough technical understanding &#8212; whether directly or through trusted collaborators &#8212; to make informed decisions.</p><p>That said, after this hands-on experience, I&#8217;m reevaluating Serena&#8217;s financial model &#8212; reducing headcount and integrating AI technologies, from tools to autonomous agents, directly into the org chart to streamline operations and keep the budget as lean as possible.</p><div><hr></div><h2><strong>Anthropic Education Report: how university students use Claude</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RytG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e206510-9b48-4207-ae2a-e4ae74890612_2400x2054.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RytG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e206510-9b48-4207-ae2a-e4ae74890612_2400x2054.webp 424w, https://substackcdn.com/image/fetch/$s_!RytG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e206510-9b48-4207-ae2a-e4ae74890612_2400x2054.webp 848w, https://substackcdn.com/image/fetch/$s_!RytG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e206510-9b48-4207-ae2a-e4ae74890612_2400x2054.webp 1272w, https://substackcdn.com/image/fetch/$s_!RytG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e206510-9b48-4207-ae2a-e4ae74890612_2400x2054.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!RytG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e206510-9b48-4207-ae2a-e4ae74890612_2400x2054.webp" width="1456" height="1246" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e206510-9b48-4207-ae2a-e4ae74890612_2400x2054.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1246,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Distribution of conversations across interaction styles, for each NCES subject.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Distribution of conversations across interaction styles, for each NCES subject." title="Distribution of conversations across interaction styles, for each NCES subject." 
srcset="https://substackcdn.com/image/fetch/$s_!RytG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e206510-9b48-4207-ae2a-e4ae74890612_2400x2054.webp 424w, https://substackcdn.com/image/fetch/$s_!RytG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e206510-9b48-4207-ae2a-e4ae74890612_2400x2054.webp 848w, https://substackcdn.com/image/fetch/$s_!RytG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e206510-9b48-4207-ae2a-e4ae74890612_2400x2054.webp 1272w, https://substackcdn.com/image/fetch/$s_!RytG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e206510-9b48-4207-ae2a-e4ae74890612_2400x2054.webp 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Anthropic released an interesting report based on in-depth interviews with students across the U.S.&#8212;and it&#8217;s full of insights for anyone working in education, EdTech, or AI. Some key takeaways:</p><ul><li><p>Students are using AI strategically, not just to shortcut assignments.</p></li><li><p>Claude is often treated like a study partner or mentor&#8212;helping clarify complex ideas, test understanding, and spark deeper thinking.</p></li><li><p>Many students say using AI helps them feel more confident and capable, especially in independent learning.</p></li></ul><p>The report also raises important questions for educators: how do we design learning experiences that embrace this new behavior instead of resisting it?</p><p><strong><a href="https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude">Read the Report</a>.</strong></p><div><hr></div><h2>Galit Atlas, <em>Emotional Inheritance</em></h2><div id="youtube2--rsCEEoBOmA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;-rsCEEoBOmA&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/-rsCEEoBOmA?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><em>Emotional Inheritance</em> is one of those books that stays with you long after you've turned the last page. 
I read it in just a couple of days, not because it was light, but because it was so deeply compelling I couldn&#8217;t put it down.</p><p>Galit Atlas writes with the clarity of a scientist and the soul of a storyteller. Through her own experiences and those of her patients, she explores how trauma, silence, and emotional legacies are passed through generations&#8212;sometimes unknowingly, but never without impact.</p><p>This book is as insightful as it is intimate. I can&#8217;t recommend it enough:</p><ul><li><p>Buy the English edition: <strong><a href="https://amzn.eu/d/cdgLp4i">Emotional Inheritance</a></strong> (Amazon)</p></li><li><p>Buy the Italian edition: <strong><a href="https://amzn.eu/d/a2EqiBG">L&#8217;eredit&#224; emotiva</a></strong> (Amazon)</p></li></ul><div><hr></div><pre><code>If you were forwarded this email or found it on social media, you can sign up to receive an article like this every Sunday.</code></pre><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://nicolamattina.substack.com/&quot;,&quot;text&quot;:&quot;Subscribe now for free!&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://nicolamattina.substack.com/"><span>Subscribe now for free!</span></a></p><div><hr></div><p><em>Thanks for reading this episode of my newsletter. I hope I&#8217;ve been helpful. If you think my sketchbook might interest someone else, I&#8217;d appreciate it if you <strong>shared it on social media and forwarded it to your friends and colleagues</strong>.</em></p><p><em>Nicola</em></p>]]></content:encoded></item><item><title><![CDATA[Meet Serena. 
From idea to syllabus in minutes]]></title><description><![CDATA[Serena is a generative AI-powered tool that helps instructional designers and course creators turn learning needs into structured course blueprints&#8212;filling the gap between idea and content creation.]]></description><link>https://www.radicalcuriosity.xyz/p/serena-from-idea-to-syllabus</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/serena-from-idea-to-syllabus</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 06 Apr 2025 04:05:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OHRg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Ciao,</p><p>In this edition of Radical Curiosity, I&#8217;m sharing the story behind <strong>Serena</strong>, the AI-powered tool I&#8217;ve been building to help instructional designers go from learning needs to structured course blueprints. You&#8217;ll learn why I believe there&#8217;s an untapped segment in EdTech&#8212;and how we&#8217;re exploring it, one messy prototype at a time.</p><p>You&#8217;ll also find a behind-the-scenes look at how I created <strong>Serena&#8217;s voice and style guide</strong>&#8212;a practical approach if you&#8217;re a non-native English speaker or want your brand to sound more human and less like default ChatGPT.</p><p>Plus, my <strong>interview with Product Heroes </strong>(in Italian &#127470;&#127481;), in which I discuss how AI transforms product management from roadmap planning to repetitive task automation.</p><div><hr></div><h2>Table of Contents</h2><ul><li><p>Meet Serena. From idea to syllabus in minutes</p></li><li><p>Writing with personality: How I created Serena&#8217;s Style Guide and Voice</p></li><li><p>Will AI replace Product Managers? 
(Interview in Italian)</p></li></ul><div><hr></div><h2>Meet Serena. From idea to syllabus in minutes</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://ciaoserena.com/" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OHRg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 424w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 848w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 1272w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OHRg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png" width="1456" height="1048" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5194112,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ciaoserena.com/&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/160194651?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!OHRg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 424w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 848w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 1272w, https://substackcdn.com/image/fetch/$s_!OHRg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f16423-602d-4399-9835-090bc05b8d2b_3014x2170.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Over the past 15 years, I&#8217;ve built my career around creating products and leading product teams. I&#8217;ve founded and sold two companies, and I&#8217;ve also had my share of failures. Today, I work as a temporary product leader, helping define product strategy, build product organizations, and hire, train, and mentor product managers and UX designers.</p><p>I&#8217;m <em>radically curious</em>, so I&#8217;ve always spent time studying, experimenting, and sharing what I learn. 
Staying up to date, testing new ideas, and participating in communities of practice isn&#8217;t just a habit&#8212;it&#8217;s essential to how I work.</p><p>That&#8217;s also why I&#8217;ve done a lot of corporate training and spent eight years as an adjunct professor of product management at Roma Tre University. Teaching has always been a passion of mine&#8212;it&#8217;s fulfilling, inspiring, and deeply impactful.</p><p>If you&#8217;ve ever designed a course, you know how time-consuming and complex it can be. That&#8217;s why, over the last nine months, I&#8217;ve been exploring how generative AI could help me create new types of learning experiences. It&#8217;s still a research and development project, but I believe strongly in its potential. So I&#8217;ve decided to start building it in public.</p><p>Building in public is a powerful way to kick off a new project because it creates momentum, attracts early feedback, and builds trust. It turns the product journey into a shared experience, where others can contribute, challenge assumptions, and feel part of something from the very beginning. For me, it&#8217;s not just about visibility&#8212;it&#8217;s about learning faster, staying accountable, and connecting with the people who care most about the problem I&#8217;m trying to solve.</p><h3>An untapped segment in EdTech worth exploring</h3><p>Education is a vast and complex space, spanning all kinds of learning&#8212;formal and informal, for kids and adults, from classrooms to corporate training. Many tools already help educators and instructional designers craft effective learning experiences.</p><p>I believe there&#8217;s a meaningful gap in the learning ecosystem, one that agentic AI can effectively fill. It&#8217;s the space <strong>between the moment you realize someone needs to learn something and the moment you start producing training content</strong>. 
It&#8217;s the messy, strategic part when you must assess the training need, define the scope, articulate clear learning objectives, and craft a solid syllabus.</p><p>Today, this work still depends heavily on collaboration between two key roles: the subject matter expert, who brings the knowledge, and the instructional designer, who knows how to turn that knowledge into a structured, impactful learning experience. It&#8217;s a critical phase&#8212;where the groundwork shapes everything that follows&#8212;but it&#8217;s still managed through long meetings, shared documents, and a lot of back-and-forth. There&#8217;s very little dedicated tooling built for this stage.</p><p>Why? Because most tools are focused on what comes <em>after</em> you've already decided what to teach. You have <strong>authoring tools</strong> like Articulate, EasyGenerator, or iSpring Suite to build content. You have <strong>LMS platforms</strong> like Moodle, Docebo, or TalentLMS to distribute and track it.</p><p>But there&#8217;s a missing layer in the ecosystem&#8212;a tool that helps you go from a learning need to a structured course blueprint: a <strong>Learning Analysis &amp; Design Platform</strong> (LeAD). This isn&#8217;t just a new spin on existing software. It&#8217;s something fundamentally new, made possible by generative AI. For the first time, AI can assist in the early design stages&#8212;supporting gap analysis, drafting learning objectives, and generating a well-structured syllabus. A syllabus that becomes a launchpad for the full learning experience, ready to be refined and built in any authoring tool.</p><p>If you look at guides like <a href="https://www.devlinpeck.com/content/instructional-design-software">this one by Devlin Peck</a>, you'll see plenty of tools for prototyping and authoring&#8212;but when it comes to the foundational design work, the only resource listed is a book. 
That gap says a lot.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qei9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2eea387-4f39-432e-a776-7b9010391a60_2588x612.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qei9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2eea387-4f39-432e-a776-7b9010391a60_2588x612.png 424w, https://substackcdn.com/image/fetch/$s_!qei9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2eea387-4f39-432e-a776-7b9010391a60_2588x612.png 848w, https://substackcdn.com/image/fetch/$s_!qei9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2eea387-4f39-432e-a776-7b9010391a60_2588x612.png 1272w, https://substackcdn.com/image/fetch/$s_!qei9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2eea387-4f39-432e-a776-7b9010391a60_2588x612.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qei9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2eea387-4f39-432e-a776-7b9010391a60_2588x612.png" width="1456" height="344" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a2eea387-4f39-432e-a776-7b9010391a60_2588x612.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:344,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:129704,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/160194651?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2eea387-4f39-432e-a776-7b9010391a60_2588x612.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qei9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2eea387-4f39-432e-a776-7b9010391a60_2588x612.png 424w, https://substackcdn.com/image/fetch/$s_!qei9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2eea387-4f39-432e-a776-7b9010391a60_2588x612.png 848w, https://substackcdn.com/image/fetch/$s_!qei9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2eea387-4f39-432e-a776-7b9010391a60_2588x612.png 1272w, https://substackcdn.com/image/fetch/$s_!qei9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2eea387-4f39-432e-a776-7b9010391a60_2588x612.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>The Analysis and Design steps in the <a href="https://www.devlinpeck.com/content/addie-instructional-design">ADDIE framework</a> have traditionally been handled through emails, 
spreadsheets, and informal workflows.</p><p>It&#8217;s still too early to determine whether the Learning Analysis &amp; Design Platform could become a category of its own. But what&#8217;s clear is that major players in adjacent spaces are moving in this direction. Authoring tools like Articulate are adding AI throughout their workflows. Easygenerator has introduced an AI course builder. Even LMS platforms are getting involved&#8212;Docebo now includes AI-powered authoring, and TalentLMS claims its AI can &#8220;generate cohesive courses, so you don&#8217;t have to.&#8221;</p><p>Everyone seems to recognize the potential of generative AI in the early stages of learning design. But based on my own experiments, most of the current solutions are still far from delivering real value. They often conflate course design with content generation&#8212;skipping the hard part: truly understanding the learners, defining meaningful learning goals, and crafting a coherent instructional flow.</p><p>A friend and I believed we could do better. So we built an MVP to explore this space and test whether generative AI could help create high-quality course syllabi tailored to specific learner profiles and goals. The results were promising&#8212;and that&#8217;s how <strong>Serena began to take shape</strong>.</p><h3>What instructional designers are saying</h3><p>Around the same time, I started sharing some early thoughts on Serena with the instructional design community&#8212;particularly on Reddit&#8212;and had the chance to engage in thoughtful conversations with instructional designers and course developers who were already experimenting with generative AI in their workflows.</p><p>There&#8217;s clearly some skepticism. And understandably so. Many fear AI could eventually displace roles like theirs, especially if tools are positioned as &#8220;automating&#8221; instructional design. But at the same time, others are embracing it&#8212;openly and skillfully. 
They use tools like Gemini, Claude, and ChatGPT to draft learning outcomes, create instructional scripts, generate quizzes, role-play interactions, and even reduce reliance on time-constrained subject matter experts. One user said, <em>&#8220;I use Gemini daily in my work now and wouldn&#8217;t want to give it up.&#8221;</em></p><p>Even more interesting is that these professionals aren&#8217;t waiting for a dedicated platform&#8212;they&#8217;re building prompts, workflows, and little assistants. One designer put it best: <em>&#8220;AI won&#8217;t replace instructional designers&#8212;IDs who know how to use AI will replace those who don&#8217;t.&#8221;</em></p><p>At the same time, there&#8217;s a shared frustration: the <strong>analysis and design phases</strong> are often rushed or neglected due to time pressure. Content needs to go out fast, and the deeper thinking gets compressed or skipped. Many expressed hope that AI could help here&#8212;as long as a human stays in the loop to guide, refine, and contextualize.</p><p>And that&#8217;s exactly the space I believe Serena belongs in&#8212;not to replace the designer but to work alongside them so they can bring more of their skill, judgment, and creativity to a part of the process that has always deserved more time, not less.</p><h3>The embarrassing prototype</h3><p>It&#8217;s still pretty rough&#8212;the UI needs a complete redesign, and we&#8217;re the first to admit it&#8217;s &#8220;embarrassing.&#8221; But after countless iterations, it&#8217;s finally starting to work the way we intended.</p><p>That moment when a user said, <em>&#8220;That&#8217;s a good syllabus&#8212;I could definitely use this as a starting point,&#8221;</em> was huge for us. It didn&#8217;t come easy. One of the biggest challenges was getting the LLM to generate a complete syllabus without repeating the same concepts over and over. 
Solving that, after hours of refining prompts and engineering the workflow, felt like a real breakthrough.</p><p><strong>Serena is the first tool designed to turn learning needs into course blueprints&#8212;before you ever open an authoring tool.</strong></p><div id="youtube2-d-wlHLGGwZE" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;d-wlHLGGwZE&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/d-wlHLGGwZE?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>We&#8217;re still early, and Serena is far from perfect&#8212;but the potential is real. And we&#8217;re not building it alone. If you&#8217;re an instructional designer, course creator, or educator who&#8217;s ever wished for more clarity before creating, more structure before designing&#8212;<strong>we&#8217;d love your input.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://ciaoserena.com/closed-beta&quot;,&quot;text&quot;:&quot;Join the Community&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://ciaoserena.com/closed-beta"><span>Join the Community</span></a></p><div><hr></div><h2><strong>Writing with personality: How I created Serena&#8217;s Style Guide and Voice</strong></h2><p>If you&#8217;ve ever tried to write content for your brand as a non-native English speaker, you know how challenging it can be. Even with great ideas, tone and clarity can slip. 
That&#8217;s where a well-defined voice becomes more than just branding&#8212;it becomes a tool for <em>consistency, confidence,</em> and <em>authenticity</em>.</p><p>For Serena, I wanted the writing to feel warm, clear, and quietly confident. Not too corporate, not too casual. The default style of ChatGPT is helpful but can be a bit generic. I wanted to shape a voice that reflected Serena&#8217;s values: thoughtful, collaborative, and grounded in real-world learning design.</p><p>So, I used the following technique.</p><h3><strong>Step 1: Analyzing Brand Voices with ChatGPT</strong></h3><p>I started by analyzing how ChatGPT describes the voice and tone of other brands. The prompt was straightforward:</p><blockquote><p><em>You are a brand manager analyzing the style and voice of [brand name]. Review content from their website, blog, and marketing materials, then summarize your findings</em></p></blockquote><p>I repeated this for several brands&#8212;Airbnb, Stripe, Notion, Asana, and Headspace&#8212;to observe the language ChatGPT uses to describe tone, sentence structure, rhythm, and formality. This gave me a clear sense of how certain traits relate to brand perception.</p><p>Then, I began shaping Serena&#8217;s style using that same logic.</p><h3><strong>Step 2: Crafting the Serena Voice Prompt</strong></h3><p>I wrote a custom prompt that reflects Serena&#8217;s tone and voice. It wasn&#8217;t a one-shot job. 
I iterated several times, refining until the content felt <em>just right</em>&#8212;something I could imagine Serena &#8220;saying.&#8221; I used the method I described in this article: <em><a href="https://www.radicalcuriosity.xyz/p/the-art-of-ai-prompting-refining">The Art of AI Prompting: Refining Instructions for Precision and Control</a>.</em></p><p>Here&#8217;s the core of the Serena-style prompt I developed:<br></p><blockquote><p>You will write content according to this style guide.</p><p><strong>Serena &#8211; Brand Style &amp; Voice Guide</strong></p><p><strong>Brand Essence</strong></p><p>Serena is more than just an AI-powered tool&#8212;it&#8217;s a companion for instructional designers, helping them craft better courses with ease. It embodies expertise, warmth, and collaboration, creating a space where professionals feel empowered and supported.</p><p><strong>Brand Personality</strong></p><ul><li><p>Knowledgeable, but not academic &#8211; Serena speaks with the confidence of an expert but without jargon or pretentiousness.</p></li><li><p>Warm and welcoming &#8211; The tone is friendly and personal, making users feel part of a close-knit group.</p></li><li><p>Supportive and encouraging &#8211; Instructional designers are not just customers; they are co-creators shaping the future of learning.</p></li><li><p>Curious and open-minded &#8211; Serena embraces new ideas, feedback, and innovation, constantly evolving with its community.</p></li></ul><p><strong>Tone of Voice</strong></p><p>Serena&#8217;s voice should reflect a blend of expertise and personal storytelling. Given its strong connection to the founders, especially Nicola Mattina, the messaging should feel:</p><ul><li><p>Authentic and personal &#8211; Sharing real challenges, lessons learned, and behind-the-scenes moments of building Serena.</p></li><li><p>Conversational and relatable &#8211; Avoiding overly technical or corporate language. 
Instead, speaking like a mentor or peer in a university lounge.</p></li><li><p>Community-driven &#8211; Encouraging discussions, inviting participation, and making users feel like they belong to something bigger.</p></li></ul><p><strong>Storytelling Approach</strong></p><ul><li><p>Founder-led narrative &#8211; Nicola&#8217;s journey, insights, and hands-on experience in product design and EdTech will be central.</p></li><li><p>User-centric stories &#8211; Featuring instructional designers&#8217; challenges and successes with Serena.</p></li><li><p>Behind-the-scenes content &#8211; Sharing the ongoing development, challenges, and decisions that shape Serena.</p></li></ul><p><strong>Brand Lexicon</strong></p><p>Use language that feels natural, professional, but friendly:</p><ul><li><p>Instead of &#8220;AI-driven automation for course creators&#8221;, say &#8220;Serena helps you design smarter, faster, and with confidence.&#8221;</p></li><li><p>Instead of &#8220;users&#8221;, say &#8220;members&#8221; or &#8220;our community&#8221; to reinforce the sense of belonging.</p></li><li><p>Instead of &#8220;customer support&#8221;, say &#8220;we&#8217;re here to help&#8221; or &#8220;let&#8217;s figure it out together.&#8221;</p></li></ul><p><strong>Community-First Mentality</strong></p><ul><li><p>Serena is a place, not just a product. 
The messaging should make users feel like they are part of an evolving knowledge hub.</p></li><li><p>Encourage engagement &#8211; Ask for feedback, showcase members' work, and create opportunities for discussion.</p></li><li><p>Recognize contributions &#8211; Highlight community insights and ideas that help shape Serena.</p></li></ul><p><strong>Content Style</strong></p><ul><li><p>Newsletter &amp; Blog: Thoughtful, reflective, mixing Nicola&#8217;s personal experiences with practical insights.</p></li><li><p>Social Media: Casual and engaging, using direct questions, behind-the-scenes stories, and community shoutouts.</p></li><li><p>Product Copy: Clear and concise, but with a reassuring, encouraging tone.</p></li></ul><p><strong>Example Messaging</strong></p><p><em>Warm &amp; Welcoming<br></em>&#8220;Hey, we&#8217;ve been thinking a lot about how AI can actually support instructional designers&#8212;not replace them. That&#8217;s why we built Serena. Think of it as your brainstorming partner, your course co-creator, your extra set of hands when you need them.&#8221;</p><p><strong>Founder Storytelling<br></strong>&#8220;When we first started building Serena, we kept asking ourselves: what do instructional designers actually need? I&#8217;ve spent years working in product design, and I know that the best solutions come from conversations. That&#8217;s why we&#8217;re building Serena alongside you&#8212;our community.&#8221;</p><p><strong>Community-Focused<br></strong>&#8220;Serena isn&#8217;t just software. It&#8217;s a space where instructional designers share ideas, test new approaches, and shape the future of learning. 
Join us&#8212;we&#8217;d love to have you.&#8221;</p></blockquote><p>This prompt now lives inside a custom GPT we use as a publishing assistant.</p><h3><strong>Step 3: Writing with ChatGPT, the Right Way</strong></h3><p>Every time I write content for Serena, I follow a two-step process:</p><ol><li><p><strong>Use ChatGPT as a co-pilot, not a ghostwriter.</strong><br>I never start with <em>&#8220;Write me a&#8230;&#8221;</em>. Instead, I explain the context and what I&#8217;m trying to achieve. Then, I write the first draft together with ChatGPT, refining as we go. I ask it to critique what we&#8217;ve written, challenge the tone, or suggest alternatives. This back-and-forth is where the real value lies.</p></li><li><p><strong>Polish with the custom GPT style guide.</strong><br>Once I&#8217;m happy with the draft, I pass it through our custom GPT, which includes Serena&#8217;s voice and tone. This helps ensure consistency across everything we publish, from blog posts to interface copy.</p></li></ol><h3><strong>Why It Matters</strong></h3><p>This might sound like extra work, but it&#8217;s made content creation faster, not slower. The results feel more aligned with Serena&#8217;s personality&#8212;approachable, insightful, and designed for humans.</p><p>If you&#8217;re building a product and struggling with writing that feels &#8220;too AI&#8221; or &#8220;too stiff,&#8221; give this method a try. A bit of intentionality goes a long way.</p><p>I&#8217;d love to hear how <em>you</em> approach voice and tone&#8212;especially if you&#8217;re working with AI tools. Have you crafted your style guide? 
Are you using a similar process or experimenting with something completely different?</p><p>Feel free to share your thoughts or tips&#8212;I&#8217;m always learning, and your insights could also help shape how Serena grows.</p><div><hr></div><h2><strong>How AI is changing Product Management (in Italian &#127470;&#127481;)</strong></h2><p>I had the pleasure of chatting with <a href="https://www.linkedin.com/in/marcoimperato/">Marco Imperato</a>, founder of <a href="https://www.productheroes.it/">Product Heroes</a>, about how generative AI and intelligent agents are transforming the role of product managers.</p><p>We talked about roadmaps, discovery, automation, mindset, skills, and why I believe AI isn&#8217;t just a tool&#8212;it&#8217;s a true paradigm shift for those of us working in product. The interview is in <strong>Italian</strong>, but I&#8217;m exploring ways to dub it in English soon.</p><div id="youtube2-3QHG5IJEokw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;3QHG5IJEokw&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/3QHG5IJEokw?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div><hr></div><pre><code>If you were forwarded this email or found it on social media, you can sign up to receive an article like this every Sunday.</code></pre><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://nicolamattina.substack.com/&quot;,&quot;text&quot;:&quot;Subscribe now for free!&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://nicolamattina.substack.com/"><span>Subscribe now 
for free!</span></a></p><div><hr></div><p><em>Thanks for reading this episode of my newsletter. I hope I&#8217;ve been helpful. If you think my sketchbook might interest someone else, I&#8217;d appreciate it if you <strong>shared it on social media and forwarded it to your friends and colleagues</strong>.</em></p><p><em>Nicola</em></p>]]></content:encoded></item><item><title><![CDATA[The Art of AI Prompting: Refining Instructions for Precision and Control]]></title><description><![CDATA[Prompt like a pro &#8211; Learn how to craft effective prompts by combining role-based instructions with iterative refinement, so you can get consistent, high-quality results from AI.]]></description><link>https://www.radicalcuriosity.xyz/p/the-art-of-ai-prompting-refining</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/the-art-of-ai-prompting-refining</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 23 Mar 2025 05:15:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rd0K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Ciao,</p><p>This edition of Radical Curiosity is all about working smarter&#8212;with AI, with your team, and with your time.</p><p>I&#8217;ve been deep-diving into prompt design and testing strategies to get more consistent, useful responses from AI. 
If you&#8217;ve ever struggled to get the output you wanted, you&#8217;ll find some practical techniques in the first section&#8212;including how I&#8217;m using role-based prompts and iterative refinement to build better learning tools.</p><p>You&#8217;ll also find a guide on cutting down unnecessary meetings (without making enemies) and a roundup of the most inspiring real-world uses of ChatGPT&#8212;stories that go well beyond the usual productivity hacks.</p><div><hr></div><h2>Table of Contents</h2><ul><li><p>The Art of AI Prompting: Refining Instructions for Precision and Control</p></li><li><p>How to Cut Down on Useless Meetings</p></li><li><p>The Most Impressive Uses of ChatGPT</p></li></ul><div><hr></div><h2><strong>The Art of AI Prompting: Refining Instructions for Precision and Control</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rd0K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rd0K!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!rd0K!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!rd0K!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png 1272w, 
https://substackcdn.com/image/fetch/$s_!rd0K!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rd0K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1388989,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/159465528?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rd0K!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!rd0K!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png 848w, 
https://substackcdn.com/image/fetch/$s_!rd0K!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!rd0K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e04a5cb-b595-4283-810e-9e7acc378f9f_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Creating precise AI-generated responses isn&#8217;t just about luck&#8212;it&#8217;s about mastering <strong>prompt 
engineering</strong>. This skill has become essential for knowledge workers, allowing us to give AI the right instructions to generate exactly what we need&#8212;with the right level of detail, in the correct style, and with a clear structure.</p><p>Lately, I&#8217;ve been experimenting with different techniques to develop an <strong>instructional designer co-pilot</strong>&#8212;an AI assistant that helps course creators refine learning objectives with the precision of a human expert.</p><p>Defining clear learning objectives is one of the most challenging aspects of course creation. Vague goals like <em>&#8220;understand marketing strategy&#8221;</em> aren&#8217;t useful because they don&#8217;t specify what a learner should actually be able to do. To ensure AI-generated learning objectives align with best practices, I needed a systematic approach&#8212;one that instructional designers already use.</p><p>One of the most popular frameworks in instructional design is Bloom&#8217;s Taxonomy, which categorizes cognitive skills&#8212;such as remembering, analyzing, and evaluating&#8212;helping to transform broad objectives into measurable and actionable learning outcomes.</p><p>My goal was to create a prompt that converts vague, generic learning goals into well-defined objectives based on Bloom&#8217;s Taxonomy. To achieve this, I tested different methods, and one approach stood out&#8212;<strong>iterative prompting</strong>.</p><p>By refining prompts step by step, I consistently improved the quality of AI-generated instructional content, making it clearer, more structured, and more effective. Now, let me walk you through my process&#8212;how I craft, test, and refine prompts using iterative prompting to achieve the best possible results.</p><h3><strong>Step 1: Drafting the Initial Prompt</strong></h3><p>I start with a <strong>role-based prompt</strong>, a technique where the AI is assigned a specific identity and task. 
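</p><p>As a minimal sketch (the helper name and message layout are my own illustration, following the common chat-completion convention rather than anything prescribed in this article), the role can live in a system message that stays separate from the task itself:</p>

```python
# Sketch: keep the role ("who the AI is") apart from the task
# ("what it should do"). build_role_messages is a hypothetical helper,
# not part of any specific library.
def build_role_messages(role: str, task: str) -> list[dict]:
    """Return chat-style messages with the persona in the system slot."""
    return [
        {"role": "system", "content": f"You are {role}."},
        {"role": "user", "content": task},
    ]

messages = build_role_messages(
    "an instructional designer",
    "Improve this learning objective using Bloom's Taxonomy: "
    "'understand marketing strategy'",
)
# `messages` can then be passed to any chat-completion style API.
```

<p>Swapping only the first argument is enough to change the scope and tone of every answer in the conversation.</p><p>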
Instead of giving a generic instruction, I define who the AI is and what it should do, providing context that shapes its responses. This approach makes outputs more precise, structured, and relevant to the task. A well-defined role improves AI-generated responses in several ways:</p><ul><li><p><strong>Improves Accuracy</strong> &#8211; Assigning a domain-specific role ensures more relevant answers. For example, <em>&#8220;You are an instructional designer. Improve this learning objective using Bloom&#8217;s Taxonomy&#8221;</em> yields structured educational goals instead of generic suggestions.</p></li><li><p><strong>Enhances Relevance and Focus</strong> &#8211; A role keeps responses on-topic. Asking, <em>&#8220;Explain machine translation&#8221;</em> may result in a broad answer, but framing it as <em>&#8220;You are a computational linguist. Explain machine translation in simple terms to a non-technical audience&#8221;</em> ensures clarity and accessibility.</p></li><li><p><strong>Provides Implicit Constraints</strong> &#8211; Defining a role naturally limits the scope of responses. <em>&#8220;You are a legal expert in intellectual property&#8221;</em> keeps the AI focused on that field without needing extra instructions.</p></li><li><p><strong>Improves Instruction Following</strong> &#8211; When given a clear persona, AI interprets and follows instructions more reliably. A prompt like <em>&#8220;You are an instructional designer. Convert vague learning goals into measurable objectives using Bloom&#8217;s Taxonomy&#8221;</em> ensures a structured response aligned with best practices.</p></li></ul><p></p><p>You can use this template to structure an effective prompt:</p><blockquote><p><em>You are a prompt engineer and an expert in [topic]. <br>Your task is to help me create a prompt for [specific task]. <br>The prompt should follow these guidelines: [list important instructions]. 
<br>It should include [key details, context, and constraints] <br>and produce an output in [desired format, tone, or style].</em></p></blockquote><p></p><p>To apply this template to course design, we could write:</p><blockquote><p><em>You are a prompt engineer and an expert in instructional design. <br>Your task is to help me create a prompt that converts vague learning goals into well-defined learning objectives using Bloom&#8217;s Taxonomy. <br>Ensure that each learning objective is clear, measurable, and action-oriented.<br>Use Bloom&#8217;s Taxonomy to structure objectives according to cognitive levels (e.g., Remember, Understand, Apply, Analyze, Evaluate, Create). <br>Ensure the output is formatted as a bullet list with each learning objective categorized by its cognitive level.</em></p></blockquote><p></p><p>I use this prompt to generate the specific prompt I need. Here&#8217;s the result:</p><blockquote><p><em>You are an instructional design expert skilled in developing precise learning objectives using Bloom&#8217;s Taxonomy. 
Your task is to take vague learning goals and convert them into clear, measurable, and action-oriented learning objectives, categorized by cognitive levels.</em></p><p><em>Instructions:<br>Analyze the provided vague learning goals and determine their intended outcomes.<br>Apply Bloom&#8217;s Taxonomy to create structured learning objectives at different cognitive levels (Remember, Understand, Apply, Analyze, Evaluate, Create).<br>Ensure clarity and measurability by using appropriate action verbs (e.g., "Define," "Explain," "Demonstrate," "Compare," "Justify," "Design").<br>Format the response as a bullet-point list, grouping objectives by cognitive level.</em></p><p><em>Example Input:<br>Vague Learning Goal: Students should understand the basics of project management.</em></p><p><em>Example Output:<br>- Remember: Define key project management terms (e.g., scope, milestones, stakeholders).<br>- Understand: Explain the importance of project planning and scheduling.<br>- Apply: Develop a simple project timeline using a Gantt chart.<br>- Analyze: Compare different project management methodologies (e.g., Agile vs. Waterfall).<br>- Evaluate: Assess the effectiveness of a project plan based on given constraints.<br>- Create: Design a project management plan for a small business initiative.</em></p><p><em>Now, convert the following vague learning goals into structured learning objectives using the same format:<br>[Insert vague learning goals here]</em></p></blockquote><h3><strong>Step 2: Using AI Self-Analysis to Improve Prompt Engineering</strong></h3><p>After running the initial prompt, I assess the AI&#8217;s response to determine whether it meets expectations or needs refinement. However, objectively evaluating AI-generated content can be challenging, especially when dealing with complex prompts. 
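</p><p>One lightweight aid, sketched below, is to route the draft prompt back through the model for a structured critique. The <em>ask</em> callable is an assumed stand-in for whatever LLM call you use, and the template here is a condensed stand-in, not the exact self-analysis prompt shown in this section:</p>

```python
# Sketch: ask the model to critique a draft prompt before reusing it.
# `ask` is any callable str -> str that queries an LLM; it is an
# assumption, not a real library function.
CRITIQUE_TEMPLATE = (
    "Analyze this prompt for clarity, specificity, and structure. "
    "List strengths, weaknesses, and suggested improvements, then "
    "rewrite an optimized version.\n\nPrompt to analyze:\n{prompt}"
)

def critique_prompt(draft: str, ask) -> str:
    """Return the model's structured critique of a draft prompt."""
    return ask(CRITIQUE_TEMPLATE.format(prompt=draft))
```

<p>In practice I run this once per iteration, read the critique, and decide which suggestions are worth keeping.</p><p>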
To make this process more effective, I prompt the AI to <strong>analyze its own output</strong>, using a secondary prompt designed for self-assessment.</p><p>LLMs process vast amounts of data and follow patterns, but they don&#8217;t inherently understand <em>why</em> a response may be weak or how to improve it. Asking the AI to critically evaluate its own output encourages a more structured review, leading to:</p><ul><li><p><strong>Improved clarity and coherence</strong> &#8211; The AI reassesses its wording and structure, refining vague or inconsistent sections.</p></li><li><p><strong>More precise and relevant content</strong> &#8211; Self-analysis helps identify missing details or misaligned elements, ensuring the response better serves its purpose.</p></li><li><p><strong>A more efficient and iterative workflow</strong> &#8211; Instead of manually diagnosing weaknesses, I leverage the AI&#8217;s pattern recognition to pinpoint areas for refinement faster.</p></li></ul><p>To achieve this, I use a structured <strong>self-analysis prompt</strong>:</p><blockquote><p><em>Analyze this prompt by assessing its clarity, specificity, structure, and effectiveness in generating the desired response. Identify weaknesses and strengths, and suggest targeted improvements. Then, rewrite the prompt in a more optimized version. Structure your response as follows:<br>1. Strengths<br>2. Weaknesses<br>3. Suggested Improvements<br>4. 
Optimized Version of the Prompt</em></p></blockquote><p></p><p>This self-assessment method helps detect common problems, such as:</p><ul><li><p><strong>Lack of specificity</strong> &#8211; The AI may generate responses that are too broad.</p></li><li><p><strong>Inconsistencies</strong> &#8211; Instructions may lead to varied or unpredictable outputs.</p></li><li><p><strong>Missing key details</strong> &#8211; Important context or structure may need reinforcement.</p></li></ul><p>Here&#8217;s the result of the self-assessment:</p><p><strong>Strengths:</strong></p><ul><li><p><strong>Clarity</strong> &#8211; The prompt provides a clear task and instructions for converting vague learning goals into well-defined objectives.</p></li><li><p><strong>Use of Bloom&#8217;s Taxonomy</strong> &#8211; It explicitly instructs the AI to structure learning objectives using different cognitive levels.</p></li><li><p><strong>Actionability</strong> &#8211; It emphasizes measurable objectives by suggesting action verbs from Bloom&#8217;s Taxonomy.</p></li><li><p><strong>Well-structured output format</strong> &#8211; The use of bullet points and categorization by cognitive levels ensures an organized response.</p></li><li><p><strong>Example provided</strong> &#8211; Demonstrating a sample transformation improves AI comprehension and alignment with expectations.</p></li></ul><p><strong>Weaknesses:</strong></p><ul><li><p><strong>Overly broad instructions</strong> &#8211; While the prompt instructs the AI to apply Bloom&#8217;s Taxonomy, it does not explicitly guide how many objectives to generate per level or whether all levels must be covered.</p></li><li><p><strong>Potential redundancy</strong> &#8211; The instruction to apply Bloom&#8217;s Taxonomy appears in multiple places, which could be streamlined.</p></li><li><p><strong>Lack of constraints</strong> &#8211; There are no guidelines for subject specificity, depth, or complexity, which might lead to inconsistent 
outputs.</p></li><li><p><strong>Missing refinement step</strong> &#8211; There is no directive to refine vague learning goals before converting them into structured objectives.</p></li></ul><p><strong>Suggested Improvements:</strong></p><ul><li><p><strong>Clarify the expected number of learning objectives</strong> per cognitive level to ensure a balanced distribution.</p></li><li><p><strong>Streamline instructions</strong> by removing repetitive mentions of Bloom&#8217;s Taxonomy.</p></li><li><p><strong>Introduce optional constraints</strong> (e.g., subject-specificity, difficulty level, learning context).</p></li><li><p><strong>Add a refinement step</strong> where the AI first clarifies the vague learning goal before transforming it into objectives.</p></li><li><p><strong>Ensure flexibility</strong> by allowing the user to request only specific cognitive levels instead of always covering all six.</p></li></ul><p><strong>Optimized Version of the Prompt:</strong></p><blockquote><p><em>You are an instructional design expert specializing in learning objectives. Your task is to transform vague learning goals into clear, measurable, and action-oriented learning objectives using Bloom&#8217;s Taxonomy.</em></p><p><em>Instructions:<br>1. Refine the input by clarifying any ambiguous learning goals before transformation.<br>2. Convert the refined goal into structured learning objectives, categorizing them by cognitive level (Remember, Understand, Apply, Analyze, Evaluate, Create).<br>3. Ensure each objective is measurable by using action verbs aligned with Bloom&#8217;s Taxonomy.<br>4. Provide 1&#8211;2 objectives per level (unless otherwise specified).<br>5. 
Format the response as a bullet-point list, grouping objectives by cognitive level.</em></p><p><em>Customization Options:<br>- If applicable, adapt the objectives to a specific subject or learning level (e.g., beginner, advanced).<br>- If the user requests, focus only on specific Bloom&#8217;s levels instead of all six.</em></p><p><em>Example Input:<br>Vague Learning Goal: Students should understand the basics of project management.</em></p><p><em>Example Output:<br>- Remember: Define key project management terms (e.g., scope, milestones, stakeholders).<br>- Understand: Explain the importance of project planning and scheduling.<br>- Apply: Develop a simple project timeline using a Gantt chart.<br>- Analyze: Compare different project management methodologies (e.g., Agile vs. Waterfall).<br>- Evaluate: Assess the effectiveness of a project plan based on given constraints.<br>- Create: Design a project management plan for a small business initiative.</em></p><p><em>Now, refine and convert the following vague learning goals into structured learning objectives:<br>[Insert vague learning goals here]</em></p></blockquote><p></p><p>As you can see, the LLM made minor refinements and added structured instructions. Even small wording changes can significantly impact the output&#8212;this is a language model, and words carry meaning and nuance.</p><p>When crafting prompts, expertise in language and communication is just as important as technical knowledge. While engineers and data scientists understand the technical mechanisms behind LLMs, those with a strong linguistic background often excel at designing effective prompts because they anticipate subtle shifts in meaning and interpretation.</p><p>One aspect I would reconsider is the constraint of generating one or two learning objectives per cognitive level. While it&#8217;s true that overly rigid constraints can limit variation, they can also improve clarity and consistency in structured outputs. 
The impact depends on the specific use case.</p><h3>Step 3: Test, test, test</h3><p>The best way to assess any instruction&#8217;s effect is through systematic testing. Instead of assuming a constraint will restrict creativity, it&#8217;s useful to compare multiple prompt variations and analyze the differences in output quality. Structured experimentation, rather than intuition alone, leads to the best results. Here&#8217;s a step-by-step approach to systematically evaluate prompt effectiveness.</p><h4><strong>1. Define the Testing Objectives</strong></h4><p>Before testing, it&#8217;s crucial to clarify what we&#8217;re measuring. Some key questions include:</p><ul><li><p>Does adding constraints (e.g., &#8220;Generate 1-2 learning objectives per Bloom&#8217;s level&#8221;) enhance clarity or restrict creativity?</p></li><li><p>Does rewording the prompt improve the relevance and completeness of responses?</p></li><li><p>Do explicit examples improve the quality and consistency of generated outputs?</p></li></ul><p>By defining the goal upfront, we can focus on measurable improvements rather than subjective impressions.</p><h4><strong>2. Create Variations of the Prompt</strong></h4><p>A robust test requires multiple prompt variations, each adjusting only one element at a time. 
For instance:</p><ul><li><p><strong>Baseline Prompt (No Constraints):</strong> &#8220;Convert this vague learning goal into structured learning objectives using Bloom&#8217;s Taxonomy.&#8221;</p></li><li><p><strong>Constrained Prompt:</strong> &#8220;Generate exactly 1-2 learning objectives per Bloom&#8217;s level.&#8221;</p></li><li><p><strong>Refined Prompt:</strong> &#8220;First refine the vague learning goal, then generate clear, well-structured objectives.&#8221;</p></li><li><p><strong>Example-Enriched Prompt:</strong> &#8220;Use the following example as a guide: [Insert Example].&#8221;</p></li></ul><p>Testing these different versions helps us identify which refinements enhance performance without introducing unnecessary limitations.</p><h4><strong>3. Select a Representative Sample of Inputs</strong></h4><p>To ensure results are generalizable, test prompts with diverse learning goals across different domains. Some examples:</p><ul><li><p><em>Students should understand the basics of project management.</em></p></li><li><p><em>Learners will get familiar with marketing strategies.</em></p></li><li><p><em>Employees will learn about workplace safety.</em></p></li><li><p><em>Students should improve their critical thinking skills.</em></p></li><li><p><em>Learners will explore how AI is transforming industries.</em></p></li></ul><p>Using a broad set of inputs helps avoid bias in the results.</p><h4><strong>4. 
Generate Outputs &amp; Collect Data</strong></h4><p>Each input should be processed using all prompt variations, and the results should be systematically recorded in a structured format, such as a spreadsheet.</p><p>Key data points to evaluate:</p><ul><li><p><strong>Relevance</strong> &#8211; Do the objectives align with the learning goal?</p></li><li><p><strong>Clarity</strong> &#8211; Are they easy to understand and well-structured?</p></li><li><p><strong>Completeness</strong> &#8211; Do they effectively cover different cognitive levels?</p></li><li><p><strong>Creativity</strong> &#8211; Are the objectives varied in phrasing and approach?</p></li><li><p><strong>Conciseness</strong> &#8211; Are they overly long, too brief, or just right?</p></li></ul><p>A simple scoring system (e.g., <strong>1 to 5 scale</strong>) for each criterion allows for <strong>quantitative comparisons</strong> between different prompt versions.</p><h4><strong>5. Analyze the Results</strong></h4><p>After testing, compare outputs <strong>across different prompts</strong> and assess patterns. 
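</p><p>The collect-and-score loop from the previous step can be sketched as a small harness. The criteria follow the list above, while the <em>generate</em> and <em>rate</em> callables are assumptions standing in for the model call and for human (or model-based) scoring:</p>

```python
# Sketch: run every prompt variant on every input and average the
# 1-5 scores per criterion. `generate` and `rate` are assumed callables.
from statistics import mean

CRITERIA = ["relevance", "clarity", "completeness", "creativity", "conciseness"]

def score_matrix(variants: dict, goals: list, generate, rate) -> dict:
    """variants: name -> prompt text; generate(prompt, goal) -> output;
    rate(output) -> {criterion: 1-5 score}. Returns averaged scores."""
    results = {}
    for name, prompt in variants.items():
        scores = [rate(generate(prompt, goal)) for goal in goals]
        results[name] = {c: round(mean(s[c] for s in scores), 2)
                         for c in CRITERIA}
    return results
```

<p>The resulting rows of averages turn the comparison into a matter of reading a table rather than re-judging raw outputs one by one.</p><p>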
Example table:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vN0p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vN0p!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png 424w, https://substackcdn.com/image/fetch/$s_!vN0p!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png 848w, https://substackcdn.com/image/fetch/$s_!vN0p!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png 1272w, https://substackcdn.com/image/fetch/$s_!vN0p!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vN0p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png" width="1456" height="625" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:625,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:123396,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/159465528?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vN0p!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png 424w, https://substackcdn.com/image/fetch/$s_!vN0p!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png 848w, https://substackcdn.com/image/fetch/$s_!vN0p!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png 1272w, https://substackcdn.com/image/fetch/$s_!vN0p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ff3ce6d-c448-4244-9409-ad50256608f4_1492x640.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This structured evaluation allows us to make data-driven refinements instead of relying on subjective preference. Once results are analyzed, adjustments can be made:</p><ul><li><p>If constraints limit creativity, loosen them.</p></li><li><p>If examples improve clarity, make them a standard part of the prompt.</p></li><li><p>If objectives lack variety, modify wording to encourage more diverse outputs.</p></li></ul><h3>How did the prompt perform?</h3><p>Curious about how the prompt we created in this article performed in generating learning objectives using Bloom&#8217;s Taxonomy? 
Here are the answers for &#8220;Students should understand the basics of project management.&#8221;</p><p><em><strong>Refined Goal:</strong></em></p><p><em>Students will gain foundational knowledge of project management principles, methodologies, and tools, enabling them to apply best practices in planning, executing, and evaluating projects effectively.</em></p><p><em><strong>Structured Learning Objectives:</strong></em></p><p><em><strong>Remember (Knowledge Recall)</strong></em></p><ul><li><p><em>Define key project management terms (e.g., scope, milestones, stakeholders, risk management).</em></p></li><li><p><em>List the five phases of the project management lifecycle (initiation, planning, execution, monitoring &amp; controlling, closure).</em></p></li></ul><p><em><strong>Understand (Comprehension)</strong></em></p><ul><li><p><em>Explain the importance of project planning and how it impacts project success.</em></p></li><li><p><em>Describe the differences between predictive (Waterfall) and adaptive (Agile) project management approaches.</em></p></li></ul><p><em><strong>Apply (Practical Execution)</strong></em></p><ul><li><p><em>Develop a simple project timeline using a Gantt chart.</em></p></li><li><p><em>Use a Work Breakdown Structure (WBS) to organize project tasks into manageable components.</em></p></li></ul><p><em><strong>Analyze (Critical Thinking)</strong></em></p><ul><li><p><em>Compare different project management methodologies (e.g., Agile vs. 
Waterfall) in terms of flexibility, risk, and stakeholder involvement.</em></p></li><li><p><em>Identify common project risks and propose mitigation strategies.</em></p></li></ul><p><em><strong>Evaluate (Judgment and Assessment)</strong></em></p><ul><li><p><em>Assess the effectiveness of a project plan based on scope, time, cost, and quality constraints.</em></p></li><li><p><em>Critique real-world project case studies to determine the key factors leading to success or failure.</em></p></li></ul><p><em><strong>Create (Synthesis and Innovation)</strong></em></p><ul><li><p><em>Design a project management plan for a small business initiative, incorporating stakeholder communication and risk management strategies.</em></p></li><li><p><em>Develop a project charter that outlines objectives, scope, and key deliverables for a hypothetical project.</em></p></li></ul><p><em>Would you like me to tailor these objectives for a specific audience (e.g., beginners, advanced professionals, students in a particular field)?</em></p><h3><strong>Conclusions</strong></h3><p>Prompt engineering is not about getting the perfect response on the first try&#8212;it&#8217;s a process of patience, trial, and refinement. AI-generated content improves dramatically when you move beyond one-shot prompting and embrace more sophisticated techniques like iterative prompting and AI self-analysis.</p><p>Once you apply these methods, you&#8217;ll unlock a new level of precision and control over AI-generated responses. Instead of passively accepting AI&#8217;s output, you&#8217;ll become an active collaborator, refining prompts to achieve optimal results. The difference is transformative&#8212;your prompts will evolve from generic queries to structured instructions that consistently generate high-quality responses.</p><p>However, the real leap happens when you integrate these techniques into building custom GPTs or leveraging the advanced features of AI tools like OpenAI&#8217;s projects. 
At this stage, you&#8217;re not just crafting individual prompts but designing AI-powered workflows that scale and adapt to your needs.</p><p>The key takeaway? Have fun, be curious, and experiment continuously. The more you refine and test, the better you&#8217;ll understand how to guide AI effectively. With the right approach, prompt engineering becomes more than just a skill&#8212;it&#8217;s a superpower that enhances productivity, creativity, and problem-solving in countless domains.</p><p>Reference: <strong><a href="https://help.openai.com/en/articles/10032626-prompt-engineering-best-practices-for-chatgpt#h_dce69a9126">Prompt engineering best practices for ChatGPT</a></strong></p><div><hr></div><h2>How to Cut Down on Useless Meetings</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DP3f!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DP3f!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!DP3f!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!DP3f!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png 1272w, 
https://substackcdn.com/image/fetch/$s_!DP3f!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DP3f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1292652,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/159465528?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DP3f!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!DP3f!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png 848w, 
https://substackcdn.com/image/fetch/$s_!DP3f!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!DP3f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4efa8ee4-cc73-4a55-8c52-d2620d6dc92b_1456x816.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Let&#8217;s face it: most of us are drowning in meetings. 
The kind that starts with &#8220;quick sync?&#8221; and ends with &#8220;let&#8217;s schedule a follow-up.&#8221; When calls block every hour on your calendar, you start wondering when you're supposed to get any work done.</p><p>The good news? You&#8217;re not the only one drowning in meetings&#8212;most of your colleagues are quietly enduring the same endless string of calls, often wondering why they&#8217;re there. That&#8217;s why you&#8217;ll likely face little resistance when you gently suggest an alternative to another back-to-back meeting. A shift toward more writing, a bit more structure, and the thoughtful use of AI isn&#8217;t just better for you&#8212;it&#8217;s a relief for everyone. Here&#8217;s how to start reclaiming your time <em>and</em> helping your team breathe a little easier.</p><h3>Start with a Meeting Doc</h3><p>One of the simplest and most effective strategies is using a meeting doc as a pre-call filter. Before scheduling a call, create a shared document where all participants can add discussion points, questions, and relevant context in advance. This alone changes the dynamic: instead of jumping into a meeting to figure out what needs to be discussed, you force clarity beforehand.</p><p>Even better, many of the questions get answered in the doc before the meeting ever happens. People comment, resolve doubts, and suggest alternatives asynchronously. Often, the meeting becomes shorter&#8212;or disappears entirely&#8212;because the core issues have already been resolved. And when you <em>do</em> meet, you come in prepared to make decisions, not just exchange status updates.</p><h3>Use AI as a Writing Co-Pilot (Not a Meeting Substitute)</h3><p>There&#8217;s a lot of hype about AI replacing meetings, but the more interesting (and sustainable) path is to use AI to smooth asynchronous collaboration. 
For example, instead of recapping a long Slack thread or digging through email chains, use AI to generate a concise summary you can share with the team. Train a GPT on your internal docs to answer recurring questions so you don&#8217;t have to explain the same thing to five different people in five different meetings.</p><p>The goal here isn&#8217;t to avoid human interaction&#8212;it&#8217;s to reduce the noise. When AI helps you communicate more clearly and efficiently, you create less need for &#8220;just to clarify&#8221; meetings.</p><h3>Replace Email Chains with a Single Source of Truth</h3><p>As a product manager, I constantly engage with stakeholders across different teams. For a long time, this meant long email chains, scattered conversations, and lots of &#8220;looping in&#8221; at the last minute. However, moving those conversations into Asana changed everything.</p><p>Now, every feature, request, or bug has a home. When someone has a question or needs an update, they go to the ticket. That ticket becomes a timeline, a reference point, a record of decisions. Conversations are threaded and visible, and you avoid the meeting that exists <em>only</em> to bring someone up to speed. This small shift&#8212;from fragmented email to centralized async updates&#8212;can save you dozens of meetings a month.</p><h3>Record It Once, Use It Often</h3><p>When you need to present something dynamic&#8212;like a new feature, a concept, or an idea&#8212;it often makes more sense to replace a long descriptive document or static presentation with a video. I personally use Loom for this. I&#8217;ll usually sketch a quick wireframe in Miro and then record a short video to walk through the idea. 
It works really well: people can absorb the information on their own time, rewatch if needed, and come to the next conversation with a clearer understanding of what&#8217;s being proposed.</p><h3>Meetings Aren&#8217;t the Enemy&#8212;Disorganized Ones Are</h3><p>None of this means you should cancel every meeting. Some conversations need to happen in real-time, and face-to-face (or screen-to-screen) is often the fastest way to get alignment. But when you start with documentation, use AI to reduce friction, and treat your time as a shared resource&#8212;not an always-available commodity&#8212;you start filtering out the noise.</p><div><hr></div><h2><strong>The Most Impressive Uses of ChatGPT</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MN8x!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MN8x!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png 424w, https://substackcdn.com/image/fetch/$s_!MN8x!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png 848w, https://substackcdn.com/image/fetch/$s_!MN8x!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png 1272w, 
https://substackcdn.com/image/fetch/$s_!MN8x!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MN8x!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png" width="1456" height="776" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:776,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:326182,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.radicalcuriosity.xyz/i/159465528?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MN8x!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png 424w, https://substackcdn.com/image/fetch/$s_!MN8x!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png 848w, 
https://substackcdn.com/image/fetch/$s_!MN8x!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png 1272w, https://substackcdn.com/image/fetch/$s_!MN8x!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aff95e1-dc1b-4eed-8cd3-838a8be2ae14_1824x972.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Recently, I started using Reddit. 
Initially, I was skeptical about it, assuming it was just another chaotic social media platform filled with shallow discussions. However, after exploring various threads, I realized there is a lot of hidden value in the conversations. What struck me the most was the constant effort to keep interactions genuinely human. Unlike other platforms, where discourse often feels performative or dominated by algorithms, Reddit seems to foster organic, sometimes brutally honest discussions that reflect real-life experiences, challenges, and breakthroughs.</p><p>One such discussion that caught my attention was about the <strong><a href="https://www.reddit.com/r/ChatGPT/comments/1jephjl/whats_the_most_impressive_thing_youve_used/">most impressive or unexpected ways people have used ChatGPT</a></strong>. As someone deeply interested in AI, I found it fascinating to see how people were not just using it for trivial tasks but were integrating it into their lives in meaningful, transformative ways. The stories shared ranged from personal growth journeys to professional breakthroughs, revealing a landscape of possibilities that extend far beyond simple chatbot interactions.</p><p>A striking example came from <strong>a user who had struggled with fitness for years</strong>. Through continuous conversations with ChatGPT, they found the motivation to change their lifestyle completely. What started as casual exchanges about diet and exercise evolved into a structured fitness journey, leading to weight loss, improved health, and even participation in ultra-hiking marathons. AI has become more than just a tool; it is a virtual accountability partner that adapts to people's progress and keeps them engaged.</p><p>Another story that stood out was about <strong>a parent navigating their child's serious medical condition</strong>. When their daughter was diagnosed with a brain tumor, they turned to ChatGPT to educate themselves about the diagnosis and possible treatments. 
The AI helped them structure their conversations with doctors, making it easier to process complex medical information. In a particularly validating moment, they later showed the transcript to a doctor, who confirmed that the AI&#8217;s responses were well-structured and accurate. It was a reminder of how AI, when used thoughtfully, can serve as a powerful ally in moments of crisis.</p><p>Beyond personal life, ChatGPT has also become a crucial assistant in professional and educational spaces. <strong>One teacher described how they used it</strong> to refine grading rubrics, generate personalized exercises for students, and create a more structured learning environment. Another person, with no prior coding experience, built an entire inventory management system for their family business with ChatGPT&#8217;s guidance. In both cases, AI wasn&#8217;t replacing human expertise but rather enhancing it, acting as an on-demand collaborator that empowered users to accomplish things they previously thought were out of reach.</p><p>What&#8217;s interesting about these use cases is that they all share a common theme: <strong>AI as a companion in problem-solving</strong>. Whether it&#8217;s fitness, medical research, professional development, or creative endeavors, people aren&#8217;t passively consuming AI-generated content&#8212;they&#8217;re co-creating with it. One user, for instance, described how they had ChatGPT critique their writing, not in a mechanical, grammar-checking way, but with thoughtful insights that helped them improve their style and storytelling. Another used it as a sparring partner for political debates, prompting the AI to argue against their views to refine their understanding of different perspectives.</p><p>The thread also surfaced more unconventional, yet equally fascinating, applications. Someone shared how they used ChatGPT to overcome their aversion to running by analyzing their stride and recommending technique adjustments. 
Another person built a text-based adventure game that became the foundation for a novel they were developing. Some even used it as a therapist, engaging in deep, reflective conversations that helped them process emotions, manage anxiety, and set goals for self-improvement.</p><p>These experiences paint a picture of AI not as a distant, futuristic force but as something deeply integrated into daily life. What started as a tool for answering questions is evolving into an adaptable, personalized assistant that people rely on in profound ways. The most compelling takeaway from these stories isn&#8217;t just that ChatGPT is useful&#8212;it&#8217;s that people are shaping it into what they need it to be. And in that sense, it&#8217;s less about the technology itself and more about human ingenuity in finding ways to make it work for them.</p><div><hr></div><pre><code>If you were forwarded this email or if you come from social media, you can sign up to receive an article like this every Sunday.</code></pre><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://nicolamattina.substack.com/&quot;,&quot;text&quot;:&quot;Subscribe now for free!&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://nicolamattina.substack.com/"><span>Subscribe now for free!</span></a></p><div><hr></div><p><em>Thanks for reading this episode of my newsletter. I hope I&#8217;ve been helpful. If you think my sketchbook might interest someone else, I&#8217;d appreciate it if you <strong>shared it on social media and forwarded it to your friends and colleagues</strong>.</em></p><p><em>Nicola</em></p>]]></content:encoded></item><item><title><![CDATA[How I Started Building Agentic Systems]]></title><description><![CDATA[Starting a new series on agentic AI systems. 
In this first article, I share how I&#8217;m experimenting with building a system to streamline LinkedIn engagement.]]></description><link>https://www.radicalcuriosity.xyz/p/how-i-started-building-agentic-systems</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/how-i-started-building-agentic-systems</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Sun, 19 Jan 2025 08:29:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wS-X!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>First, thank you to the over 1,000 subscribers who follow this newsletter &#129395;.</p><p>This article kicks off a series dedicated to agentic AI systems and their practical applications in business workflows. Over the coming weeks, I&#8217;ll share my experiences, insights, and tips for building and embedding these systems to optimize processes and achieve meaningful outcomes.</p><p>To begin with, I&#8217;ll introduce the concept of agentic AI, explain its value, and share the initial steps I&#8217;ve taken to develop a system designed to help me manage my personal brand on LinkedIn.</p><div><hr></div><h2>Table of Contents</h2><ul><li><p>Building an Agentic System with Relevance.ai</p></li></ul><div><hr></div><h1>Building an Agentic System with Relevance.ai</h1><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wS-X!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!wS-X!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wS-X!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wS-X!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wS-X!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wS-X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg" width="1024" height="512" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:512,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;8 strumenti per scrivere le e-mail pi&#249; velocemente con l&#8217;intelligenza artificiale&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="8 strumenti per scrivere le e-mail pi&#249; velocemente con 
l&#8217;intelligenza artificiale" title="8 strumenti per scrivere le e-mail pi&#249; velocemente con l&#8217;intelligenza artificiale" srcset="https://substackcdn.com/image/fetch/$s_!wS-X!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wS-X!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wS-X!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wS-X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f795c28-d75e-427e-9483-83796c385689_1024x512.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Like many others, I&#8217;ve incorporated ChatGPT into my daily routine, relying on it for brainstorming, drafting, and problem-solving. I&#8217;ve also delved into prompt engineering, following numerous tutorials to refine my skills. Recently, however, I&#8217;ve ventured beyond simple interactions to experiment with agentic AI&#8212;a more complex and layered approach to leveraging artificial intelligence.</p><p>Transitioning from conversations with ChatGPT to configuring a system of AI agents is a significant leap. While no-code and low-code platforms are making this technology more accessible, the process still requires a foundational understanding of programming logic and, ideally, some familiarity with a programming language such as JavaScript or Python.</p><h3>Understanding Agentic Systems</h3><p>Before delving deeper, let&#8217;s define what an agentic system looks like. At its core, such a system is a network of interconnected AI agents, each assigned specific tasks and capabilities. 
For example:</p><ul><li><p><strong>Customer Support Automation</strong>: An agent triages incoming customer queries, another retrieves relevant data, and a third generates responses based on predefined policies.</p></li><li><p><strong>Content Production Pipelines</strong>: One agent handles research, another drafts content, and a third polishes it for publication.</p></li><li><p><strong>Data Analysis Workflows</strong>: An agent collects data from various sources, another organizes it into a digestible format, and a third generates insights or visualizations.</p></li></ul><p>These systems showcase how multiple specialized agents can collaborate to achieve complex goals. Numerous examples and tutorials are available on YouTube. For instance,&nbsp;<a href="https://www.benvansprundel.com/">Ben Van Sprundel</a>&nbsp;uses <a href="https://relevanceai.com/">Relevance.ai</a> and <a href="https://www.make.com/en">Make.com</a> to build an&nbsp;<em>AI Agent Army&nbsp;</em>that assists with day-to-day tasks.&nbsp;</p><div id="youtube2-Lj5fyDX01v8" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Lj5fyDX01v8&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Lj5fyDX01v8?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Similarly, in another tutorial, Nate Herk leverages <a href="https://n8n.io/">n8n</a> to achieve a comparable goal by creating a personal assistant:</p><div id="youtube2-9G-5SiShBKM" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;9G-5SiShBKM&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe 
src="https://www.youtube-nocookie.com/embed/9G-5SiShBKM?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h3>Designing an Agentic System</h3><p>Creating an agentic system begins with analyzing the process you want to optimize. This means breaking it down into high-level stages.</p><p>For instance, I aim to strengthen my positioning as a product leader by showcasing my expertise in cognitive SaaS, my enthusiasm for agentic AI, and my interest in edtech. I plan to use LinkedIn as my primary platform for this purpose. Among the activities suggested by experts, I decided to start by actively commenting on posts from people in my network or creators within the niches I&#8217;m interested in to increase visibility and reach.</p><p>This process flow outlines a streamlined approach to generating, supervising, and posting quality comments, balancing automation with human oversight.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nY8l!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F246cb018-d49c-4dc8-a80b-bced028fe7c1_1567x1600.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nY8l!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F246cb018-d49c-4dc8-a80b-bced028fe7c1_1567x1600.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nY8l!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F246cb018-d49c-4dc8-a80b-bced028fe7c1_1567x1600.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!nY8l!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F246cb018-d49c-4dc8-a80b-bced028fe7c1_1567x1600.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!nY8l!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F246cb018-d49c-4dc8-a80b-bced028fe7c1_1567x1600.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nY8l!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F246cb018-d49c-4dc8-a80b-bced028fe7c1_1567x1600.jpeg" width="1456" height="1487" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/246cb018-d49c-4dc8-a80b-bced028fe7c1_1567x1600.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1487,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nY8l!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F246cb018-d49c-4dc8-a80b-bced028fe7c1_1567x1600.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nY8l!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F246cb018-d49c-4dc8-a80b-bced028fe7c1_1567x1600.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!nY8l!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F246cb018-d49c-4dc8-a80b-bced028fe7c1_1567x1600.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!nY8l!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F246cb018-d49c-4dc8-a80b-bced028fe7c1_1567x1600.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>1. 
Identifying Relevant Content</strong></p><p>The first step involves gathering a list of profiles already in your network, as well as influencers, content creators, hashtags, and companies aligned with your interests or goals. These sources combine to create a List of Posts to be Evaluated.</p><p><strong>2. Evaluation and Prioritization</strong></p><p>Once posts are identified, they are assessed against evaluation criteria such as:</p><ul><li><p><em>Reach</em>: Measured by the number of comments, indicating a post&#8217;s visibility and engagement potential.</p></li><li><p><em>Features of the Post</em>: Posts should include insightful content and offer opportunities where a well-thought-out comment can genuinely contribute value to the discussion.</p></li></ul><p><strong>3. Generating Insightful Comments</strong></p><p>The agent will follow explicit rules to ensure comments remain relevant and avoid appearing mechanical&#8212;a common pitfall of AI-generated content today. For every post, it will generate three possible comments, each with a distinct format and perspective, ensuring authenticity and adding meaningful value to the discussion.</p><p><strong>4. Human-in-the-Loop Supervision</strong></p><p>While automation speeds up the process, human oversight is critical. A reviewer edits and refines each comment to ensure it is authentic, relevant, and adds value before posting. This step maintains quality and prevents comments from feeling generic or robotic.</p><p><strong>5. Continuous Improvement</strong></p><p>Every approved comment will be stored in a knowledge base. The goal is to teach the LLM my style and tone, enabling it to generate comments that feel authentically human.</p><h3>Thinking of Agents as Specialized Interns</h3><p>To transform this high-level workflow into an agentic system, I&#8217;ve found it helpful to think of agents not as mere components of a workflow but as a team of highly specialized interns. Each agent is equipped to use one or more tools. 
</p><p>In our example, we will have just one agent with whom I will interact that will use three tools. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bUsj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F340deb25-e3b4-441c-ba7f-aec43633cca3_2826x1414.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bUsj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F340deb25-e3b4-441c-ba7f-aec43633cca3_2826x1414.jpeg 424w, https://substackcdn.com/image/fetch/$s_!bUsj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F340deb25-e3b4-441c-ba7f-aec43633cca3_2826x1414.jpeg 848w, https://substackcdn.com/image/fetch/$s_!bUsj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F340deb25-e3b4-441c-ba7f-aec43633cca3_2826x1414.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!bUsj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F340deb25-e3b4-441c-ba7f-aec43633cca3_2826x1414.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bUsj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F340deb25-e3b4-441c-ba7f-aec43633cca3_2826x1414.jpeg" width="1456" height="729" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/340deb25-e3b4-441c-ba7f-aec43633cca3_2826x1414.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:729,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:110959,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bUsj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F340deb25-e3b4-441c-ba7f-aec43633cca3_2826x1414.jpeg 424w, https://substackcdn.com/image/fetch/$s_!bUsj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F340deb25-e3b4-441c-ba7f-aec43633cca3_2826x1414.jpeg 848w, https://substackcdn.com/image/fetch/$s_!bUsj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F340deb25-e3b4-441c-ba7f-aec43633cca3_2826x1414.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!bUsj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F340deb25-e3b4-441c-ba7f-aec43633cca3_2826x1414.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The first step is to develop and test the tools to ensure they work as intended. Once validated, these tools can then be assigned to one or more agents:</p><ul><li><p><strong>Tool A</strong> scrapes comments daily and classifies them based on a set of predefined rules.</p></li><li><p><strong>Tool B</strong> generates draft comments and sends me an email notification when new comments are ready.</p></li><li><p><strong>Tool C</strong> records my edits to the comments, posts them to LinkedIn, and updates the knowledge base accordingly.</p></li></ul><p>Each morning, the agent activates the LinkedIn Scraper Tool and waits for the results before triggering the Comment Generation Tool. The Comment Generation Tool then emails me draft comments for review. I edit or approve these comments and post them on LinkedIn.</p><p>I aim to keep this process under 10 minutes daily while continuously improving the system&#8217;s accuracy. 
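</p><p>To make the morning cycle concrete, here is a minimal Python sketch of the orchestration described above. It is only a sketch under stated assumptions: the function names (scrape_posts, generate_drafts) are hypothetical stand-ins for Tools A and B, not Relevance&#8217;s actual API, and the human review step is modeled as a callback.</p>

```python
from dataclasses import dataclass

@dataclass
class Post:
    url: str
    score: int  # 0-9 rubric score assigned during classification

def scrape_posts(sources):
    # Stand-in for Tool A: the real system scrapes LinkedIn and classifies posts.
    return [Post(url=u, score=s) for u, s in sources]

def generate_drafts(post):
    # Stand-in for Tool B: the real system asks an LLM for three distinct drafts.
    return [f"Draft {i} for {post.url}" for i in (1, 2, 3)]

def daily_run(sources, review):
    """One cycle: scrape, keep high-value posts, draft, human review, collect."""
    approved = []
    for post in scrape_posts(sources):
        if post.score < 7:  # engage only with posts scoring 7-9
            continue
        final = review(post, generate_drafts(post))  # human-in-the-loop step
        if final:
            approved.append((post.url, final))  # Tool C would post and archive
    return approved
```

<p>In the real setup, each approved pair would be handed to Tool C, which posts the comment and updates the knowledge base. 
<p>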
Over time, I want the system to become fully autonomous.</p><h3>Developing a Tool</h3><p>For my initial experiments, I chose to use Relevance. As I lacked development skills, I collaborated with my engineer buddy <strong><a href="https://www.linkedin.com/in/danieleantonini/">Daniele Antonini</a></strong>. I focused on prompt engineering, ensuring the AI performed as expected, while he managed the technical aspects of building the tool. This process required coding skills, particularly for scripting, to ensure everything operated seamlessly.</p><p>In Relevance, a tool is built as a series of steps. This screenshot illustrates the structure and functionality of the scraping tool:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BUQ1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78856ef3-49d9-41e4-80e7-e1adb9578a0b_1600x1148.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BUQ1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78856ef3-49d9-41e4-80e7-e1adb9578a0b_1600x1148.jpeg 424w, https://substackcdn.com/image/fetch/$s_!BUQ1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78856ef3-49d9-41e4-80e7-e1adb9578a0b_1600x1148.jpeg 848w, https://substackcdn.com/image/fetch/$s_!BUQ1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78856ef3-49d9-41e4-80e7-e1adb9578a0b_1600x1148.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!BUQ1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78856ef3-49d9-41e4-80e7-e1adb9578a0b_1600x1148.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BUQ1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78856ef3-49d9-41e4-80e7-e1adb9578a0b_1600x1148.jpeg" width="1456" height="1045" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/78856ef3-49d9-41e4-80e7-e1adb9578a0b_1600x1148.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1045,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BUQ1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78856ef3-49d9-41e4-80e7-e1adb9578a0b_1600x1148.jpeg 424w, https://substackcdn.com/image/fetch/$s_!BUQ1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78856ef3-49d9-41e4-80e7-e1adb9578a0b_1600x1148.jpeg 848w, https://substackcdn.com/image/fetch/$s_!BUQ1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78856ef3-49d9-41e4-80e7-e1adb9578a0b_1600x1148.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!BUQ1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78856ef3-49d9-41e4-80e7-e1adb9578a0b_1600x1148.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>As you can see from the left sidebar, the LinkedIn Scraper Tool is composed of several sequential steps designed to automate the process of fetching, classifying, and storing LinkedIn posts:</p><ol><li><p><strong>User Inputs</strong>: This is the starting point where the user or another system process provides a list of URLs. 
These URLs represent the LinkedIn profiles that need to be analyzed.</p></li><li><p><strong>Get a LinkedIn Profile/Company and Posts</strong>: In this step, the tool leverages Relevance&#8217;s integration capabilities to fetch posts based on predefined criteria.</p></li><li><p><strong>Filtering the Fetched LinkedIn Posts</strong>: Once the posts are collected, a filter is applied to refine the results.</p></li><li><p><strong>LLM</strong>: At this stage, ChatGPT or a similar language model classifies the filtered posts. This step involves categorizing the content based on predefined rules or topics, such as identifying posts with high engagement potential or those discussing specific themes.</p></li><li><p><strong>Additional Filtering and Data Preparation</strong>: After classification, the results undergo two further filtering steps to transform the data into a structured format.</p></li><li><p><strong>Add to Knowledge</strong>: Finally, the posts are added to a database. This repository will serve as the starting point for another tool, the Commenter, which retrieves the most relevant posts and generates tailored comments.</p></li></ol><h3>How I Create Prompts</h3><p>One of the foundational practices in building agentic AI systems is mastering prompt engineering. I follow a structured role-task format, clearly defining the AI&#8217;s role and the specific task I want it to perform. For instance, I might begin with, &#8220;You are an expert prompt engineer tasked with crafting an optimized prompt for a specific use case.&#8221;</p><p>Once the initial prompt is drafted, I use an iterative approach in collaboration with ChatGPT. This method, often called <strong>iterative prompt engineering</strong> or iterative refinement, emphasizes a feedback loop between the user and the AI. 
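</p><p>This feedback loop can be sketched in code. The sketch is purely illustrative: call_llm is a hypothetical helper standing in for a real chat-completion request, not an actual API.</p>

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion request (e.g. to ChatGPT).
    return "refined: " + prompt[:40]

def refine_prompt(task: str, rounds: int = 3) -> str:
    """Iteratively refine a role-task prompt via model critique and rewrite."""
    prompt = f"You are an expert prompt engineer. Task: {task}"
    for _ in range(rounds):
        # Feedback and reflection: ask the model what works and what does not.
        critique = call_llm(f"Critique this prompt:\n{prompt}")
        # Refinement: fold the critique back into the prompt.
        prompt = call_llm(f"Rewrite the prompt to address:\n{critique}\n\n{prompt}")
    return prompt
```

<p>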
The process involves several key steps:</p><ol><li><p><strong>Initial Drafting:</strong>&nbsp;Use the role-task format to create a clear and concise prompt that establishes the context and goal.</p></li><li><p><strong>Feedback and Reflection:</strong> Engage the AI in analyzing the prompt&#8217;s effectiveness, asking for insights into what works and what could be improved.</p></li><li><p><strong>Refinement:</strong> Modify the prompt based on the AI&#8217;s feedback, ensuring alignment with the desired outcome.</p></li><li><p><strong>Testing and Validation:</strong> Evaluate the revised prompt by running scenarios and confirming it produces consistent, high-quality results.</p></li></ol><p>The LinkedIn Scraper Tool identifies posts worth commenting on. The criteria I&#8217;ve chosen ensure that the posts selected are rich in content and provide sufficient starting material for meaningful interaction. Specifically, posts should contain:</p><ol><li><p><strong>Enough Content:</strong> Posts with substantial text are prioritized, as they offer a more explicit context for generating comments.</p></li><li><p><strong>Data:</strong> The presence of data or statistics adds credibility and depth, making it easier to craft insightful responses.</p></li><li><p><strong>Opinions and Insights:</strong> Posts that share unique viewpoints or insights foster meaningful discussions and provide ample opportunities to contribute value.</p></li></ol><p>Posts meeting all these criteria are worth commenting on because they offer enough material to generate meaningful and contextually relevant comments. The following is the prompt that I&#8217;m currently testing:</p><pre><code>Evaluate the LinkedIn post based on the following criteria, assigning up to <strong>3 points per criterion</strong>:

<strong>Thought-Provoking or Data-Driven Content</strong>
- 0 points: The post is generic, lacks depth, or is purely promotional.
- 1 point: The post contains some interesting ideas or insights but lacks originality or depth.
- 2 points: The post presents new insights, trends, or data but could benefit from more specificity or broader context.
- 3 points: The post offers unique insights, actionable trends, or detailed data that provoke thought or inspire discussion.

<strong>Strategic Hashtags</strong>
- 0 points: No relevant hashtags, or hashtags are irrelevant to your interests.
- 1 point: The post has 1&#8211;2 relevant hashtags, but they are only loosely aligned with your niche.
- 2 points: The post uses 2&#8211;3 hashtags that are moderately aligned with your professional focus.
- 3 points: The post includes 3 or more relevant hashtags directly tied to your areas of interest or expertise.

<strong>Authenticity and Depth</strong>
- 0 points: The post feels superficial or overly polished, or lacks genuine depth.
- 1 point: The post shows some level of authenticity or depth but lacks a strong narrative or personal perspective.
- 2 points: The post is moderately authentic and shares valuable perspectives, but could be more engaging or detailed.
- 3 points: The post is highly authentic, shares meaningful insights or experiences, and encourages deeper reflection or discussion.

<strong>Scoring Guidelines</strong>
Add the scores from all three criteria to get a total out of 9 points.
A post scoring 7&#8211;9 is highly valuable and worth engaging with.
A score of 4&#8211;6 indicates moderate potential&#8212;engage if you can add significant value.
Posts scoring 0&#8211;3 are unlikely to be worth commenting on.</code></pre><p>This <a href="https://www.linkedin.com/feed/update/urn:li:activity:7283471395689267200/">post</a> by Nataly Kelly got an overall score of 8:</p><ul><li><p><strong>Authenticity and Depth = 2<br></strong>The post is informative and shares valuable insights, but it could benefit from a more personal touch or narrative to enhance engagement. While it discusses the importance of consumer insights, it lacks a personal perspective or story that could make it more relatable.</p></li><li><p><strong>Strategic Hashtags = 3</strong><br>The post includes relevant hashtags such as #consumertrends, #advertising, and #innovation, which are well-aligned with the content and likely to attract the right audience.</p></li><li><p><strong>Thought Provoking = 3</strong><br>The post provides detailed insights into the effectiveness of advertising strategies for quick service restaurants, supported by specific data points regarding brand recall and purchase uplift. 
It presents actionable trends and encourages deeper thought about the relationship between product presence in ads and consumer response.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Q9Tg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca33697d-55f4-4957-86f4-3f17f7520d63_1158x946.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Q9Tg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca33697d-55f4-4957-86f4-3f17f7520d63_1158x946.png 424w, https://substackcdn.com/image/fetch/$s_!Q9Tg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca33697d-55f4-4957-86f4-3f17f7520d63_1158x946.png 848w, https://substackcdn.com/image/fetch/$s_!Q9Tg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca33697d-55f4-4957-86f4-3f17f7520d63_1158x946.png 1272w, https://substackcdn.com/image/fetch/$s_!Q9Tg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca33697d-55f4-4957-86f4-3f17f7520d63_1158x946.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Q9Tg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca33697d-55f4-4957-86f4-3f17f7520d63_1158x946.png" width="1158" height="946" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ca33697d-55f4-4957-86f4-3f17f7520d63_1158x946.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:946,&quot;width&quot;:1158,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:190478,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Q9Tg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca33697d-55f4-4957-86f4-3f17f7520d63_1158x946.png 424w, https://substackcdn.com/image/fetch/$s_!Q9Tg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca33697d-55f4-4957-86f4-3f17f7520d63_1158x946.png 848w, https://substackcdn.com/image/fetch/$s_!Q9Tg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca33697d-55f4-4957-86f4-3f17f7520d63_1158x946.png 1272w, https://substackcdn.com/image/fetch/$s_!Q9Tg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca33697d-55f4-4957-86f4-3f17f7520d63_1158x946.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The LinkedIn Scraper Tool is far from perfect. Subsequent iterations will include more sophisticated classification capabilities to refine its selection process further. These enhancements will focus on identifying various types of posts, such as text-only updates, videos, or newsletters. Additionally, the tool will classify posts by their general argument or purpose, distinguishing between announcements, link sharing, or in-depth analysis.</p><p>These features aim to make the tool more nuanced and adaptable, ensuring that it continues aligning with my goal of producing high-quality, meaningful interactions on LinkedIn.</p><p>In the following article, I&#8217;ll explain how to connect the tools to the agent and share a demo of the agentic system. 
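</p><p>In the meantime, the scoring guidelines from the prompt above can be expressed as a small aggregation function. This is a sketch: in the running system the three 0&#8211;3 criterion scores come from the LLM&#8217;s evaluation; only the totals and thresholds below follow the rubric.</p>

```python
def rate_post(thought_provoking: int, hashtags: int, authenticity: int):
    """Combine the three 0-3 criterion scores into a 0-9 total and a verdict."""
    scores = (thought_provoking, hashtags, authenticity)
    if any(not 0 <= s <= 3 for s in scores):
        raise ValueError("each criterion is scored from 0 to 3")
    total = sum(scores)
    if total >= 7:
        verdict = "highly valuable - worth engaging with"
    elif total >= 4:
        verdict = "moderate - engage only if you can add significant value"
    else:
        verdict = "unlikely to be worth commenting on"
    return total, verdict
```

<p>For the Nataly Kelly post (3 + 3 + 2), this yields a total of 8, in the highly valuable band. 
<p>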
Stay tuned!</p><div><hr></div><pre><code>If you were forwarded this email or found it on social media, you can sign up to receive an article like this every Sunday.</code></pre><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://nicolamattina.substack.com/&quot;,&quot;text&quot;:&quot;Subscribe now for free!&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://nicolamattina.substack.com/"><span>Subscribe now for free!</span></a></p><div><hr></div><p><em>Thanks for reading this episode of my newsletter. I hope I&#8217;ve been helpful. If you think my sketchbook might interest someone else, I&#8217;d appreciate it if you <strong>shared it on social media and forwarded it to your friends and colleagues</strong>.</em></p><p><em>Nicola</em></p>]]></content:encoded></item><item><title><![CDATA[Cognitive SaaS: Building AI-native solutions with lasting competitive advantage]]></title><description><![CDATA[Cognitive SaaS integrates advanced AI, specialized knowledge, and learning capabilities to create solutions that adapt, evolve, and support user decision-making.]]></description><link>https://www.radicalcuriosity.xyz/p/cognitive-saas-building-ai-native-solutions</link><guid isPermaLink="false">https://www.radicalcuriosity.xyz/p/cognitive-saas-building-ai-native-solutions</guid><dc:creator><![CDATA[Nicola Mattina]]></dc:creator><pubDate>Wed, 30 Oct 2024 15:00:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4da20566-5830-49bc-b834-601278e9e2c2_1682x1174.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ciao,</p><p>I took a longer-than-expected break from Radical Curiosity &#8212; since March 17, to be precise. 
During this time, I didn&#8217;t feel I had anything particularly valuable to share about my work, so I decided to take a step back, study, reflect, and focus on my projects.</p><p>A conversation with an investor friend about startups built on LLMs (Large Language Models) and a recent article on the Sequoia blog (<em><a href="https://www.sequoiacap.com/article/generative-ais-act-o1/">Generative AI&#8217;s Act o1</a></em>) inspired me to start writing again.</p><p>Let&#8217;s pick up from here.</p><div><hr></div><h2>Table of Contents</h2><ul><li><p>Cognitive SaaS: Building AI-Native Solutions with Lasting Competitive Advantage</p></li><li><p>Claudio Erba: from 0 to 200</p></li><li><p>Audiobooks: the power of the perfect narrator</p></li></ul><div><hr></div><h2>Cognitive SaaS: Building AI-Native Solutions with Lasting Competitive Advantage</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Uomc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F643afdfe-1f19-4f62-a198-7dff6536a86d_2054x1172.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Uomc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F643afdfe-1f19-4f62-a198-7dff6536a86d_2054x1172.png 424w, https://substackcdn.com/image/fetch/$s_!Uomc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F643afdfe-1f19-4f62-a198-7dff6536a86d_2054x1172.png 848w, https://substackcdn.com/image/fetch/$s_!Uomc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F643afdfe-1f19-4f62-a198-7dff6536a86d_2054x1172.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Uomc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F643afdfe-1f19-4f62-a198-7dff6536a86d_2054x1172.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Uomc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F643afdfe-1f19-4f62-a198-7dff6536a86d_2054x1172.png" width="1456" height="831" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/643afdfe-1f19-4f62-a198-7dff6536a86d_2054x1172.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:831,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4022415,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Uomc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F643afdfe-1f19-4f62-a198-7dff6536a86d_2054x1172.png 424w, https://substackcdn.com/image/fetch/$s_!Uomc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F643afdfe-1f19-4f62-a198-7dff6536a86d_2054x1172.png 848w, https://substackcdn.com/image/fetch/$s_!Uomc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F643afdfe-1f19-4f62-a198-7dff6536a86d_2054x1172.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Uomc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F643afdfe-1f19-4f62-a198-7dff6536a86d_2054x1172.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Recently, I had an insightful conversation with a friend of mine, an investor, about the growing trend of SaaS products built using large language models (LLMs). He expressed concerns that building SaaS on top of LLMs has become too easy and that many of these services lack defensibility because they can be replicated with minimal effort. 
This perspective is shared by many investors, who have grown increasingly skeptical of startups heavily reliant on LLMs. These startups are often dismissed as &#8220;AI Wrappers&#8221;&#8212;SaaS products that merely add a user interface or minor functionality around an existing AI model without contributing any significant innovation or proprietary technology.</p><p>This skepticism is understandable. The accessibility of LLM APIs has indeed made it simple for developers to create applications that superficially enhance existing models. These applications often lack the depth and uniqueness required to maintain a competitive advantage, raising concerns about their long-term viability. </p><p>However, it&#8217;s important to acknowledge that not all AI-powered applications fall into this category. While some may be simplistic wrappers, others incorporate advanced architectures that offer far more than just an interface to an LLM. Despite this, there hasn&#8217;t yet been a widely accepted term to distinguish these more sophisticated solutions from AI Wrappers. I propose the concept of Cognitive SaaS.</p><h3>Anatomy of a Cognitive SaaS</h3><p>A Cognitive SaaS involves a combination of components designed to assist and augment human decision-making and maintain human oversight in critical tasks.</p><ul><li><p><strong>Generic LLM (Large Language Model) for Reasoning</strong>: The heart of the framework is a foundational LLM that provides general reasoning capabilities. This model is a versatile engine for understanding natural language, making inferences, and generating responses. It acts as the essential cognitive substrate, enabling the system to engage in foundational reasoning and user interaction. 
The LLM doesn&#8217;t have specialized domain knowledge but is capable of general comprehension and reasoning tasks, allowing for flexible, broad applicability across domains.</p></li><li><p><strong>Specialized Representation of Knowledge</strong>: This element augments the LLM with a specialized knowledge layer. It could be a database, ontology, or a structured representation tuned to specific domains, enabling the system to access precise and context-specific information. This specialized knowledge helps bridge the gap between general understanding and domain expertise, allowing for more informed decision-making. One common approach is to use Retrieval-Augmented Generation (RAG), which combines an LLM with a retrieval mechanism to gather relevant domain-specific knowledge in real time. Integrating a specialized knowledge graph or module, such as a RAG system, allows the cognitive software to become more adept at dealing with niche tasks beyond what a generic LLM can handle.</p></li><li><p><strong>Multiple Agents</strong>: The cognitive framework includes specialized agents responsible for specific capabilities or tasks. These agents can be specialized LLMs, perception units, task executors, or data analyzers. By allowing agents to communicate and collaborate, the cognitive software system becomes more adaptable and capable of handling diverse tasks. The coordination between agents allows for distributed processing, where individual agents contribute to solving different aspects of a problem, thus fostering a more modular and scalable approach to problem-solving.</p></li><li><p><strong>Chain Manager</strong>: A chain orchestrates how different components of the cognitive software interact with one another. It defines the sequence of operations, passing information between the LLM, knowledge representations, and other modules. 
The chain mechanism is essential for breaking down complex tasks into manageable steps, defining dependencies, and ensuring that tasks are executed in a coherent and meaningful order. It also enables the software to switch context and adapt its behavior based on the ongoing process, promoting structured and goal-oriented reasoning.</p></li><li><p><strong>Capacity to Accumulate Data and Learn</strong>: Accumulation of data and continuous learning are central to the cognitive software framework. This element involves storing interactions, feedback, and outcomes in a memory system, which the cognitive software can access to improve over time. Learning mechanisms such as reinforcement learning, episodic memory, or fine-tuning the LLM can be used to adapt and enhance the software&#8217;s capabilities based on real-world experiences. By accumulating data over time, the cognitive software builds a more comprehensive model of its environment, user preferences, and task success strategies, leading to better performance and adaptability.</p></li></ul><p>The Cognitive SaaS framework is rooted in the broader and more complex concept of <strong><a href="https://en.wikipedia.org/wiki/Cognitive_architecture">cognitive architecture</a></strong>, designed to model human cognitive processes such as perception, memory, reasoning, and learning. Cognitive architectures aim to replicate human adaptability in AI systems, enabling them to reason, make decisions, and adapt across various tasks without human intervention. These ambitious systems integrate multiple components to continuously evolve and learn, operating independently across a wide range of situations.</p><p>While the principles of cognitive architecture inspire Cognitive SaaS, it focuses on more specific, practical applications. Rather than aiming for complete autonomy and broad adaptability, Cognitive SaaS uses advanced AI tools and APIs to perform targeted tasks efficiently. 
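</p><p>To make the anatomy concrete, here is a minimal toy sketch of how the five components might fit together. It is illustrative only: every class and function name (ChainManager, KnowledgeBase, reasoning_llm, and so on) is hypothetical, and simple stubs stand in for real LLM and retrieval calls.</p>

```python
# Toy sketch of the five Cognitive SaaS components.
# All names are hypothetical; stubs replace real LLM/retrieval calls.
from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    """Specialized knowledge layer: a toy keyword-match retrieval store."""
    docs: dict = field(default_factory=dict)

    def retrieve(self, query: str) -> str:
        # A real system would use embeddings; here we match on keywords.
        return next((text for key, text in self.docs.items() if key in query), "")


@dataclass
class Memory:
    """Accumulates interactions so the system can improve over time."""
    history: list = field(default_factory=list)

    def record(self, query: str, answer: str) -> None:
        self.history.append((query, answer))


def reasoning_llm(prompt: str) -> str:
    """Stand-in for a generic LLM: echoes a templated answer."""
    return f"Answer based on: {prompt}"


class ChainManager:
    """Orchestrates the LLM, knowledge layer, agents, and memory in sequence."""

    def __init__(self, kb: KnowledgeBase, memory: Memory, agents: dict):
        self.kb, self.memory, self.agents = kb, memory, agents

    def run(self, query: str) -> str:
        context = self.kb.retrieve(query)           # 1. pull domain knowledge
        enriched = self.agents["analyzer"](query)   # 2. delegate to a specialized agent
        answer = reasoning_llm(f"{query} | {context} | {enriched}")  # 3. reason
        self.memory.record(query, answer)           # 4. accumulate data to learn from
        return answer


kb = KnowledgeBase(docs={"pricing": "Enterprise tier is usage-based."})
memory = Memory()
agents = {"analyzer": lambda q: f"intent={'question' if '?' in q else 'statement'}"}
manager = ChainManager(kb, memory, agents)
print(manager.run("What is the pricing model?"))
```

The point of the sketch is the sequencing: the chain manager, not the LLM, decides which component runs when, which is what separates this architecture from a thin wrapper around a chat endpoint.<p>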
Its primary purpose is to augment human capabilities, acting as a co-pilot that supports decision-making and task execution rather than operating as a fully autonomous system.</p><h3>Example: A Cognitive CRM</h3><p>To illustrate how Cognitive SaaS concepts come together in a real-world application, let&#8217;s look at an example of a Cognitive CRM. This example shows how each component discussed earlier can work harmoniously to deliver a powerful, user-centric solution beyond simple automation.</p><p>I asked ChatGPT to &#8220;imagine&#8221; a Cognitive CRM based on the framework outlined in this article. Here&#8217;s the response:</p><p>To create a CRM rooted in cognitive software principles, we must go beyond simply adding AI-powered features. This CRM would act as a co-pilot for users, augmenting their abilities with cognitive capabilities across various aspects of customer relationship management.</p><ul><li><p><strong>Generic LLM for Reasoning and Interaction</strong>: At the core of this cognitive CRM, a large language model (LLM) would handle natural language processing, allowing the system to interact conversationally with users. For instance, sales representatives could ask the CRM complex questions about customer history and the best time to follow up or get summaries of past interactions. This LLM would form the reasoning and basic interaction layer of the CRM, offering flexibility and deep comprehension in language-based tasks.</p></li><li><p><strong>Specialized Knowledge Layer (Context-Aware CRM Data)</strong>: The Cognitive CRM would integrate a specialized knowledge base, which includes customer data, industry insights, and real-time sales information. This knowledge base would be continuously updated, enabling the CRM to provide contextually relevant advice, such as recommending personalized customer engagement strategies based on past interactions, buying patterns, or industry trends. 
For example, the CRM could use retrieval-augmented generation (RAG) techniques to pull specific insights from vast knowledge bases, offering more than just generic suggestions when a sales representative asks for guidance.</p></li><li><p><strong>Multiple Cognitive Agents (Specialized CRM Tasks)</strong>: This CRM would employ several cognitive agents, each focused on different aspects of customer management:</p><ul><li><p><strong>Lead Scoring Agent</strong>: Analyzes leads to determine which will most likely convert based on historical data and behavior patterns.</p></li><li><p><strong>Sales Forecasting Agent</strong>: Predicts future sales performance using past data, customer behavior, and industry trends.</p></li><li><p><strong>Email/Communication Agent</strong>: Helps draft emails and suggests the best time to reach out based on customer sentiment analysis and previous interactions.</p></li></ul><p>These agents would collaborate to enhance decision-making and streamline sales workflows.</p></li><li><p><strong>Chain Manager (Task Orchestration)</strong>: The Chain Manager would orchestrate interactions between the LLM, knowledge base, and cognitive agents. For example, when a sales representative asks for the best strategy to approach a customer, the Chain Manager would manage the sequence of operations:</p><ol><li><p>Retrieve the customer&#8217;s past interactions.</p></li><li><p>Pull insights from the lead-scoring agent.</p></li><li><p>Draft a personalized message using the email/communication agent.</p></li></ol><p>This structured reasoning ensures that complex, multi-step processes happen smoothly and coherently.</p></li><li><p><strong>Continuous Adaptation (Learning Over Time)</strong>: A core feature of cognitive software is its ability to accumulate data and learn over time. This CRM would track user interactions and feedback, continuously refining its recommendations and predictions. 
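</p><p>One hedged way to picture such a feedback loop is a simple moving-average learner that shifts toward whichever engagement strategy succeeds most often. The strategy names and update rule below are purely illustrative assumptions, not a real CRM API.</p>

```python
# Illustrative sketch of a feedback loop that refines suggestions over time.
# Strategy names and the update rule are hypothetical.
from collections import defaultdict


class SuggestionLearner:
    """Tracks outcomes per engagement strategy and prefers what works."""

    def __init__(self, learning_rate: float = 0.2):
        self.scores = defaultdict(float)  # strategy -> estimated success rate
        self.lr = learning_rate

    def record_outcome(self, strategy: str, success: bool) -> None:
        # Exponential moving average toward the observed outcome (1 or 0).
        target = 1.0 if success else 0.0
        self.scores[strategy] += self.lr * (target - self.scores[strategy])

    def best_strategy(self) -> str:
        return max(self.scores, key=self.scores.get)


learner = SuggestionLearner()
for strategy, success in [("email_first", True), ("call_first", False), ("email_first", True)]:
    learner.record_outcome(strategy, success)
print(learner.best_strategy())  # email_first
```

The moving average is the simplest possible stand-in for the reinforcement learning mentioned above; the architectural point is only that outcomes feed back into future suggestions.<p>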
As it gathers more data on customer behaviors, sales cycles, and representative actions, it would improve its predictions and offer more personalized sales tactics. For example, through reinforcement learning, the CRM can learn from successful sales outcomes and optimize future suggestions. Over time, it would refine its approach to customer engagement, sales forecasting, and retention strategies.</p></li></ul><p>In summary, this CRM wouldn&#8217;t just be an AI Wrapper that adds features to an existing platform&#8212;it would fundamentally enhance the human decision-making process, elevating sales and customer management at every level.</p><h3>Conclusions</h3><p>Creating a Cognitive SaaS is more than just building simple applications that provide minimal additional value compared to tools like ChatGPT Canvas or Anthropic&#8217;s Artifacts.</p><p>The Cognitive SaaS framework can act as a strategic checklist to ensure a product goes beyond the typical &#8220;AI Wrapper&#8221; by offering profound, value-driven innovation.</p><p>Does your product integrate a specialized knowledge layer that adds unique domain expertise? Are there multiple cognitive agents collaborating to solve complex problems? Is there a clear chain manager orchestrating the interactions between components to ensure seamless, structured execution? Most critically, have you built a system that accumulates data and learns over time, creating a moat of continuous improvement that competitors can&#8217;t easily replicate?</p><p>While many SaaS products that have been on the market for years are now adding AI and claiming to be AI-first, there is a significant opportunity to build a new generation of genuinely AI-native services. 
These Cognitive SaaS are designed from the ground up with AI at their core, unburdened by the technical debt and limitations of legacy systems that have been evolving for over a decade.</p><div><hr></div><h2>Claudio Erba: from 0 to 200</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://youtube.com/playlist?list=PLs7RHiwI-MJ-X8-HOsanjRdoEpUR0hGkR&amp;si=CvYpLXQldv5CHXvm" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IX6E!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94500d38-a059-46c8-80a5-436c86670ed0_2040x1146.png 424w, https://substackcdn.com/image/fetch/$s_!IX6E!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94500d38-a059-46c8-80a5-436c86670ed0_2040x1146.png 848w, https://substackcdn.com/image/fetch/$s_!IX6E!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94500d38-a059-46c8-80a5-436c86670ed0_2040x1146.png 1272w, https://substackcdn.com/image/fetch/$s_!IX6E!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94500d38-a059-46c8-80a5-436c86670ed0_2040x1146.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IX6E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94500d38-a059-46c8-80a5-436c86670ed0_2040x1146.png" width="1456" height="818" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/94500d38-a059-46c8-80a5-436c86670ed0_2040x1146.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:818,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1051876,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://youtube.com/playlist?list=PLs7RHiwI-MJ-X8-HOsanjRdoEpUR0hGkR&amp;si=CvYpLXQldv5CHXvm&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!IX6E!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94500d38-a059-46c8-80a5-436c86670ed0_2040x1146.png 424w, https://substackcdn.com/image/fetch/$s_!IX6E!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94500d38-a059-46c8-80a5-436c86670ed0_2040x1146.png 848w, https://substackcdn.com/image/fetch/$s_!IX6E!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94500d38-a059-46c8-80a5-436c86670ed0_2040x1146.png 1272w, https://substackcdn.com/image/fetch/$s_!IX6E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94500d38-a059-46c8-80a5-436c86670ed0_2040x1146.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong><a href="https://youtube.com/playlist?list=PLs7RHiwI-MJ-X8-HOsanjRdoEpUR0hGkR&amp;si=CvYpLXQldv5CHXvm">Watch the Playlist on YouTube</a></strong></p><p><a href="https://www.linkedin.com/in/claudioerba/">Claudio Erba</a>, the founder of <a href="https://www.docebo.com/">Docebo</a>, shares his journey of building a company from the ground up, growing it to $200 million in revenue, and ultimately taking it public on NASDAQ. This free course, available on YouTube, is packed with invaluable insights and strategies, making it a must-watch for any aspiring founder or entrepreneur.</p><div><hr></div><h2><strong>Audiobooks: the power of the perfect narrator</strong></h2><p>Since I started using Audible, the number of books I&#8217;ve read has dramatically increased. However, one thing that can make or break the experience is the narrator&#8217;s voice. 
A voice that&#8217;s too high-pitched, regional, flat, or overly dramatic can completely change how enjoyable a book is. Sometimes, I&#8217;ve stopped listening simply because I couldn&#8217;t connect with the narrator&#8217;s style.</p><p>I look forward to a future where text-to-speech technology, like ElevenLabs, evolves so that we can enjoy any book with multiple voice options.</p><p>The audiobooks I enjoyed in the last couple of months (all in Italian):</p><ul><li><p><strong><a href="https://www.audible.it/pd/Nove-vite-come-i-gatti-Audiolibri/8831890107?source_code=ASSGB149080119000H&amp;share_location=pdp">Margherita Hack</a></strong><a href="https://www.audible.it/pd/Nove-vite-come-i-gatti-Audiolibri/8831890107?source_code=ASSGB149080119000H&amp;share_location=pdp">, </a><em><a href="https://www.audible.it/pd/Nove-vite-come-i-gatti-Audiolibri/8831890107?source_code=ASSGB149080119000H&amp;share_location=pdp">Nove vite come i gatti</a></em>. This autobiography, written when Hack was ninety, reflects on the guiding principles of her life: a strong work ethic, persistence, civic and moral commitment, and confidence in herself and her ideas.</p></li><li><p><strong><a href="https://www.audible.it/pd/In-piena-libert%C3%A0-e-consapevolezza-Audiolibri/B087WXDQ2B?source_code=ASSGB149080119000H&amp;share_location=pdp">Margherita Hack</a></strong><a href="https://www.audible.it/pd/In-piena-libert%C3%A0-e-consapevolezza-Audiolibri/B087WXDQ2B?source_code=ASSGB149080119000H&amp;share_location=pdp">, </a><em><a href="https://www.audible.it/pd/In-piena-libert%C3%A0-e-consapevolezza-Audiolibri/B087WXDQ2B?source_code=ASSGB149080119000H&amp;share_location=pdp">In piena libert&#224; e consapevolezza</a></em>. A manifesto on individual freedom and secularism in Italy, addressing topics like assisted fertilization, living wills, abortion, civil unions, free scientific research, and multiculturalism. 
Hack critiques the influence of the Catholic Church on these debates in Italy.</p></li><li><p><strong><a href="https://www.audible.it/pd/Donne-dellanima-mia-Audiolibri/B0CW9Y78PP?source_code=ASSGB149080119000H&amp;share_location=pdp">Isabel Allende</a></strong><a href="https://www.audible.it/pd/Donne-dellanima-mia-Audiolibri/B0CW9Y78PP?source_code=ASSGB149080119000H&amp;share_location=pdp">, </a><em><a href="https://www.audible.it/pd/Donne-dellanima-mia-Audiolibri/B0CW9Y78PP?source_code=ASSGB149080119000H&amp;share_location=pdp">Donne dell&#8217;anima mia</a></em>. With lightness and irony, the author reflects on her past to share the roots of her feminism. Growing up in a patriarchal environment, her instinct to rebel shaped her lifelong commitment to support the women still fighting for emancipation.</p></li><li><p><strong><a href="https://www.audible.it/pd/Michele-Ferrero-Audiolibri/B0C6QDSY4Y?source_code=ASSGB149080119000H&amp;share_location=pdp">Salvatore Giannella</a></strong><a href="https://www.audible.it/pd/Michele-Ferrero-Audiolibri/B0C6QDSY4Y?source_code=ASSGB149080119000H&amp;share_location=pdp">, </a><em><a href="https://www.audible.it/pd/Michele-Ferrero-Audiolibri/B0C6QDSY4Y?source_code=ASSGB149080119000H&amp;share_location=pdp">Michele Ferrero. Condividere valori per creare valore</a></em>. Michele Ferrero, the creator of Nutella and many other beloved treats, was more than one of Italy&#8217;s greatest entrepreneurs. 
He pioneered a way of doing business that prioritized people, guided by the motto: &#8220;Work, create, give.&#8221;</p></li><li><p><strong><a href="https://www.audible.it/pd/Una-persona-alla-volta-Audiolibri/B0CXV5245J?source_code=ASSGB149080119000H&amp;share_location=pdp">Gino Strada</a></strong><a href="https://www.audible.it/pd/Una-persona-alla-volta-Audiolibri/B0CXV5245J?source_code=ASSGB149080119000H&amp;share_location=pdp">, </a><em><a href="https://www.audible.it/pd/Una-persona-alla-volta-Audiolibri/B0CXV5245J?source_code=ASSGB149080119000H&amp;share_location=pdp">Una persona alla volta</a></em>. This book captures the emotion, pain, struggle, and love of Gino Strada&#8217;s extraordinary journey&#8212;experiencing conflicts from the victims&#8217; side and becoming a voice for change. Each page resonates with a profound and radical demand: the abolition of war and the universal right to health.</p></li></ul><div><hr></div><pre><code>If you were forwarded this email or found it on social media, you can sign up to receive an article like this every Sunday.</code></pre><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://nicolamattina.substack.com/&quot;,&quot;text&quot;:&quot;Subscribe now for free!&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://nicolamattina.substack.com/"><span>Subscribe now for free!</span></a></p><div><hr></div><p><em>Thanks for taking the time to read this episode of my newsletter. I hope I&#8217;ve been helpful. If you think my sketchbook might interest someone else, I&#8217;d be glad if you <strong>shared it on social media and forwarded it to your friends and colleagues</strong>.</em></p><p><em>Nicola</em></p>]]></content:encoded></item></channel></rss>