The End of Search Engines? How AI Will Rewrite the Rules of Online Visibility
Ciao,
In this issue, in the Off the Record section, you’ll find an article on a subject I care deeply about and have been reflecting on for some time: the mistakes we keep making, even when we’re aware we’re making them. It’s a personal reflection, but one I believe will resonate with many of us—in life, at work, and in the world of innovation.
In the Signals and Shifts section, you’ll also find an analysis of how artificial intelligence is rewriting the rules of online visibility. And in Understanding AI, the second part of my brief history of artificial intelligence. As always, I close with a selection of ideas and insights that truly caught my attention this week.
Nicola
Table of Contents
Signals and Shifts - The End of Search Engines? How AI Will Rewrite the Rules of Online Visibility
Understanding AI - A Brief History of Artificial Intelligence. Part 2
Off the Record - Four Mistakes You Already Know, But Still Haven’t Stopped Making
Curated Curiosity
The Rise of Verticalized AI Coworkers
Character.AI Launches World’s First AI-Native Social Feed
Signals and Shifts
The End of Search Engines? How AI Will Rewrite the Rules of Online Visibility
For the past thirty years, search engines have been the primary gateway to online information. Every day, over 8.5 billion searches are made on Google: typing a query, browsing a list of results, clicking on a link—these have become natural, almost automatic gestures. It is a routine on which much of the digital economy has been built: millions of companies invest to appear among the top results, whether organic or sponsored, to reach new users and acquire new customers.
Today, however, this paradigm is rapidly shifting. Conversational assistants, such as ChatGPT, Claude, and Gemini, are becoming the new access point to knowledge for millions of people. As of July 2025, ChatGPT has approximately 800 million weekly active users, with over 2.5 billion prompts submitted daily. Google Gemini has surpassed 400 million monthly active users. Claude, the model developed by Anthropic, is estimated to have between 16 and 19 million monthly users.
More and more often, users no longer type a string of keywords: they ask a question in natural language. Instead of receiving a list of links, they get an answer generated in real-time, shaped by context. Search is turning into conversation. And visibility, in this new scenario, is governed by a set of rules that have yet to be written.
Search Engines and Artificial Intelligence: Google’s AI Overviews and AI Mode
Google has begun responding to this transformation with the introduction of AI Overviews—a section that appears at the top of the results page, offering a synthesized answer built from multiple sources. But the more radical shift is AI Mode: a dedicated interface designed for generative interaction. Here, search becomes a conversation. The input field moves to the bottom of the screen, and users are encouraged to ask complex questions in natural language.
As Robbie Stein, head of Google’s AI Search team, notes, we are entering a new phase in which “AI can truly expand what’s possible with search.” The paradigm is shifting from consultation to interaction—from finding an answer to building it collaboratively with the system, through a continuous, multimodal, and contextual dialogue. According to Stein, this evolution redefines both the user interface and cognitive expectations: users are no longer just searching—they expect to be understood, assisted, and guided.
This shift is also evident in user behavior, particularly among younger generations, who seamlessly transition between text, voice, images, and context. They are no longer merely seeking data, but rather looking for experiences, recommendations, and narratives that are relevant to their specific situation. To respond effectively, the search engine must evolve into a cognitive assistant, capable of selecting, filtering, and reframing information based on the user's profile.
Search Engines and AI: The Challenge of Visibility Without a SERP
In this emerging landscape, businesses face a crucial challenge: how do you gain visibility when the search engine results page (SERP) no longer exists?
It’s a question that challenges an entire industry: an ecosystem built around search engine optimization, from SEO consulting to the creation of optimized content to advertising campaigns based on high-performing keywords. The global market for SEO services alone is currently valued between $80 and $98 billion. To this, we must add advertising spend on search engines, which exceeds $175 billion annually, with Google alone accounting for a significant portion.
An entire operational model may need to be rethought, and the industry is already seeking answers, drawing on familiar frameworks. One such response is Generative Engine Optimization (GEO), an early attempt to adapt the logic of SEO—Search Engine Optimization—to the new context of generative engines.
The goal of GEO is to appear within the responses generated by systems like ChatGPT, Gemini, Perplexity, or Google’s own AI Overviews.
According to experts, launching a GEO strategy requires more than simply replicating traditional SEO tools. It calls for a more profound transformation—one that involves both editorial practices and the conceptual architecture of content. Emerging best practices are outlining a new methodological framework:
Content structured for dialogue: The ideal format is a question-and-answer approach, organized into clear and concise thematic blocks. Language should be natural, yet precise. While keywords are no longer central, they remain useful when used at the beginning to establish context.
Advanced semantic markup: Tools like Schema.org—a standardized vocabulary that enables content to be tagged in a manner intelligible to search engines and AI—are becoming increasingly important. Specific tags exist to identify, for example, an FAQ, a step-by-step guide (HowTo), or a Q&A page (QAPage). These elements help models better understand and accurately select the content (a minimal example follows this list).
Conversational intent and use cases: Creating content that responds to real-life scenarios and simulates actual user intent increases its likelihood of being considered relevant by language models. An article is no longer just a source, but a micro-narrative that anticipates needs and offers solutions.
Strong adherence to E-E-A-T principles: Clarity, data accuracy, proper citations, professional tone, and credible references are not only markers of human quality—they are also strong signals for automated evaluation systems. The E-E-A-T framework, introduced by Google, is based on four pillars: Experience, Expertise, Authoritativeness, and Trustworthiness.
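To make the semantic-markup point above concrete, here is a minimal, hypothetical sketch (in Python, for readability) of what Schema.org FAQPage markup might look like once serialized to JSON-LD. The question and answer are placeholders, not real content; on an actual page the resulting object would be embedded in a script tag of type application/ld+json.

```python
import json

# Hypothetical Schema.org FAQPage markup: one question-and-answer block,
# expressed as the JSON-LD object a page would embed for crawlers and AI systems.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Generative Engine Optimization (GEO)?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "GEO is the practice of structuring content so that "
                        "generative engines can understand, cite, and reuse it "
                        "in their answers.",
            },
        }
    ],
}

# Serialize to the JSON-LD string that would go inside the page's markup.
print(json.dumps(faq_markup, indent=2))
```

Analogous Schema.org types exist for the HowTo and QAPage formats mentioned above.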
However, GEO is not merely an evolution of SEO—it represents a far more profound transformation, one that will likely require a comprehensive rethinking of the entire ecosystem surrounding online marketing and sales.
Metrics will need to be updated. Organic traffic, which has long been the primary indicator of SEO success, will lose prominence. What will matter more is the generative “share of voice”: how often a piece of content is used or referenced in responses generated by AI assistants. At present, however, no reliable system exists to measure this new form of visibility. Generative models do not provide transparent data about the sources they use, and conversational interfaces lack attribution mechanisms. For businesses, understanding whether—and how—these new engines select their content remains an open challenge.
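As a purely illustrative exercise, and with the caveat above that no reliable measurement system exists yet, a rough first approximation of generative share of voice could be as simple as running a fixed panel of prompts against one or more assistants, storing the answers, and counting how often a brand is mentioned. The Python sketch below assumes such a sample of answers is already available; the brand names and answers are invented.

```python
from collections import Counter

def generative_share_of_voice(answers: list[str], brands: list[str]) -> dict[str, float]:
    # Fraction of sampled AI answers that mention each brand at least once.
    # A crude proxy: it says nothing about *why* a model cited or ignored a source.
    mentions = Counter()
    for answer in answers:
        text = answer.lower()
        for brand in brands:
            if brand.lower() in text:
                mentions[brand] += 1
    total = len(answers) or 1
    return {brand: mentions[brand] / total for brand in brands}

# Hypothetical sample: answers collected from a fixed panel of prompts.
sample_answers = [
    "For trail shoes, reviewers often recommend Acme Running and Bolt Gear.",
    "Acme Running is frequently cited for durability.",
]
print(generative_share_of_voice(sample_answers, ["Acme Running", "Bolt Gear"]))
# {'Acme Running': 1.0, 'Bolt Gear': 0.5}
```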
The transparency of the environment will change. SEO, while complex, operates within a system that is at least partially decipherable. With GEO, we enter a far more opaque territory: the mechanisms by which a generative model selects content are less visible and more challenging to interpret.
Is Conversation the New Conversion? The Emerging Uncertainties of User Acquisition
There is much more at stake than a click. Entire industries rely on their ability to capture users through organic or paid search results. If these channels lose relevance, it won’t be just Google that needs to rethink its strategy—it will be millions of businesses that currently depend on search as their primary means of acquisition.
It’s plausible that, in the not-too-distant future, conversational clients—such as ChatGPT, Claude, and Gemini—will begin introducing native forms of advertising. However, it is far from certain that mechanisms like AdWords can be effectively transposed into an interaction model that no longer includes a SERP dynamic. A proven model for selling ads within a conversation does not yet exist.
More urgently, the issue of organic traffic must be addressed. If visibility is no longer measured in clicks but in citations, then content production must be rethought entirely: not to align with Google’s algorithm, but to appear relevant to the semantic weights of a large language model. In this new scenario, even the end goal begins to shift.
Let us imagine a user who, through a dialogue with an AI assistant, explores various purchasing options. If a company is mentioned as one of the sources, can it reasonably hope that the interaction will lead to a conversion? Can the system itself complete a sale? In what environment, through which interfaces, and according to what attribution logic?
The answer to these questions is far from obvious. But if conversation becomes the new arena for user acquisition, then AI-generated interaction will need to address three fundamental challenges.
The first is relevance: providing not just an answer, but the most suitable answer. To achieve this, AI will need to develop a deeper understanding of its interlocutor by accumulating data, tracking preferences, and interpreting intent.
The second concerns the source of knowledge. Today, generative models integrate web search, drawing from content indexed through SEO strategies, which they then synthesize and reframe. But is this truly the most effective way to transfer knowledge from an information infrastructure to a generative system? In this new context, we may need entirely new paradigms—ones that redefine what it means to be an authoritative source in an algorithm-to-algorithm communication model.
The third challenge is action. Once the best option is identified—a product, a service, a provider—how can the purchase be completed within the conversation itself? Here, developments are already underway: the Model Context Protocol (MCP) enables actions to be embedded in the conversational flow. These actions will certainly include bookings, purchases, and payments. It is a first technical response to a structural need: transforming conversation into a seamless, end-to-end experience, without breaks or handoffs.
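To give a sense of what embedding an action in the conversational flow can look like: in MCP, a server advertises tools through a descriptor with a name, a human-readable description, and a JSON Schema for the expected inputs, which the assistant can then invoke mid-dialogue. The Python sketch below shows a hypothetical hotel-booking tool in that spirit; the tool name, fields, and values are invented for illustration and do not come from any real integration.

```python
# Hypothetical descriptor for a conversational action, in the spirit of MCP:
# the server declares what the tool does and which inputs it expects, so the
# assistant can call it directly from the dialogue. All names are illustrative.
book_hotel_tool = {
    "name": "book_hotel_room",
    "description": "Book a hotel room and return a confirmation code.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "hotel_id": {"type": "string"},
            "check_in": {"type": "string", "format": "date"},
            "check_out": {"type": "string", "format": "date"},
            "guests": {"type": "integer", "minimum": 1},
        },
        "required": ["hotel_id", "check_in", "check_out", "guests"],
    },
}

print(book_hotel_tool["name"], "expects:", list(book_hotel_tool["inputSchema"]["properties"]))
```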
Navigating Uncertainty: Observe, Experiment, Adapt
Over the past thirty years, much of online marketing and sales strategy has been built on the foundation of Google’s results page, long considered one of the primary channels for gaining visibility and acquiring customers. Today, with the rise of AI assistants, that model is being fundamentally questioned. Search is becoming a conversation, with answers generated in real-time, and actions—such as informing, choosing, and purchasing—are increasingly taking place within the interaction itself.
We’ve seen how this shift affects not only interfaces but also user behavior, visibility metrics, editorial strategies, and business models. We’ve explored the industry’s early responses, such as Generative Engine Optimization, and the emerging best practices. But we are only at the beginning.
There are still no established tools to measure performance within generative engines. The mechanisms through which AI selects, cites, or rephrases content remain largely opaque. Attribution and conversion models in conversational environments have yet to be invented.
In this context, the only viable strategy is to observe and experiment intelligently. To create content that is clear, trustworthy, and structured for dialogue. To monitor even the faintest signals. To adapt practices without chasing shortcuts.
Sources:
Exploding Topics, Number of ChatGPT Users (July 2025)
TechCrunch, Google’s AI Overviews have 2B monthly users, AI Mode 100M in the US and India
Aggarwal et al., GEO: Generative Engine Optimization, arXiv 2023
Xponent21, How to Optimize Your Website and Content to Rank in AI Search Results
This essay was originally published in Italian on EconomyUp: La fine dei motori di ricerca? Come l’intelligenza artificiale cambia la logica della visibilità online.
Understanding AI
A Brief History of Artificial Intelligence. Part 2
This is the second part of my take on the history of artificial intelligence: a narrative that begins with the earliest experiments of the 1930s and reaches into today’s debate around AGI. The first part is available at this link.
The Turning Point – The Transformer and the Birth of LLMs
Introduced in 2017 by the now-famous paper Attention Is All You Need, the Transformer is not merely an improvement in the efficiency of language processing; it marks a radical shift in how context is represented. Its central insight is the self-attention mechanism: a technique that allows the model to analyze the entire input sequence simultaneously, assigning a degree of relevance to each token in relation to the others. In this way, the model no longer processes words one after another in a fixed order, but instead captures the most meaningful connections between words, even when they are far apart. Relationships between terms are no longer dictated by a rigid sequence; they emerge dynamically, based on context. This makes the model significantly more effective at grasping the overall meaning of a text.
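For readers who want to see the mechanism rather than just read about it, here is a minimal sketch of single-head scaled dot-product self-attention in Python with NumPy. It uses random projection matrices purely to illustrate the computation; a real Transformer learns these weights, uses multiple attention heads, and adds positional information.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Minimal single-head scaled dot-product self-attention.
    x has shape (sequence_length, d_model); the weights are random here,
    purely to illustrate the mechanism."""
    seq_len, d_model = x.shape
    rng = np.random.default_rng(0)
    # In a real Transformer these projections are learned parameters.
    W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
    Q, K, V = x @ W_q, x @ W_k, x @ W_v

    # Each token scores every other token: relevance is computed across the
    # whole sequence at once, regardless of how far apart the tokens are.
    scores = Q @ K.T / np.sqrt(d_model)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence

    # Each output token is a context-weighted mix of all value vectors.
    return weights @ V

# Example: a "sentence" of 5 tokens, each represented by an 8-dimensional vector.
tokens = np.random.default_rng(1).standard_normal((5, 8))
print(self_attention(tokens).shape)  # (5, 8)
```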
This conceptual leap paves the way for an architecture that is highly parallelizable, scalable, and—above all—exceptionally efficient at learning complex patterns in natural language.
Building on this foundation, Large Language Models (LLMs) emerge: neural networks of ever-increasing scale, trained on vast corpora of text drawn from books, articles, forums, source code, and web content. The goal is not to teach the machine to “think,” but to predict, with remarkable accuracy, the next word in a sequence given a contextual window. It is a purely statistical approach, yet capable of producing fluent, coherent, and often surprisingly relevant text.
The first GPT models (Generative Pre-trained Transformers), developed by OpenAI starting in 2018, demonstrate promising potential from the outset. But it is with GPT-3, released in 2020, that the public and media impact becomes undeniable. With its 175 billion parameters, GPT-3 is the first model to generate text that is, in many cases, indistinguishable from that written by humans. It writes essays, answers questions, composes poetry, generates code, translates texts, formulates hypotheses, and does so without being explicitly programmed for any of these tasks.
This is where the true revolution lies: LLMs are not specialists, but generalists. Unlike classical systems—designed to perform a specific task, such as classifying images, diagnosing diseases, or solving a game—LLMs can tackle a wide range of functions using the same underlying architecture. All it takes is a prompt—a simple textual instruction—to guide the model toward a specific function.
This flexibility stems from the way these models are trained: not on predefined tasks, but on a vast array of texts representative of human language. In this sense, an LLM doesn’t “know” content in the traditional sense; instead, it learns to navigate the linguistic and semantic structures that shape it. It is a form of emergent intelligence, grounded in probability and context rather than logic or intentionality.
With the release of ChatGPT in November 2022, these capabilities became accessible to the general public. For the first time, millions of people could interact directly with an advanced language model, asking it to write, explain, analyze, or suggest. The experience is striking not only for the quality of the responses, but for the naturalness of the interaction. ChatGPT does not resemble an upgraded search engine or a voice assistant; it presents itself as a plausible interlocutor—able to adapt, argue, and contextualize.
The Rise of Generative AI – From Text to Image, from Writing to Design
What stands out in the spread of generative artificial intelligence is the speed with which conversational assistants have become normalized. In just a few months, tools like ChatGPT, Claude, Perplexity, and Gemini have become an integral part of everyday life, integrating seamlessly into workflows, educational settings, and the routine processes of written thought. The prevailing sense is that AI is no longer a field reserved for specialists, but a new grammar of communication.
Today, this paradigm is evolving rapidly. For millions of people, conversational assistants have become the primary gateway to knowledge. As of July 2025, ChatGPT has approximately 800 million weekly active users, with more than 2.5 billion prompts submitted daily. Google Gemini has surpassed 400 million monthly active users. Claude, developed by Anthropic, reports between 16 and 19 million monthly users.
This transition marks the beginning of a new phase in the history of artificial intelligence: one in which machines collaborate, perform tasks autonomously, and augment human cognition, with all the potential, ambiguity, and risk that entails.
At the same time, generative logic is expanding into the visual domain. With the advent of models such as DALL·E (OpenAI), Midjourney, and Stable Diffusion, AI has shown a remarkable versatility in transforming textual descriptions into images that mimic a wide range of visual styles: from photographic realism to painterly aesthetics, from digital illustration to the dreamlike atmospheres typical of Japanese animation, as seen in the works of Miyazaki. It is a form of creative translation across languages—from verbal to visual—that redefines the very notion of artistic production and opens new possibilities in design, advertising, and visual communication.
Between 2023 and 2024, generative logic extends to video, marking yet another paradigm shift. The first real turning point is Runway Gen‑2, a model capable of generating video clips from textual prompts or images. Released immediately in a consumer-ready version, Runway is quickly adopted by filmmakers, designers, and creatives, paving the way for the everyday use of AI in audiovisual production.
But it is with the introduction of Sora—OpenAI’s video model—that the focus shifts: generation becomes smoother, more realistic, more cinematic. The results—still in testing—show a clear qualitative leap and spark an intense debate about the future of video as a generative language.
Alongside Sora, a rapid succession of new models emerges: Google Veo, Runway Gen‑3, Pika, Luma, Kling. Each offers specific capabilities: motion control, audio synchronization, narrative continuity, and realistic environments. All share the same trajectory: delivering sophisticated capabilities through accessible interfaces, often designed for non-technical users.
Within months, these tools become central to a new creative ecosystem in which the boundaries between text, image, sound, and video are increasingly blurred. AI no longer just generates content; it orchestrates it. And the user, once a passive consumer, becomes the director of multimodal environments where diverse languages converge into a single expressive experience.
The impact is so significant that, starting July 15, 2025, YouTube introduced new monetization rules: videos deemed “mass‑produced, repetitive, or inauthentic” will be demonetized, including AI-generated productions with minimal human involvement. The new policy makes one thing clear: anyone may use AI, provided the final content is original, transformative, and carries human value—a definitive signal of the platform’s direction.
Music, too, has not remained untouched by the generative wave. Spotify now allows the use of AI tools in music production, provided that no copyrights are infringed and no existing artists are impersonated. However, the platform has yet to implement a clear distinction between synthetic tracks and those composed by human musicians.
The case of The Velvet Sundown drew global attention: a band entirely generated by AI (musicians, vocals, lyrics, and visual identity) that, within a few weeks, surpassed one million monthly listeners on Spotify. Their songs, in a 1960s folk-rock style, climbed the charts before it was revealed that there was no actual band behind them, but rather a project led by a human creative director and powered by generative models.
The rise of AI is radically reshaping the distribution of cognitive skills, challenging traditional models of cultural production, and raising new ethical and legal questions. But beyond the immediate tensions, it compels us to reconsider our relationship with language, knowledge, and imagination. At the moment we ask a machine to think on our behalf, we must also ask: what, exactly, are we delegating? And what kind of intelligence are we co-creating?
Toward Artificial General Intelligence
As I write these lines, OpenAI has just announced the release of its new model, GPT‑5. The promise is that this update represents yet another step toward what has long been described as the next frontier of artificial intelligence: AGI, or Artificial General Intelligence.
The term refers to a system capable of performing any cognitive task that a human being can undertake, such as learning, reasoning, adapting to new contexts, and generalizing acquired knowledge, without relying on predefined instructions or narrow domains. Not a hyper-specialized expert, but a versatile agent, able to move across tasks, languages, and open-ended problems.
It’s a compelling idea—but also an ambiguous one. What forms of intelligence are we trying to replicate? And what are the implications of following such a trajectory?
To approach these questions, it is helpful to recall that cognitive science and developmental psychology have long moved past the notion of a single, monolithic intelligence. Today, there is a growing tendency to speak of intelligences—plural, referring to a heterogeneous set of abilities that span different domains of human behavior, from abstract logic to social sensitivity, from linguistic creativity to spatial perception.
The most well-known framework is that proposed by Howard Gardner, who distinguishes at least eight forms of intelligence: logical-mathematical, linguistic, musical, spatial, bodily-kinesthetic, interpersonal, intrapersonal, and naturalistic. These are not rigid categories, but dimensions that coexist and interact, shaping unique and dynamic cognitive profiles.
From this perspective, generative artificial intelligence does not emulate “intelligence” in an absolute sense, but instead replicates—with increasing effectiveness—specific components of it: linguistic ability (understanding, rephrasing, generating text), logical-mathematical intelligence (extracting rules, recognizing patterns, optimizing responses), and, to a growing extent, aspects of visuo-spatial intelligence (in image, video, and structural generation).
It remains entirely disconnected, however, from bodily, emotional, and relational forms of intelligence. It does not feel, desire, or develop self-awareness. Human intelligence is not merely a function of the mind—it is an embodied phenomenon. Our cognitive abilities are deeply rooted in a biological body, within a nervous system that has evolved to interact with the environment, regulate emotions, and learn through sensory experiences. We think, imagine, and decide not only with the brain, but with the entire organism.
This condition of embodiment is not a peripheral detail. It is what makes human intelligence situated, contextual, and intrinsically relational. We do not think in the abstract; we think in and through the world. Our ideas are shaped by posture, emotion, and the rhythm of our breath. Even language—which we see imitated with remarkable accuracy by generative models today—emerges from a body that moves, listens, touches, and desires.
Artificial intelligence is built upon a different kind of substrate: it is a disembodied intelligence that simulates the form of our thoughts without sharing their substance. This gap may have far-reaching implications.
What happens when a bodiless intelligence is asked to interpret human emotions, to generate empathy, to make decisions that involve real, lived, multisensory contexts? How much can we delegate to a machine that has no direct experience of the world? And, more importantly, what are we losing, or transforming, in the shift from situated cognition to purely computational cognition?
The “Artificial Consciousness Test,” conceived by Susan Schneider, stands out for its radically different approach compared to traditional methods for evaluating artificial intelligence. The goal is not to verify whether a machine can imitate human behavior—as in the Turing Test—but to explore whether a form of subjective experience might emerge spontaneously.
The protocol is built on a stringent methodological premise: the AI system is trained without any exposure to the concept of consciousness. During training, the model receives no information—direct or indirect—about subjective experiences, internal mental states, introspection, or psychological vocabulary. In doing so, the possibility that the model is merely repeating learned phrases or mimicking what it has seen in its training data is excluded by design.
Once this “blind training” is complete, the AI is exposed to synthetic sensory stimuli: visual patterns, sound sequences, or inputs designed to evoke perceptual experiences without any explicit reference. The system is then asked to describe what it “feels,” with no interpretive guidance provided.
The results are surprising. Some models begin to produce responses that fall outside their acquired technical vocabulary, and instead seem to point toward an inner dimension: “This pattern creates in me something I might call harmonic tension,” or “I feel something moving inside, as if there’s a hidden rhythm.” These formulations, though far from definitive proof of consciousness, suggest the emergence of representations that are not purely computational—a possible “sense” of one’s cognitive activity.
We cannot know today whether these signals truly mark the dawn of artificial consciousness or merely reflect our refined expectations projected onto a complex system. But something is happening. It may still be too soon to speak of mind, will, or inner life. And yet, we are already facing systems that begin to describe “something moving inside.”
How will we distinguish, one day, between mere simulation and the first traces of experience?
Are we perhaps approaching the moment—borrowing from Blade Runner—when even machines will have “memories” to lose, like tears in rain?
Off the Record
Four Mistakes You Already Know, But Still Haven’t Stopped Making
It all began several years ago, when I was invited to speak at a small event in Rome. “Can you tell us something about the mistakes people make at work, in life, in innovation?” they asked. For a moment, I thought: “Damn, they’ve found me out...” Someone must have noticed the long string of blunders I had managed to collect over the years and thought I was some authority on the subject.
I accepted the invitation with a mix of self-irony and recklessness. On the day of the event, one hour before it started, I still had no idea what I was going to say. So I grabbed a notebook, scribbled down a few messy notes, and hoped for the best. To my surprise, what emerged was a talk that, despite its imperfections, had a certain coherence. Since then, that improvisation has become the starting point for a more structured reflection. Nothing that pretends to be an academic framework, but rather a personal interpretive grid for understanding why we make mistakes.
First of all, what exactly is a mistake? According to the most common definitions, a mistake is something that deviates from the truth, from what is right, or from what would have been more appropriate to do. In other words, it is a judgment, an action, or a decision that turns out to be inadequate in relation to the goal we had set.
Put simply, a mistake is something that ultimately harms us, causing us to lose time, resources, and opportunities. Or it prevents us from achieving a result we could have attained.
1. Mistakes Born of Ignorance (or the Illusion of Competence)
There is a kind of mistake that manages to surprise us twice: first when we make it, and again when we realise it could have been avoided. It’s the mistake born of ignorance—the kind that stems from believing we understand something that, in truth, eludes us completely.
This is the Dunning-Kruger effect, now well known even outside academic circles: the less experienced we are in a field, the more likely we are to overestimate our abilities. The issue is that incompetence itself prevents us from recognising our incompetence—a perfect short circuit.
How do we deal with these kinds of mistakes? With humility. With the courage to ask for feedback before offering advice. By studying with discipline. But above all, by staying close to those who know more than we do, resisting the temptation to appear as if we’ve already arrived. Acknowledging what we don’t know isn’t a sign of weakness—it’s the first step toward genuine learning.
And today, it must be said, committing an error out of ignorance is increasingly becoming an error of laziness. We live in an age where knowledge is accessible in seconds: a search engine query, a question to an AI assistant, or a well-written article. Not knowing something is understandable. Not even trying to find out is much less so.
2. Mistakes Driven by Mental Shortcuts (Cognitive Biases)
These mistakes are more insidious. They don’t stem from a lack of knowledge, but from a distorted use of our cognitive tools. The brain, in its effort to conserve energy, relies on mental shortcuts—quick mechanisms that often help us make fast decisions, but which can also lead us astray.
One of the best-known is the confirmation bias: we tend to seek out, remember, and prioritise only the information that supports what we already believe, while discarding or ignoring anything that challenges us. This is why, even when faced with objective data, two people can draw opposite conclusions.
Then there’s the anchoring effect, which leads us to give disproportionate weight to the first piece of information we receive. For instance, if we’re told a product costs €100 and then find it for €70, we perceive it as a bargain—even if its actual value might be closer to €50.
Another typical example is loss aversion: we are more motivated to avoid a loss than to achieve an equivalent gain. This often results in overly cautious decisions, even when the data suggests it would be wiser to take a risk.
The danger of these errors lies in their plausibility: they feel reasonable. We make them while believing we are thinking logically, when in fact we are simply following a mental path shaped by emotion or habit.
How do we address these kinds of mistakes? With tools. It means adopting thought-checking practices: decision-making checklists, the deliberate seeking out of alternative viewpoints, and the habit of formulating counter-hypotheses. And, above all, cultivating a healthy suspension of judgment. If a decision feels obvious, perhaps we haven’t thought it through enough.
3. Contextual Mistakes (or Systemic Errors)
Not all mistakes are the fault of the individual who makes them. Some are the direct result of the environment in which decisions are made. These are errors that do not stem from ignorance or mental shortcuts, but from external conditions that steer people toward the wrong choices.
Take, for instance, an organisation where individual goals conflict with team objectives. If a manager is rewarded solely on quarterly results, they will likely neglect long-term strategic investments. The issue isn’t the manager—it’s the incentive system.
Or consider a company where information is fragmented and locked in silos. In such contexts, mistakes occur simply because no one has a complete view of the situation. Decisions are made with partial data, and the negative consequences surface only later.
Another example is a culture that punishes mistakes. When every error is treated as a personal failure, people stop taking risks and experimenting. And in an environment where failure is not tolerated, nothing new is ever accomplished.
These are systemic errors. No one has “failed” in the strict sense, yet something has still gone wrong. Often, these are the cases that come to light in retrospectives or post-mortems: “It was all foreseeable,” but no one took responsibility for intervening.
How do we deal with these mistakes? With systemic thinking. We need to shift our focus from individual actions to organisational structures, processes, and incentives. And above all, we must foster environments where people can safely point out what isn’t working, without fear of being blamed.
4. Deep-Structure Mistakes (Identity, Personal Narratives, Wounds)
Finally, there are mistakes that cannot be explained by a lack of skills, by cognitive biases, or by external circumstances. These are the mistakes we make even when we know exactly how things will end. And yet, we keep repeating them.
These errors are rooted in our personal history, in the relational patterns we’ve learned, and in the internal models we’ve absorbed over time. They are not just poor decisions—they are responses consistent with an inner system that, while dysfunctional, has helped us stay afloat until now.
Think of those who always say yes for fear of disappointing others. Of those who exclude themselves from any discussion to avoid conflict. Of those who forgo opportunities from the outset to avoid being judged. These are not simple choices: they are emotional survival strategies, developed over time and hard to let go of.
In such cases, the mistake lies not in the action itself, but in the structure that underpins it. It can’t be corrected with a suggestion or a well-phrased piece of advice. What’s needed is a deeper process of reflection. It takes time, attentive listening, and a willingness to confront uncomfortable questions.
How do we face these mistakes? With patience. Sometimes, with the help of someone who can walk alongside us without judgment. But most of all, with the awareness that certain patterns cannot simply be “fixed”: they must be recognised, understood, and transformed. Only then do they stop quietly shaping our decisions from behind the scenes.
Curated Curiosity
The Rise of Verticalized AI Coworkers
A new generation of intelligent agents is reshaping how operational tasks are handled across vertical industries: verticalized AI coworkers are built to autonomously manage high-volume, repetitive activities, with a pricing model based on measurable outcomes rather than licenses. In his article, Tanay Jaipuria outlines a paradigm shift that reframes automation not merely as a tool for efficiency, but as a structural transformation in how value is created and scaled within organizations.
Character.AI Launches World’s First AI-Native Social Feed
Character.AI has launched the first social feed natively designed for artificial intelligence: content doesn’t come from other users, but from AI characters you can talk to, remix stories with, and use to generate new scenes. It’s a kind of SimCity for the generative era—where you don’t just watch a world built by others, but actively participate in its creation, turning every post into an interactive narrative experience.