Conversations on Generative AI: How Italian Teams Are Using It Today
Gen AI is entering Italian organizations from the bottom up: 20 interviews to map what’s changing. Plus: a brief history of AI, and a reflection on redesigning Serena through iteration.
Ciao,
Over the past few years, this newsletter has served as a kind of logbook: a space to share what I’ve learned along the way, reflect on personal projects, and document both small wins and inevitable missteps. It’s been a way to organize my thoughts in public and spark conversations.
Now, Radical Curiosity is shedding its skin: it is becoming an observatory on artificial intelligence and its transformative impact on innovation, business models, and the way humans work and collaborate.
I believe we’re living through one of the most profound technological shifts since the Internet. That’s why this new version of the newsletter will be structured around four sections, each offering a different lens on what’s happening:
Signals and Shifts. Every issue opens with a thematic deep dive. This first one is a synthesis of 20 interviews I conducted in June with managers, entrepreneurs, and freelancers about how they’re using generative AI.
Understanding AI. A space for building shared vocabulary. I’m starting with the history of artificial intelligence: from Turing to ChatGPT.
Off the Record. A moment of intellectual honesty. Today, I’m reflecting on Serena, a project I deeply care about that, at the moment, is struggling to gain traction.
Curated Curiosity. A hand-picked selection of articles, videos, and resources that I find thought-provoking. As always, guided by curiosity and critical thinking.
I’m also making a clear commitment to turn Radical Curiosity into a weekly presence. Happy reading.
Nicola
Table of Contents
Signals and Shifts - Generative AI in Italian Workplaces: Twenty Conversations to Understand What’s Changing
Understanding AI - A Brief History of Artificial Intelligence
Off the Record - Rethinking Serena: The Value of Iteration
Curated Curiosity
The Future of Software Development - Vibe Coding, Prompt Engineering & AI Assistants
AI Needs UI
Signals and Shifts
Generative AI in Italian Workplaces: Twenty Conversations to Understand What’s Changing
Lately, my workday has become a continuous exchange with generative AI. I’m no longer surprised when I manage to complete in forty-eight hours tasks that, until recently, would have taken weeks.
I might design a survey with ChatGPT and publish it on LinkedIn, upload a batch of documents to Google NotebookLM to get a summary, or generate a podcast to listen to while driving—all without recording a single minute of audio. Within the same timeframe, I can build a small AI-powered application with Claude without writing a single line of code. Or prototype a full-featured app on Lovable.dev, complete with user registration and API integrations with OpenAI and Anthropic, finally giving form to an idea that’s been sitting in the back of my mind for weeks.
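For readers curious about what that kind of API integration looks like under the hood, here is a minimal, hypothetical sketch using the official OpenAI and Anthropic Python SDKs. It is not the code of my prototype, and the model names and prompt are simple placeholders; tools like Lovable generate and wire up code along these lines so you don't have to.

```python
import os

import anthropic
from openai import OpenAI

# Placeholder prompt and model names, purely for illustration.
prompt = "Draft three survey questions about generative AI use at work."

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
openai_reply = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(openai_reply.choices[0].message.content)

anthropic_client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
anthropic_reply = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(anthropic_reply.content[0].text)
```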
I would confidently call myself a power user of generative AI. Over the past year, my productivity has grown exponentially. These tools offer a new kind of autonomy, a way to create and experiment without waiting for others to catch up.
But am I an exception? Or has AI already become a regular collaborator for many?
To find out and to get outside my bubble, I decided to listen. In June, I spoke with twenty professionals and managers from a wide range of organizations, including large companies, startups, public institutions, trade associations, and academic institutions.
The sample is small and intentionally diverse, yet the picture that emerges is surprisingly straightforward. Certain patterns recur across various industries, roles, and company sizes. Others, more subtle or unexpected, hint at how the relationship between people, AI, and work may evolve in the coming months. Here’s what I’ve learned.
Bottom-Up Adoption Is the Rule, Not the Exception
Across all the conversations I had, a surprisingly consistent pattern emerged: the push toward adopting generative AI is coming from the ground up. It’s often individual professionals—naturally curious, sometimes with a digital background, or simply more inclined to experiment—who bring tools like ChatGPT, Gemini, or Perplexity into their daily work, without waiting for official guidelines or formal approval.
Generative AI is frequently described as a “junior colleague” or a personal assistant that’s always available to handle repetitive tasks: rewriting communications, summarizing briefings, creating images or social media content, and extracting insights from complex documents. For many, it’s become essential for managing high workloads and maintaining operational resilience.
One notable trend is the growing ability to reduce reliance on external agencies for routine tasks. AI enables teams to manage many of these activities directly, with greater speed and agility, allowing organizations to leverage internal know-how while maintaining tighter control over processes.
There’s also a clear willingness to invest personally in these tools. Several interviewees mentioned paying out of pocket for premium versions, especially ChatGPT, to build an ongoing, more tailored relationship with their virtual assistant. In these cases, AI is seen not just as a tool, but as a kind of work companion: one that remembers preferences, understands context, and recalls past interactions. Sometimes, it’s even described as offering a form of emotional support.
ChatGPT remains the most widely used tool by far. Some companies have officially rolled out Microsoft Copilot or Google Gemini, but experienced professionals still tend to favor OpenAI’s solution. This creates an interesting dynamic: tools integrated into enterprise software suites are often perceived as less versatile—or less “smart”—than general-purpose chatbots, and are sometimes ignored or used only superficially. The risk is a new “Clippy effect,” echoing the infamous animated paperclip assistant Microsoft introduced in the late ’90s, remembered more for being intrusive than genuinely helpful.
Compliance: Still Uncharted Territory
Compliance, particularly in light of the obligations arriving with the European AI Act, remains a largely overlooked area. Only a handful of organizations have established clear policies, structured training programs, or genuine change management initiatives. It’s no surprise, then, that the vast majority of people I spoke with have never received any formal training on AI tools, let alone guidance on how to use them responsibly.
From what I observed, most organizations still rely on individual discretion or broad, informal recommendations. This approach is prevalent in sectors with limited regulatory oversight, where compliance is often seen as peripheral or something that can be postponed.
The most frequently cited concerns relate to handling sensitive data, protecting privacy, mitigating model bias—particularly in HR processes—and using platforms that may process data in opaque or unpredictable ways.
Uncertainty remains high regarding the actual obligations in the coming months. With the AI Act on the horizon, many questions are still unanswered: Who will need to be trained? What responsibilities will be assigned to individual users? How will internal audits and oversight processes need to evolve?
For now, the most significant concerns center around data confidentiality and intellectual property, especially when using external, free tools that may repurpose company data to train commercial models. In the absence of clear guidelines, there’s a growing risk that organizations will continue to operate on informal norms that may soon prove insufficient.
Efficiency Persuades, but ROI Remains Elusive
When asked about the main benefits of generative AI, most managers give a near-unanimous response: the technology helps save time and increase efficiency, especially in repetitive or low-value tasks. However, despite this widespread perception of enhanced productivity, a rigorous measurement of return on investment remains difficult to pin down.
None of the professionals I interviewed were able to cite specific KPIs or present success stories backed by solid data. The impact of AI is assessed chiefly through qualitative impressions or intuitive judgments, rather than through objective, replicable metrics.
In some conversations, a different kind of friction emerged—what some described as “prompt fatigue.” The time saved in content generation is often offset by the time spent reviewing and refining, particularly in contexts where content quality or regulatory sensitivity is critical. In such cases, the perceived benefit tends to shrink or shift toward organizational rather than operational gains.
Overall, there appears to be a structural challenge in quantifying AI’s actual contribution. Several managers noted that even vendors struggle to produce compelling evidence during the sales process. Improvements, when observed, tend to be incremental. For now, true disruption remains the exception rather than the norm.
Cultural Resistance and Generational Anxiety
The conversations I collected reveal a clear cultural and generational divide in how organizations approach artificial intelligence. In more traditional settings—particularly among senior managers—there’s a prevailing sense of caution, if not outright skepticism. The most frequent concerns relate to the risk of deskilling and the gradual erosion of human expertise in business processes.
One recurring concern is that generative AI may ultimately replace junior roles by automating foundational tasks, thereby limiting growth opportunities for those just entering the workforce. These anxieties are often compounded by outdated assumptions, uncertainty around AI’s labor impact, and a general lack of direct experience with the tools themselves.
In this context, there is a growing demand for hands-on training—even at the executive level—not necessarily to become technical experts, but to better understand how AI is already reshaping workloads, workflows, and team structures.
One message comes through clearly across many of these conversations: AI is a powerful accelerator, but it cannot replace human judgment and discernment. Content quality, attention to nuance, critical thinking, and the ability to interpret a brief accurately remain—at least for now—irreplaceable human capabilities, especially in high-value contexts.
The Real Challenge Starts Now
My impression is that if I had conducted twenty more interviews, the picture would have looked much the same: generative AI is already embedded in day-to-day work, even if often informally, without transparent governance, and in fragmented ways.
The real challenge in the coming months is to turn this individual enthusiasm into a more structured and intentional use. What’s needed is light-touch but thoughtful governance: integrating AI into key processes, learning how to assess its real impact, and investing in skills development—not just for specialists, but across the entire team.
A good starting point could be a basic mapping of existing practices, even if informal. In many cases, adoption is further along than it seems. All that’s missing is a systematic view to recognize and support it.
From there, organizations can design targeted training programs grounded in the real work of teams and create agile spaces for sharing prompts, tools, and practical solutions.
What’s most helpful at this stage is a set of clear, shared principles: what’s encouraged, what should be monitored, and which tools to consider as standards. A simple framework that includes some basic guardrails, but still allows people to experiment safely and with confidence.
This is how organizations can transition from spontaneous, scattered use to a more deliberate and informed approach—one that fosters learning, enhances efficiency, and, most importantly, enables genuine innovation.
This essay was originally published in Italian on EconomyUp: L’AI generativa nelle aziende italiane: cosa raccontano venti conversazioni senza filtri.
Understanding AI
A Brief History of Artificial Intelligence. Part 1
The history of artificial intelligence is, first and foremost, the history of an ancient desire: to understand the workings of the mind and replicate the act of thinking. It is a trajectory marked by brilliant insights and visions that, for decades, have oscillated between utopia and disillusionment. Alongside moments of collective enthusiasm, which promised imminent revolutions, there have been long silences, seasons of skepticism, and periods in which the very idea of an “intelligent machine” seemed destined to be filed away among the illusions of technology.
And yet, in the background, a persistent tension has endured: the will to build artifacts capable of observing, deducing, responding. In a word, thinking.
The artificial intelligence we use today is the outcome of decades of research in mathematics, computer science, computational linguistics, and neuroscience. But it is also, more broadly, a cultural phenomenon: a technology that compels us to reflect not only on what machines can do, but on what we mean by intelligence, creativity, and learning.
The historical moment we are experiencing—marked by the advent of generative AI—represents a significant departure from the past, as we have never before found ourselves interacting with machines capable of speaking, writing, designing, and even suggesting ideas in such a fluid and convincing manner. How did we get here?
The First Era of AI (1950–1980)
The origins of artificial intelligence lie at the intersection of diverse research strands and insights emerging across multiple disciplines between the late 1930s and early 1950s. Neurological studies began to describe the brain as an electrical network of neurons; Norbert Wiener’s cybernetics introduced the concepts of control and feedback in systems; and Claude Shannon formalized information as a stream of digital signals. Within this intellectual landscape, the idea of building an “electronic brain” capable of processing information in a way akin to human reasoning began to gain systematic traction.
Amid this broad and still exploratory context, Alan Turing stands out as one of the first to address the question of machine intelligence explicitly. In 1950, he published Computing Machinery and Intelligence, an article that would become foundational. There, he introduced an empirical criterion—now known as the “Turing Test”—to assess whether a machine can be considered intelligent: if a human interlocutor cannot distinguish between the responses given by another person and those generated by a machine, then, Turing argues, the machine can be deemed capable of thought. The insight is radical: it anticipates by more than seventy years the kind of human–machine conversation that today lies at the core of generative AI.
The field of artificial intelligence takes shape as an autonomous discipline six years later, in the summer of 1956, when a small group of scientists gathers at Dartmouth College for a seminar that will go down in history. It is on this occasion that the term “artificial intelligence” is officially coined. The objective is ambitious: to simulate key human cognitive functions—reasoning, language comprehension, learning—through formal models and computational tools.
In this first era, the dominant approach is symbolic: intelligence is conceived as the manipulation of symbols according to logical rules. Machines are viewed as deductive systems that, given a set of premises and instructions, can derive consistent conclusions. It is the age of so-called “expert systems,” programs capable of solving specific problems in fields such as medicine or engineering by relying on structured knowledge bases formulated as “if… then…” rules.
A paradigmatic example of this logic is ELIZA, the program created in 1966 by Joseph Weizenbaum at MIT. ELIZA simulates a Rogerian therapist by reformulating the user’s statements as questions. If the interlocutor writes “I feel tired,” ELIZA replies, “Why do you feel tired?” There is no understanding, no intentionality. But the dialogic structure creates the temporary illusion of intelligent interaction. It is an early experiment in “linguistic simulation” that, despite its simplicity, anticipates some of the dynamics we now observe in contemporary chatbots.
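To make the mechanism tangible, here is a toy sketch in Python of ELIZA-style reformulation: a handful of hand-written rules, pattern matching, and nothing else. It is a deliberately simplified illustration of the symbolic approach, not a reconstruction of Weizenbaum's program.

```python
import re

# A few hand-written rules in the spirit of ELIZA: match a pattern and
# reformulate the user's statement as a question. There is no understanding
# and no memory here, only symbol manipulation.
RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r"my (.*)", "Tell me more about your {}."),
]

def reply(statement: str) -> str:
    text = statement.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(match.group(1))
    return "Please, go on."  # fallback when no rule matches

print(reply("I feel tired"))    # -> Why do you feel tired?
print(reply("I am exhausted"))  # -> How long have you been exhausted?
```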
However, the initial enthusiasm soon encounters structural limitations. Rule-based systems exhibit poor adaptability: they struggle to handle ambiguity, to adapt to shifting contexts, and to draw inferences from incomplete information. Moreover, the manual construction of knowledge bases proves laborious and brittle: a single unforeseen exception can compromise the entire system.
In the 1970s, these difficulties led to a gradual slowdown in research. A report published by the British government in 1973 expresses strong skepticism about the real prospects of AI. Confidence wanes, funding dries up, and many projects are abandoned. It is the first “AI winter,” a period of stagnation that marks the end of the symbolic illusion. But it is also the beginning of a new phase—one in which the machine is no longer seen as a flawless executor of logical rules, but as an apprentice: imperfect, fallible, yet capable of improving over time.
The Era of Machine Learning: Learning from Data (1980–2010)
The gradual disillusionment with the symbolic approach paves the way for a radical shift in perspective. Instead of explicitly programming machine behavior through logical rules, researchers begin to explore the possibility of enabling machines to learn from data and examples. This is the founding intuition of machine learning: a machine does not need to be explicitly instructed on how to solve a problem; it must be exposed to a sufficient volume of data from which it can infer functional patterns to solve it autonomously.
This shift represents far more than a technological update—it marks an epistemological transformation. The ideal of transparent, formally encoded intelligence is abandoned in favor of a more statistical, inductive, and adaptive model. The machine becomes, in a sense, akin to an organism that learns from experience.
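As a concrete, if deliberately toy-sized, illustration of that shift, here is a hedged sketch using scikit-learn: instead of hand-writing rules to separate unwanted messages from work ones, we show the model a few labeled examples and let it infer the pattern on its own. The dataset is invented purely for the example.

```python
# A minimal sketch of the machine-learning shift: no hand-written rules,
# just labeled examples from which the model infers a decision function.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",
    "claim your free reward today",
    "meeting moved to 3pm",
    "please review the attached report",
]
labels = ["spam", "spam", "work", "work"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)  # learn from examples, not explicit rules

print(model.predict(["free prize waiting for you"]))      # likely ['spam']
print(model.predict(["report for tomorrow's meeting"]))   # likely ['work']
```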
Although the neural network model was proposed decades earlier, it is only now—thanks to increased computational power and the growing availability of digital data—that these architectures are beginning to show their potential. Nevertheless, the networks of that era are still unable to handle tasks such as linguistic interpretation or the accurate recognition of complex images.
Meanwhile, other paradigms within machine learning are beginning to find concrete applications, including support vector machines, decision trees, and Bayesian methods. It is a period of intense theoretical activity, but industrial adoption remains limited. Models struggle to generalize at scale, and although results are promising, they are not yet sufficient to dispel the lingering suspicion that AI is more a promise than a practical reality.
This discrepancy leads to a renewed phase of frustration and funding cuts: the second “AI winter,” which unfolds between the late 1980s and early 1990s. Many research labs shut down, institutional interest wanes, and the field once again appears to be in crisis.
The late 1990s and early 2000s mark a quiet but decisive turning point. On the one hand, the emergence of GPUs (graphics processing units)—initially developed for video games—introduces a new level of computational power. These chips can perform a vast number of calculations simultaneously, enabling the training of larger and faster neural networks. On the other hand, the expansion of the web generates an ever-growing volume of unstructured data—text, images, video—which provides the ideal raw material for machine learning systems.
In this context, a new idea gradually takes hold: that it is not the models themselves, but rather data and scalability, that determine the performance of AI.
The era of machine learning thus lays the groundwork for a new phase, one in which artificial intelligence no longer merely reacts to predefined inputs, but begins to discern, to classify, and to predict behavior based on statistical patterns.
Deep Learning and the Return of Vision (2010–2017)
The early 2010s mark a decisive turning point in the trajectory of artificial intelligence. After years of incremental progress—often confined to academic circles—a series of technical innovations and demonstrative successes bring AI into the public spotlight. The driving force behind this renewed enthusiasm is deep learning, which utilizes deep neural networks capable of learning complex representations from large volumes of data.
The key difference from traditional machine learning lies not only in architectural depth—that is, the number of layers through which information passes—but, more importantly, in the ability to automatically extract relevant features from raw data, eliminating the need for manual feature engineering. In the past, for instance, an image recognition system required engineers to predefine salient characteristics (edges, colors, shapes). With deep learning, by contrast, the network learns to identify such structures on its own, layering increasingly sophisticated levels of abstraction.
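To make that difference a little more tangible, here is a purely illustrative sketch in PyTorch of a tiny convolutional network: nobody specifies edges, colors, or shapes in advance; the stacked layers learn their own filters during training. The architecture and its sizes are arbitrary choices for the example, not a real model.

```python
import torch
import torch.nn as nn

# A tiny convolutional network, purely illustrative: each convolutional
# layer learns its own filters from data (edges, textures, shapes...)
# instead of relying on hand-engineered features.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)               # learned feature maps
        return self.classifier(x.flatten(1))

# A batch of four 32x32 RGB images, e.g. in the CIFAR-10 format.
logits = TinyCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```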
The first tangible signal of this shift comes in 2012, when a team from the University of Toronto—led by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton—enters the ImageNet competition. Their model, AlexNet, outperforms all other competitors by a wide margin in the task of image classification, dramatically reducing the error rate. The architecture employs deep convolutional neural networks (CNNs) trained on GPUs: a combination that proves to be a breakthrough and soon becomes the new standard for automated visual analysis.
From that moment on, deep learning spreads rapidly across a wide range of domains: speech recognition, machine translation, image generation, and autonomous driving. Another milestone comes in 2016, when AlphaGo, developed by DeepMind, defeats world champion Lee Sedol in the game of Go. Unlike chess, Go is a game of positioning and intuition, whose combinatorial complexity had, until then, made it impossible for machines to devise a winning strategy. AlphaGo’s victory—based on a combination of deep learning, reinforcement learning, and probabilistic techniques—demonstrates that AI can tackle strategic contexts where brute computational force alone is not sufficient. It is both a technical and symbolic success, redefining the boundaries of artificial intelligence.
In parallel, these years see the consolidation of the infrastructure that enables large-scale training, including cloud platforms, open-source libraries (such as TensorFlow and PyTorch), and, above all, the evolution of GPUs, which become essential tools for data scientists. The synergy among algorithms, hardware, and data availability generates an unprecedented acceleration in model development.
It is no coincidence that, in this very context, a new architecture emerges—one destined to reshape the landscape of artificial intelligence: the Transformer.
To be continued…
Off the Record
Rethinking Serena: The Value of Iteration
Serena is a project I’ve been working on for months, although it's still mostly a side project. The goal is to build a co-pilot that helps anyone generate a course syllabus—a solid starting point for designing any learning experience, regardless of delivery format or pedagogical approach.
For a long time, we focused on building a coherent workflow, trying to solve specific problems along the chain:
How to help the user define a clear context by articulating the learner profile and learning objectives.
How to generate a syllabus that is complete, well-structured, and free of repetition or generic content.
How to integrate a knowledge base that is both rich and adaptable.
I’ve shared parts of this journey in previous issues of Radical Curiosity: Meet Serena. From idea to syllabus in minutes and Prompt. Chain. Build. Lessons from Serena and the frontlines of generative AI. But over the past few months, something has shifted.
As we worked on the new interface and became increasingly familiar with vibe coding, using Replit as our development environment, I began to sense a subtle tension. The more I explored the potential of agentic systems, the more I realized we were thinking about Serena through the wrong lens. We were attempting to integrate artificial intelligence into a rigid, deterministic, and sequential process—a well-ordered flow, but a closed one.
On the contrary, tools like Replit, Lovable, or Cursor don’t follow a predefined path: they develop the project in collaboration with the user, adapting to their way of working. There’s no fixed sequence of steps to complete. Instead, there’s a reference tech stack that serves as the operational foundation. The order in which the application takes shape—what gets written first, what gets tested, what gets revised—depends entirely on the interaction. It’s the user who leads the process, while the system responds, assists, and suggests. In this sense, these aren’t just tools; they’re collaborative spaces.
So we took a step back. Not to start over, but to look with fresh eyes at what we had already built, and to understand how we might transform a system that currently produces good results into a platform that, through human-machine collaboration, can generate something truly remarkable.
Working on Serena has reminded me, yet again, of the importance of iterating: of building prototypes not just to test solutions, but to think through problems, surface hidden assumptions, and pressure-test ideas that seem promising on paper but prove fragile, partial, or even misleading in practice.
Each development cycle becomes an opportunity to learn something new—not just about how the system behaves, but about what we actually want it to do, how we imagine the interaction between humans and AI, and how much control we’re willing to delegate—and in exchange for what.
To iterate is to accept that some solutions will need to be discarded, but were still worth exploring. And that failing fast, if done thoughtfully, is often the most effective way to understand where it’s worth doubling down.
If you were forwarded this email or you found it on social media, you can sign up to receive an article like this every Sunday.
Curated Curiosity
The Future of Software Development - Vibe Coding, Prompt Engineering & AI Assistants
This conversation with the a16z infrastructure team examines how AI is reshaping the very idea of infrastructure. Rather than running on top of the stack, AI is becoming part of it, a foundational layer alongside compute, storage, and networking.
The discussion moves across several key shifts: how the rise of foundation models is changing developer behavior; how agents might reshape software architecture; what “defensibility” looks like in a world where models are increasingly commoditized; and why infra is no longer the exclusive domain of specialists, but a space where product and platform strategy converge. There’s also a useful framing of the economics of building in AI-native environments and of what this implies for startups and incumbents alike.
AI Needs UI
The article by Dan Saffer presents a clear and pragmatic perspective on why interface design remains essential in the era of generative AI. It’s a helpful lens for thinking about how to build tools that guide, rather than overwhelm, the user, especially when AI is working behind the scenes.
Hire me
If your organization is trying to make sense of generative AI and how to use it effectively, I can help.
With over 20 years of experience in innovation, product management, and education, I bring a pragmatic and strategic lens to emerging technologies. I work with leaders to unpack what AI means for their work and how to apply it to enhance productivity, performance, and long-term competitiveness.
Transparency Note. Radical Curiosity was written in collaboration with artificial intelligence, used as a co-pilot to expand the capacity to gather sources, analyze them, and structure ideas. The writing process unfolded through a series of sessions involving dialogue, exploration, and rewriting with the support of AI, culminating in a final revision entirely authored by the writer. At every stage, the AI acted as a companion in reflection; the conceptual, stylistic, and argumentative choices remain fully human.