AI Bubble or AI Boom? What's in It for You in the “Decade of AI Agents”

By Stavros Vassos, Co-Founder & CEO at helvia.ai

This post is based on a recent talk I gave at the 4th GenAI Summit on November 24, 2025. You can find the slides of my presentation here. My intention in the talk and in this writeup is to share some thoughts on three questions that I see people asking all the time about AI:

- Is it an AI bubble or an AI boom?
- How can I benefit from AI today?
- What is a promising technical path forward?

First, a quick overview of the last three years of GenAI.

Three Years of GenAI: A Rapid Transformation

The time of my talk (November 24, 2025) marked almost three years since ChatGPT first introduced Generative AI (GenAI) and Large Language Models (LLMs) to the world. Key milestones include:

- November 30, 2022: ChatGPT by OpenAI came out and shocked everyone with what was possible at the time! It was then running a version of GPT-3.5 known as "InstructGPT".
- March 14, 2023: A few months later, GPT-4 came out and shocked everyone again with capabilities that were considered an order of magnitude more powerful! No other shocks of that size followed, but a lot kept happening.
- April 18, 2024: Meta's release of Llama 3 set a framework for open-source (OSS) models, enabling progress and growth among LLM providers.
- June 21, 2024: Almost a year and a half ago, Anthropic released Claude 3.5, which shortly became the "coding king" and essentially made coding a prominent use case for LLMs.
This led all major LLM vendors to develop models specifically tailored for coding.
- November 2025: Looking at just the last couple of weeks, we see new releases of the latest LLMs by many major vendors (including GPT-5.1, Gemini 3, and Claude Opus 4.5), as well as open-source models (Kimi K2 Thinking, DeepSeek 3.2) that seem to be on par with the closed-source models leading the race.

In just three years, LLMs became the cornerstone of a new tech stack and of a rapidly evolving ecosystem that blends research with the development of new applications and services.

❓ Big question: Where is all this heading, and how does it affect us?

AI Bubble or AI Boom? Two Visions of the Future

There is a wide spectrum of expectations about AI, and there are passionate debates about whether AI works or not. Let's look into two common perspectives:

- The “existential” vision: AI will evolve into Artificial General Intelligence (AGI).
- The “industrial revolution” vision: AI will bring us digital labor.

1. The “Existential” Vision: AGI

According to this vision, AI will bring us Artificial General Intelligence (AGI). There are many definitions for this; let's go with the following for today: if an AI system can perform equally well or better than any human on anything you can do while sitting at a computer, then we have AGI.

What AGI means

- Super-intelligence! Many people believe that since AI iterates on its capabilities faster than a person can, AI would bootstrap itself to unbounded skills.
- This means AGI would essentially bring unbounded acceleration in science, the economy, everything really.
- To many it also means a possible extinction of humanity.

Either way, whoever gets there first will have a tremendous advantage! No wonder literally hundreds of billions of dollars are being invested in the frontier labs pursuing larger LLMs and AGI.

Is AGI coming soon?

To be honest, I never thought, nor had the feeling, that this is going to happen soon.
It's not an easy subject, of course, and there is debate and there are strong opinions from very prominent figures in AI. For example:

- Geoffrey Hinton, widely known as one of the "godfathers of AI", has maintained for many years now (one early reference can be found here) that we have just a few years to prepare ourselves before AI surpasses us.
- Yann LeCun, a high-profile AI researcher at Meta, has been arguing that LLMs are a dead end and that we need to pursue different lines of research (one recent reference here).

If they can't agree, how can you or I decide?!

Scaling toward AGI

One way to form an opinion is to look at how things have scaled over the last few years. There is a quote from the CEO of OpenAI from almost a year ago that I think captures a common line of thought about how AI could grow into AGI. It is from Sam Altman's blog post of February 10, 2025, and it goes more or less like this:

- The intelligence of an AI model roughly equals the log of the resources used to train and run it.
- These resources are chiefly training compute, data, and inference compute.
- It appears that you can spend arbitrary amounts of money and get continuous and predictable gains.

By the way, this is perhaps an optimistic view, especially coming from the CEO of a leading LLM company, and one whose funding relies on such assumptions about the future of AI. There is also an important detail in this line of thought: as mentioned in the blog post, the assumption is that the intelligence of an AI model roughly equals the log of the resources used to train and run it. Since logarithms are not always intuitive, here are some examples of what this means (hand-wavy, but true to the spirit of the assumption):

- To "double" intelligence, you might need 10× the resources.
- To "triple" it? Maybe 100× the resources.
- To "quadruple" it? Maybe 1,000× the resources.

This makes progress increasingly expensive, and perhaps infeasible due to bounds on infrastructure, or energy, or... the size of the earth!

Overall, there is an indication that more resources will give us better AI and get us closer to AGI – but it could take 5 years or 500 years to reach AGI, and it may require other, as yet unconceived, breakthroughs. Whether AGI is close or not seems to be more of a "gut feeling" than a predictable extrapolation of current results. So, let's call this a "bubbly" expectation!

2. The “Industrial Revolution” Vision: Digital Labor

According to this vision, AI will bring us digital labor, that is, digital employees that train themselves, onboard instantly, work 24/7, and cost less than human employees. This vision, too, is a kind of sci-fi coming to life, and at the same time there is a lot of investment in supporting these new AI roles that will work side by side with human roles, or replace them. I would like to look into three prominent use cases where we already see AI roles being explored:

- Personal assistant
- Business automation
- Coding

AI roles – Personal assistants

Every "LLM app" like ChatGPT, Claude, Gemini, etc., acts essentially as a powerful personal assistant that operates in the digital world. Looking into some recent numbers about ChatGPT as reported by the Financial Times:

- ChatGPT has more than 800 million regular users.
- 5% of those are paying subscribers.
- This amounts to 70% of OpenAI's annual revenue.

Judging from OpenAI alone, the personal assistant is probably the biggest use case of GenAI at the moment! In fact, these AI personal assistants are already changing how we write and read text, how we search online, and how we shop – and there is plenty of room to grow.

AI roles – Business automation

This is the core of the "industrial revolution" vision, according to which many human roles are going to be transformed into AI roles. Here we have mixed results.
These are two articles from Gartner that are indicative of how this idea has evolved in the business world over the last couple of years:

- Gartner, Aug 2023: By 2026, investment in generative AI will lead to a 20% to 30% reduction of customer service human roles. (source)
- Gartner, Jun 2025: By 2027, 50% of organizations will abandon plans to reduce their customer service workforce due to AI. This shift comes as many companies struggle, highlighting the complexities and challenges of transitioning to AI-driven customer service. (source)

The first article, from a couple of years ago, shows the initial excitement: by today, 20-30% of human roles in customer service were expected to have already been replaced. The second article, from a few months ago, reports on how this plan has proven more difficult than initially thought. This is indicative of many attempts to replace human roles entirely with so-called AI Agents. (Note: I found these two articles via a recent post by the AI Realist, who does an excellent job fighting the hype!)

But what is an AI Agent exactly? I often joke that an AI Agent is this mythical (AI) creature that is going to do all kinds of things and change our lives. There are plenty of definitions, most of them technical; a common one is that an AI Agent is an LLM-based system that is able to use so-called tools. I like to use a more high-level definition that is more practical and helps convey what AI Agents represent today, and it goes like this:

“If you’re not sure what it’s going to do next, then it’s probably an AI Agent!”

Of course, with LLMs we never know exactly what they are going to write each time; there is always that uncertainty. But with an AI Agent the uncertainty is about what it is going to do next: Is it going to send an email? Call an API? Send a message? Or something else?

AI Agents operate on an implicit tradeoff between two desired qualities:

- Powerful – able to handle scenarios that are not explicitly “programmed” for.
- Predictable – acting the way an existing employee would handle things.

This is one of the major reasons why the promise of businesses widely using AI roles is not happening instantly. It is possible, though – just not instant! At Helvia.ai we have worked with large organizations, we have employed AI roles in business operations successfully, and we have shown that you can get meaningful gains and ROI. The key is careful preparation and considering AI Agents within the scope of a wider business and AI transformation.

AI roles – Coding

Coding has become a very large niche, and there is a long list of tools that apply across the whole software engineering lifecycle. Some examples:

- New command-line tools (CLIs): Codex, Gemini CLI, GitHub Copilot, etc.
- New integrated development environments (IDEs): Windsurf, Cline, Cursor, Antigravity, etc.
- New prototyping tools: Lovable, v0.dev, and others.

The high demand has led LLM providers to create specialized models for coding, and we see a new release every couple of weeks! In some sense, coding with AI is similar to business automation with AI, but the coding / software engineering environment is more structured and much faster to iterate in, which makes it a great application domain for AI. Results so far seem to show that AI can:

- Help junior software engineers learn faster when used as a training aid.
- Assist senior engineers with the heavy lifting of projects.
- Change the way software engineers interact with information on the web.

Overall, it provides the grounds for accelerating software engineering.

“AI Industrial Revolution” Vision – Are We There Yet?

So what's the verdict on this one? My take is that while fully replacing a human worker with a digital worker does not seem to work yet (or any time very soon), there are meaningful AI roles that AI Agents can already fill.
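Before moving on, the tool-use definition of an AI Agent given earlier ("an LLM-based system that is able to use so-called tools") can be sketched as a minimal loop. This is an illustrative sketch only, not any vendor's API: `fake_llm`, the tool names, and the message format are all made up for this example.

```python
# Minimal sketch of an "LLM that can use tools" agent loop.
# All names are illustrative: `fake_llm` stands in for a real model
# call, and the two tools are toy functions, not any vendor's API.

def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"

def lookup_order(order_id: str) -> str:
    return f"order {order_id}: shipped"

TOOLS = {"send_email": send_email, "lookup_order": lookup_order}

def fake_llm(messages):
    """Stand-in for an LLM: first decide on a tool call, then answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_order", "args": {"order_id": "42"}}
    return {"answer": "Your order 42 has shipped."}

def run_agent(user_message: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        decision = fake_llm(messages)
        if "answer" in decision:           # the model chose to reply
            return decision["answer"]
        tool = TOOLS[decision["tool"]]     # the model chose an action;
        result = tool(**decision["args"])  # this step is exactly the
        messages.append({"role": "tool",   # "what will it do next?"
                         "content": result})  # uncertainty in practice
    return "step limit reached"

print(run_agent("Where is my order?"))  # prints: Your order 42 has shipped.
```

The point of the sketch is the tradeoff above: the loop is powerful because the model picks the next action, and unpredictable for exactly the same reason.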
Even if AI stopped progressing today, there are tons of practical use cases to transform and accelerate using the new technology. So: more of an AI boom than an AI bubble!

What’s in It for You? How to Navigate This New World

In practical terms, no bubble. In investment terms, it's another story. Let's put it another way: the technology is real. LLMs bring a new way of building "computers" and apps. It's the “Decade of AI Agents”, in which we will transform culturally, socially, and operationally to incorporate new forms of autonomy into our personal and work lives. How can you make the most of it now? I'd like to share some thoughts:

- At a personal level
- As an SME
- As an enterprise

What’s in It for You as a Person

It's as if all the leading labs were building flying cars and giving away free kits to try flying in your backyard! There is cutting-edge technology being built every day, and literally everyone can use the latest version of it as it comes out! In fact, all major AI vendors offer a generous freemium tier and a ~$20 per month license for their personal assistant. This is a ticket to reinvent your interaction with technology, using AI as the ultimate interface to everything. And how should you start? My personal favorite: if you don't know how to code, learn how to code! Or “vibe code” something for fun :) Here’s a mini game I made while creating this presentation, inspired by the bubble talk and an arcade game I used to play when I was young: Bubble Arcade by Helvia.

What’s in It for You as an SME

As an SME you typically have no budget to build a tailored solution just for you, so you should not expect to get an AI Agent that solves exactly the problem you have in mind. However, you can use the built-in AI Agents of the Software-as-a-Service (SaaS) business applications you already use, as every major SaaS provider is racing to add an in-app AI Agent to their offering. For example, Atlassian offers Rovo to help with Jira, etc.
Also, a more hands-on / DIY direction is to learn how to craft automation pipelines with AI-ready automation tools – for example, using Make to set up email automations. There are hundreds of affordable tools with great ROI that can help you become the AI expert of your own SME.

What’s in It for You as a Large Organization

Let's first say that the AI industrial revolution is for you! At this stage of the technology and of the commercialization of LLMs, AI Agents are most impactful when specialized to actual business processes and requirements – and large organizations and enterprises are in the right position to make this happen as part of an AI transformation. The build-vs-buy decision here is about how to become an AI-first company: with internal resources, with partners, or with a hybrid approach. It’s the right time to build a competent internal team and find trusted partners to help you navigate this emerging landscape. At Helvia.ai we have been working with enterprises for almost 10 years now, and we are happy to start a journey together to bring AI Agents to the places where they make the most sense and bring the most value – send us an email at contact@helvia.ai to get started if you are interested.

Looking Ahead: Beyond LLMs

Artificial Jagged Intelligence (AJI)

First, I would like to highlight something about the type of AI we have now. I like the term Artificial Jagged Intelligence (AJI) that Andrej Karpathy coined a little more than a year ago:

"The word I came up with to describe the (strange, unintuitive) fact that state of the art LLMs can both perform extremely impressive tasks (e.g. solve complex math problems) while simultaneously struggle with some very dumb problems."

There are plenty of examples of "silly" mistakes by LLMs, such as concluding that 9.11 is larger than 9.8 (which has to do with interpreting 9.11 as a section number or a date rather than as a rational number). My favorite one, which you can try today, has to do with visual models and generating images.
If you ask a model to make an image transparent, most of the time it will replace the background with a checkerboard pattern – which is exactly what happened when I tried it earlier for this presentation on one of the latest and most powerful models. I find this very interesting, and it's a nice way to remind ourselves that the capabilities of today's models come to a great extent from reusing patterns they have "seen" at training time. Why does this happen? My guess is that most models struggle with generating transparent images because many images marked as transparent online are not available for direct download, so the training set did not include the actual transparent images as examples. Instead, the training set probably included the online previews of those images, which look like the real thing but with a checkerboard background added to highlight the transparency.

Neurosymbolic AI

And this brings me to a subject I am very fond of: Neurosymbolic AI. As with many things in AI, there are many definitions for this as well, and it's an emerging field. The easiest way to describe it is as mixing methods and techniques from two disciplines in AI: the neural networks camp and the symbolic logic camp. I've been around AI research long enough to have seen an AI winter and an AI spring, and to observe the shift of attention and power dynamics between these two camps. When I started my PhD, the symbolic logic camp was ruling, and the neural net camp was setting the stage for dominating the field, as it does today. It's interesting to observe these dynamics, as they influence and sometimes define the research that is visible to new researchers. For example, AI is now almost (if not 100%) synonymous with neural networks, and perhaps with Generative AI. But AI is not just this; it's a complex network of disciplines that is not easy to navigate if you are not an expert.
Notably, the AI literature includes a long line of research on symbolic logic, knowledge representation, and a different type of automated reasoning than the one we are used to seeing in the LLM space.

Now, as the LLM hype settles down, we see that it's not easy or immediate to solve everything simply by writing instructions via prompting, prompt engineering, context engineering, and so on. Moreover, we see prompts becoming more of a programming language with their own coding-like constructs, such as conventions about <final_answer_formatting>, <user_update_immediacy>, <frequency_and_length>, and more (as per the latest OpenAI prompt guide), as well as coding-based specifications for tools and APIs. Overall, specifying what an AI Agent should do is getting more structured, on top of "just say what you think in natural language".

This type of structure has been the cornerstone of the other camp of AI, with symbolic logic being a strict, formal way of writing instructions for AI systems. I strongly believe there is untapped potential in bringing tools and tricks from the symbolic logic camp of AI into the LLM space and the neural net camp. There is already work at different levels of synergy: from all the way down, training different kinds of deep network layers, to all the way up, having an LLM use a symbolic AI system as a "subroutine" or tool to solve a particular type of problem. For a quick intro, see this recent position paper in the upcoming AAAI conference: The Future Is Neuro-Symbolic: Where Has It Been, and Where Is It Going? I am excited by the whole spectrum of directions being explored, and I believe there are low-hanging fruits in a "service-like" synergy where an LLM consults one or more expert symbolic AI systems to get help.

Lots of Opportunities for Service-like Synergy

As an example, consider the way we typically handle memory for AI Agents.
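As a rough preview of the pattern described below (an LLM writes notes, a retrieval mechanism fetches the relevant ones, and the LLM revises them on new information), here is a toy sketch. Everything in it is hypothetical: the keyword-overlap scoring stands in for real semantic search with embeddings, and `revise` stands in for an LLM rewriting the affected notes.

```python
# Toy sketch of the typical LLM-managed agent memory loop:
# write notes, retrieve relevant ones, revise them on new information.
# Keyword overlap stands in for real semantic search (embeddings),
# and `revise` is a stand-in for an LLM rewriting the affected notes.

class AgentMemory:
    def __init__(self):
        self.notes = []  # persistent "blackboard" of past notes

    def write(self, note: str):
        self.notes.append(note)

    def retrieve(self, query: str, k: int = 2):
        # score by word overlap; a real system would use embeddings
        q = set(query.lower().split())
        scored = sorted(self.notes,
                        key=lambda n: len(q & set(n.lower().split())),
                        reverse=True)
        return scored[:k]

    def revise(self, old_fragment: str, new_note: str):
        # an LLM would typically regenerate the affected notes wholesale;
        # here we just drop notes mentioning the outdated fact
        self.notes = [n for n in self.notes if old_fragment not in n]
        self.notes.append(new_note)

mem = AgentMemory()
mem.write("customer Alice prefers email contact")
mem.write("ticket 17 resolved by restarting the billing service")
print(mem.retrieve("how was ticket 17 resolved", k=1))
mem.revise("prefers email", "customer Alice prefers phone contact")
```

Even at this toy scale you can see where the knobs are: how notes are scored for retrieval, and what happens to old notes when new information arrives, which is exactly where the symbolic literature discussed next has something to say.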
AI Agents often need to keep a persistent memory – a blackboard of past interactions and notes that can help them provide contextualized responses based on what worked (or didn't) in the past. This is typically done by using LLMs to write the notes, a retrieval mechanism to fetch relevant notes, and LLMs again to revise the memories. For simple use cases this works effectively, but as things get more complex you need to refine and tune how the memories are stored, how they are retrieved (e.g., using a more elaborate retrieval approach with semantic search), and how they are updated in light of new information.

At the same time, there is a large research literature on the symbolic logic side – on "knowledge representation" and "belief revision" – that formalizes the flow of information, the rules, the defaults and exceptions, and other useful machinery to make this bookkeeping robust, fast, and inexpensive! Consider also that even when LLMs work well for managing memories, they are often not an optimal approach in terms of speed and cost. For example, updating the set of memories through an LLM typically requires generating the whole new set of memories. Similarly, keeping a representation of the state of a complex task faces similar challenges. Both cases are central to how AI Agents work, and when complexity increases there are benefits in crafting an optimized approach. In this direction, there are lots of "deliberation algorithms" for drafting hypotheses and reaching conclusions that can be leveraged as tools within the thinking process of an LLM-powered AI Agent.

Anyway, I probably got too technical for the scope of this post!

Closing Thoughts

To close, let's reiterate a single message worth taking away with you today. It's an exciting time to be alive, technology-wise but also culturally and socially.
We seem to be the ones who will get to shape how AI technology is adopted and incorporated into our society, or who will at least set some pragmatic foundations. As for AGI, the biggest frontier seems to be the physical world, i.e., how AI systems can actually live and work in the space where we live and operate. To me, we will have reached AGI when a robot can navigate Athens and survive running errands in the city :)