Why did artificial intelligence explode in many fields after ChatGPT? Where are the main areas of explosion? Is it the widespread application of scaling law? Or is it simply giving people confidence?
The AI explosion after ChatGPT is not the result of a single factor, but a perfect storm of technological breakthrough (Scaling Law), interaction revolution (the chat interface), and confidence-driven capital (investment and talent).
Simply put: Scaling Law provided the 'map' (telling us we would succeed), ChatGPT provided the 'car' (making it usable for the masses), and confidence brought the 'fuel' (funding and computing power).
Here is a detailed breakdown:
Yes, Scaling Law is the 'physics foundation' of this explosion, but it is evolving.
Its role (Determinism): Before ChatGPT, AI R&D was like 'alchemy': no one knew whether adding computing power would make models smarter. Scaling Law (a power law of the form L(N) ∝ N^(−α), where L is the loss and N the parameter count) proved that model performance follows a power-law relationship with computing power, data volume, and parameter count. This means that as long as you pile on resources, the model will predictably become more powerful. This predictability gave big companies the courage to invest billions of dollars in training a single model.
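The predictability can be made concrete with a toy power-law calculator. The constants N_C and ALPHA below are illustrative placeholders, not measured values; real constants are fit empirically for each model family.

```python
# Toy power-law scaling law: L(N) = (N_c / N) ** alpha.
# Both constants are invented for illustration, not measured values.
N_C = 8.8e13
ALPHA = 0.076

def predicted_loss(n_params: float) -> float:
    """Predicted training loss as a function of parameter count N."""
    return (N_C / n_params) ** ALPHA

# The decisive property: scaling up by any fixed factor shrinks the loss
# by a fixed, predictable ratio, so small pilot runs let a lab forecast
# the payoff of a billion-dollar training run before committing to it.
assert predicted_loss(1e11) < predicted_loss(1e8)
per_doubling = predicted_loss(2e8) / predicted_loss(1e8)
print(f"loss ratio per doubling of parameters: {per_doubling:.4f}")
```

Note that the ratio per doubling is constant regardless of where you start on the curve; that is the mathematical content of "pile on resources, get predictable gains."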
Its current state (From training to inference):
Phase One (Pre-training Scaling): In recent years, everyone was desperately making models bigger (stacking GPUs, stacking data).
Phase Two (Inference Scaling): Recently (e.g., OpenAI o1, DeepSeek R1), Scaling Law has extended to the inference stage. By letting the model 'think a bit longer' (Chain of Thought), massive intelligence gains can be achieved without retraining a massive model. This is the new growth point of the current explosion.
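One concrete form of inference-time scaling is self-consistency sampling: draw several reasoning chains and majority-vote the final answer. The sketch below substitutes a stubbed random 'model' for a real LLM, and the 70% per-sample accuracy is an arbitrary assumption.

```python
import random
from collections import Counter

def stub_model(rng: random.Random) -> int:
    """Stand-in for sampling one chain of thought from an LLM.
    Assume (arbitrarily) it reaches the right answer, 42, 70% of the time."""
    return 42 if rng.random() < 0.7 else rng.randrange(100)

def self_consistency(n_samples: int, seed: int = 0) -> int:
    """Spend more inference-time compute: sample n reasoning chains
    and majority-vote their final answers."""
    rng = random.Random(seed)
    answers = [stub_model(rng) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# A single sample is wrong 30% of the time; voting over 25 samples makes
# the aggregate answer far more reliable -- intelligence gained by spending
# inference compute, not by training a bigger model.
print(self_consistency(n_samples=25))
```

The same principle underlies 'think a bit longer': more sampled reasoning per question trades inference compute for accuracy with the model weights frozen.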
Confidence alone cannot sustain long-term prosperity. The biggest change brought by ChatGPT is reducing the barrier to using AI to zero.
LUI (Language User Interface): Previously, using AI required knowing Python and tuning APIs. ChatGPT made 'natural language' the programming language. This accessibility allowed people from all walks of life (doctors, lawyers, artists) to try AI directly, instantly igniting the application layer.
'Arms Race' Triggered by Confidence: Because Scaling Law provided assurance, it led to a global frenzy of capital pouring into infrastructure (NVIDIA GPUs). The success of this 'brute force aesthetics' broke the bottleneck of the past decades where AI could only create specialized models (Narrow AI) for tasks like 'facial recognition' or 'recommendation algorithms,' shifting towards Artificial General Intelligence (AGI).
The current explosion is no longer uniform; it's concentrated in areas where 'generation' and 'reasoning' capabilities can substitute for human labor at near-zero marginal cost:
Phenomenon: GitHub Copilot, Cursor, Bolt.new, DeepSeek.
Change: Programming is currently the field with the highest AI penetration. AI is not just code completion; it can now write entire projects from scratch. Code, being logically rigorous language, perfectly aligns with LLM capabilities.
Phenomenon: Midjourney (images), Sora/Runway (video), Suno (music).
Change: Marginal cost approaches zero. What used to take 3 days to design a poster now takes 30 seconds. This completely changes the production workflow for advertising, game assets, and film pre-visualization (Pre-viz).
Phenomenon: Perplexity, internal enterprise knowledge bases, Notion AI.
Change: Traditional 'keyword matching' search is obsolete, replaced by 'semantic understanding.' Enterprises no longer need employees to sift through 500-page PDF manuals; they can ask AI directly for precise answers.
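The shift from keyword matching to 'semantic understanding' can be sketched with embedding vectors and cosine similarity. The document names and 4-dimensional vectors below are toy stand-ins for a real embedding model:

```python
import numpy as np

# Toy semantic retrieval: documents and the query are dense vectors;
# we return the document with the highest cosine similarity.
# These 4-dim 'embeddings' are hand-made stand-ins for a real model.
docs = {
    "reset_password": np.array([0.9, 0.1, 0.0, 0.1]),
    "vacation_policy": np.array([0.0, 0.8, 0.3, 0.0]),
    "expense_report": np.array([0.1, 0.2, 0.9, 0.1]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_search(query_vec: np.ndarray) -> str:
    """Rank by meaning (vector similarity), not by keyword overlap."""
    return max(docs, key=lambda name: cosine(query_vec, docs[name]))

# A query like 'how do I get reimbursed for travel?' would embed near
# the expense_report vector even with zero keyword overlap.
query = np.array([0.1, 0.3, 0.8, 0.1])
print(semantic_search(query))  # -> expense_report
```

Production RAG systems work the same way at scale: embed the 500-page PDF once, embed each question, retrieve by similarity, and let the LLM compose the answer from the retrieved passages.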
Phenomenon: AlphaFold 3 (biology), GNoME (materials science).
Change: This is Scaling Law delivering a 'dimensionality reduction strike' (an overwhelming, category-breaking advantage) in non-language domains. AI's ability to predict protein structures accelerates new drug development, representing AI's most hardcore contribution to human well-being.
This is a positive feedback loop:
Scaling Law proved the path exists (technical foundation).
ChatGPT proved the product has users (business validation).
Confidence led to trillions of dollars in infrastructure investment (GPU clusters).
Excess infrastructure in turn forces people to find more application scenarios (application explosion).
We are currently in the stage of transitioning from 'simply believing in Scaling Law' to 'finding killer applications (PMF).'
If you are interested in a specific direction, I can provide a more in-depth breakdown:
Interested in technical trends: Would you like to understand how the latest 'Inference-side Scaling Law' (o1/R1 mode) works?
Interested in industry applications: Would you like to see specific implementation cases for AI programming tools (like Cursor) or enterprise knowledge bases (RAG)?
No, I want you to give me a panoramic view. Besides generative AI, what other progress is there? Isn't embodied intelligence also starting to get hot? This seems unrelated to LLM progress. And all sorts of other AI applications are also booming.
This is an excellent perspective. The media spotlight is indeed too focused on 'generative AI' (writing poems, drawing pictures, chatting), leading many to think that current AI is only about LLMs (Large Language Models).
The truth is: AI is experiencing a 'Cambrian Explosion,' and LLMs are just the biggest, most conspicuous 'dinosaurs' among them.
Besides generative AI, there are three completely different waves of AI, even with different technical logics, currently exploding. They solve problems not of 'creation,' but of 'survival,' 'discovery,' and 'decision-making.'
Here is an AI panorama without the 'generative filter':
The embodied intelligence you mentioned is indeed hot, and its relationship with LLMs is subtle—LLMs are just its 'eyes and mouth,' but controlling the body requires completely different 'cerebellum' technology.
Where is the explosion point?
Sim-to-Real (simulation to reality): Robots used to be clumsy because the real world is too complex and training costs were too high. The breakthrough now is platforms like NVIDIA Isaac, which allow robots to 'fall a hundred million times a day' in a virtual world, learn to walk, and then download that knowledge into a real robot. This is completely different from ChatGPT's 'text continuation.'
End-to-End Visuomotor Control: Previously, to make a robot grab a cup, you had to code in three steps: recognize cup -> calculate coordinates -> move arm. Now, you feed the camera image directly into a neural network, and it directly outputs motor voltage signals. This 'intuitive reaction' is the biggest recent breakthrough.
Relationship with LLMs: LLMs are responsible for understanding commands ('go get that bottle of water'), but the actual balance, grip strength, and obstacle avoidance rely on breakthroughs in reinforcement learning and control theory.
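The end-to-end visuomotor idea above ('image in, motor signal out') can be sketched in a few lines. The image size, motor count, and the randomly initialized linear policy are all invented; real systems use deep networks trained on demonstration data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "camera frame": a 64x64 grayscale image with values in [0, 1].
frame = rng.random((64, 64))

# End-to-end policy: one linear map from raw pixels to motor commands.
# Real systems stack convolutions and train on demonstrations; this
# untrained layer only illustrates the pixels-to-torques data path.
N_MOTORS = 7  # e.g. a 7-DoF arm (assumed for illustration)
weights = rng.normal(scale=0.01, size=(N_MOTORS, 64 * 64))
bias = np.zeros(N_MOTORS)

def policy(image: np.ndarray) -> np.ndarray:
    """Map a raw image directly to motor commands, with no hand-coded
    'detect cup -> compute coordinates -> plan arm path' pipeline."""
    commands = weights @ image.ravel() + bias
    return np.tanh(commands)  # squash into a bounded actuation range

action = policy(frame)
print(action.shape)  # one command per motor
```

The contrast with the old three-step pipeline is that every intermediate representation here is learned rather than engineered, which is exactly what makes the 'intuitive reaction' possible.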
This might be a field contributing more to humanity than ChatGPT, but because it doesn't 'chat,' it gets less public attention. The core technology here is not Transformer, but GNNs (Graph Neural Networks) and Geometric Deep Learning.
Where is the explosion point?
Biology (AlphaFold 3): Proteins are 3D structures, not strings of text. AlphaFold predicts protein folding by understanding geometric spatial structures, not linguistic logic. This directly changes the pharmaceutical industry.
Materials Science (GNoME): Google DeepMind used AI to predict 2.2 million new materials (equivalent to the total discovered by humans in the past 800 years). This relies on learning the relationships between atoms (graph structures), not reading papers.
Weather Forecasting (GraphCast): Current AI weather forecasting (like Huawei's Pangu Weather Model) can predict global weather for the next 10 days in 10 seconds, with accuracy surpassing supercomputers that run for hours. This is the combination of fluid dynamics equations and AI.
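The 'learning relationships between atoms' behind GNoME (and the grid processing behind GraphCast) boils down to message passing on a graph. A toy single-layer sketch, with an invented 3-atom graph and random untrained weights:

```python
import numpy as np

# Toy graph message passing: 3 atoms, each with a 2-dim feature vector.
# Bonds (edges) are given by an adjacency matrix. All values are invented;
# real GNNs learn their weight matrices from data.
features = np.array([[1.0, 0.0],   # atom 0
                     [0.0, 1.0],   # atom 1
                     [1.0, 1.0]])  # atom 2
adjacency = np.array([[0, 1, 1],   # atom 0 bonds to atoms 1 and 2
                      [1, 0, 0],
                      [1, 0, 0]])

rng = np.random.default_rng(0)
W_self = rng.normal(size=(2, 2))
W_neigh = rng.normal(size=(2, 2))

def message_pass(h: np.ndarray) -> np.ndarray:
    """One GNN layer: each atom mixes its own features with the sum of
    its neighbors' features, then applies a nonlinearity."""
    neighbor_sum = adjacency @ h
    return np.tanh(h @ W_self.T + neighbor_sum @ W_neigh.T)

h1 = message_pass(features)  # after one hop, each atom 'sees' its bonds
h2 = message_pass(h1)        # after two hops, atom 1 'sees' atom 2 via atom 0
print(h2.shape)
```

Stacking such layers lets information about geometry and bonding propagate across the whole structure, which is why graphs, not text strings, are the natural representation here.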
While everyone is frantically stacking computing power (Scaling Law), one faction is pursuing a 'small and beautiful' revolution, trying to mimic the low power consumption and high adaptability of biological brains.
Liquid Neural Networks (LNN):
What is this? A new architecture from MIT. Ordinary neural networks are 'fixed' after training, but this thing is like a liquid, able to change parameters in real-time during inference based on the environment.
Explosion point: It requires very few neurons (dozens) to control a drone flying through a forest, without consuming massive GPUs like large models. This is revolutionary in fields like autonomous driving and drones.
Spiking Neural Networks (SNN):
What is this? Unlike mainstream networks, which compute with continuous numerical values, this AI mimics the human brain by transmitting information through spikes (electrical impulses). When there is no signal, it stays silent, making it extremely power-efficient.
Explosion point: Advances in neuromorphic chips allow this AI to run on devices with extremely limited power (like phone sensors, implantable medical devices).
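The spiking mechanism can be sketched with a single leaky integrate-and-fire neuron, the textbook SNN building block. All constants are illustrative, not taken from any particular chip:

```python
import numpy as np

# One leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates input current, and emits a spike on crossing a threshold.
# All constants are illustrative placeholders.
DT = 1.0         # time step (ms)
TAU = 10.0       # membrane time constant (ms)
THRESHOLD = 1.0  # spike threshold
V_RESET = 0.0    # potential after a spike

def simulate(current: np.ndarray) -> list[int]:
    """Return the time steps at which the neuron spikes."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(current):
        v += (DT / TAU) * (-v + i_in)  # leak + integrate
        if v >= THRESHOLD:
            spikes.append(t)
            v = V_RESET
    return spikes

# Zero input -> zero spikes -> essentially zero energy: this event-driven
# silence is why SNNs suit power-constrained sensors.
assert simulate(np.zeros(100)) == []
print(simulate(np.full(100, 2.0)))  # constant drive -> regular spike train
```

Note the contrast with a conventional network, which performs the same dense arithmetic on every input regardless of whether anything interesting is happening.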
This is almost invisible to the public eye, but it's one of the most profitable fields.
Explosion point: Port scheduling, chip layout, logistics route planning. Previously done by operations research experts manually calculating formulas, now using Deep Reinforcement Learning (Deep RL) Solver Agents.
Example: Chip giants (like NVIDIA, Google) use AI to design chip floorplans. AI can even draw circuit layouts that are more power-efficient and compact than those by human engineers. This is purely logical and mathematical optimal solution searching, unrelated to language models.
Not entirely. If we view the current AI explosion as a war:
Main Front (Generative AI/LLMs): Indeed blasted open by Scaling Law (brute force).
Flanking Fronts (AI for Science / Embodied AI): Rely on Data Efficiency and physical priors. For example, embedding physical formulas (Newton's laws, fluid mechanics) into neural networks, making AI not just guess probabilities but conform to physical laws.
Guerrilla Warfare (Edge AI / LNN): Rely on Architecture Innovation, pursuing the smartest actions with the least computing power.
Answer in one sentence: The current explosion is a duet between 'Brute Force Aesthetics (LLMs)' and 'Precision Engineering (Science/Robotics).' The public only sees the noise of chatting, but the real industrial revolution is quietly happening in those silent machines (robots, protein prediction, chip design).
I'm talking about things after ChatGPT. Does ChatGPT mark the Cambrian Explosion? Or did the big explosion happen after ChatGPT? Or is there actually no real change, just hype, just more people doing it?
This is an excellent question that gets to the heart of the matter.
My answer is: ChatGPT is the 'meteorite' that struck Earth, drastically changing the environment and thereby triggering the subsequent 'Cambrian Explosion' of all things.
So, ChatGPT itself does not represent the entire explosion; rather, it ignited (Triggered) the explosion. Before it, the powder keg was already full; ChatGPT was just the match thrown in.
We can break down this process into three dimensions: The Latent Period (Before), The Spark (The Trigger), and The Chain Reaction (After).
During the time you felt 'nothing changed,' the AI field had actually accumulated massive potential energy, but it was all in the 'deep waters,' invisible to the public.
DeepMind's AlphaGo (2016): Proved AI could surpass humans in decision-making, but that was for Go, unrelated to you.
Google's BERT (2018): Made machines truly start understanding sentences, but it could only do 'fill-in-the-blank,' not chat with you.
OpenAI's GPT-3 (2020): Was actually very powerful, but it was like a command-line tool only geeks could use, with no interface. Lacking RLHF (Reinforcement Learning from Human Feedback), it often spoke nonsense.
Biology's AlphaFold (2020): Solved a 50-year problem in biology, but it was a celebration for scientists, with zero public perception.
Conclusion: Before ChatGPT, AI was already incredibly strong in specific domains (Narrow AI), but they were fragmented, difficult to use, and expensive.
ChatGPT (Nov 2022) did not invent the Transformer architecture or neural networks. It did one thing right: It lowered the barrier to using AI to '0'.
This is not just 'hype'; it's an 'Interface Revolution':
Before: Using AI required knowing Python and PyTorch, and even using GPT-3 meant writing carefully engineered prompts against a bare API.
After ChatGPT: You just need to know how to speak (type) to use the most cutting-edge computing power.
This change created the necessary conditions for the 'Cambrian Explosion':
User Influx: Only with users do you get data (Data Flywheel).
Capital Influx: Investors finally understood what AI could do, so they redirected money originally earmarked for the metaverse and Web3 into GPUs.
Talent Influx: People in physics, biology, robotics all realized: 'Wow, this thing can be my brain!'
You are right, many fields (like embodied intelligence) seem hot and appear unrelated to LLMs, but the relationship is huge. The post-ChatGPT explosion is the spillover of 'general cognitive capabilities'.
Look at this panorama to see what happened after ChatGPT:
| Field | Status Before ChatGPT | Change After ChatGPT (Why did it get hot?) |
|---|---|---|
| Embodied AI (Robots) | Only had a 'cerebellum': Robots walked steadily, did flips (Boston Dynamics), but couldn't understand human speech. You said 'get me a bottle of water,' it didn't know what water was or where to get it. | Got a 'brain' installed: Current robots (like Figure 01, Tesla Optimus) use LLMs as brains to understand instructions and plan tasks, then use the cerebellum to control the body. Without LLMs, robots would forever be just automated machines, not 'agents.' |
| AI for Science | Point breakthroughs: Train one model for protein folding, another for weather. No communication between them. | Generalist assistant: Like Google's AI Co-scientist (2025/2026), it doesn't just calculate data but can read papers, propose hypotheses, design experiments like a scientist. This is the generalization of LLM reasoning ability in scientific research. |
| Software Engineering | Code completion: Helped write the next line of code. | AI Programmer: Like Devin or Cursor, can write an APP from scratch. This is because LLMs understand long logical chains. |
There is hype, and there is real substance.
What is hype? Companies that wrap ChatGPT to make a 'weekly report generator' and claim a $100 million valuation, or make a PPT saying 'we will build a general robot' but have no demo. These are bubbles, and many are bursting now (as evidenced by many AI unicorn failures in 2025-2026).
What is real substance?
Scaling Law is still effective: Models are still getting smarter (GPT-4 -> o1 -> GPT-5/Gemini 2.0), and the intelligence ceiling is still rising.
Marginal cost approaches zero: Previously, only big companies could afford translators; now anyone can translate for free. Previously, only Hollywood could do special effects; now Sora can. This release of productivity is a real physical change.
ChatGPT is not the 'entirety' of the Cambrian Explosion; it is the 'cyanobacteria bloom' that caused the sudden increase in oxygen levels.
It made 'natural language' the unified control interface for all machines.
The 'reasoning capability' (Reasoning) it brought became the general-purpose brain for robots, research assistants, and programmers.
Therefore, the current explosion is essentially 'all industries trying to install a ChatGPT-like brain into their own operations.' This is the 'everything is AI' you see.
If you want to intuitively feel how this 'brain' is being installed into a 'body,' and that this is not just hype, I highly recommend watching this video:
Figure 01 OpenAI Speech-to-Speech Reasoning
Reason for recommendation: This video was released after the ChatGPT explosion and perfectly demonstrates how a 'large model (OpenAI)' acts as a brain, directing 'embodied intelligence (Figure robot)' to understand, decide, and act. After watching, you'll realize that without the language understanding ability of LLMs, the robot's previous actions (handing an apple, tidying plates) would be impossible through mere mechanical control.
Aren't there also autonomous driving, industrial AI, and all sorts of other miscellaneous AI applications?
Your intuition is very sharp. The media, chasing traffic, has hyped 'generative AI' as the entirety of AI, obscuring the larger, more hardcore iceberg of 'non-generative AI.'
The autonomous driving, industrial AI, and other applications you mentioned are not only exploding, but their relationship with ChatGPT (LLMs) is very interesting: some belong to 'close relatives' (using similar underlying architectures), while others belong to 'parallel evolution' (already strong, just now better funded).
Here is a panorama of the invisible AI explosion beyond ChatGPT:
This is absolutely a major area of explosion, and it has a direct blood relation to ChatGPT's technical principle (Transformer).
How it was done before (Rule-based): Engineers wrote hundreds of thousands of lines of code: 'if see red light, brake; if see person, avoid.' This made cars stupid, failing in unseen scenarios.
How it's exploding now (End-to-End): Represented by Tesla FSD v12, which deleted hundreds of thousands of lines of rule-based code and switched to neural networks.
The principle is the same as ChatGPT's: ChatGPT is 'feed it many books, learn to predict the next word'; FSD is 'feed it many hours of human driving video, learn to predict the next action (turn left or brake).'
Why is it hot? Because Scaling Law works here too! Feed enough video data, and the car learns to drive like an experienced driver on its own, without being taught 'what is a red light.'
**Relationship with LLMs:** Same origin. They both use the Transformer architecture and are using 'big data brute force' to crack intelligence.
This field is not sexy and can't be shared on social media, but it's where the real economy is being transformed. The explosion here belongs to 'parallel evolution + hybrid enhancement'.
Predictive Maintenance:
Explosion point: Previously, machines were repaired after breaking. Now, AI listens to machine vibration sounds (auditory) and looks at infrared temperatures (visual) to tell you 'bearing will fail in 3 days' before it breaks.
Technical core: Not LLMs, but time-series analysis.
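A minimal sketch of that time-series idea: learn a 'normal' vibration band from healthy readings, then alarm when later readings drift out of it. The sensor trace and all thresholds below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic vibration sensor: stable for 300 readings, then a bearing
# starts degrading and the vibration level slowly ramps up.
healthy = rng.normal(loc=1.0, scale=0.05, size=300)
failing = rng.normal(loc=1.0, scale=0.05, size=100) + np.linspace(0, 0.8, 100)
signal = np.concatenate([healthy, failing])

def first_alarm(series: np.ndarray, baseline_len: int = 200,
                z_limit: float = 6.0) -> int:
    """Fit a 'normal' band from an initial healthy baseline, then return
    the index of the first later reading outside it (-1 if none)."""
    mu = series[:baseline_len].mean()
    sigma = series[:baseline_len].std()
    for t in range(baseline_len, len(series)):
        if (series[t] - mu) / sigma > z_limit:
            return t
    return -1

alarm = first_alarm(signal)
print(alarm)  # fires during the slow ramp, before outright failure
```

Real systems replace the z-score with learned models over many sensor channels, but the economics are the same: the alarm arrives days before the breakdown, not after it.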
Digital Twins & Industrial Design:
Explosion point: Like NVIDIA's Omniverse. Before building a factory, simulate it entirely in a computer with AI to see if robots will collide or assembly lines will clog.
Relationship with LLMs: The human-machine interface changed. Previously, adjusting industrial robot parameters required experts. Now, industrial AI integrates ChatGPT capabilities, allowing workers to directly ask the machine: 'Why did you stop?' The machine replies: 'Because port 3 is blocked, suggest reducing speed.' LLMs become the 'mouth' of complex industrial machines.
These fields, while not as famous as ChatGPT, are having their own 'highlight moments.'
Explosion case: Google DeepMind's GraphCast and Huawei's Pangu Weather Model.
Astonishing point: Previously, forecasting weather required supercomputers to calculate fluid dynamics equations for 3 hours. Now, AI can do it in 10 seconds, and predict typhoon paths more accurately than traditional methods.
Principle: Using GNNs (Graph Neural Networks) to process Earth's latitude-longitude grid. This doesn't rely on chatting, but on hardcore mathematics.
Explosion case: High-frequency trading and risk control.
Invisible battlefield: Current AI can read global news, tweets, and earnings reports generated every second, analyze market sentiment, and place orders at millisecond speeds. This is not simple quantitative trading but sentiment trading based on natural language understanding (NLP).
Relationship with LLMs: Strongly related. The powerful reading comprehension of LLMs gives financial AI the ability to 'trade stocks by reading news.'
Explosion case: AlphaGeometry, AlphaProof.
Astonishing point: AI systems designed for math Olympiad problems. They don't rely on 'guessing' but combine neural networks with symbolic reasoning. This marks AI beginning to conquer the crown of human logic: mathematical proof.
If we compare the AI explosion to an 'Industrial Revolution':
ChatGPT/LLMs are the 'steam engine': It's the most general-purpose power source, usable by anyone (writing code, copywriting, customer service).
Autonomous driving is the 'train': It uses principles similar to the steam engine (Transformer), but it's a specialized vehicle meant to transport people from A to B.
Industrial AI is the 'spinning jenny': It existed long ago, but now, because of better parts (cheaper chips) and a better control panel (LLM-powered interface), its efficiency has doubled.
So your feeling is correct: Besides generative AI, perception AI (seeing roads, hearing faults), decision-making AI (trading stocks, scheduling logistics), and scientific AI (predicting weather, proteins) are all exploding.
They all share the benefits of this wave: cheaper computing power (GPUs), more effective data training methods (Scaling Law), and global confidence in AI investment.