Chasing AI, Losing Meaning

Community Article Published January 28, 2026

Generated fantasy illustration

Have you noticed that in recent years, every few days or so, a brand-new “super impressive” AI product pops up? First it was ChatGPT, then immediately Gemini, Grok, Claude, Midjourney, Sora, followed by all kinds of Agents, Clawdbot, and other fresh things. The moment one appears, the community explodes—X, Bilibili, Zhihu, every group chat is flooded with discussion. Everyone dives in: studying, reproducing, comparing parameters, running demos, terrified of missing out on something.

Very quickly, all sorts of “from zero to mastery” tutorials start appearing on Xianyu, Knowledge Planet, course platforms—prices are cheap, they look extremely cost-effective. At the same time, cloud service providers seize the moment and launch one-click deployment and one-click invocation products, making it seem like just a few clicks are enough to put you at the very forefront of the AI wave.

So you also throw yourself in, spending huge amounts of time: setting up environments, watching tutorials, trying models, tweaking configurations. After you’ve messed around for a while, you suddenly realize one problem: I actually don’t know why I’m using this AI. It’s undeniably powerful, but it doesn’t seem to solve any clear problem of mine. The reason I’m researching it is mostly because “everyone else is researching it.” If I don’t, it feels like I’ll be left behind by the times. In the end, what you learn is a pile of tool names and operating steps, but what’s left is a vague sense of exhaustion and spinning-in-place: technology iterates at breakneck speed, the feeling of participation is strong, but the sense of meaning is extremely weak.

Why does this happen?

It’s mainly a collective psychological state of anxiety and confusion, being swept along by the technological tide.

First layer: the core psychology is “I’m afraid of missing out, but I don’t even know what I want.” It’s not driven by demand, but by trend. You weren’t pulled in by a “problem”; you were pushed forward by the “hype.”

Second layer: group conformity and technological worship. Seeing so many people researching it, you also research it. Over time, “useful” gets quietly replaced by “popular.”

Third layer: the “illusion of progress” created by low-cost participation. This produces a psychological misperception: I seem to be improving, I seem to be at the cutting edge, I seem not to have been eliminated. But in reality: no real business scenario, no long-term usage motivation, no internalization into actual capability—this is “busywork with tools”: very busy, but no direction.

Fourth layer: mild technological nihilism. You realize you don’t know why you’re using this AI. You keep chasing new things, but none of them truly enter your life. This is close to a mild form of technological nihilism.

I’m desperately trying not to be left behind by the times, but I don’t even know where I’m running toward.
It’s not that technology is being denied; it’s that technology is moving too fast, and people’s sense of meaning simply can’t keep up.

Community

I found that as I continued, I ended up using scientific-research AI much more, since it better matched my needs, and it helped me learn exactly the things I was trying to learn. I often stopped wasting time on videos, instead having them quickly summarized in text. It's easy to feel lost in a lot of it, or like you're being dragged along, with "AI slop" and so much (often valid) criticism. But as a research tool at the very fringe of major advances, or for developing novel scientific and engineering ideas, it has served me very well, and I can only see it getting better at serving the scientific community. Once it learns the language of a science, it can produce valid results it can't explain, in fields like electronics; and although that's somewhat of a black box, many of our scientific advances are of that type: we simply used what happened to work.

Article author

You are right.


I just found out about Chai Discovery from AI, and I've played around with AlphaFold Server. The tools are getting impressive. When people start making good money off better drugs and solving environmental problems using AI, there will be less question about the benefit, and companies may actually start to profit from AI in general. But it must become much more efficient and better. In any case, the focus on scientific AI should help us all. Boltz-2 and other focused projects will end up embedded in the processes we already use. AI Scientist will likely be used by all sorts of scientists to design experiments. I'll be using Consensus, Elicit, and Scholar Deep Research to stay up to date, because they index recent scientific articles and search the news quite well. Medical AI will probably one day hold a degree, rather than just diagnosing from imaging far better than experienced doctors can.
