The Current State of Technology - AI
The current state of technology is AI, or large language models to be precise. Everyone is excited about it. It will change our lives and disrupt entire markets, they say. Society is no longer driven by facts, but by narratives that fit whatever agenda is convenient, be it pleasing shareholders or the fear of missing out on something. It is good to be excited about something, but this all comes at a cost.
Not everyone is excited about AI. In the US, a study revealed that half its participants are nervous about AI, and the number of people who are enthusiastic is falling. I recognize myself in this development: my enthusiasm for AI spiked within the first few months after ChatGPT became a thing. After that, it became a tool much like every other piece of technology. My life does not revolve around AI; it is a utility that makes some of my work easier.
Compared with many other technological shifts, it is not technical people driving the AI shift. It is management, shareholders, and LinkedIn. “AI first” is a strategy almost every corporate workplace has heard of. The strategy is always the same: find a use-case for AI, in some cases backed by mandatory training on how to use it. Much of the motivation is to drive AI adoption and report that the business is “AI enabled”, showing that the company is ahead of the curve. This has gone so far that a copypasta on AI enablement is circulating on Reddit.
Not everything is doom and gloom with AI. Knowing when to use it and understanding its limitations is key to actually creating value with it. AI helps most when it has already been trained on use-cases similar to the one it is being prompted with. Asking about topics it has little training data on typically results in hallucinations; chances are you would not find an answer to your question on Google either. Several challenges remain unsolved with how we adopt AI:
- AI responses can be very convincing. How will we avoid being deluded into overestimating our own abilities?
- How will we maintain ownership of the generated output? Who is responsible when something breaks or is wrong?
- How will we drive innovation and create new programming languages, tools, and frameworks when we constrain ourselves to generating output from previous knowledge? How will juniors learn and develop into seniors?
- What is even the point of having a programming language when we constrain ourselves to vibe coding a project and disregard the output? Many people forget that we have had technologies enabling people without technical skills to create webpages for a long time. Remember Adobe Dreamweaver and Microsoft FrontPage?
Currently, far too few people speak up about these problems and ask critical questions. As a consequence of the top-down “AI first” decision, many are concerned that they might be replaced if they come across as too negative. Many people I work with have been surprised at how frequently I use AI tools despite asking critical questions about how we adopt the technology. Even though management pushes for AI first, it is also part of our responsibility as technologists to ask questions back in order to drive a sustainable and responsible adoption.
As a consequence of how the “AI shift” is being pushed, many people are talking about an AI bubble: circular investments between model vendors and hardware manufacturers, and uncertainty about when AI will deliver a return on investment. Many narratives circulate around this: the market will crash because white-collar jobs become automated thanks to AI; the market will crash when the circular investments pop as, for example, OpenAI struggles to find a sustainable business model; or the data center hardware supply chain will halt because of the war in the Middle East, leading to a vacuum in available AI compute.
Today the focus is shifting towards the costs of AI. AI compute has been heavily subsidized, and now the costs are starting to rise. OpenAI is shutting down Sora to “focus more on core products”, Anthropic bans OpenClaw after it started burning up available AI compute, and GitHub Copilot is cranking up its model cost multipliers. Microsoft and Meta are now laying off people to justify their AI investments. Several companies have started restricting their employees' access to AI compute due to costs. Other companies attempt to put lipstick on the pig by boasting about how innovative they are based on how many tokens they use, which has led to “token maxxing” becoming a trending term.
There is no doubt that AI is here to stay. The way AI is currently being pushed has too many similarities with other bubbles, such as the dotcom bubble, and the web is still here to this day. In ten years, many of the places we see AI today could be gone. How all of this pans out depends on how society adopts AI and which use-cases we find for it. But in the end, was it really worth it? Which narrative do you choose?