One way to boil down the present AI hype cycle is to say that ChatGPT demonstrated that language is not as intractable a problem as we previously thought. Phrased this way, the claim may sound unassuming, but it is anything but. Language is unstructured information, and machines have traditionally required mathematically precise input to perform a task. A transition to a world of this kind of “fuzzy input” is nothing less than revolutionary. The past few years have been an attempt to come to terms with everything this implies.
Language is deeply human. It’s perhaps the most important way in which we express ourselves. Now that machines have entered this domain, everything seems to be in motion. Work, education, and private life are all shifting fundamentally, and perhaps even the way we see ourselves on a more philosophical level. Isn’t it clear that these models directly challenge, or at least seem to challenge, some of the human qualities we are accustomed to priding ourselves on, such as creativity and logical thought?
In this post, I’ll sketch some ideas about what this means for knowledge workers. Even though the full effects of the AI transition are still propagating through our systems and societies, over the three-plus post-ChatGPT years we haven’t yet seen a qualitative shift in the models. They are getting better by the month, yes, but the progress feels incremental. The kind of quantum leap many experienced when they first tried ChatGPT through the web UI has yet to be repeated. Hence, it seems we are in a position to make some educated guesses about the kind of effects these models will have.
Expertise Echo Chamber
It’s often said that LLMs are like a mirror: what you see is essentially yourself. Perhaps the image is distorted in ways you wouldn’t have come up with on your own, but it’s still you. Even though this analogy cuts some corners, I think it captures quite well how I see things.
An implication for knowledge workers is that LLMs are essentially echo chambers for your own expertise. Using them, you can’t reliably perform tasks you couldn’t otherwise have done. Many people feel a huge productivity boost, but this is difficult to measure and seems highly task-dependent. My personal experience is that LLMs amplify both the good and the bad, and the craft lies in getting more of the former than the latter. How this works in practice, however, is subject to intense debate. It may well turn out that some tasks should not be left to LLMs at all.
What will the future professional be like?
An interesting exercise is to look at what search engines did to work, and use that to infer AI’s long-term effects. There is huge variation in the competence of search engine users; the best ones have various creative strategies for finding the information they need. If I had to guess, these same people are the ones who will thrive in the LLM era: LLMs will probably be an integral part of the future professional’s workflow, and those who do best will be the ones who know how to get the most out of them.
Some speculate that LLMs will democratize professional skills. I would conjecture the opposite: AI will chiefly benefit those who can creatively find ways to work with it, and these are likely the same people who would have done well anyway. Sophisticated input yields sophisticated output. The net result will be a multiplier for existing talent rather than a substitute for it.