I was thinking this morning that if I didn't know what I think I know about markets, and my experience captured merely the past few months, I'd be convinced that whatever happens on a given day can be expected, more often than not, to reverse the very next day.
Yesterday, for example, we watched the tech sector suffer what I'm sure too many investors deemed to be a meltdown, taking the Nasdaq 100 and the S&P 500 down with it... Interestingly, despite somewhat ugly action across our commodities exposures, we didn't feel it much, as places like consumer staples and healthcare saw sharp rallies... Our non-US exposure held up okay (relative to US tech) as well...
Well, per my opening paragraph, today we're experiencing the virtual opposite... While tech is bouncing back nicely, taking the aforementioned indexes up with it, as I type the majority (70%) of S&P 500 stocks are down on the session, and 54% of the Nasdaq 100's members are in the red as well... In both cases, 8 of the 11 sectors are definitely not catching the headline tailwind.
Suffice it to say that while we are finding value in pockets of the global equity market, the place where folks remain uber-concentrated hasn't been sending the healthiest of signals of late.
Speaking of that place, the following from our internal notes is concerning from a long-term perspective:
1/27/2025
Peter Berezin’s take on the “AI Meltdown” pretty much (and then some) captures the essence of what we’ve been warning about for months:
"Just because a new technology lifts productivity does not mean it will lift profits.
The internet is a classic example. The rollout of the internet helped boost US productivity growth by about one percentage point between the mid-1990s and the mid-2000s. However, it was only around the mid-2000s that companies started making serious money from the internet, by which point the dotcom bubble had already burst and productivity growth had started to go back down.
In other words, the productivity preceded the profits by 10 years!
When tech companies finally did figure out how to monetize the internet, they did so by harnessing two economic forces that allowed them to create natural monopolies for their businesses: 1) network effects; and 2) economies of scale.
Network effects stem from the fact that certain technologies become increasingly attractive when more people use them. Social media platforms are a classic example: Lots of people use Facebook and Instagram because many other people use them.
Bitcoin is another example. People value Bitcoin simply because other people value Bitcoin. There is nothing special about Bitcoin’s algorithm other than it was the first to come on the scene.
Network effects tend to apply to software in general. I am currently typing this note on a Windows PC - not because I like Windows but because that is what most of my colleagues use.
The problem for large language models is that they do not benefit from network effects to any great degree. If I use ChatGPT, it does not really matter to me if others use it too.
This brings me to the second force that sustains tech profits: economies of scale. Economies of scale occur in cases where there are high fixed costs and low marginal costs. Again, software is a good example: It takes a lot of money to produce a good piece of software but once the code is written, creating additional copies is almost costless.
Large language models do not fit neatly into this fold. As it turns out, creating large language models may not be that expensive (especially if they are based on open source technologies). In contrast, using them on an ongoing basis is expensive, not just because of the pricey chips required for inference, but also because of the energy costs needed to run all those data centers where those chips are housed.
In that respect, large language models are a lot like airlines. Airlines are indispensable for global commerce but never seem to make much money because of their high operating costs and the fact that they are largely indistinguishable from one another.
A few weeks ago, Sam Altman admitted that OpenAI is losing money on its $200 per month ChatGPT Pro plan. However, he spun this news in a positive light, emphasizing that OpenAI was losing money on the service only because people were using it so much. This raises the question: When will OpenAI be able to eventually raise prices to cover its costs? As the fracas over DeepSeek reveals, the answer may be “not anytime soon.”"
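To put some rough numbers on Berezin's economies-of-scale point, here's a back-of-the-envelope sketch; the figures and the little average_cost helper are purely illustrative assumptions of mine, not his. The idea: per-unit cost is roughly fixed cost spread over volume plus marginal cost, so a classic software business sees its unit cost collapse as volume grows, while an inference-heavy service's unit cost can never fall below its marginal cost.

```python
# Illustrative only: made-up numbers to show how cost structure, not scale
# alone, determines where unit economics can go.

def average_cost(fixed_cost: float, marginal_cost: float, units: float) -> float:
    """Average cost per unit = fixed cost spread over volume + marginal cost."""
    return fixed_cost / units + marginal_cost

# Classic packaged software: big upfront build, near-zero cost per copy.
software = dict(fixed_cost=50_000_000, marginal_cost=0.01)

# LLM-style inference: meaningful compute/energy cost on every single query.
inference = dict(fixed_cost=50_000_000, marginal_cost=0.50)

for units in (1_000_000, 10_000_000, 100_000_000, 1_000_000_000):
    sw = average_cost(units=units, **software)
    llm = average_cost(units=units, **inference)
    print(f"{units:>13,} units: software ${sw:.2f}/unit, inference ${llm:.2f}/unit")

# As volume grows, the software unit cost heads toward ~$0.01,
# while the inference unit cost can never fall below ~$0.50.
```

Which is, in a nutshell, the airlines analogy: scale alone doesn't rescue a business whose costs grow with every unit it sells.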
Stay tuned...