Tech – again – stole the market spotlight in 2024 thanks to AI and investor focus on the Magnificent 7.
Can the tech giants repeat the feat in 2025?
Mark Hibben, Investing Group leader behind the Rethink Technology service at Seeking Alpha, suggests key companies in the sector will continue their 2024 successes this year.
Nvidia (NVDA) stock ended 2024 with a 180% gain. Hibben says the chip company may continue to dominate its data center and consumer markets, and he expects more gains for shares in 2025. Not so clear: How Intel (INTC) competes in the chip market and if it really follows through with its foundry strategy.
Investors also will closely follow Google (GOOG) (GOOGL) and its antitrust case, which Hibben thinks could end up with a resolution that’s friendly for the search giant. And the markets will watch how Apple (AAPL) traverses tricky political and trade issues. Hibben expects more clarity on Apple Intelligence, which he says could evolve into a bigger opportunity for the tech giant.
Hibben’s outlook for 2025 follows:
Seeking Alpha: What else can be said about Nvidia? Obviously, the company remains the tech darling for investors. Does it look as if the momentum will continue into 2025?
Mark Hibben: Nvidia investors have much to look forward to in 2025. With Nvidia’s flagship Blackwell AI accelerator in full production, the Data Center segment will likely continue its impressive growth. In addition, Nvidia is likely to see continued, albeit more modest, growth in consumer markets through the release of its new RTX 50 series discrete GPUs and through a new series of ARM processors for Windows Copilot+ PCs.
On Data Center growth:
There’s natural anxiety among Nvidia investors about whether the explosive revenue growth of the Data Center segment can continue into calendar 2025, especially since Nvidia’s guidance for its fiscal Q4 suggested a downturn in the rate of growth. Considering that guidance, revenue growth in the Data Center segment for fiscal 2025 (ending January 2025) will be a mere 137%.
The rate of growth may well moderate next year, and I’m currently modeling a little over 50% revenue growth in the Data Center for fiscal 2026. Demand is still very strong for generative AI platforms in the cloud and enterprise data centers.
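The framing above can be sketched with some quick compound-growth arithmetic. The figures below are illustrative assumptions for the sake of the example, not company guidance:

```python
# Illustrative sketch of the growth trajectory discussed above. The FY2024
# base figure is an assumption for illustration; the growth rates are the
# ones cited in the text (~137% for fiscal 2025, ~50% modeled for fiscal 2026).
fy2024_dc = 47.5e9             # assumed FY2024 Data Center revenue, USD
fy2025_dc = fy2024_dc * 2.37   # +137% growth implied by guidance
fy2026_dc = fy2025_dc * 1.50   # +50% modeled growth for FY2026

print(f"FY2025 Data Center revenue: ${fy2025_dc / 1e9:.1f}B")
print(f"FY2026 Data Center revenue: ${fy2026_dc / 1e9:.1f}B")
```

The point of the sketch: even with growth decelerating from triple digits to ~50%, the absolute dollar gain in fiscal 2026 would still exceed the entire segment's revenue from two years earlier.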
Next year, Nvidia’s competition, at least in the first half, will still be relatively weak. Advanced Micro Devices (AMD) will offer the AMD Instinct MI325X:
And Intel will offer the Gaudi 3:
Betraying its heritage as an accelerator for supercomputing, the MI325X excels at high precision floating point calculations. But these number formats, 32 bit and 64 bit floating point numbers, FP32 and FP64, are rarely used for AI. AI models have been moving to progressively lower precision numbers, such as FP16 and FP8, and here, Blackwell’s performance towers above its competition:
These TOPS (tera (10^12) operations per second) ratings are provided by the manufacturers and represent theoretical maxima. For operational AI performance, I prefer to rely on ML Commons benchmarks. However, few companies besides Nvidia post their results to ML Commons. Google posts their results for their custom Tensor Processing Units (TPUs), and there are a few inference results for the AMD Instinct MI300X and Intel Xeon CPUs.
The lack of postings for AMD Instinct, Gaudi, or Intel Ponte Vecchio probably sums up the competitive landscape better than the raw TOPS ratings. A dark horse competitor to Nvidia which also has never posted results to ML Commons is Cerebras (CBRS).
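To make the spec-sheet comparison above concrete: a TOPS rating is a theoretical peak, and peak throughput roughly doubles each time the number format is halved in width, which is why the move to FP16 and FP8 matters so much. The ratings below are hypothetical placeholder values, not any vendor's actual numbers:

```python
# Why the precision format matters when reading vendor TOPS ratings.
# TOPS = 10**12 operations per second, a theoretical best case.
def time_for_ops(total_ops: float, tops: float) -> float:
    """Theoretical best-case seconds to execute total_ops at a given TOPS rating."""
    return total_ops / (tops * 1e12)

# Assumed peak ratings for a hypothetical accelerator: FP8 runs at twice
# the FP16 rate, the typical pattern when the format width is halved.
fp16_tops = 1000.0
fp8_tops = 2000.0

ops = 1e18  # a large operation budget, e.g. one training step of a big model
print(time_for_ops(ops, fp16_tops))  # 1000.0 seconds at FP16
print(time_for_ops(ops, fp8_tops))   # 500.0 seconds at FP8
```

Real-world performance falls well short of these theoretical maxima, which is exactly why independently run benchmarks like ML Commons are the better yardstick.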
Cerebras has created a “wafer scale” GPU accelerator which claims a huge size advantage over Nvidia:
The Cerebras chip is made by stitching together the 84 different zones which would normally be separate devices during the lithography process. It’s a difficult process that hasn’t been accomplished before.
Cerebras also claims a huge performance advantage over Blackwell in a blog post:
The most comparable Nvidia system is the rack system consisting of 36 Grace-Blackwell superchips (72 Blackwell B200), the GB200-NVL72:
The previous generation of Cerebras processors, the CS-2, also boasted impressive performance compared to Nvidia’s H100 “Hopper.” So why hasn’t Cerebras cornered the market?
Probably, it’s due to the cost of the systems. Building conventional GPUs on a wafer is probably much less expensive than combining all those GPUs so that they work together as a single chip on a wafer.
Cerebras filed for an IPO on Oct. 1. In the filing they revealed that they only had $136.4 million in revenue in the first six months of 2024 and lost $66.6 million. So it’s probably going to be a few years at least before Cerebras can make its wafer scale chips profitably. Until then, it can’t afford to make many.
At Computex this year, AMD revealed that the MI350 series would come out in 2025, but no details about exactly when or what the performance of the new series would be. And that’s it for the competitive landscape going into 2025. Is it any wonder that Nvidia stated back in October that Blackwell is “sold out” for the next 12 months?
On Growth in consumer markets:
Given the explosion in Data Center revenue the past couple of years, it’s easy to overlook the fact that Nvidia’s other market segments have been growing at a healthy rate. The Gaming segment, where Nvidia posts the revenue for its consumer GPU add-in boards, popular with gamers, grew by 15% in fiscal 2024 and will likely grow by about the same in fiscal 2025.
Nvidia is about to announce a new series of cards, dubbed the RTX 50 series, with the RTX 5090 replacing the now venerable RTX 4090 as the flagship. There are, of course, the usual complaints in advance from reviewers about how expensive the cards will be.
But Nvidia is simply doing what any well run business should do, charging what the market will bear. Nvidia has become so dominant in PC gaming that AMD’s SVP and General Manager of the Computing and Graphics Business Group, Jack Huynh, indicated in an interview with Paul Alcorn of Tom’s Hardware that AMD was bailing out of the high-end GPU competition with Nvidia. Alcorn asked Huynh:
There’s been a lot of anxiety in the PC enthusiast community that, with this massive amount of focus on the data center that AMD has created and your success, there will be less of a focus on gaming. There have even been repeated rumors from multiple different sources that AMD may not be as committed to the high end of the enthusiast GPU market, that it may come down more to the mid-range, and maybe not even have flagship SKUs to take on Nvidia’s top-of-stack model. Are you guys committed to competing at the top of the stack with Nvidia?
Huynh replied:
I’m looking at scale, and AMD is in a different place right now. We have this debate quite a bit at AMD, right? So the question I ask is, the PlayStation 5, do you think that’s hurting us? It’s $499. So, I ask, is it fun to go King of the Hill? Again, I’m looking for scale. Because when we get scale, then I bring developers with us.
So, my No. 1 priority right now is to build scale, to get us to 40 to 50 percent of the market faster. Do I want to go after 10% of the TAM (Total Addressable Market) or 80%? I’m an 80% kind of guy because I don’t want AMD to be the company that only people who can afford Porsches and Ferraris can buy. We want to build gaming systems for millions of users.
I think Huynh’s argument is a little specious. I own an RTX 4090 and enjoy playing Cyberpunk 2077 at 8K, but I don’t own a Porsche or Ferrari.
The market share consideration is not unreasonable. As of 2024 Q1, Nvidia’s share of the add-in board market was 88%, according to Jon Peddie Research, via Tom’s Hardware. And according to Steam’s Hardware Survey as of August, 76.5% of Steam users had Nvidia GPUs.
In fact, I think AMD has diverted resources to the Data Center effort and minimized investment in gaming GPUs. Once again, the RTX 50 series will see little competition from AMD. Or from Intel for that matter. Intel’s latest “Battlemage” GPUs, released on Dec. 3, have been applauded by reviewers for their value, but no one is claiming that they will compete at the high end.
In consumer GPUs, Nvidia once again stands alone at the high end. The RTX refresh will undoubtedly spur sales and growth next year. And Nvidia is thought to be preparing to enter the market for Microsoft Copilot+ PCs.
Nvidia has long had a line of SOCs (Systems on Chip) that feature ARM architecture CPU cores and its own GPU architecture sections. These have mainly been targeted at robotics, automotive driver assistance, and self-driving. Given their strong GPU and AI capability, these would seem to be ideal for the new generation of AI PCs.
Rumors to that effect first appeared about a year ago, and a more recent report from Oct. 31 confirms that Nvidia plans to release a consumer ARM-based SOC by September 2025 for Windows PCs.
This could greatly expand the sales volume for its ARM SOCs, but Nvidia will not have this market to itself. It will have vigorous competition from Qualcomm (QCOM), AMD, and Intel (INTC). But Nvidia will have a powerful advantage in its on-board GPUs as well as AI capability.
Overall, I expect continued revenue and earnings growth in both the Data Center and in consumer-driven markets such as Gaming, PCs, and automotive. I continue to be long Nvidia and rate it a Buy.
Seeking Alpha: The other side of the spectrum is Intel. With CEO Pat Gelsinger out, what’s next for this beleaguered company?
Mark Hibben: I’ve seen it suggested that firing Gelsinger was a mistake and that he might even be reinstated. Whatever befalls at Intel in the future, I’m quite certain that Gelsinger will not be returning to the company.
Intel investors should assume that the board did not act capriciously in ousting Gelsinger, even though investors have been left in the dark regarding its reasons. This lack of transparency is an ongoing problem with Intel’s corporate culture which needs to be corrected by the next CEO.
Investors and analysts are left to sift through the available data in order to arrive at a viable hypothesis for Gelsinger’s removal and Intel’s future prospects. I summarized much of this data in my article Intel: The Problems Gelsinger Leaves Behind.
Much of this data is incontrovertible: Nvidia’s disruption of the data center and its huge Data Center segment revenue growth compared to Intel’s relative stagnation. Nvidia’s fiscal Q3 Data Center segment revenue of $30.77 billion dwarfed Intel’s total Q3 revenue of $13.284 billion.
These facts speak plainly to the failure of Intel’s own data center GPU accelerator, Ponte Vecchio, now called the Data Center GPU Max 1550. Released in 2022, it should have been perfectly timed to capture a major share of the data center AI market. But Intel doesn’t even have it listed in its processor data archives any more, which is odd considering that the processor would normally have had a lifespan of several years.
And the latest Gaudi 3 AI accelerator isn’t going to help either. As I reviewed above, its specs indicate that it’s completely inadequate to compete with Blackwell or even the MI325X. This leaves Intel with nothing to counteract Nvidia, and to a lesser extent AMD, in data center GPUs until its next generation of GPUs, dubbed Falcon Shores, is released sometime in 2025.
In March 2023, Intel updated its data center GPU roadmap, indicating that Falcon Shores would be delayed from 2024 into 2025. Intel made big promises for Falcon Shores:
But Intel made similar promises for Ponte Vecchio and came up short, and late. I would not bet on Falcon Shores to staunch the hemorrhaging in the Data Center.
In advanced semiconductor process development and Intel Foundry, the facts are less clear cut, but no less damning. By all appearances, Intel is staying the Gelsinger course, and by implication maintaining that “5 nodes in 4 years” is “on track.”
This at least was what Intel seemed to want to convey at the UBS Global Technology and AI Conference in December, at which David Zinsner, Interim co-CEO, and Naga Chandrasekaran, Chief Global Operations Officer and GM of Foundry Manufacturing, gave an interview. Zinsner began by saying:
. . . the Board was pretty clear that the core strategy remains intact. We still want to be a world-class foundry. We want to be the western provider of leading edge silicon to customers and that remains our goal. But we also understand that it’s important for the No. 1 customer of foundry to be successful in order for foundry to be successful. And so the board wants to also put emphasis on execution around the product side of the business to make sure that the foundry business remains successful.
I think this should be qualified as “the core strategy remains intact, for the time being.” Or until Intel finds a new CEO. I thought it was interesting that Zinsner seemed to put the burden on the Products group to generate more sales. Yet, in Q3, almost all the segment operating loss was in Foundry.
Intel’s Foundry strategy has a real problem, which is being cost competitive with mature foundries such as Taiwan Semiconductor Manufacturing Company (TSM). And it doesn’t help that Intel is still playing catch up in advanced processes. As I discussed in detail, the cancellation of the Intel 20A process left it with no recourse but to use TSMC’s “3 nm” N3 process for its latest Lunar Lake Copilot+ PC processors as well as its Arrow Lake desktop processors.
On process node development, Chandrasekaran was far less sanguine than Gelsinger had been. Tim Arcuri of UBS asked:
But can you talk a little bit, A, about where 18A is vs. where you think it needs to be to sort of intersect the second half of ’25 ramp. And B, the thing that I hear from some of the customers, or some of the prospective foundry customers, is that 18A is still a bit more geared toward HPC. And as a broad foundry node, the customers that I talk to are sort of like, 18A is great if you have an HPC application; 14A might be the node that’s more broadly applicable to external foundry customers. Can you talk about that as well?
Chandrasekaran replied to the first part:
So when Pat announced the defect density D0 less than 0.4, it was a point in time and it was to give the indication that we are progressing as expected. If I look at it today, we are progressing. There are several milestones that we have met and there are still many milestones ahead for the technology development. And if I wear my technology development hat for a minute, there’s always challenges when you’re introducing new technology and there’s ups and downs. But what I would say is there’s nothing fundamentally challenging on this node.
Now it is about going through the remaining yield challenges, defect density challenges, continuing to improve it, improving process margin and getting it ramped. Will there be challenges? There will be, but I think we are progressing. And next year, as I look at it, primarily the first half will be getting the node into engineering samples into our customers’ hands and getting the feedback and starting a ramp in Oregon. And the second half of 2025, our milestone is certifying the node, getting it ramped in Arizona and getting the product on the shelves so that customers can buy it. So that’s the milestones and we are working towards meeting all those milestones over the next year. It’s very critical for us.
What’s notable in the reply is that Chandrasekaran never uses the word “expect” with regard to 18A readiness. Instead, he states that they have goals for production, first half of 2025 sampling, then a production ramp in the second half. At the same time, he acknowledges remaining yield and defect challenges.
So does Intel have an 18A node that can yield sufficiently at production volumes to be a viable manufacturing process? At this juncture, I think the obvious answer is no.
With regard to the second part of the question, it appears that Arcuri is aware that prospective customers are already dissatisfied with 18A and have put off any commitments until 14A is ready. Chandrasekaran continued:
It [18A] can benefit mobile depending on how the designs are done, but because the customer engagement is more later, it doesn’t address the full TAM. And 18A, our biggest customer for the next two, three years is still Intel products, which goes back to what Dave was saying. The Intel products, we know the demand, we know what needs to happen and our focus is to ramp it and continue to get more customers on 18A. But all this learning is getting implemented into 14A.
So as 14A comes in, there will be a broader market that 14A will address, including compute and mobile and other applications and also how the PDKs are done so that it’s not just for with Intel Focus, but it’s also focused on the broader ecosystem taking 14A and applying it to their designs.
Chandrasekaran acknowledges that Intel doesn’t have many customers for 18A but expects more interest in 14A. When might 14A be ready? He doesn’t say.
As I keep saying, what’s important in process node development is what a manufacturer can deliver in terms of high volume production, not what it can show in a marketing presentation. Chandrasekaran was refreshingly honest in acknowledging the challenges that Intel still faces in bringing 18A to full production. But he’s also hemmed in by the expectations of his management, which prevented him from acknowledging the obvious: 18A probably won’t be ready for high volume production next year.
Here, I’ll go out on a limb and make some inferences and predictions. My inference is that the board has given Chandrasekaran until the end of 2025 to deliver high volume production. This was done not because it was satisfied with the state of progress in process nodes, but because Intel had already invested so much money in advanced process and manufacturing.
I predict that if 18A mass production doesn’t arrive next year, the board will pull the plug on the Foundry strategy, simply because it doesn’t see a way to become competitive. One could argue that this would be premature and short sighted, but Intel’s bottom line probably can’t sustain more than another year of the Foundry strategy without some sign of a payoff.
This doesn’t necessarily mean that Intel would give up on advanced manufacturing. I’ve argued that Intel’s efforts to become a Foundry actually made its manufacturing less efficient in the near term. However, there will certainly be a very strong monetary temptation to offload Foundry and go fabless.
How this plays out, time will tell. I continue to rate Intel a Sell based on its poor financial performance and uncertain future.
Seeking Alpha: Google’s antitrust case was a big development for the company. How does this impact the search giant in 2025 and beyond?
Mark Hibben: Predicting how the incoming Trump Administration will handle an antitrust case initiated by the Biden administration is difficult. Normally, conservatives are hostile to business regulation.
However, Trump supporters in the media have expressed hostility toward so-called “woke” companies, and, regardless of how “woke” is defined, such commentators would likely place Google in this category. As such, Google may not garner much sympathy from the new administration.
Subsequent to my article on Google’s loss in the antitrust case, the DOJ requested rather draconian remedies, including the spin-off of the Android operating system and the Chrome browser, as well as elimination of “exclusive dealing” contracts with companies such as Apple (AAPL).
In my article, I argued that a breakup of Google was unlikely to be granted:
While a possible breakup of Google is appealing to its rivals, I’m not convinced that it will be implemented. The problem here is that it will be difficult to show that a breakup along reasonable organizational lines will be effective in reducing Google’s dominance in search and search text advertising.
As free offerings, Chrome and Android depend entirely on Google’s search revenue. I pointed out:
The fundamental problem here is that separating the search business from other Alphabet businesses simply leaves the search business free of the financial burden of supporting the other Alphabet businesses. The search business would be free to apply its enormous revenue to maintain its dominance.
Any spin-off scenario one can concoct ends up in the same predicament. While search user tracking is enabled within Chrome, it doesn’t require Chrome, since tracking can be done through any browser that supports cookies. Also, Google Analytics probably provides most of the user tracking that Google needs.
I’m not surprised the DOJ requested a breakup, but I don’t think it would be granted in any case when the remedies phase begins next August. Under the Trump Administration, the DOJ is likely to back off on the breakup remedies, but still pursue the behavioral remedies that I thought likely to be implemented:
I believe that rather than a breakup, which arguably harms consumers, the court will mainly focus on behavioral remedies, such as the abolition of RSAs (Revenue Sharing Agreements). There will likely also be restrictions on auction pricing, since this was clearly abusive. And the proposal that Google not prefer its own services in search results will also likely be adopted.
The most impactful remedy, financially, is the abolition of RSAs, so let’s look at that. The cost of the RSAs, including Apple, is reported as Traffic Acquisition Costs – TAC. In fiscal 2023, TAC was $50.866 billion, according to Alphabet’s 2023 annual report, or 29% of search advertising revenue.
Abolition of the RSAs would actually save Google about $50 billion per year in costs. Instead of default placement in browsers and the Android home screen, users would need to select from a menu of options at the initial setup of the device or browser.
Google would likely lose some search share in this process, but would it lose roughly 30%? That’s a hard question to answer, but I think that it would not, at least at the beginning when competitors are still relatively weak.
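The break-even arithmetic behind that question can be laid out explicitly. The TAC figure and the 29% ratio are the ones cited above from Alphabet's 2023 annual report; the break-even framing is a simplification that ignores margin differences and second-order effects:

```python
# Break-even sketch for RSA abolition, using the fiscal 2023 figures cited
# in the text. Simplifying assumption: revenue lost and TAC saved trade off
# dollar-for-dollar, with no margin or mix effects.
tac = 50.866e9    # fiscal 2023 Traffic Acquisition Costs, USD
tac_share = 0.29  # TAC as a fraction of search advertising revenue

search_revenue = tac / tac_share  # implied 2023 search ad revenue

# If RSAs are abolished, Google saves the TAC but loses fraction x of
# search revenue. Net impact = tac - x * search_revenue, so break-even
# occurs where x equals the TAC-to-revenue ratio.
break_even_loss = tac / search_revenue

print(f"Implied 2023 search ad revenue: ${search_revenue / 1e9:.1f}B")
print(f"Break-even revenue loss: {break_even_loss:.0%}")
```

In other words, Google could shed roughly 29% of search revenue before the lost revenue outweighed the saved TAC, which is why a smaller share loss leaves it ahead on the bottom line.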
Over time, competitors may gain market share. And Apple, deprived of the incentive to do nothing, may pursue development of its own search engine. Also, in the context of future AI enabled operating systems, search and generative AI are inextricably linked. Apple would likely pursue search as part of its broader AI strategy.
So, in the short term, I doubt that Google is harmed financially by the lack of RSAs. It loses some percentage of search revenue but makes up for it by recovery of the TAC expense. In the near term, I think Google comes out ahead, although the top line will see a year-over-year decline.
Given that the RSAs are viewed by some as illegal “exclusive dealing” contracts under the Sherman Act, I can’t see the judge not granting this remedy at the very least. Unless, of course, the new DOJ simply moves to dismiss the case. I continue to rate Google a Hold, but I may upgrade it to Buy depending on the disposition of the incoming administration.
Seeking Alpha: You’re also well known for your coverage of Apple. How does the company navigate tricky political and trade issues in 2025 and beyond? And thoughts on Apple and its AI efforts?
Mark Hibben: On tariffs and trade:
President-elect Trump has indicated that he will impose tariffs of 60% or higher on Chinese made products. Since most of Apple’s (AAPL) main products, including Mac, iPhone and iPad, are still assembled in China, that could have a major impact.
When Trump first imposed tariffs on Chinese goods, consumer electronics, including Apple’s products, were excluded. The tariff burden mostly fell on Chinese-made components used by U.S. manufacturers. It’s not clear that such an exclusion will be made this time around. Probably not.
Apple has been moving to diversify its manufacturing to places such as Vietnam and India, and Apple will likely accelerate this process with or without tariffs. However, Apple’s contract manufacturers such as Foxconn (Hon Hai Precision) have made huge infrastructure investments in mainland China. It could take years to get all of that manufacturing moved out of China.
How Apple will respond to the immediate tariff impact is uncertain. Apple’s margins are not so large as to absorb the entire cost of a 60% tariff, so most of it would have to be passed on to consumers.
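A rough pass-through sketch shows why absorbing the tariff isn't realistic. All figures below are illustrative assumptions, not Apple's actual costs:

```python
# Illustrative tariff pass-through math for the point above. Assumptions:
# a $1,000 device with a 38% product gross margin, with the 60% tariff
# applied to the imported (manufactured) cost of the device.
price = 1000.0        # assumed retail price, USD
gross_margin = 0.38   # assumed product gross margin
import_cost = price * (1 - gross_margin)  # cost of the Chinese-made device

tariff = 0.60 * import_cost  # tariff owed on the import value
print(f"Tariff per unit: ${tariff:.0f}")

# Absorbing the tariff would cut gross profit from $380 to $8 per unit,
# leaving essentially no margin to cover operating costs.
new_margin = (price - import_cost - tariff) / price
print(f"Gross margin if fully absorbed: {new_margin:.1%}")
```

Under these assumed numbers, the tariff consumes nearly the entire gross profit on the device, so most of it would have to show up in the retail price.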
I think it likely that the Trump Administration will once again exempt consumer electronics rather than suffer the political fallout of huge price increases being borne by consumers. Trump has vowed to roll back the inflationary price increases of the Biden years, and a large price increase in consumer electronics would run contrary to that goal.
On Apple’s AI efforts:
I recently posted an article for my Rethink Technology investing group subscribers on my personal experiences with Apple Intelligence. As an Apple Developer, I’ve invested in high end versions of the latest M4 Max MacBook Pro, the M4 iPad Pro, and the iPhone 16 Pro Max, so I was able to explore Apple Intelligence (let’s call it AI for short), on the best available Apple devices.
Most AI features are implemented on-device and don’t require connection to the internet. This is in keeping with Apple’s emphasis on privacy and security, but it limits what AI can do. Apple has delivered all the AI features it promised for the end of the year back at WWDC. The main effort still outstanding is a cloud based version of Siri that uses generative artificial intelligence.
Everything I tried out worked, but often not impressively. There’s only so much smart you can squeeze into a smartphone, even an iPhone. Users looking for the functionality of a Microsoft (MSFT) Copilot or Google Gemini will likely be disappointed.
But these are huge cloud-based generative AI models and it’s unreasonable to compare them to what Apple has done on-device, although consumers may do so in any case. And that’s a potential problem for Apple. Consumers may not care about the distinction.
Apple has made a long term bet that it can weave on-device and cloud based AI into a seamless whole that “just works.” Microsoft is also trying to blend on-device AI with cloud-based AI in its Copilot+ PCs.
Apple’s main advantage in this is Apple Silicon, which continues to make enormous strides compared to competitors, whether using ARM or x86 architecture. According to Geekbench results, Apple’s M4 Max CPU outperforms the latest Intel Lunar Lake and Arrow Lake processors in the multicore CPU benchmark:
Apple’s internal graphics also test out to be far superior to competitors in the Geekbench OpenCL benchmark:
I’ve personally confirmed these results on my own 16” MacBook Pro with the M4 Max processor. The internal graphics results are particularly relevant since the GPU section can be used for AI calculations.
The biggest problem with Apple Intelligence right now is that Apple has mandated that it be backward compatible with the M1 series, which means that it can’t take advantage of the processing power available with the M4 series Macs.
Fortunately, Mac users aren’t limited to Apple’s on-device AI but can take advantage of open source AI models downloadable through an MIT-licensed AI platform called Ollama. I’ve used Ollama and found that I could run even very large 405 billion parameter Llama 3.1 models on the MacBook Pro.
My conclusion is that the latest Apple Silicon Macs are an excellent platform for on-device AI, even if Apple Intelligence doesn’t fully exploit them. As Apple Intelligence software matures and progresses to more capable platforms in the future, users will find them ever more capable and useful.
The power of Apple Silicon also bodes well for the server based version of intelligent Siri to come next year. This new Siri will run on Apple Silicon based cloud servers. These servers will likely be Apple’s secret weapon in the competition with cloud based AIs from Google and Microsoft.
I continue to expect that Apple’s ultimate destination for its Intelligence is a new AI based user interface in which virtually all computer interactions are mediated by the on-device AI.
These are what Microsoft and Google have referred to as “agency” functions, where the AI is allowed to take actions on the device on behalf of the user. But both companies have been very tentative in their approach to agency because of the obvious security implications of having a cloud-based AI control the user’s local device.
These security concerns mostly go away if the AI is hosted on-device. Siri already has more agency capability than Microsoft or Google contemplate. Users can turn on Wi-Fi or launch an app just by asking Siri. Voice response is very reliable, and it’s all on-device.
The more intelligent Siri will use Apple’s secure server approach that allows it to process user queries in the cloud when needed. User data is always sent encrypted, and never stored once the query is processed.
Ultimately, I expect Siri to become a fully functional user interface capable of handling almost any function the user might perform on the device by conventional means. Apple is once again pioneering a new computer interface, something that the cloud based AIs can’t do without putting the user and local device at risk of a privacy breach or worse, a malware attack.
While Apple Intelligence may be off to a rocky start, I think it has a bright future. I remain long Apple and rate it a Buy.