TECHnalysis Research Blog

June 19, 2024
HPE Shows Future of Presentations and Private Cloud GenAI

By Bob O'Donnell

Thanks to HPE Discover’s keynote at the Sphere in Las Vegas, I believe I can now safely say that I have seen the future…on many different levels.

As the host of the first corporate keynote to be held within the domed wonder, HPE had the unique opportunity (and challenge) to rethink how big event keynotes can be done. I’m pleased to report that the company did a great job of demonstrating how to leverage the unique venue to its advantage. Accompanied by some of the awe-inspiring clips from the Sphere’s “Postcard from Earth” experience video, impressive original graphics, rumble-seat-enhanced customer videos, and massive images of HPE’s own, CEO Antonio Neri managed to deliver a compelling and informative presentation that few attendees are likely to forget.

Part of the impact, of course, is because attending most anything in the Sphere is pretty compelling. It’s also an absolute textbook example of Marshall McLuhan’s (in)famous observation that “the medium is the message,” and there was certainly a concern that the message could have been lost in the new medium that the Sphere represents. Nevertheless, Neri’s keynote still managed to officially kick off HPE’s customer and partner event, Discover, in a big way and, importantly, in a manner that allowed the company to announce its news and provide some interesting context around it.

While that balance may not always be achieved, given the increasing frequency of tech events now happening in Las Vegas and the never-ending desire to make an impact, I have no doubt that we’re going to be seeing a lot more Sphere keynotes over the next few years. Like it or not, it is the keynote of the future.

The other aspect of the future that HPE discussed during its keynote is bringing generative AI capabilities into its customers’ environments. While most companies are still doing the majority of their AI workloads in the cloud—if they’ve started at all, that is—there’s growing interest in doing more of this work internally, where many organizations’ most important data still resides. As a result, it’s good to see HPE moving in this direction with offerings designed to tap into that trend. Specifically, Neri had Nvidia CEO Jensen Huang join him onstage for an extended conversation to help launch the company’s new Nvidia AI Computing by HPE initiative and, more precisely, the HPE Private Cloud AI offering.

As with many other large enterprise hardware companies (Dell, Lenovo, Cisco, etc.), HPE’s latest partnership with Nvidia involves putting together GenAI-focused solutions based around not just Nvidia GPUs, but also Nvidia NeMo, AI Enterprise, and NIM (Nvidia Inference Microservices) platform software.
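To make the microservices idea a bit more concrete, here is a minimal, hypothetical sketch of what an application query against a locally hosted inference microservice could look like, assuming, purely for illustration, a service that exposes the widely used OpenAI-compatible chat-completions interface. The endpoint address and model name below are placeholders, not documented HPE or Nvidia values.

# Hypothetical sketch: querying a locally hosted inference microservice.
# The endpoint URL and model name are placeholders for illustration only.
import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # placeholder address

payload = {
    "model": "example-llm",  # placeholder model identifier
    "messages": [{"role": "user", "content": "Summarize our open support tickets."}],
    "max_tokens": 200,
}

# Send the chat request and print the generated reply.
response = requests.post(ENDPOINT, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

The practical appeal of this kind of interface is that applications written against a common chat-completions API can be pointed at an on-prem endpoint instead of a public cloud service, which is exactly the shift these private cloud offerings are meant to enable.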

On the one hand, this series of announcements shows how far (and how quickly) Nvidia’s influence has grown across the enterprise hardware industry, particularly with regard to its software offerings. In fact, its recent ascension to the most valuable company in the world is arguably due to the growing recognition of its software portfolio and the opportunity that future software revenues represent. On the other hand, it does seem to present a dilemma for its partners, as they’re all getting the same set of tools from Nvidia. Given how important—and differentiating—these software tools are, it could make it more difficult for these various companies to distinguish their offerings from those of their competitors.

In HPE’s case, the company made the point that it is bringing a great deal of its own IP to the Private Cloud AI offering, both in terms of its GreenLake tools and the custom work it has done to greatly simplify the process of deploying GenAI-powered applications. For GreenLake, which is HPE’s cloud-native, hybrid cloud-focused, and consumption-based offering that now spans compute, storage, software, networking, and services, Private Cloud AI shows up as another service that sits within the GreenLake platform. That means for existing HPE GreenLake customers, it offers an easy way to integrate into their existing environments. For those companies that still purchase HPE equipment and services in a traditional manner, it could prove to be a good way to experiment with GreenLake, especially given the incredibly rapid evolution of GenAI-focused hardware. HPE has also extended the OpsRamp observability tools within GreenLake to work across the new AI offerings, enabling companies to track performance and more across the complete AI stack, including GPUs, Nvidia InfiniBand and Ethernet switches, and the Nvidia software components.

Many of HPE’s other GreenLake offerings focus on specific applications, such as disaster recovery or building a private cloud, and that same outcome-focused approach has been applied here. One key competitive difference for the new Private Cloud AI offering is that HPE has packaged together all the hardware and software components necessary to create GenAI applications and has reduced the deployment process to three clicks. Given how complicated the process can be otherwise, it’s an impressive step. It’s also a way to distinguish what HPE is doing versus, say, Dell Technologies, which is offering a broad array of different component choices so that companies can create “AI Factories” of their own. In real-world terms, those differences may prove to be more subtle than they first appear as the solutions evolve, but at least there’s different positioning from each company in terms of what they’re offering. Of course, HPE also recognizes that even with this simplified setup, ongoing development of these workloads will often require consulting help, so it partnered with system integrators such as Deloitte, HCLTech, Infosys, TCS, and Wipro to help customers create and maintain the applications they need.

To power these solutions, HPE unveiled four different ProLiant computing hardware configurations—all powered by the latest Nvidia GPUs—that are designed to let companies tap into the level of performance they need. So, for example, if they want to just do an initial inferencing deployment for something like a GenAI-powered custom chatbot, they can tap into a basic ProLiant system, whereas if they want to do customized fine-tuning of the base model and/or leverage RAG (Retrieval Augmented Generation), they can use one of the two most powerful ProLiant options with more GPUs, faster networking, etc. HPE also promised that it would come to market with configurations that include the latest Blackwell GPUs as soon as they are available—a point that the other vendors made as well (and probably one of the key reasons all these vendors are arranging these kinds of announcements with Nvidia).
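For readers less familiar with the RAG pattern mentioned above, the following is a toy, illustrative sketch, not tied to HPE’s or Nvidia’s actual stack. It uses a simple keyword-overlap retriever to pull the most relevant snippets from a small document set and folds them into the prompt that would then be sent to a hosted model; the document contents and function names are made up for the example.

# Toy, illustrative RAG sketch (hypothetical; not HPE's or Nvidia's actual stack).
from collections import Counter

documents = [
    "GreenLake bills compute and storage on a consumption basis.",
    "Private Cloud AI deployments can be stood up in three clicks.",
    "OpsRamp provides observability across GPUs and network switches.",
]

def score(query: str, doc: str) -> int:
    """Count overlapping words between query and document (toy retriever)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(documents, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Augment the user question with retrieved context before generation."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How is GreenLake billed?"))
# The augmented prompt would then be sent to the hosted model for generation.

In a production private cloud deployment, the toy retriever would be replaced by an embedding model plus a vector database, and both retrieval and generation would run on the kind of GPU-backed systems described above, which is why the more demanding RAG and fine-tuning scenarios map to the more powerful hardware configurations.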

It’s clear from these and other announcements that HPE is focused on making GenAI a key part of its future plans. It is also clear from the keynote that the company is working to build a stronger presence in GenAI after arguably playing a bit of catch-up with its competitors. Now that the playing field is closer to level, however, it will certainly be interesting to watch how HPE continues to evolve its differentiation strategy and how its customers react to its latest offerings.

Here’s a link to the original article: https://seekingalpha.com/article/4700005-hpe-future-of-presentations-private-cloud-gen-ai

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.