TECHnalysis Research Blog

May 21, 2024
Dell Works to Make On-Prem and Hybrid AI a Reality

By Bob O'Donnell

Getting companies interested in deploying generative AI (GenAI)-based applications is no longer a challenge for tech industry suppliers. Figuring out how to best use the technology, however, is still difficult for many of these business customers.

One new approach that Dell Technologies discussed at its Dell Tech World event in Las Vegas is hybrid AI, in which some of the work happens in the cloud and some is done on premises within an organization’s data centers. As Dell and others have pointed out, because businesses still keep most of their data within their own or co-located IT facilities (83%, according to Dell), it makes sense to bring the AI to the data rather than moving the data to the cloud.

To do that, Dell talked about the idea of an AI factory, where enterprises can assemble the various components they need to run AI projects of their own. For Dell, an AI factory is GenAI-enabled infrastructure designed to power the fine-tuning of foundation models, run data inferencing workloads, and support the creation of custom applications that leverage those customized models. In addition, Dell’s AI factory vision brings together services and even client devices that can all be part of an organization’s AI strategy.

In the early stages of the GenAI revolution, running foundation models within the walls of an enterprise data center wasn’t possible, because tools like OpenAI’s GPT-3 and GPT-4 were only accessible via the cloud. What’s happened recently, however, is that more and more companies are starting to take advantage of open-source foundation models, such as Meta’s Llama 3 and many different options from the open-source AI marketplace Hugging Face. All of these models can be run within corporate data centers, making it easier and more cost-effective for companies to fine-tune them with their own data and then build custom applications around those self-tuned models.
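To make the on-prem scenario concrete, here is a minimal sketch (not a Dell-specific workflow) of how an open-source checkpoint pulled from Hugging Face can be loaded and queried entirely on infrastructure a company controls, using the transformers library. The checkpoint name and prompt are illustrative placeholders, and the snippet assumes the accelerate package is installed for automatic device placement.

```python
# Minimal sketch: running an open-source chat model locally with Hugging Face
# transformers. The checkpoint name is an illustrative placeholder; any
# open-source model the organization is licensed to use could be substituted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed example checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize our Q1 support tickets in three bullet points."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because everything in this flow runs on hardware the company owns, the same pattern extends naturally to fine-tuning the model on proprietary data before building applications around it.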

Of course, to do that, you need the right kind of computing hardware, storage, and networking capabilities, along with software tools and, most importantly, services packaged to help organizations make the GenAI journey successfully. That’s essentially what Dell introduced at this year’s Dell Tech World. Building on the exclusive announcements it made with Nvidia at last year’s event, Dell debuted a broader set of products and services that let companies choose from a wider range of core component and software suppliers to build their own GenAI-capable infrastructure. At the same time, Dell also extended its Nvidia-specific offerings, integrating the new Blackwell architecture GPUs, system designs, and new software that Nvidia announced at its own GTC event.

On the infrastructure side, Dell unveiled several new servers, including a liquid-cooled version of its XE9680 compact chassis, appropriately named the XE9680L, that can hold up to eight Nvidia GPUs. It also took the wraps off its PowerScale F910, an all-flash storage array optimized to speed access to the large data sets that GenAI workloads require. Additionally, the company previewed Project Lightning, a parallel file system optimized for PowerScale. The new PowerSwitch Z9864F-ON is a network switch that delivers twice the throughput on GenAI workloads compared to Dell’s previous offerings. In a similar vein, Dell also announced new Broadcom-powered 400G PCIe Gen 5.0 Ethernet adapters for its PowerEdge XE9680 servers.

Even more interesting was the announcement of a new Dell Enterprise Hub on Hugging Face, designed to make the process of selecting the right LLMs and other software tools for building custom GenAI applications much easier. Dell also announced further work with Meta on its Llama 3 models and with Microsoft on offering a Dell Solution for Microsoft Azure AI Services. Related to all these was a comprehensive set of new service offerings to help organizations make sense of how to best use all these new tools.

Beyond traditional AI infrastructure, Dell also talked about bringing AI capabilities on prem via both workstations and PCs. On the PC front, Dell unveiled five new Copilot+ AI-accelerated PCs (the most of any PC OEM) as part of Microsoft’s big event. The company unveiled its first-ever XPS 13 without an Intel CPU, as well as two new Inspirons and two new Latitudes. This large number of systems highlights the fact that the company is making a big bet on the Arm-powered Qualcomm Snapdragon X Elite and X Plus SoCs that power these devices. Like others in the PC industry, Dell believes the combination of a powerful new NPU (Neural Processing Unit) for accelerating AI workloads and big improvements in battery life will prove compelling to many users.
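For developers, targeting the NPU typically means going through an inference runtime rather than writing to the silicon directly. The sketch below is a hedged illustration, assuming ONNX Runtime with Qualcomm’s QNN execution provider on a Snapdragon X-based Windows machine; the model file name, backend library path, and provider options are assumptions for illustration, not a documented Dell or Microsoft recipe.

```python
# Minimal sketch: asking ONNX Runtime to schedule a model on the device's NPU
# via the QNN execution provider, with a CPU fallback. The file name and
# provider options are illustrative assumptions.
import onnxruntime as ort

session = ort.InferenceSession(
    "assistant_model.onnx",  # hypothetical quantized model file
    providers=[
        ("QNNExecutionProvider", {"backend_path": "QnnHtp.dll"}),  # NPU backend
        "CPUExecutionProvider",  # fallback if the NPU path isn't available
    ],
)
print(session.get_providers())  # shows which providers were actually loaded
```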

For business PC buyers, the idea is that even if they don’t initially intend to leverage the NPU and AI acceleration for much, there are other important benefits. Specifically, the slim designs, impressive compute performance, and 24-plus-hour battery life of these Snapdragon-based machines provide a solid alternative for end users who have been asking for the latest MacBooks or who are simply frustrated with the short battery life of some existing x86-based PCs. For those who are interested in the AI capabilities, things get even more intriguing: while the initial software support for NPU acceleration is expected to be limited, it will grow over time. In other words, the capabilities of these new Copilot+ PCs will improve with age, something we haven’t been able to say about new PCs for a long time.

Of course, the question of app compatibility and emulation performance (two issues that essentially sank the first two iterations of Windows on Arm-based PCs) remains. However, early reports suggest that the new Prism emulation layer built into the version of Windows designed for Copilot+ PCs is showing promising results, even for custom enterprise applications.

In addition to PCs, one of the more intriguing but overlooked announcements from Dell centered on workstations. Specifically, the work the company is doing to enable enterprises to augment their open-source models with RAG (Retrieval-Augmented Generation), via Dell’s Accelerator Services for RAG on its Precision AI Workstations, is compelling. RAG is one of the hottest areas in GenAI, and many organizations are eager to leverage it for their own use. Providing a means to make that process easier is bound to get a lot of attention and could end up making this one of the sleeper hits from this year’s Dell Tech World event.
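As a rough illustration of what RAG actually does (a generic sketch under simple assumptions, not the workflow inside Dell’s Accelerator Services), the snippet below embeds a handful of internal documents, retrieves the one most relevant to a question, and folds it into the prompt that would be sent to a locally hosted model. The embedding model and the documents are placeholders.

```python
# Minimal RAG sketch: retrieve the most relevant internal document for a query
# and prepend it to the prompt for a locally hosted LLM. The embedding model
# and the sample documents are illustrative placeholders.
from sentence_transformers import SentenceTransformer, util

documents = [
    "Warranty claims must be filed within 30 days of delivery.",
    "Support tickets are triaged first by region, then by severity.",
    "Firmware updates are rolled out to production fleets quarterly.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedder
doc_vectors = embedder.encode(documents, convert_to_tensor=True)

query = "How long do customers have to file a warranty claim?"
query_vector = embedder.encode(query, convert_to_tensor=True)

# Pick the highest-scoring document and build the augmented prompt.
best = util.cos_sim(query_vector, doc_vectors).argmax().item()
prompt = f"Answer using only this context:\n{documents[best]}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the locally hosted model
```

Because the retrieval step draws on an organization’s own documents at query time, RAG lets companies ground a model’s answers in current internal data without having to retrain the model itself.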

All told, Dell put together a set of announcements that provide an intriguing perspective on where the world of GenAI in the enterprise is headed. While there’s little doubt that most companies are going to do much of their initial GenAI work using cloud-based resources, it’s increasingly clear that just as hybrid cloud became a standard way for companies to leverage cloud computing, hybrid AI is poised to become a mainstream option for GenAI applications. What’s also interesting about these announcements is that they reflect the much more technologically progressive viewpoint that Dell has shown since the first appearance of GenAI. The company sees an opportunity to be a first mover in this market, and these latest developments make it clear that Dell is moving aggressively in that direction.

Here’s a link to the original article: https://www.linkedin.com/pulse/dell-works-make-on-prem-hybrid-ai-reality-bob-o-donnell-i0mrf

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.