November 19, 2025
Microsoft Brings Agents to Office and Windows
November 13, 2025
The World of Enterprise AI is Turning Hybrid
November 13, 2025
The Google Enterprise Story Finally Feels Real
November 5, 2025
Cisco Brings Agents to Network Management and Compute to the Edge
October 9, 2025
Intel’s Latest Chips Push Innovation Forward
September 24, 2025
Qualcomm Focuses on Agentic AI with Latest Chips
September 10, 2025
Arm Lumex Platform Lifts Smartphone AI
August 26, 2025
Nvidia Brings Blackwell to Robotics
July 17, 2025
AWS Puts Agent-Focused Platform Center Stage
July 9, 2025
Samsung’s Latest Foldables Stretch Limits
June 24, 2025
HPE’s GreenLake Intelligence Brings Agentic AI to IT Operations
June 18, 2025
AWS Enhances Security Offerings
June 12, 2025
AMD Drives System Level AI Advances
June 10, 2025
Cisco Highlights Promise and Potential of On-Prem Agents and AI
June 4, 2025
Arm Brings Compute Platform Designs to Automotive Market
May 20, 2025
Dell Showcases Silicon Diversity in AI Server and PC
May 19, 2025
Microsoft Brings AI Agents to Life
May 14, 2025
Google Ups Privacy and Intelligence Ante with Latest Android Updates
April 30, 2025
Intel Pushes Foundry Business Forward
April 29, 2025
Chip Design Hits AI Crossover Point
April 24, 2025
Adobe Broadens Firefly’s Creative AI Reach
April 9, 2025
Google Sets the Stage for Hybrid AI with Cloud Next Announcements
April 1, 2025
New Intel CEO Lays out Company Vision
March 21, 2025
Nvidia Positions Itself as AI Infrastructure Provider
March 13, 2025
Enterprise AI Will Go Nowhere Without Training
February 18, 2025
The Rapid Rise of On-Device AI
February 12, 2025
Adobe Reimagines Generative Video with Latest Firefly
January 22, 2025
Samsung Cracks the AI Puzzle with Galaxy S25
January 8, 2025
Nvidia Brings GenAI to the Physical World with Cosmos
TECHnalysis Research president Bob O'Donnell publishes commentary on current tech industry trends every week on LinkedIn.com in the TECHnalysis Research Insights Newsletter, and those blog entries are reposted here as well. Those columns are also reprinted on Techspot and SeekingAlpha.
He also writes a regular column in the Tech section of USAToday.com, and those columns are posted here. Some of the USAToday columns are also published on partner sites, such as MSN.
He writes a 5G-focused column for Forbes that can be found here and is archived here.
He also occasionally writes guest columns in various publications, including RCR Wireless, Fast Company and engadget. Those columns are reprinted here.
December 3, 2025
Amazon's AWS Brings AI Customization Capabilities
By Bob O'Donnell
One of the biggest challenges in creating a successful AI deployment within enterprise companies today is fully tapping into the unique requirements, data sets and existing infrastructure that every organization has. I'd argue that every new AI solution sounds like a great option if you're a startup without an existing environment into which these new tools need to be integrated. Established companies, however, all have to deal with these integration issues.
That's why I'm intrigued by several of the new announcements that Amazon's AWS cloud computing division made at its annual re:Invent conference here in Las Vegas. While other news from the event may get more initial attention, the company's new AWS Factories and Nova Forge AI model customization tool look to bring some critically needed new capabilities to the many, many organizations that are still trying to implement AI in a clearly beneficial, meaningful way. Toss in some important control and evaluation tools for creating and deploying agents (something most companies need before they'll fully trust the work agents can do in their environments), and you have a trifecta of AI features targeted at the customization needs of existing enterprises.
Arguably the most significant announcement is AWS Factories. Interestingly, the company in some ways downplayed it as simply extending a service it already offered to a tiny set of customers to a wider audience. AWS Factories enables organizations to set up their own AWS-powered AI infrastructure in their own on-prem environments. Companies can choose to install AI infrastructure racks powered either by AWS Trainium AI accelerators or the latest Nvidia GPUs, along with the full AWS AI software stack. Practically speaking, this offers an important new degree of flexibility that companies in highly regulated industries can benefit from immediately.
Philosophically speaking, however, the news is much bigger, because it highlights AWS's willingness to extend its custom AI environment in ways that, even just last year, the company said it had no intention of doing. Of course, market realities have kicked in, and AWS smartly realized that the desire to run AI workloads on-prem is extremely common, even beyond regulated industries (as my recent study "The Future of AI is Hybrid" found to be the case). In addition, the other major cloud and AI model providers have all now created options for running models on-premises, so you could argue that AWS is a bit late to the game. In truth, though, the move toward on-prem and hybrid AI models that leverage both local datacenter and public cloud-based resources has only just started. As a result, Amazon's move is early enough that it won't miss out on what could prove to be a very large opportunity. Plus, by allowing individual companies to get access to AWS custom silicon for their own on-prem workloads, AWS has even outpaced Google, which just announced its intention to sell its custom TPU AI accelerators to third parties.
What makes the AWS Factories news even more interesting is that Amazon also announced a new effort to work first with Google Cloud (and next year with Microsoft's Azure) to ease the process of creating multi-cloud environments. For years, "multi-cloud" was a dirty word inside the halls of AWS, and the company was relatively slow to enable on-prem hybrid cloud offerings when that phenomenon started to take hold. So, to see Amazon simplifying the creation of a multi-cloud, hybrid AI world is pretty remarkable.
The company's new Nova Forge offering is interesting on many levels as well. First, it builds on the fact that, after a few initial stumbles with its own models, Amazon has continued to build out its Nova family of foundation models (several new versions were also introduced at the show), underscoring the company's commitment to these efforts. More importantly, though, Nova Forge represents an intriguing new way for companies to leverage their own data to create customized AI models trained specifically for their unique needs. Instead of just fine-tuning an existing model, as most companies have tried, Nova Forge provides a means to fully train a custom frontier model without the enormous cost and complexity of creating a new model from scratch. Essentially, Nova Forge allows you to insert your custom data into several different (and early) stages of the training process via various pre-written "recipes" and, in that way, adjust the open weights of the model. The net result goes well beyond typical RAG and fine-tuning techniques and allows for advanced capabilities, such as reinforcement learning as the model continues to be used.
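To make the recipe idea a bit more concrete, here is a minimal, purely illustrative Python sketch of how customer data might be attached to different stages of a training run. The class names, stage names, model identifier and S3 paths are all hypothetical assumptions for illustration; this is not the actual Nova Forge API, which AWS describes at a much higher level.

# Conceptual sketch only: hypothetical names, not the Nova Forge API.
from dataclasses import dataclass, field

@dataclass
class Recipe:
    """A pre-written training recipe that injects customer data at one training stage."""
    stage: str                      # e.g. "mid-training", "post-training", "reinforcement"
    data_paths: list[str] = field(default_factory=list)

@dataclass
class CustomModelJob:
    """Describes a training run that blends customer data into a base frontier model."""
    base_model: str
    recipes: list[Recipe]

    def describe(self) -> str:
        steps = ", ".join(f"{r.stage} ({len(r.data_paths)} datasets)" for r in self.recipes)
        return f"Train {self.base_model} with customer data injected at: {steps}"

# A company might blend proprietary documents into several stages of training
# rather than only fine-tuning a finished model after the fact.
job = CustomModelJob(
    base_model="nova-frontier-base",   # hypothetical model identifier
    recipes=[
        Recipe(stage="mid-training", data_paths=["s3://acme-corp/contracts/"]),
        Recipe(stage="post-training", data_paths=["s3://acme-corp/support-tickets/"]),
        Recipe(stage="reinforcement", data_paths=["s3://acme-corp/agent-feedback/"]),
    ],
)
print(job.describe())

The point of the sketch is simply that the customer's data participates in the training process itself, at multiple stages, rather than being layered on after the model is finished.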
Of course, we're not just in an AI world but an agentic AI world these days, so AWS also had a number of big announcements related to its agent development tools. One of the most important is an extension to the company's existing Bedrock AgentCore framework for building and deploying agents. The new AgentCore Evaluations capability continuously monitors the actions of agents to ensure that they are doing what they're supposed to do and not doing what they're supposed to avoid. Along with a new security-focused extension for AgentCore, these features should give companies more confidence that the billions of agents that Amazon (and many other tech vendors) believe will soon be running in corporate environments can be trusted. Plus, building on the customization theme, these agent evaluations can be tailored so that the quality of the outputs and actions agents generate matches the specific requirements of each organization.
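As a rough illustration of what organization-specific agent evaluation can look like, here is a small conceptual sketch. The check functions, tool names, company domain and spending threshold are all invented for the example; this is not the Bedrock AgentCore Evaluations API.

# Conceptual sketch only: illustrative checks, not the AgentCore Evaluations API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    """One proposed agent action: which tool it wants to call and with what arguments."""
    tool: str
    arguments: dict

# Each check encodes one organization-specific rule an agent's actions must satisfy.
Check = Callable[[AgentAction], bool]

def no_external_email(action: AgentAction) -> bool:
    """Block any tool call that emails an address outside the (hypothetical) company domain."""
    if action.tool != "send_email":
        return True
    return action.arguments.get("to", "").endswith("@example-corp.com")

def spend_under_limit(action: AgentAction) -> bool:
    """Flag purchases above the approval threshold this organization has set."""
    if action.tool != "create_purchase_order":
        return True
    return action.arguments.get("amount_usd", 0) <= 5_000

def evaluate(action: AgentAction, checks: list[Check]) -> list[str]:
    """Return the names of any checks the proposed action fails."""
    return [check.__name__ for check in checks if not check(action)]

failures = evaluate(
    AgentAction(tool="create_purchase_order", arguments={"amount_usd": 12_000}),
    checks=[no_external_email, spend_under_limit],
)
print(failures)  # ['spend_under_limit'] -> route this action to a human for approval

Different organizations would plug in different rules, which is exactly the kind of per-company customization the new evaluation tools are meant to support.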
In addition to these customization-focused AI capabilities, there was, as always, a wealth of other big announcements from the company at re:Invent. On the silicon side, the formal release of the Trainium 3 accelerator, a new rack design for Trainium 3 featuring Amazon's custom chip-to-chip, rack-to-rack, and datacenter-to-datacenter networking capabilities, and even a quick tease of Trainium 4 all highlighted that Amazon continues to be a big believer in the critical importance of custom silicon. The company also unveiled new fully autonomous "frontier agents" designed to dramatically improve the productivity of software developers, and much more.
All told, it was yet another firehose of new announcements and yet another example of how quickly developments in AI and agents continue to occur. Most importantly, though, at a bigger-picture level, it highlighted a new kind of attitude from AWS. Several of the announcements made it clear that the company sees and understands its role as part of a much bigger tech ecosystem and is working to make it easier for companies that use products from multiple vendors (as nearly every company does!) to integrate AWS-powered solutions right alongside them. And that is an important step forward.
Here's a link to the original column: https://www.linkedin.com/pulse/amazons-aws-brings-ai-customization-capabilities-bob-o-donnell-wbjlc
Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.
Podcasts
Leveraging more than 10 years of award-winning professional radio experience, TECHnalysis Research produces a video-based podcast called TECHnalysis Talks.
LEARN MORE
Research Offerings
TECHnalysis Research offers a wide range of research deliverables that you can read about here.
READ MORE