How Lenovo Thinks about Bringing ‘AI to All’: Interview

KEY TAKEAWAYS

As Lenovo launches its AI-focused server ranges alongside Intel, we speak to VP Kamran Amini about how hardware is evolving to keep up with AI demands, how AI interacts with cloud and edge computing, and how the future unfolds for “AI for All.”

While companies like OpenAI, Google, Microsoft, and many others are developing and launching new large language models (LLMs), we may often forget about another aspect of AI: the hardware needed to sustain these memory- and energy-intensive processes.

So while we work out how best to use ChatGPT, Midjourney, and the dozens of other AI systems out there — large and small — there’s the other side of the coin: the companies busy engineering the hardware, digital infrastructure, and environments that organizations need to deploy the new technology. Lenovo is one of those.

Lenovo announced yesterday its new range of hybrid cloud solutions, services, and servers to accelerate artificial intelligence (AI). The ThinkAgile hybrid cloud solutions and ThinkSystem server platforms are powered by next-gen Intel Xeon Scalable processors, offering increased performance, IT consolidation, and lower power consumption to simplify AI journeys.

The new products add to Lenovo’s portfolio and are aligned with its AI for All strategy. The goal? To provide any organization or business, small, medium, or large, with everything it needs to design, build, deploy, monitor, and manage AI technology.

Techopedia talked to Kamran Amini, VP and General Manager of Server, Storage, and Software at Lenovo Infrastructure Solutions Group, to understand the journey for users, debate trends in the AI market, and understand how the company works to make AI accessible for everyone.

When to Use AI In-House, and When to Outsource

As the pace of AI innovation and AI-ready hardware accelerates, and businesses begin to deploy AI solutions worldwide, some companies are developing in-house AI infrastructure, while others outsource most of the engineering and lifecycle.


However, Amini explained that small or medium businesses often don’t have the IT staff to maintain the lifecycle of the infrastructure, the skill sets, or the engineering resources that large enterprises have available.

“AI is going to be a continued evolution. AI is not one size fits all. Some customers will go to the public cloud to get it because they don’t have the resources.


“There’s others that are investing on-premises, because they see the financial value of what they could do with AI and monetize their data.

“If you think about a lot of customers that want to deploy AI, they probably do not have the data scientist skills to build these massive large language models.

“They need someone to help them build those LLMs, to help them through that AI journey from building the AI model to actually deploying and then monitoring and managing that environment.

“Customers that are CapEx-rich, such as banks, have the resources for engineering and in-house skills.


“These types of industries don’t need to outsource, bring in as-a-service models, or turn to public clouds, as they do it all in-house.”

Scaling Through Hardware

Lenovo says AI-ready platforms are an essential next step for a hybrid AI approach across public, private, and foundational models.

Amini said: “It’s about how you take the intelligence of AI and then deliver solutions for different scaling needs.

“What we are announcing today with Intel, it’s about bringing AI for the enterprise that’s looking to run and optimize large language models with under 20 billion parameters.

“What we do is engineer the end-to-end capability, and we build those solutions for customers that are looking for turnkey, hybrid cloud, or private cloud deployments.”

Lenovo claims a 21% boost in performance with the new tech. This increase in power allows companies to run more sophisticated applications and architectures while reducing their IT footprint, cutting costs, reaching ROI faster, and delivering business outcomes.

The Lenovo ThinkAgile HX, MX, and VX are optimized for AI and engineered as turnkey hybrid cloud solutions powered by new 5th Gen Intel Xeon Scalable processors and an open ecosystem of partners, including Nutanix, Microsoft, and VMware.

The hybrid cloud solutions offer cloud software that enables new capabilities and faster backup and recovery, and they can reduce deployment time by up to 75%, Lenovo claims.

Lenovo also partnered with Intel to deliver the latest server technology across its ThinkSystem portfolio of dense optimized, rack, and tower solutions.

These dense optimized servers are half the size of previous servers, and Lenovo says they offer up to a 40% reduction in power consumption thanks to unique designs and the Lenovo Neptune liquid cooling technology.

How Today’s AI Servers Differ From Previous Generations

Amini explained what makes these new products different from those already in the market, besides the computing power they offer.

“When you deploy rack servers, there’s a lot of redundancy of power and fans in these environments. In our two-node boxes, now you have shared fans and shared power supplies,” Amini said.

“If you’re thinking about sustainability and energy costs and how else you can reduce consumption of power, the new Lenovo SD 550 removes all that duplicate power supply and fans and creates a common power supply and fan system.

“That’s driving more power savings. And when you deploy a whole rack or multiple racks, you’re going to see a tremendous amount of energy savings.”

Amini added that with the new chassis design, customers can also run mixed architectures inside the chassis.

“Why is that important? Well, depending on the CPU and the application, certain applications are better optimized to run on certain CPUs.

“So now, within that container, you could actually have different nodes with different technology and still be in the same container — and really optimize the application running on those nodes.”

Another unique design difference, which Amini says was created to solve customers’ pain points, is front access servicing.

“Products in the market are all rear access services. So imagine you have cables on the back, and you have to access and rear service the server from the back on hot aisles in the data center.

“We’re bringing front accessibility: ease of service from a customer perspective, which reduces their cost of operation.”


Ray Fernandez
Senior Technology Journalist

Ray is an independent journalist with 15 years of experience, focusing on the intersection of technology with various aspects of life and society. He joined Techopedia in 2023 after publishing in numerous outlets, including Microsoft, TechRepublic, Moonlock, Hackernoon, VentureBeat, Entrepreneur, and ServerWatch. He holds a degree in Journalism from Oxford Distance Learning and two specializations from FUNIBER in Environmental Science and Oceanography. When Ray is not working, you can find him making music, playing sports, and traveling with his wife and three kids.