Throwing Money at AI Is Not the Best Way To Use It: Expert Analysis


When bringing artificial intelligence (AI) into the workforce, developers and organizations often invest in licensing the best large language models (LLMs) and then buy the most expensive GPUs. Who doesn’t want the premium model?

However, there are other ways to maximize AI performance cost-efficiently, from deploying open-source tools to maximizing graphics processing unit (GPU) usage through careful balancing of demands on the system and close attention to heat dissipation.

As open-source LLMs begin to gain traction versus closed-source AI, Techopedia talks to experts in the field to learn how developers and businesses of any size can maximize existing resources through techniques such as GPU management and open-source technologies.

Companies may assume that the fastest route to high-performing AI systems is simply throwing money at the problem, but this brute-force approach comes with a hefty price tag.

Key Takeaways

  • While current trends show companies throwing money at the problem by buying powerful LLMs and GPUs, experts say there are smarter ways to supercharge AI.
  • New open-source models are worth exploring compared to their paid-for counterparts.
  • Open-source solutions such as ClearML are able to split, manage, and monitor GPU usage and maximize AI efficiency and performance.
  • Smarter GPU energy and cooling strategies, combined with open-source tools and technologies, can help organizations build budget-friendly, ethical, and cost-efficient AI.

Beyond Brute Force: Smart Strategies for AI Development

While the world hears a lot about making generative AI more accessible, free, and transparent, developers have inside knowledge on how to take a different road to achieve the same goals.

Whether it be developing, scaling, or training AI models, open-source tools can drive power efficiency within GPUs themselves, reducing costs while improving accuracy, performance, and deployment time.


Sylvester Kaczmarek, the CTO at OrbiSky Systems and a former contributor to projects at NASA and ESA, spoke about the computational demands of AI and machine learning, GPU management, and open-source solutions.

Kaczmarek said:

“AI and machine learning models require significant computational resources due to the complexity and volume of data they process. Training these models involves multiple iterations over vast datasets to adjust and optimize the parameters that guide their decision-making processes.

“GPUs are pivotal in this context because they can handle thousands of threads simultaneously, significantly speeding up the computation and data processing tasks, which is vital for the efficiency and scalability of AI systems.”

Referring to GPU usage management, Kaczmarek highlighted several innovations:

“One of the most innovative ways developers can manage GPU usage to maximize AI performance is through dynamic allocation and efficient scheduling.

“Techniques such as multi-tenancy, where multiple AI tasks share the same GPU resources without conflict, and predictive scheduling, which allocates GPU resources based on the predicted workload, are crucial.”
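Predictive scheduling of the kind Kaczmarek describes can be sketched in a few lines. The example below is a minimal, illustrative first-fit packer, not any production scheduler: tasks carry a *predicted* GPU-memory requirement, and each is placed on the first GPU with enough free memory. All names (`Gpu`, `schedule`, the task list) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Gpu:
    """A GPU with a fixed memory budget (GiB) shared by multiple tenants."""
    total_mem: float
    tasks: list = field(default_factory=list)

    def free_mem(self) -> float:
        return self.total_mem - sum(mem for _, mem in self.tasks)

def schedule(tasks, gpus):
    """First-fit decreasing: place each task on the first GPU whose free
    memory covers the task's predicted requirement; return what didn't fit."""
    unplaced = []
    for name, predicted_mem in sorted(tasks, key=lambda t: -t[1]):
        for gpu in gpus:
            if gpu.free_mem() >= predicted_mem:
                gpu.tasks.append((name, predicted_mem))
                break
        else:
            unplaced.append(name)
    return unplaced

gpus = [Gpu(24.0), Gpu(24.0)]
tasks = [("train-a", 16.0), ("infer-b", 6.0), ("infer-c", 6.0), ("train-d", 20.0)]
leftover = schedule(tasks, gpus)   # tasks that must wait for a free slot
```

Real schedulers add preemption, priorities, and learned workload predictions, but the core idea is the same: pack predicted demand onto shared hardware instead of dedicating one GPU per job.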

Kaczmarek explained that containerization technologies like Docker, coupled with Kubernetes for orchestration, allow for scalable and efficient deployment of AI applications, optimizing GPU utilization across clusters.
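In Kubernetes, requesting GPU capacity for a containerized workload comes down to a resource limit on the pod. The fragment below is a minimal sketch assuming the NVIDIA device plugin is installed on the cluster; the pod name, image tag, and command are illustrative.

```yaml
# Minimal Kubernetes Pod requesting one GPU via the NVIDIA device plugin.
apiVersion: v1
kind: Pod
metadata:
  name: train-job            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:24.01-py3   # example image tag
    command: ["python", "train.py"]
    resources:
      limits:
        nvidia.com/gpu: 1    # scheduler places the pod on a node with a free GPU
```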

“There are several notable open-source solutions for managing, monitoring, and configuring GPU usage,” Kaczmarek said.

Kaczmarek highlighted NVIDIA’s Data Center GPU Manager (DCGM) and Microsoft’s free GPUView profiler, both of which offer comprehensive insights into GPU performance and utilization, as well as the open-source Prometheus, coupled with Grafana, which can be used to monitor GPU metrics in real time, allowing for detailed analysis and optimization of GPU usage in AI projects.
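Short of a full Prometheus and Grafana stack, a quick ad-hoc check can scrape `nvidia-smi`'s CSV output. The sketch below parses the format produced by a query like `nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total --format=csv`; the sample text stands in for a live subprocess call (a GPU may not be present), and the exact header layout can vary slightly between driver versions.

```python
import csv
import io

# Sample of the CSV emitted by:
#   nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total --format=csv
# On a live machine you would capture this via subprocess instead.
SAMPLE = """\
index, utilization.gpu [%], memory.used [MiB], memory.total [MiB]
0, 87 %, 14336 MiB, 24576 MiB
1, 12 %, 2048 MiB, 24576 MiB
"""

def parse_gpu_stats(text: str):
    """Return one dict per GPU with numeric utilization and memory fields."""
    rows = list(csv.reader(io.StringIO(text), skipinitialspace=True))
    stats = []
    for idx, util, used, total in rows[1:]:        # skip the header row
        stats.append({
            "index": int(idx),
            "util_pct": int(util.rstrip(" %")),    # "87 %" -> 87
            "mem_used_mib": int(used.split()[0]),  # "14336 MiB" -> 14336
            "mem_total_mib": int(total.split()[0]),
        })
    return stats

stats = parse_gpu_stats(SAMPLE)
```

For anything beyond a one-off check, the DCGM exporter feeding Prometheus is the more robust path; this is only the quick-and-dirty alternative.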

AI Energy: Technology for AI Efficiencies

The International Energy Agency’s (IEA) 2024 report predicts that energy consumption from data centers, AI, and blockchain could double the sector’s global electricity consumption by 2026.

The report reveals that after globally consuming an estimated 460 terawatt-hours (TWh) in 2022, data centers’ total electricity consumption could reach more than 1,000 TWh in 2026.

Rick Bentley, the CEO of Cloudastructure, an AI surveillance and remote guarding company, and an advisor to Google at the time TensorFlow was made open source, talked to Techopedia about AI energy usage and how managing it can also help maximize resources.

“Power supply — the first part of our equation — always equals heat. Dissipating the heat is a challenge.

“Every watt a GPU consumes in a data center has to be cooled by the HVAC system. HVAC systems might consume 2 watts to cool 1 watt of heat. Those three watts required to run and cool the one watt need to be backed up by the power system in case power to the building is lost.”

That is where water cooling comes in. “When a card heats up, it has to be throttled back,” Bentley explained.

“This means you’re paying for a very expensive and powerful GPU that you can only run at maybe 50% power until it cools down again.

“Air cooling, having a big metal heat sink on the card that fans blow air across, is not as efficient as water cooling. With water cooling, you put a water block on the card and run cold water through it.

“That water is heated up by the card and then run out through hoses to a radiator somewhere to dissipate the heat. This can be outside of the data center and relieve it of the 2 watts of HVAC spent to dissipate every watt of heat.”
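Bentley's arithmetic is easy to make concrete. The sketch below assumes his worst-case ratio of 2 watts of HVAC per watt of IT load; the 700 W GPU figure and the 0.3 W/W water-cooled ratio are purely illustrative, not numbers from the interview.

```python
def facility_power(it_watts: float, cooling_watts_per_watt: float = 2.0) -> float:
    """Total power to run and cool a load: every IT watt also costs
    `cooling_watts_per_watt` of cooling (Bentley's worst-case 2:1 ratio)."""
    return it_watts * (1 + cooling_watts_per_watt)

# A single 700 W accelerator (illustrative figure) under air cooling:
total = facility_power(700)               # 2,100 W to run and cool

# Rejecting heat outside the data center via water cooling shrinks the
# cooling share; e.g. at an assumed 0.3 W of cooling per IT watt:
water_cooled = facility_power(700, 0.3)   # 910 W
```

The gap between those two totals, multiplied across thousands of GPUs and backed by redundant power, is why cooling strategy shows up directly on the infrastructure bill.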

Companies like Lenovo now offer solutions designed specifically to support NVIDIA’s architecture. The cooling tech, called Neptune, uses cutting-edge engineering to enable more efficient computing for intense AI workloads, with a focus on reducing power consumption even at high GPU loads.

Bentley said that by water-cooling AI infrastructure, resources are better managed. Additionally, Bentley said that small changes can have an impact, for example running training models on the hardware in low usage hours.

Why Are Open-Source Technologies Vital for the GenAI Transformation?

For many in the industry, there is no debate about the role open-source tech will play in the future of GenAI. Popular among developers, open-source AI technologies provide numerous benefits over closed-source solutions.

Erik Severinghaus, founder and CEO of Bloomfilter, spoke to Techopedia about the issue and why he thinks open source is where it’s at for the GenAI revolution.

“Everyone’s invited. Open-source tools level the playing field, giving everyone access to cutting-edge AI tech without breaking the bank. With open-source software, you can peek under the hood, tweak things, and see exactly what’s going on. It’s like having X-ray vision for your code.”

Severinghaus added that the open-source community is “a crew of developers collaborating, sharing ideas, and making cool stuff together”.

“With the right know-how and tools, you can unlock the full potential of your GPU and ride the wave of the GenAI revolution like a boss.”

Severinghaus said open-source helps AI teams make those GPUs work smarter, not harder. He called on developers to feed data into GPUs in batches, trim the “fat” of their AI models without sacrificing performance, and split work across multiple GPUs.
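The first of those tips, feeding data to the GPU in batches rather than one sample at a time, is a one-function idea. This is a minimal pure-Python sketch of the batching pattern itself; real pipelines would use a framework's data loader, and the names here are hypothetical.

```python
def batches(items, batch_size):
    """Yield fixed-size chunks so the GPU receives full batches instead of
    one sample at a time; the final chunk may be shorter."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

data = list(range(10))
chunks = list(batches(data, 4))   # three kernel launches instead of ten
```

Each chunk becomes one transfer and one kernel launch, so larger batches amortize the fixed per-launch overhead, up to the point where the batch no longer fits in GPU memory.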

ClearML Releases Free Tech to Split GPU Usage and Monitor AI Resources

Open-source technologies and platforms like ClearML work to democratize access to AI infrastructure and allow any developer to contribute to and benefit from advancements in deep learning and generative AI.

On March 18, ClearML — the free and popular open-source ML platform used by over 250,000 developers — announced free fractional GPU capability for open-source users, enabling multi-tenancy for all NVIDIA GPUs and new orchestration capabilities to expand control over AI infrastructure management and compute cost.

Designed for a range of AI and machine learning professionals, including AI infrastructure leaders and data scientists, the platform helps organizations manage growing GPU workloads, making better use of existing GPU hardware and improving efficiency without incurring additional hardware costs.

Moses Guttmann, CEO and co-founder of ClearML, told Techopedia:

“Our new technology allows open source users to dynamically partition a single GPU, enabling it to run multiple AI tasks simultaneously inside a secure memory-bounded container.

“This is achieved by partitioning the GPU into smaller, distinct units that can independently handle different AI workloads and GPU memory limitations.

“This method leverages NVIDIA’s time-slicing technology as well as our new memory driver container limiter, allowing for more efficient use of the GPU’s processing power.

“It is also valuable for industries where AI and machine learning are rapidly evolving and where efficient resource utilization is critical.”

Guttmann said that there are other open-source tools that can partially help manage GPU resources; for instance, Kubernetes can be used to orchestrate containerized applications and optimize GPU usage across clusters.
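The core bookkeeping behind fractional GPUs, dividing one card's memory into bounded slices that isolated tasks can claim, can be sketched simply. This is only a conceptual illustration of the partitioning idea, not ClearML's implementation, which enforces the bounds at the driver and container level; all names are hypothetical.

```python
def fractional_slices(total_mem_gib: float, fractions: list) -> list:
    """Split one GPU's memory budget into fractional, memory-bounded slices,
    the accounting that fractional-GPU tooling automates and enforces."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must cover the whole GPU"
    return [round(total_mem_gib * f, 2) for f in fractions]

# A 24 GiB card shared by one training task and two inference tasks:
slices = fractional_slices(24.0, [0.5, 0.25, 0.25])
```

Each slice would then be handed to a container as its hard memory ceiling, so a misbehaving task can exhaust its own slice without starving its neighbors.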

Balancing Standards for Speed, Energy, Memory, and Accuracy

In the world of machine learning, where computers learn from vast amounts of data, the way numbers are stored and processed plays a crucial role. Three key standards, FP32, FP16, and INT8, represent different levels of precision.

FP32, also known as single-precision floating-point, is the most common format. It offers a high level of detail, ensuring calculations are accurate. However, this precision comes at a cost. FP32 requires more memory and processing power, leading to slower computations and higher energy consumption.

FP16 (half-precision floating-point) and INT8 (8-bit integer) are designed to strike a balance between performance and resource usage.

FP16 uses half the memory of FP32, allowing for faster computations and reduced energy needs.

However, this comes with a slight trade-off in accuracy that, in most applications, is negligible.

INT8 takes this efficiency even further. By using only 8 bits to represent numbers, it offers the fastest processing speed and lowest memory footprint. However, INT8 sacrifices the most accuracy, making it suitable for tasks where high precision is less critical.
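The trade-off between the three formats can be seen without any GPU at all. The sketch below checks the per-value storage cost (Python's `struct` format `'e'` is IEEE half precision) and then applies textbook symmetric INT8 quantization, mapping values onto the integer range [-127, 127] and measuring the round-trip error; the weight values are made up for illustration.

```python
import struct

# Storage cost per value: FP32 is 4 bytes, FP16 ('e', IEEE half) is 2.
assert struct.calcsize('f') == 4 and struct.calcsize('e') == 2

def quantize_int8(xs):
    """Symmetric INT8 quantization: map [-max|x|, max|x|] onto [-127, 127]."""
    scale = max(abs(x) for x in xs) / 127
    return [round(x / scale) for x in xs], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]      # illustrative normalized weights
q, scale = quantize_int8(weights)             # 1 byte per value instead of 4
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Round-trip error is bounded by scale/2, small relative to the weights.
```

Because deep-learning values are typically normalized into a narrow range around zero (as Bentley notes below), this bounded error is usually negligible, which is exactly why INT8 inference works as well as it does.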

Bentley spoke about how these standards can be managed in innovative ways.

“One of the other smart ways to enhance efficiency is by utilizing FP16 or INT8 precision for computations instead of the more commonly used FP32 precision.

“In deep learning, inputs and intermediate values are often normalized or standardized to have values that fall within a specific range, typically around 0, with small standard deviations. So we don’t need large numbers to store them.

“Also, by using lower precision, the amount of data that needs to be processed and stored at any one time is reduced, which can also lead to less memory bandwidth usage and potentially lower memory requirements.”

Bentley added that modern GPUs and specialized hardware accelerators are increasingly designed to support these lower precision formats efficiently, with hardware units specifically optimized for FP16 and INT8 operations.

“This means that not only does the switch to lower precision save resources but it can also be fully leveraged by the hardware to achieve even greater performance improvements.”

The Bottom Line

While tools mentioned by Severinghaus, such as the NVIDIA CUDA Toolkit, TensorFlow and PyTorch, Kubernetes with GPU support, and RAPIDS, are well known in the community for their GPU management capabilities and for streamlining AI efficiency, new open-source tools will continue to emerge. As Kaczmarek told us:

“Open source promotes collaboration, lowers entry barriers, and fosters innovation by making tools and frameworks readily available to a wider developer community. It also aligns with principles of transparency and ethical AI development.”

More people working on the same problem and sharing their results will bring us solutions much quicker.

Ray Fernandez
Senior Technology Journalist

Ray is an independent journalist with 15 years of experience, focusing on the intersection of technology with various aspects of life and society. He joined Techopedia in 2023 after publishing in numerous media, including Microsoft, TechRepublic, Moonlock, Hackermoon, VentureBeat, Entrepreneur, and ServerWatch. He holds a degree in Journalism from Oxford Distance Learning, and two specializations from FUNIBER in Environmental Science and Oceanography. When Ray is not working, you can find him making music, playing sports, and traveling with his wife and three kids.