In-Memory Computing

Definition - What does In-Memory Computing mean?

In-memory computing is the storage of information in the main random access memory (RAM) of dedicated servers rather than in relational databases operating on comparatively slow disk drives. In-memory computing helps business customers, including retailers, banks and utilities, to quickly detect patterns, analyze massive data volumes on the fly, and perform their operations rapidly. The drop in memory prices in the present market is a major factor contributing to the increasing popularity of in-memory computing technology, which has made it economical for a wide variety of applications.
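To illustrate the core idea, the following minimal Python sketch (a hypothetical example, not any vendor's product) shows data held entirely in RAM in a dictionary, so every lookup avoids disk I/O altogether:

```python
# Hypothetical illustration: an in-memory store keeps all records in a
# Python dict (RAM), so point lookups never touch a disk.
class InMemoryStore:
    def __init__(self):
        self._data = {}          # all records live in main memory

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

store = InMemoryStore()
for i in range(100_000):
    store.put(f"customer:{i}", {"id": i, "balance": i * 10})

# A lookup touches only memory -- no seek, no read system call.
print(store.get("customer:42"))
```

A real in-memory platform adds persistence, replication, and query capabilities on top of this basic principle, but the performance advantage comes from the same place: RAM access is orders of magnitude faster than a disk seek.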

Techopedia explains In-Memory Computing

Many technology companies are making use of this technology. For example, the in-memory computing technology developed by SAP, called HANA (High-Performance Analytic Appliance), uses sophisticated data compression techniques to store data in random access memory. HANA's performance is claimed to be up to 10,000 times faster than standard disk-based systems, which allows companies to analyze data in a matter of seconds instead of hours.
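One compression technique common in in-memory column stores is dictionary encoding. The sketch below is purely illustrative (it is not SAP's actual implementation): repeated string values in a column are replaced by small integer codes, so a column with millions of rows and few distinct values fits in far less RAM:

```python
# Illustrative sketch of dictionary encoding, a compression technique
# widely used in in-memory column stores (not any vendor's actual code).
def dictionary_encode(column):
    dictionary = sorted(set(column))                 # distinct values
    code = {v: i for i, v in enumerate(dictionary)}  # value -> integer code
    encoded = [code[v] for v in column]              # small ints, not strings
    return dictionary, encoded

def decode(dictionary, encoded):
    return [dictionary[i] for i in encoded]

column = ["Berlin", "Paris", "Berlin", "Berlin", "Tokyo", "Paris"]
dictionary, encoded = dictionary_encode(column)
print(dictionary)   # ['Berlin', 'Paris', 'Tokyo']
print(encoded)      # [0, 1, 0, 0, 2, 1]
assert decode(dictionary, encoded) == column
```

Because the encoded column is just an array of small integers, scans and aggregations can run over it very quickly, which is part of how in-memory analytics engines achieve their speedups.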

Some of the advantages of in-memory computing include:
  • The ability to cache vast amounts of data, ensuring extremely fast response times for searches.
  • The ability to store session data, allowing for the customization of live sessions and ensuring optimum website performance.
  • The ability to process events for improved complex event processing.
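The session-data use case above can be sketched as an in-memory session cache with a time-to-live (TTL). This is a hypothetical minimal example, not a production cache; real deployments typically use a dedicated in-memory data grid or cache server:

```python
import time

# Hypothetical sketch of an in-memory session cache with a TTL, so
# stale sessions expire instead of accumulating in RAM.
class SessionCache:
    def __init__(self, ttl_seconds=1800):
        self._ttl = ttl_seconds
        self._sessions = {}      # session_id -> (expiry_time, data)

    def set(self, session_id, data):
        self._sessions[session_id] = (time.monotonic() + self._ttl, data)

    def get(self, session_id):
        entry = self._sessions.get(session_id)
        if entry is None:
            return None
        expiry, data = entry
        if time.monotonic() > expiry:    # expired: evict and report a miss
            del self._sessions[session_id]
            return None
        return data

cache = SessionCache(ttl_seconds=1800)
cache.set("sess-123", {"user": "alice", "cart": ["book"]})
print(cache.get("sess-123"))   # {'user': 'alice', 'cart': ['book']}
```

Serving session state from RAM this way is what lets a website personalize each live session without a database round trip on every request.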