Definition - What does In-Memory Computing mean?
In-memory computing is the storage of information in the main random access memory (RAM) of dedicated servers rather than in relational databases running on comparatively slow disk drives. In-memory computing helps business customers, including retailers, banks and utilities, to quickly detect patterns, analyze massive data volumes on the fly and speed up their operations. Falling memory prices are a major factor behind the growing popularity of in-memory computing technology, which has made it economical for a wide variety of applications.
Techopedia explains In-Memory Computing
Many technology companies are making use of this technology. For example, the in-memory computing technology developed by SAP, called the High-Performance Analytic Appliance (HANA), uses sophisticated data compression techniques to store data in random access memory. SAP claims that HANA can perform up to 10,000 times faster than standard disk-based systems, which allows companies to analyze data in a matter of seconds instead of long hours.
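One widely used compression technique in columnar in-memory stores is dictionary encoding. The sketch below is purely illustrative (it is not SAP's implementation, and the function names are hypothetical): repeated values in a column are replaced with small integer codes, so the column occupies far less RAM and can be scanned faster.

```python
def dictionary_encode(column):
    """Return (codes, dictionary) for a list of column values."""
    dictionary = []   # distinct values, in first-seen order
    index = {}        # value -> integer code
    codes = []
    for value in column:
        if value not in index:
            index[value] = len(dictionary)
            dictionary.append(value)
        codes.append(index[value])
    return codes, dictionary

def dictionary_decode(codes, dictionary):
    """Reconstruct the original column from codes."""
    return [dictionary[code] for code in codes]

cities = ["Berlin", "Paris", "Berlin", "Berlin", "Paris"]
codes, dictionary = dictionary_encode(cities)
print(codes)       # [0, 1, 0, 0, 1]
print(dictionary)  # ['Berlin', 'Paris']
```

Because the codes are small integers, aggregations and filters can run over the compressed column directly, which is part of how in-memory engines keep analysis times in the range of seconds.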
Some of the advantages of in-memory computing include:
- The ability to cache large volumes of data continuously, ensuring extremely fast response times for searches.
- The ability to store session data, allowing for the customization of live sessions and ensuring optimum website performance.
- The ability to process streams of events as they arrive, improving complex event processing.
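The session-storage advantage above can be sketched with a minimal in-memory session store. This is an illustrative example, not any specific product's API; the class name, methods and time-to-live value are assumptions:

```python
import time

class SessionStore:
    """Keeps session data in RAM so lookups never touch disk."""

    def __init__(self, ttl_seconds=1800):
        self.ttl = ttl_seconds
        self._sessions = {}   # session_id -> (expires_at, data)

    def put(self, session_id, data):
        self._sessions[session_id] = (time.time() + self.ttl, data)

    def get(self, session_id):
        entry = self._sessions.get(session_id)
        if entry is None:
            return None
        expires_at, data = entry
        if time.time() > expires_at:   # expired: evict and report a miss
            del self._sessions[session_id]
            return None
        return data

store = SessionStore(ttl_seconds=60)
store.put("abc123", {"user": "alice", "cart": ["book"]})
print(store.get("abc123"))   # {'user': 'alice', 'cart': ['book']}
```

Because every lookup is a hash-table access in RAM, response times stay in the microsecond range, which is what enables the live session customization and fast search responses described above.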