SOFTWARE DEFINED MEMORY: Memory Meets Flexibility, by zsah

INTRODUCING

SOFTWARE DEFINED MEMORY (SDM)

To meet the growing memory demand from AI, Big Data, virtualisation, and high-performance computing workloads, today’s data centres rely on expensive RAM-filled servers. That memory is rarely fully utilised, resulting in memory stranding across the data centre.

Software Defined Memory (SDM) revolutionises how developers, administrators, and data centres think about memory by breaking down the physical barriers of traditional server hardware and decoupling memory from the server. The result is a massively scalable, provisionable memory resource that is just as fast as local memory, is easy to configure, requires no application code changes, and runs on cost-effective commodity servers.

For decades, various initiatives have spent billions of dollars attempting software-defined memory solutions. While these efforts to decouple CPU and memory have yielded some advancements, they still suffer from scaling and performance challenges.

zsah's solution is mature, deployed, and fully patented, providing significant ROI for customers across a broad range of industries.

FLEXIBLE, SCALABLE, SOFTWARE DEFINED MEMORY

MEMORY MEETS FLEXIBILITY

JUST AS FAST AS LOCAL MEMORY

SDM delivers deterministically low latency and, in most cases, statistically matches the performance of local memory, allowing consistent CPU saturation. Because SDM can be sized to virtually any capacity, it’s possible to run complete computations in memory.

USE SERVERS MORE EFFICIENTLY

Today’s data centres require expensive RAM-filled servers for “big memory” use cases, but that memory is rarely fully utilised. Memory and CPU are also bundled together, forcing the repurchase of the same memory just to upgrade CPUs. With SDM, individual servers need less memory, making more frequent, cheaper upgrades to next-generation CPUs possible. Server lifespans can even be extended by adding older servers to the memory pool.

REDUCE COSTS

Traditional server memory is rarely fully utilised, resulting in memory stranding across the data centre. With SDM, previously stranded memory is available for use by any server or application. Fewer servers with less memory are needed, and upgrades to next-gen CPUs are possible without repurchasing the same memory. As a result, all associated operating costs, such as power, cooling, software, and personnel, are reduced (ROI in the 50-80% range is common).

SCALE MEMORY BEYOND SERVER LIMITS

With SDM, memory capacity is no longer limited to local physical DIMM slots, and users can seamlessly consume memory up to CPU-addressable limits. For example, allocate 2.25 TiB of memory to any of 200 servers for nightly jobs, create a virtual machine with more memory than the physical hypervisor, or allocate 40 TiB to a single server for an hour of burst ingest. When jobs complete, memory returns to the pool, available for allocation to new servers and jobs.
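
To make this pool-and-return model concrete, here is a minimal Python sketch of a shared allocator; the class, method names, and capacities are purely illustrative and are not part of any SDM API:

    # Illustrative sketch of the pool-and-return model described above; the
    # class and method names are hypothetical, not part of any SDM API.

    class MemoryPool:
        """Tracks a shared pool of memory, allocated in GiB, across servers."""

        def __init__(self, capacity_gib):
            self.capacity_gib = capacity_gib
            self.allocations = {}  # server name -> GiB currently allocated

        def allocate(self, server, size_gib):
            free = self.capacity_gib - sum(self.allocations.values())
            if size_gib > free:
                raise MemoryError(f"only {free} GiB free in the pool")
            self.allocations[server] = self.allocations.get(server, 0) + size_gib

        def release(self, server):
            """Return a server's allocation to the pool when its job completes."""
            self.allocations.pop(server, None)

    # Example: a 100 TiB pool, a nightly 2.25 TiB job, and a 40 TiB burst ingest.
    pool = MemoryPool(capacity_gib=100 * 1024)
    pool.allocate("batch-node-017", size_gib=int(2.25 * 1024))
    pool.allocate("ingest-node-01", size_gib=40 * 1024)
    pool.release("ingest-node-01")  # memory returns to the pool for the next job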

EASY INTEGRATION

With SDM, applications using common memory or storage methods can access and use the memory pool without any code changes. For maximum performance, applications can take advantage of APIs which bypass the kernel altogether. Virtualisation platforms can also seamlessly integrate with the memory pool, enabling massive flexibility and scale.
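
As a rough sketch of what “no code changes” means in practice, the snippet below maps a file in the ordinary way; only the path, a hypothetical pooled-memory-backed mount, is an assumption, while the calls themselves are standard Python library calls:

    # A standard memory-mapped file; nothing here is SDM-specific. The mount
    # point is a hypothetical pooled-memory-backed path used for illustration.

    import mmap
    import os

    POOLED_PATH = "/mnt/sdm_pool/scratch.bin"  # hypothetical pooled-memory mount

    size = 1 << 30  # 1 GiB working buffer
    fd = os.open(POOLED_PATH, os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, size)

    with mmap.mmap(fd, size) as buf:  # same call the application already makes
        buf[0:5] = b"hello"           # reads and writes land in pooled memory
        print(bytes(buf[0:5]))

    os.close(fd)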

RESILIENT & RECOVERABLE

SDM provides foundational memory resiliency unavailable from any other technology. Consider recovering from a physical memory failure without SDM:

  • In an on-prem server, the organisation makes a service call and faces potentially hours or more of downtime before the server recovers
  • In the cloud, the user logs back in and receives an altogether new server (a memory failure takes out CPU and memory together)

SDM introduces memory recovery, independent of the CPU, that happens in the blink of an eye. When physical memory fails, the software creates a replacement allocation in a few hundred milliseconds, requiring neither server downtime nor a new server. Once repaired, the memory automatically rejoins the pool, becoming accessible to all servers.

SECURE

SDM provides strong security against memory target penetration attacks. All memory is zeroed out prior to use or re-use. Similar to “LUN Masking” in storage, SDM provides Client Masking to enforce customer isolation and support multi-tenancy. Fabric partitioning is enforced, and 64-bit keys secure the host-fabric adapters.
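
The zero-before-reuse policy is easiest to see in miniature; the toy sketch below applies the same idea to an in-process buffer and is not SDM's implementation:

    # Toy illustration of zero-before-reuse; not SDM's implementation, just the
    # policy applied to an in-process buffer handed from one tenant to the next.

    def recycle_buffer(buf: bytearray) -> bytearray:
        """Zero a buffer before it is handed to a new tenant."""
        buf[:] = bytes(len(buf))
        return buf

    old = bytearray(b"previous tenant's data")
    clean = recycle_buffer(old)
    assert all(b == 0 for b in clean)  # no residual data survives re-allocation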

SDM VS THE COMPETITION

As noted earlier, decades of effort and billions of dollars have gone into decoupling CPU and memory, yet these attempts still suffer from scaling and performance challenges.

In contrast, SDM delivers a true software-defined memory solution, without the downside of current offerings.

Why not Enterprise Flash, NVMe, or Optane / 3D XPoint / Z-NAND?

These solutions address storage performance, not limited or stranded memory. SDM's external memory delivers a true software-defined memory solution that effectively decouples memory from the server. Media selection, such as Optane, remains a customer choice, unaffected by SDM.

Why not SMP or vSMP?

SMP and vSMP do not scale linearly and can require hardware and software changes. SDM's external memory scales linearly and requires no hardware or software changes. A single rack of SDM provides the highest performance density in the world, delivering consistent nanosecond-level determinism.

Why not PCIe fabrics?

PCIe fabrics extend individual buses beyond server limits, switching a bus to one target at a time. SDM's external memory uses routable fabrics such as InfiniBand, RoCE, and Omni-Path to aggregate I/O across bus boundaries, achieving virtually unlimited scale and flexibility across the data centre.

Why not in-memory databases like SAP HANA, Coherence, MemSQL, or eXtremeDB?

SDM is memory, not an in-memory database. Some in-memory databases can "shard" capacity to improve performance, but the gains do not scale linearly. SDM's external memory scales linearly, allowing in-memory databases to grow dynamically. SDM is a terrific complement to in-memory databases.

Why not in-memory caches like Memcached or Redis, or in-memory grids like GridGain or Hazelcast?

The story is the same as for in-memory databases: some may "shard" capacity to improve performance, but the gains do not scale linearly. SDM's external memory scales linearly.

Why not RAM Disks?

RAM disks convert server memory into local storage, strand memory inside servers, and do not scale linearly. In contrast, SDM's external memory delivers memory as a shareable data centre resource, solves memory stranding, and scales linearly. SDM supports storage use cases like RAM disks, but also real memory and a host of additional interfaces that RAM disks do not provide.

ENTERPRISE & CLOUD COMPUTING

SDM makes it easier to scale out infrastructure, so administrators can quickly respond to any level of demand or workload.

With SDM, standard servers achieve superior results and memory scales linearly, dynamically growing capacity on-demand.

  • Reduces per-server costs
  • Reduces the number of servers needed
  • Reduces operating costs, such as software, power, cooling, and personnel
  • ROI in the 50-80% range is common on hardware savings alone, excluding OpEx savings

MACHINE LEARNING

Machine learning (ML) attempts to bridge the gap between what some traditionally view as “human-addressable” and “computationally-addressable” problems.

In recent years, ML hardware and software technologies have:

  • Increased performance and accuracy
  • Decreased development complexity

This trend has expanded the ML audience, diversified applicability, and increased model complexity demands. Increased model complexity expands memory and compute resource needs.

SDM's patented software-defined memory offering enables on-demand memory that can respond to dynamic, changing machine learning requirements:

  • Pool Datasets — share datasets across memory pools larger than possible with physical servers
  • Reduce Bottlenecks — run any batch size, dataset size, preprocessing load, and model complexity
  • Improve System Utilisation — reduce hardware waste and improve efficiencies, for better outcomes
  • Create Adaptable Infrastructure — add memory dynamically to a node, set of nodes, or all nodes

HIGH PERFORMANCE COMPUTING

SDM software delivers deterministically low latency and, in most cases, statistically matches the performance of local memory, allowing consistent CPU saturation. Because SDM's external memory can be sized to virtually any capacity, it’s possible to run complete computations in memory, benefiting a wide range of scenarios such as:

  • Analytics
  • Databases
  • Number Crunching
  • Genomics
  • Monte Carlo
  • Big Data

VIRTUALISATION OF MEMORY

SDM enables virtual machine deployments not possible with traditional servers:

  • Create any sized virtual machine, any time
  • Create a virtual machine with larger memory than the physical hypervisor, such as a 64 GiB RAM physical server (hypervisor) hosting a 512 GiB RAM virtual machine
  • Deploy more virtual machines than previously possible, without the careful over-provisioning calculations otherwise required

SDM is suitable for virtualised GPU servers and graphics-intensive applications, from petro-technical to healthcare.

RETURN ON INVESTMENT

SDM is not just an incremental change; it is a paradigm shift, a disruptive technology that will transform the design of every data centre in the world.

Yes, that's a bold, clichéd statement, and we understand you may be sceptical. SDM benefits data centres small and large. Consider a 1,000-server environment with a 25% server reduction.

[Savings table: 1,000-server environment with a 25% server reduction; all figures in thousands ('000).]
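
As a back-of-the-envelope version of that scenario, the sketch below computes the saving; the per-server cost and the operating-cost ratio are illustrative assumptions, not vendor figures:

    # Rough arithmetic for the 1,000-server, 25%-reduction scenario above. The
    # per-server cost and OpEx ratio are assumptions chosen for illustration.

    servers_before = 1_000
    reduction = 0.25              # 25% fewer servers with SDM
    cost_per_server = 25_000      # assumed fully loaded hardware cost ($)
    opex_ratio = 0.5              # assumed annual OpEx as a share of CapEx

    servers_removed = int(servers_before * reduction)
    hardware_savings = servers_removed * cost_per_server
    annual_opex_savings = int(hardware_savings * opex_ratio)

    print(f"Servers removed:     {servers_removed}")
    print(f"Hardware savings:    ${hardware_savings:,}")
    print(f"Annual OpEx savings: ${annual_opex_savings:,}  (power, cooling, software, personnel)")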

Your next step is to request an SDM Toolkit to gain a better understanding of your environment and see the efficiencies, cost savings and benefits SDM can provide you.

EXCEPTIONAL COST SAVINGS

BUY FEWER & LESS EXPENSIVE SERVERS

Memory comprises up to 80% of server costs. By decoupling memory from the server, data centres need fewer servers with less memory and upgrades to next-gen CPUs no longer require repurchasing the same memory.

BUY LESS EXPENSIVE MEMORY

128 GB DIMMs are 4-5 times more expensive than 64 GB DIMMs. When you are not limited by the number of DIMM sockets in a single server, you can pool together as much inexpensive memory as you like. Need a petabyte server? No problem.

SDM enables you to buy less expensive DIMMs, install them into an x86 server, and, with no code changes, create a scalable memory target with a pool of memory accessible across the data centre.

With SDM you can create the largest memory server in the world using COTS hardware and software, buying the least expensive DIMMs available.
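
As a worked example of the pricing claim above, the sketch below builds 1 TiB of capacity both ways; the 64 GB DIMM price is an assumption, and the 4x multiplier is the low end of the range quoted above:

    # Worked example of the DIMM pricing claim. The 64 GB DIMM price is assumed
    # for illustration; the 4x multiplier is the low end of the "4-5 times" range.

    price_64gb = 300                 # assumed price of one 64 GB DIMM ($)
    price_128gb = 4 * price_64gb     # 128 GB DIMM at 4x the price

    capacity_gib = 1024              # build 1 TiB of memory either way

    cost_dense = (capacity_gib // 128) * price_128gb  # 128 GB DIMMs in one big server
    cost_pooled = (capacity_gib // 64) * price_64gb   # cheap 64 GB DIMMs across the pool

    print(f"1 TiB with 128 GB DIMMs: ${cost_dense:,}")   # $9,600
    print(f"1 TiB with 64 GB DIMMs:  ${cost_pooled:,}")  # $4,800, half the cost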

RE-USE EXISTING HARDWARE

Are you about to decommission servers because you need to upgrade the CPU? Don't throw them on the scrap heap just yet: SDM can convert your old servers into memory targets, because it doesn't matter if the CPU is a few generations old.

LESS EXPENSIVE UPGRADES

If you follow Moore's law, your CPU server refresh cycle could be as short as every two years. Memory performance, however, has improved at a much slower rate. Because SDM decouples memory from the server, you no longer have to replace the whole server just to increase CPU performance. Buy a less expensive server every two years and refresh your memory pool every seven years for significant savings.
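
To see why the split refresh cycle saves money, the sketch below compares the two strategies over a 14-year horizon; every cost figure is an illustrative assumption:

    # Rough comparison of refresh strategies over 14 years, using the 2-year CPU
    # and 7-year memory cycles above. All cost figures are illustrative assumptions.

    years = 14
    coupled_server_cost = 25_000   # assumed RAM-heavy server (CPU + memory bundled)
    lean_server_cost = 10_000      # assumed lean server (CPU, minimal local memory)
    memory_pool_share = 15_000     # assumed per-server share of the external pool

    coupled_total = (years // 2) * coupled_server_cost
    decoupled_total = (years // 2) * lean_server_cost + (years // 7) * memory_pool_share

    print(f"Coupled refresh (replace everything every 2 years): ${coupled_total:,}")
    print(f"Decoupled refresh (cheap servers + 7-year memory):  ${decoupled_total:,}")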

Want to learn more about SDM?

Click the button below for our contact details, or call 0044 (0)20 7060 6032 today.