Memcached is a widely used distributed caching system employed by some of the world's largest websites. Resin's new Memcached transport enables access to our powerful distributed cache at the wire protocol layer! This means Resin can function as a drop-in replacement for Memcached solutions. Resin's Memcached support is durable and fully elastic, allowing for the addition or removal of nodes as needed.
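Because the support is at the wire protocol layer, existing clients keep speaking the standard memcached text protocol. As a minimal sketch (the class and key names here are illustrative, not part of Resin; the command framing follows the public memcached protocol: `set <key> <flags> <exptime> <bytes>\r\n<data>\r\n` and `get <key>\r\n`), this is what a client puts on the wire:

```java
// Sketch of the memcached text-protocol commands a client sends,
// whether the server behind the socket is memcached or Resin.
// Pure string framing; no network connection is made here.
public class MemcachedWire {

    // Build a "set" command: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
    static String set(String key, int flags, int exptimeSecs, String value) {
        return "set " + key + " " + flags + " " + exptimeSecs + " "
                + value.length() + "\r\n" + value + "\r\n";
    }

    // Build a "get" command: get <key>\r\n
    static String get(String key) {
        return "get " + key + "\r\n";
    }

    public static void main(String[] args) {
        // "user:42" is just an example key.
        System.out.print(set("user:42", 0, 300, "hello"));
        System.out.print(get("user:42"));
    }
}
```

Since only this framing matters to the client, pointing an existing memcached client library at a Resin server's memcached port requires no code changes.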
The big advantage Resin's cache has over Memcached is that there are no RAM size limits. Data is automatically persisted and replicated between Triad hub servers. The size of the cache is limited only by the size of the hard disks on the host OS. And since it employs optimized native access with memory-mapped files, its efficiency is similar to that of OS virtual memory. If you size the RAM cache properly, it performs as well as or better than the Memcached daemon.
Version 4.0.24 will support three reference tiers: Web-tier, App-tier, and Memcached-tier. The Memcached-tier can simultaneously provide JCache services to the App-tier and support non-Java clients that use Memcached directly. This tiered approach gives you two advantages: massive scale-out of the distributed cache system, as with Memcached, and the speed of an in-process clustered cache. The in-process distributed cache streamlines access to a cache backed by RAM that is not managed by the JVM. However, it does not have the constraints of most in-process distributed cache systems, such as GC pauses and the large memory limits imposed by the JVM.
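The key idea is that one store is reachable two ways: in-process JCache-style calls from Java code, and memcached text commands from non-Java clients. A toy illustration of that dual access (these are not Resin's actual classes, just a self-contained sketch; the wire reply shape `VALUE <key> <flags> <bytes>\r\n...\r\nEND\r\n` follows the memcached protocol):

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustration only: a single shared store served through two front ends,
// mirroring how a Memcached-tier can answer both JCache calls from the
// App-tier and memcached text commands from non-Java clients.
public class DualAccessCache {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    // JCache-style in-process access for Java code.
    public void put(String key, String value) { store.put(key, value); }
    public String get(String key) { return store.get(key); }

    // Memcached-style text command handling for wire clients.
    public String handle(String line) {
        String[] parts = line.trim().split(" ");
        if ("get".equals(parts[0])) {
            String v = store.get(parts[1]);
            return v == null
                    ? "END\r\n"
                    : "VALUE " + parts[1] + " 0 " + v.length() + "\r\n"
                        + v + "\r\nEND\r\n";
        }
        return "ERROR\r\n";
    }

    public static void main(String[] args) {
        DualAccessCache cache = new DualAccessCache();
        cache.put("k", "v");                  // in-process write
        System.out.print(cache.handle("get k")); // wire-style read of the same entry
    }
}
```

A write made through either path is immediately visible through the other, which is what lets Java and non-Java clients share one cache.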
Resin's distributed cache is easy to configure and use, and it works well with the rest of Resin's features.
More details will follow in the coming weeks on data sharding, replication, and shard clustering to improve the reliability of the cache layer while maintaining cloud elasticity.