Techopedia Explains Distributed Cache
Distributed cache is widely used in cloud computing systems and virtualized environments because it provides great scalability and fault tolerance. A distributed cache may span multiple nodes or servers, which allows it to grow in capacity simply by adding more servers. A cache has traditionally served as a very fast method for saving and retrieving data, and as such has mostly been implemented in fast hardware located close to whatever uses it. A distributed cache, however, must often be accessed over network links rather than a hardware-level bus, which adds overhead and makes it somewhat slower than traditional hardware cache. Because of this, distributed cache is best suited to storing application data that resides in databases, as well as Web session data. It fits workloads that read far more often than they write, such as product catalogs or static images that change infrequently and are accessed by many users at the same time. It offers little benefit for data that is unique to each user and may change dynamically; such data is served better by a local cache.
Although not as fast as traditional local cache, distributed caching has become practical because main memory is now very cheap and network cards and networks in general have become very fast.
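The read-heavy usage described above is commonly implemented with a cache-aside pattern: check the cache first and fall back to the database only on a miss. The sketch below illustrates the idea; a real deployment would use a distributed cache client library, but here a plain dictionary stands in for the remote cache, and `load_product` is a hypothetical stand-in for a slow database read.

```python
# Minimal cache-aside sketch. A plain dict stands in for a distributed
# cache node; `load_product` simulates a database read. All names here
# are illustrative assumptions, not part of any specific product.

cache = {}          # stand-in for a distributed cache node
db_reads = 0        # counts simulated database hits

def load_product(product_id):
    """Simulate fetching a product record from the backing database."""
    global db_reads
    db_reads += 1
    return {"id": product_id, "name": f"Product {product_id}"}

def get_product(product_id):
    """Cache-aside read: check the cache first, fall back to the database."""
    key = f"product:{product_id}"
    value = cache.get(key)
    if value is None:               # cache miss -> read through to the DB
        value = load_product(product_id)
        cache[key] = value          # populate the cache for later readers
    return value

# Read-heavy access pattern: many reads of the same catalog entry.
for _ in range(5):
    item = get_product(42)

print(db_reads)  # only the first read reaches the database
```

Because product-catalog data changes rarely, five reads of the same item cost only one database hit; the remaining four are served from the cache, which is exactly the trade-off the article describes.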