Caching *extremely* large blobs

Topics: Caching Application Block
Jul 13, 2011 at 5:17 PM

Hello,

We've been using the Enterprise Library for quite some time now, and so far it's been great; up until now we haven't had a single problem with it.

Background:

Without getting into too much detail: we've got a storage medium containing roughly 10-20 million blobs ranging in size from 1 KB to 500 MB. Since these blobs are frequently requested, we cache each one, on the server that requested it, for a predetermined period of time. We originally started with the IsolatedStorage backing store, but because its files were placed on a limited partition on the server, we ended up writing our own backing store that lets the blobs be stored in a location of our choosing. Everything had been working beautifully until we noticed an out-of-memory error being logged on the servers with increasing frequency.
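For reference, our custom store is shaped roughly like the sketch below. This is a from-memory sketch against EntLib's IBackingStore contract rather than our production code; serialization, key escaping, and expiration scavenging are all elided, so double-check the interface against your version of the library:

```csharp
using System;
using System.Collections;
using System.IO;
using Microsoft.Practices.EnterpriseLibrary.Caching;

public class FileSystemBackingStore : IBackingStore
{
    private readonly string rootPath; // a location of our choosing, e.g. a large data partition

    public FileSystemBackingStore(string rootPath)
    {
        this.rootPath = rootPath;
        Directory.CreateDirectory(rootPath);
    }

    public int Count
    {
        get { return Directory.GetFiles(rootPath).Length; }
    }

    public void Add(CacheItem newCacheItem)
    {
        File.WriteAllBytes(PathFor(newCacheItem.Key), Serialize(newCacheItem));
    }

    public void Remove(string key)
    {
        File.Delete(PathFor(key));
    }

    public void UpdateLastAccessedTime(string key, DateTime timestamp)
    {
        File.SetLastAccessTime(PathFor(key), timestamp);
    }

    public void Flush()
    {
        foreach (string file in Directory.GetFiles(rootPath))
            File.Delete(file);
    }

    // Called when the cache spins up. Note that the Cache class takes
    // everything Load() returns and keeps it in memory from then on,
    // which is exactly where our out-of-memory trouble comes from.
    public Hashtable Load()
    {
        var items = new Hashtable();
        foreach (string file in Directory.GetFiles(rootPath))
        {
            CacheItem item = Deserialize(File.ReadAllBytes(file));
            items.Add(item.Key, item);
        }
        return items;
    }

    public void Dispose() { }

    private string PathFor(string key)
    {
        return Path.Combine(rootPath, key); // real code must escape or hash the key
    }

    private byte[] Serialize(CacheItem item) { throw new NotImplementedException(); }
    private CacheItem Deserialize(byte[] data) { throw new NotImplementedException(); }
}
```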

Keep in mind we haven't had any problems with the backing store itself; the problem seems to be that the underlying Cache class keeps the cached value in memory in addition to writing it to the backing store. I originally assumed the Cache class would simply ask the backing store for whatever it holds, whether that data lives in memory (with the null backing store), in the cloud, or wherever a custom backing store puts it. After looking at the code, however, it stores every item inside itself in memory as well as in the backing store.

My question: is there an easy way to change the Cache class to either not keep values in memory or to use a custom implementation, without having to rewrite most of the caching block?

If we're going to end up rewriting most of the caching block to accomplish this, we'll probably just remove the dependency and write a custom caching provider. Has anyone on the entlib team considered reworking the Cache class so it doesn't hold the data itself and instead always requests it from the backing store? Since the null backing store is what you configure for in-memory caching anyway, moving the in-memory caching logic into the NullBackingStore class would remove the Cache class's dependency on memory. Granted, there would be a performance hit when the data lives on the file system, but that's still faster than retrieving the blob across the network again.

Thanks for all your hard efforts, they're definitely appreciated!

Jeff

Jul 15, 2011 at 8:28 AM

Hi,

Unfortunately, the only way to achieve this is to create your own implementation of CacheManager. According to the "Replacing the Default Cache Manager" topic found here:

    Change the way that the cache manager loads the in-memory cache, perhaps so that it fetches only the most recently used items from the backing store and then fetches less recently used items on demand.


 Noel Angelo Bolasoc
Global Technologies and Solutions
Avanade, Inc.
entlib.support@avanade.com


Jul 15, 2011 at 2:21 PM

Thanks for the response... you confirmed what I saw after reviewing the code. Since the Cache class isn't abstracted at all, we'd need to implement CacheManager, the Cache instance it uses, and all the factory methods on the configuration classes that create the objects it needs to function. This would be much easier if the Cache class were abstracted enough that the large majority of the code could be reused, but right now that just isn't the case. We appreciate all the effort you've put into the product, but since we'd basically be rewriting the block and using only its infrastructure to create the objects, we're better off dropping it and looking for a caching solution elsewhere.

Thanks again for your efforts and support!

Jeff

Jul 17, 2011 at 4:44 AM

The caching block is most likely going to be removed in the next version of Enterprise Library anyway, since the .NET Framework now includes System.Runtime.Caching.
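For what it's worth, a minimal sketch of the framework replacement (this assumes .NET 4 and a reference to System.Runtime.Caching.dll; the key name, expiration window, and fetch helper are just illustrative):

```csharp
using System;
using System.Runtime.Caching;

class BlobCacheExample
{
    static void Main()
    {
        // MemoryCache.Default is a process-wide cache instance.
        ObjectCache cache = MemoryCache.Default;

        byte[] blob = cache.Get("blob-42") as byte[];
        if (blob == null)
        {
            blob = FetchBlobAcrossNetwork("blob-42"); // the expensive path

            cache.Add("blob-42", blob, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
            });
        }
        // use blob...
    }

    static byte[] FetchBlobAcrossNetwork(string key)
    {
        return new byte[0]; // stand-in for the real retrieval
    }
}
```

Note that MemoryCache is still an in-memory cache, so on its own it wouldn't solve the huge-blob problem described earlier in this thread; you'd still need your own disk-backed tier underneath it.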