I’ve written many times about various Apps Script caching techniques such as how to deal with size limits and use different back ends for more persistent caching. In this article, I’ll bring together a few of those concepts into a single library, and also introduce ‘pre-caching’.
The property stores and cache stores are pretty fast, but it’s faster still if the result you want is already in memory. Pre-caching writes recently cached values to memory (as well as to a store/cache), and get methods check in-memory first before accessing the selected store. However, you probably want these in-memory copies to be fairly ephemeral, since caches and stores are often shared across script instances. You’ll also want to limit the amount of memory set aside for in-memory caching.
Compression and multiple cache entries
This library uses all the same techniques described in Squeezing more into (and getting more out of) Cache services, but it also adds pre-caching, serving recently set values from memory where it can.
You use both Cache and Property Store in the same way, but you can set a few extra parameters on Cacher instantiation.
These use the Store class
The prefix is a string that the store adds to keys in case you want to separate keys with the same name from each other in the property stores. I’ll explain the others later when I cover the PreCache class.
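To illustrate how a prefix keeps same-named keys apart, here’s a minimal sketch. The class and variable names are hypothetical, not the library’s API, and a plain Map stands in for an Apps Script property store.

```javascript
// Minimal sketch: a prefix namespaces keys inside a shared backing store.
// The backing Map stands in for an Apps Script property store.
const backing = new Map();

class PrefixedStore {
  constructor(prefix) {
    this.prefix = prefix;
  }
  set(key, value) {
    // The stored key is the prefix plus the caller's key.
    backing.set(this.prefix + key, JSON.stringify(value));
  }
  get(key) {
    const raw = backing.get(this.prefix + key);
    return raw === undefined ? null : JSON.parse(raw);
  }
}

const storeA = new PrefixedStore('appA-');
const storeB = new PrefixedStore('appB-');
storeA.set('config', { mode: 'test' });
storeB.set('config', { mode: 'live' });
// Same key name, different prefixes - no collision.
console.log(storeA.get('config').mode); // 'test'
console.log(storeB.get('config').mode); // 'live'
```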
Instantiate Cacher stores like this
Cacher store options
As with Property Stores, evictAfter and maxLength apply to PreCache, but there are some specific parameters to control caching behavior.
- expiry – how many seconds to keep items in Cache before expiring them
- prefix – an optional key addition to separate from other items sharing the same cache
- stale – whether to operate stale detection
- staleKey – the key to use if stale detection is turned on
- log – useful for debugging
- reCache – if true, the expiry clock restarts for the fetched item on each access
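To show how these options might fit together, here’s a sketch of an instantiation with a few of them set. The class, its defaults, and its internals are assumptions for illustration (a Map stands in for CacheService), not the library’s real implementation; only the option names come from the list above.

```javascript
// Sketch of how the Cacher options might be consumed (illustrative
// names and defaults, not the library's real API).
class CacherSketch {
  constructor(options = {}) {
    // Default values here are assumptions for this sketch.
    this.options = Object.assign({
      expiry: 360,      // seconds before an entry expires
      prefix: 'cacher', // key namespace
      stale: false,     // stale detection off by default
      staleKey: null,
      log: false,
      reCache: false    // restart the expiry clock on each get
    }, options);
    this.map = new Map(); // stands in for CacheService
  }
  set(key, value) {
    this.map.set(this.options.prefix + key, {
      value,
      expiresAt: Date.now() + this.options.expiry * 1000
    });
  }
  get(key) {
    const entry = this.map.get(this.options.prefix + key);
    if (!entry || entry.expiresAt < Date.now()) return null;
    if (this.options.reCache) {
      // reCache: each access pushes the expiry out again.
      entry.expiresAt = Date.now() + this.options.expiry * 1000;
    }
    if (this.options.log) console.log('hit:', key);
    return entry.value;
  }
}

const cacher = new CacherSketch({ expiry: 60, prefix: 'demo', reCache: true });
cacher.set('greeting', 'hello');
console.log(cacher.get('greeting')); // 'hello'
```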
Staleness of data is a challenge for all types of caching. We need a more flexible handling method than just expiration times. This library provides a way to invalidate all or some of the cache entries.
Here’s how it works.
Cache key digests
When you write data to cache, you provide a key and other options to retrieve it by. When you instantiated the Cacher, you provided a prefix (or used the default one), which enables segregation of different kinds of data. Behind the scenes, the cacher uses a digest of your key and options plus the prefix. Using a different Cacher instance with a different prefix segregates items that could share the same key.
Stale and stale key
In addition to prefixes and keys, you can enable stale detection and provide a stale key.
How does stale detection work?
When stale is turned on, the cache key digest now includes whatever value is recorded against the staleKey in cache. The Cacher automatically maintains the value of staleKey. When you set a value to cache, it’s possible you want to invalidate other dependent entries, like this.
In this example, we made any cache entries to do with employees stale, because now we are dealing with another company. Since the key digest includes the latest value of staleKey when stale is enabled, the cacher will ignore any out of date, earlier entries.
Of course you could enhance the key in a single cache to reference both the company and the employee too. However, this staleness feature is handy when it’s too complex to figure out dependencies between different cache entries.
What can you write to these stores
You can set any primitive (eg boolean, number, string etc), or anything that’s stringifiable (object, array etc). There’s no need to convert it to a string, as the Store and the Cacher will do that and return it to its original state on retrieval.
Format in the property store
A property store entry is a JSON envelope around your value. If you do want to access the property store directly (without using this library), you’ll need to JSON parse the entry and take the .ob property of the result. However, using store.get() takes care of all of that, and you’ll get back whatever you sent over with store.set(key, value).
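As a sketch of that direct access (the envelope shown here is an assumption; the text above only guarantees your value sits under the .ob property):

```javascript
// Sketch: reading a raw property store entry without the library.
// The exact envelope fields are assumptions; the only documented
// guarantee is that your value sits under the .ob property.
const rawEntry = JSON.stringify({ ob: { name: 'widget', qty: 3 } });

// Direct access: JSON parse, then take .ob.
const value = JSON.parse(rawEntry).ob;
console.log(value.qty); // 3
// store.get(key) would do this unwrapping for you.
```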
Format in the Cacher store
Just like the Property Store, the cacher will return the value in its original state. However, items are only accessible via the library with the cacher.get() method because
- The key is a digest of your key and options, the prefix and possibly a stale value using a method private to the library. You won’t be able to find it.
- The data is compressed.
- If it’s larger than 100k (the cache property service limit), it’s spread over several cache entries and reconstituted when you do a cacher.get()
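The chunking idea in the last point can be sketched like this. The header format and function names are illustrative, not the library’s internal layout, and a Map stands in for CacheService (whose per-entry limit is about 100KB).

```javascript
// Sketch of spreading an oversized value across multiple cache entries
// (illustrative, not the library's internal format).
const CHUNK_SIZE = 100 * 1024; // approx CacheService per-entry limit
const cache = new Map();       // stands in for CacheService

function setBig(key, str) {
  const chunks = Math.ceil(str.length / CHUNK_SIZE) || 1;
  // A small header entry records how many chunks to reassemble.
  cache.set(key, JSON.stringify({ chunks }));
  for (let i = 0; i < chunks; i++) {
    cache.set(`${key}-${i}`, str.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE));
  }
}

function getBig(key) {
  const header = cache.get(key);
  if (!header) return null;
  const { chunks } = JSON.parse(header);
  // Reconstitute the original string from its chunks.
  let out = '';
  for (let i = 0; i < chunks; i++) out += cache.get(`${key}-${i}`);
  return out;
}

const big = 'x'.repeat(250 * 1024); // ~250KB, needs 3 chunks
setBig('payload', big);
console.log(getBig('payload').length); // 256000
```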
The PreCacher is some extra sugar on the Cacher and Store classes. It keeps an in-memory copy of whatever it writes via the .set() methods. It also respects deletion and staleness.
Precached items are kept in memory for the period of milliseconds you specify in evictAfter. When you .get() from a store, it first checks whether the value is in memory, and only then goes to the respective Cache or Property Store. In-memory access is a little faster than going out to the stores, so this can give a small performance boost.
You can control the memory usage with the maxLength parameter. This sets the maximum total length of all the entries in the in-memory store at any one time. If you attempt to add a value that would cause the store to exceed this, it first evicts expired and older entries. Setting maxLength to 0 turns off PreCaching.
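The eviction behavior described above can be sketched like this. The class and its internals are assumptions for illustration, not the library’s implementation: entries expire after evictAfter milliseconds, total size is capped at maxLength, and the oldest entries go first when space is needed.

```javascript
// Sketch of a PreCache-style in-memory layer (names are illustrative).
class PreCacheSketch {
  constructor({ evictAfter = 10000, maxLength = 100000 } = {}) {
    this.evictAfter = evictAfter; // milliseconds to keep an entry
    this.maxLength = maxLength;   // cap on total stored characters
    this.map = new Map();         // Map preserves insertion order - oldest first
    this.size = 0;
  }
  set(key, str) {
    if (this.maxLength === 0) return; // maxLength 0 turns pre-caching off
    this.delete(key);
    // Evict expired entries, then oldest, until the new value fits.
    for (const [k, e] of this.map) {
      if (this.size + str.length <= this.maxLength &&
          e.expiresAt > Date.now()) break;
      this.delete(k);
    }
    this.map.set(key, { str, expiresAt: Date.now() + this.evictAfter });
    this.size += str.length;
  }
  get(key) {
    const e = this.map.get(key);
    if (!e) return null;
    if (e.expiresAt <= Date.now()) { this.delete(key); return null; }
    return e.str;
  }
  delete(key) {
    const e = this.map.get(key);
    if (e) { this.size -= e.str.length; this.map.delete(key); }
  }
}

const pre = new PreCacheSketch({ evictAfter: 60000, maxLength: 10 });
pre.set('a', '12345');
pre.set('b', '67890');  // fills the cache to maxLength
pre.set('c', '11111');  // forces eviction of the oldest entry 'a'
console.log(pre.get('a')); // null - evicted
console.log(pre.get('c')); // '11111'
```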
I recommend you create and use an Exports script to organize your libraries and internal modules. That way you don’t need to worry about the order of script loading, nor declaring a particular library. Here’s how I use bmCacher from my testing script.
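The shape of such an Exports script can be sketched as below. The property names and guard helper are illustrative (adapt them to your own libraries); the point is that lazy getters resolve dependencies at first use, so file load order and library declaration order stop mattering.

```javascript
// Sketch of an Exports pattern (illustrative, not the article's exact
// script): a single object resolves named dependencies lazily.
const Exports = {
  // Lazy getter: nothing is touched until first use, so it doesn't
  // matter which script file loads first.
  get Cacher() {
    return this.guard(
      globalThis.bmCacher && globalThis.bmCacher.Cacher,
      'bmCacher.Cacher'
    );
  },
  // Fail loudly with a useful message if a dependency is missing.
  guard(item, name) {
    if (!item) {
      throw new Error(`${name} is not available - check the library is loaded`);
    }
    return item;
  }
};

// A test script would then write something like:
//   const cacher = new Exports.Cacher({ expiry: 60 });
try {
  Exports.Cacher; // bmCacher isn't loaded in this standalone sketch
} catch (err) {
  console.log(err.message); // explains exactly what's missing
}
```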