There are a few articles on this site about getting more into (and out of) cache and property services. Now my cUseful library contains the ability to create caching plugins that take advantage of some of those techniques, but using a variety of backend stores.

Some of the limitations of the cache and property stores are –

  • Sharing across projects is not supported (but can be hacked around with the use of libraries)
  • Limited payload sizes, and cache lifetimes
  • Property service rate limiting
  • Key inflexibility
  • Data limited to strings

This technique helps to get around these limitations by –

  • Using plugins to support different kinds of stores – caches, databases, files, any kind of key/value store – and to share data across projects
  • Automatic compression, and spreading data across multiple cached items in stores that have a small maximum item size
  • Built-in exponential backoff
  • Keys can be automatically constructed from objects.
  • Supporting blobs in property stores

You’ll need the cUseful library


Built-in plug-ins

The CacheService and PropertyService are supported through plug-ins available from the cUseful library. Here’s how to use each of them. The Drive service is also supported, and is written up in Google Drive as cache.

Properties Service

Cache Service
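Initialization follows the same pattern for both. The plug-in names below are from this article, but the exact `init({store})` shape is my reading of the library and should be treated as an assumption; the runnable part of the sketch uses a stand-in plug-in so it is self-contained outside Apps Script.

```javascript
// In Apps Script, initialization would look something like this
// (the init({store}) shape is an assumption – check the cUseful library):
//
//   // Properties Service flavour
//   const pCrusher = new cUseful.CrusherPluginPropertyService()
//     .init({ store: PropertiesService.getScriptProperties() });
//
//   // Cache Service flavour
//   const cCrusher = new cUseful.CrusherPluginCacheService()
//     .init({ store: CacheService.getUserCache() });
//
// The same idea, runnable anywhere with a stand-in plug-in:
function StubCrusherPlugin() {
  let store = null;
  // init takes the store the crusher should write to
  this.init = (options) => { store = options.store; return this; };
  this.put = (key, value) => store.set(key, JSON.stringify(value));
  this.get = (key) => (store.has(key) ? JSON.parse(store.get(key)) : null);
  this.remove = (key) => store.delete(key);
}

const crusher = new StubCrusherPlugin().init({ store: new Map() });
crusher.put('color', 'blue');
console.log(crusher.get('color')); // blue
```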


3 methods are supported – put, get and remove.


The key can be of any type – even an object.


The data can also be of any type.

Getting an item will reconstitute it to its original form. A missing item returns null.
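Putting those together – the stand-in crusher below makes the examples runnable anywhere (in Apps Script the real one comes from a cUseful plug-in). It flattens object keys by serializing them; the library constructs keys from objects automatically, so the effect is the same.

```javascript
// Stand-in crusher so the examples are self-contained; the real one
// comes from a cUseful plug-in.
const backing = new Map();
const crusher = {
  // object keys are flattened to a string before storage
  put: (key, value) => backing.set(JSON.stringify(key), JSON.stringify(value)),
  get: (key) => {
    const v = backing.get(JSON.stringify(key));
    return v === undefined ? null : JSON.parse(v);
  },
  remove: (key) => backing.delete(JSON.stringify(key))
};

// the key can be an object...
crusher.put({ id: 25, region: 'emea' }, [1, 2, 3]);
// ...and the data any type – it comes back reconstituted
console.log(crusher.get({ id: 25, region: 'emea' })); // [ 1, 2, 3 ]
// a missing item returns null
console.log(crusher.get('not-there')); // null
```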

An item can be removed like this, irrespective of type.
crusher.remove("name");

How does it work?

The main techniques are:

  • If data is over a certain size, then it will be automatically compressed and uncompressed when retrieved
  • If data is still too large for a given store’s limits (which you can set), then it will create a series of linked items, which are reconstituted when retrieved.
  • Objects are stringified and re-parsed automatically when detected
  • Dates are converted to timestamps, then back again when retrieved
  • Blobs are converted to base64, preserving their content type and name, and reconverted when retrieved
  • The store is abstracted from the crusher, so the methods are exactly the same, irrespective of which underlying store is being used.


These examples for the built-in property and cache stores show some initialization options.


At a minimum, you need to pass a store to use.

Cache store

Property store

You can set a few other options to affect the behavior, although there’s probably not much call for these in normal usage (other than plug-in testing). This example sets small chunk sizes (which would provoke spreading the data over many entries) and a very small compression threshold (normally compression will actually increase the size of anything under about 200 bytes). By default anything under 250 bytes is not compressed.


You can write your own plug-ins to support other stores such as databases, files, or even spreadsheets – essentially, anything that can be used as a key/value store.

As a simple extension, here’s how to use Google Cloud Storage as a store. It uses my GcsStore library (GcsStore overview – Google Cloud Storage and Apps Script), which has exactly the same methods available as the Cache Service – so we can simply re-use the CacheService plugin. The benefits of Cloud Storage over the Apps Script services are –

  • Much bigger items can be written in a single file
  • The lifetime of items can be short (as in a cache), permanent (as in a property store), or anywhere in between
  • You can share data across projects, or even outside of Apps Script
  • You can organize the data into folders to create any kind of scope you want, as opposed to just the user, script or document scopes of the Apps Script services

First we need a little set up, as OAuth2 is required.

You’ll need the GcsStore library (GcsStore overview – Google Cloud Storage and Apps Script) and the Goa library (OAuth2 for Apps Script in a few lines of code).


Goa setup

  • Go to the cloud console project hosting the storage bucket you’ll use for this purpose (create it if necessary), generate a service account with the storage admin role, and download the JSON credential file to Drive.
  • Create a one-off function that looks like this, substituting the fileid of the file you just downloaded.
  • Run it. You can delete the function afterwards – it’s no longer needed.
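The one-off function has roughly this shape. setGoaServiceAccountPackage here is a stand-in for the real cGoa calls (check the Goa write-up for the exact API), the PropertiesService stub just makes the sketch runnable outside Apps Script, and 'your-json-key-file-id' is a placeholder for the fileid of the credential file you downloaded.

```javascript
// Stand-in for the cGoa package-setting call – the real library builds a
// service-account package from the downloaded JSON credential file and
// stores it in a property store under a goa-specific key.
function setGoaServiceAccountPackage(propertyStore, details) {
  propertyStore.setProperty('goa_' + details.packageName, JSON.stringify(details));
}

// Stand-in for PropertiesService so the sketch runs outside Apps Script.
const props = new Map();
const PropertiesService = {
  getScriptProperties: () => ({
    setProperty: (k, v) => props.set(k, v),
    getProperty: (k) => (props.has(k) ? props.get(k) : null)
  })
};

// The one-off function – run it once from the editor, then it can go.
function oneOffSetting() {
  setGoaServiceAccountPackage(PropertiesService.getScriptProperties(), {
    packageName: 'gcs_service',                  // assumed package name
    fileId: 'your-json-key-file-id',             // placeholder fileid
    scopes: ['https://www.googleapis.com/auth/devstorage.read_write'],
    service: 'google_service'
  });
}
oneOffSetting();
```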

GcsStore setup

As previously mentioned, we can use the exact same plug-in for GcsStore as for the Apps Script CacheService, but GcsStore needs a little setup so it knows where to write things and how to do it. Modify the below with your bucket name, a folderName (which can be used as a ‘visibility scope’ for your store), and (if required) a default expiry.

That’s it – your store is ready to be passed over to the CacheService plug-in like this.
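Pulling the two steps together – configure the store, then hand it to the plug-in. The GcsStore setter names here are my recollection and should be checked against the GcsStore write-up; the stubs exist only so the sketch runs outside Apps Script, and the bucket name is a placeholder.

```javascript
// Stand-in GcsStore – in Apps Script the real one comes from the
// cGcsStore library; these setter names are assumptions.
function stubGcsStore() {
  const files = new Map();
  return {
    setBucket(name) { this.bucket = name; return this; },
    setFolderKey(folder) { this.folder = folder; return this; }, // 'visibility scope'
    setDefaultExpiry(secs) { this.expiry = secs; return this; },
    // same method names as the Cache Service, so the plug-in can be re-used
    put: (k, v) => files.set(k, v),
    get: (k) => (files.has(k) ? files.get(k) : null),
    remove: (k) => files.delete(k)
  };
}

const gcsStore = stubGcsStore()
  .setBucket('my-crusher-bucket')   // placeholder bucket name
  .setFolderKey('crusher/store')
  .setDefaultExpiry(60 * 60);

// ...then pass it over, something like:
//   const crusher = new cUseful.CrusherPluginCacheService().init({ store: gcsStore });
// Stand-in for that plug-in, wrapping the store:
const crusher = {
  put: (key, value) => gcsStore.put(key, JSON.stringify(value)),
  get: (key) => {
    const v = gcsStore.get(key);
    return v === null ? null : JSON.parse(v);
  }
};
crusher.put('test', { ok: true });
```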

All the previous examples will work, except they’ll now write to cloud store instead of the cache service.

Since the cloud store is permanent, you can go there and see what’s been written using the storage browser.

Setting lifetimes with GcsStore

The cloud storage items contain expiry information in their metadata, according to how you have written them. This means that if you try to get an item that has expired, it will return null (even though it may still be present in the store).

This is because cloud storage is meant to be permanent. However, you can set lifecycle management for the bucket, which means that items will last for a given number of days and then be automatically deleted.

If you are planning to use your storage bucket only for temporary data, then GcsStore supports managing this for you. Note though that it applies to the entire bucket, not just items written by GcsStore. When you create the store, add this to turn it on.
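I can’t confirm the exact option, so this sketch uses a hypothetical setter name just to show the shape – a store-level setting that applies bucket-wide lifecycle management.

```javascript
// Hypothetical setter name – check the GcsStore write-up for the real one.
// Lifecycle management applies to the whole bucket, not just crusher items.
const storeConfig = {
  setLifetime: function (days) { this.lifetimeDays = days; return this; }
};
storeConfig.setLifetime(1); // everything in the bucket is deleted after a day
```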

Plug-in skeleton

These are rather simple to create, with very little customization required between platforms. Here’s the plug-in for the Property Service. If you create one you’d like to share, let me know and I’ll incorporate it into the library.
function CrusherPluginPropertyService () {
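The line above opens the plug-in function. A fuller sketch of the shape it takes is below – this is my reading of the pattern, not the library’s actual code, with an in-memory stand-in for the Properties store so it runs anywhere. The point is that the store-specific code is confined to three small functions; swap those to target another store.

```javascript
// A sketch of a crusher plug-in: only write/read/del touch the store.
function CrusherPluginPropertyService() {
  const self = this;
  let store = null;

  // init takes the store plus any tuning options
  self.init = function (options) {
    store = options.store;
    return self;
  };

  // the only store-specific code – swap these three for another store
  function write(key, value) { store.setProperty(key, value); }
  function read(key) { return store.getProperty(key); }
  function del(key) { store.deleteProperty(key); }

  // public methods – the real plug-in would also compress, chunk and
  // encode types here
  self.put = function (key, value) { write(key, JSON.stringify(value)); };
  self.get = function (key) {
    const v = read(key);
    return v === null ? null : JSON.parse(v);
  };
  self.remove = function (key) { del(key); };
}

// usage with an in-memory stand-in for PropertiesService
const map = new Map();
const stubProps = {
  setProperty: (k, v) => map.set(k, v),
  getProperty: (k) => (map.has(k) ? map.get(k) : null),
  deleteProperty: (k) => map.delete(k)
};
const crusher = new CrusherPluginPropertyService().init({ store: stubProps });
crusher.put('k', { a: 1 });
```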

For more like this see Google Apps Scripts Snippets

For help and more information join our community, follow the blog or follow me on Twitter.