Motivation

This library used to be part of my cUseful collection, but I’ve decided to pull it out into a library in its own right. The idea is not only to squeeze more into cache by compression, but also to spread data across multiple cache entries. In addition, through the use of plugins, it allows multiple back-end cache stores, all accessed the same way, with the option of creating additional ones. This abstraction allows you to switch platforms as you outgrow them without any changes to your main code.

The library includes plugins for caching to CacheService, PropertyService and Drive, and in this series of posts I’ll also show you examples of creating plugins to support Google Cloud Storage, OneDrive, Firebase and GitHub as back ends. Sadly we can’t access Redis directly from Apps Script. If you create any plugins, let me know and we can write them up in these articles.

Update

Since writing this post, all these plugins are now built into the bmCrusher library, so they can all be accessed in the same way as the CacheService built-ins. In addition I’ve implemented Node versions of these so that data can be cached and shared between Apps Script and Node.

The library now has a selection of built-in supported plugins,

some of which are specific to the Apps Script platform

  • CacheService
  • PropertyService

and others which are generic and are (or will be) implemented for Node as well

  • Upstash (Redis/GraphQL)
  • GitHub
  • Drive
  • Google Cloud Storage
  • OneDrive

How to use

Let’s start by looking at how you use the library from your Apps Script app. This example uses the Property Service as the store.

  const crusher = new bmCrusher.CrusherPluginPropertyService().init ({
    store:PropertiesService.getScriptProperties()
  })
Initialize the crusher

The library is dependency free – so you pass whichever Property store you’d like it to use.

Writing

Irrespective of the platform, all writing is done with a put method.

crusher.put(key, data[,expiry])
write some data

put takes 3 arguments

  • key – a string with some key to store this data against
  • data – the data to write. Objects are automatically detected and converted to and from strings, so there’s no need to stringify anything.
  • expiry – optionally provide a number of seconds after which the data should expire. If missing, the default expiration for the store will be used if it supports automatic expiration. More on how expiry works later.
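To make that transparent conversion concrete, here’s a minimal pure-JavaScript sketch of the idea. The in-memory Map and the names toyStore and toyCrusher are illustrative only, not bmCrusher’s actual internals (which also compress the data and split it across records):

```javascript
// Illustrative sketch only: a toy "crusher" backed by an in-memory Map,
// showing why the caller never needs to stringify or parse anything.
const toyStore = new Map();

const toyCrusher = {
  put(key, data) {
    // detect objects and stringify them, remembering which ones we did
    const isObject = typeof data === 'object' && data !== null;
    toyStore.set(key, {
      value: isObject ? JSON.stringify(data) : data,
      isObject
    });
  },
  get(key) {
    const entry = toyStore.get(key);
    if (!entry) return null;
    // restore objects to their original shape; pass strings straight through
    return entry.isObject ? JSON.parse(entry.value) : entry.value;
  }
};

toyCrusher.put('prefs', { theme: 'dark', fontSize: 14 });
const prefs = toyCrusher.get('prefs'); // → { theme: 'dark', fontSize: 14 }
```

The real library layers compression, record splitting and expiry on top of this, but the put/get contract for the caller is the same.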

Reading

const data = crusher.get (key)
retrieving data

get takes 1 argument

  • key – the string you put the data against. get restores the data to its original state, so there’s no need to parse anything.

Removing

crusher.remove(key)
removing an item

Built in plugins

These are all initialized in a similar way, passing in the Apps Script Service it should use.

Property Service

  const crusher = new bmCrusher.CrusherPluginPropertyService().init ({
    store:PropertiesService.getScriptProperties()
  })
Initializing with a property service

Cache Service

  const crusher = new bmCrusher.CrusherPluginCacheService().init ({
    store:CacheService.getUserCache()
  })
Initializing with a Cache Service

Drive service

  const crusher = new bmCrusher.CrusherPluginDriveService().init ({
    store:DriveApp,
    prefix:'/crusher/store/'
  })
Initializing with the Drive service

You’ll notice that an initialization option is the folder to use as the store – you’ll probably want to keep this stuff separate from your regular Drive files. Using Drive is a lot slower than the other solutions, but it can be very useful for sharing cache between projects – especially with the ability to segregate by folder.

Expiry

Of course, not all platforms support automatic expiry, so you may need to do some housekeeping from time to time. If the store doesn’t support auto-expiry, the expiry time is used to pretend the entry no longer exists once it has expired. However, the library tries to automate that housekeeping as much as it can: if you attempt to access an expired item, say in the property store, it will delete it for you.

If you visit the Drive folder specified for holding these files, you’ll see something like this, where the filename is the key you specified in the .put() method. If you .get() any of these after they’ve expired, it’ll return null, and delete the file while it’s at it.
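The lazy-deletion idea described above can be sketched in plain JavaScript. This assumes each entry is stored alongside a timestamp and lifetime; the property names here are illustrative, not bmCrusher’s internal format:

```javascript
// Illustrative sketch: lazy expiry for a store with no auto-expiration.
// Each entry records when it was written and its lifetime in seconds.
const cache = new Map();

function put(key, value, expirySeconds) {
  cache.set(key, { value, written: Date.now(), expirySeconds });
}

// "now" is a parameter only so the behavior is easy to demonstrate
function get(key, now = Date.now()) {
  const entry = cache.get(key);
  if (!entry) return null;
  const expired = entry.expirySeconds &&
    now > entry.written + entry.expirySeconds * 1000;
  if (expired) {
    // housekeeping: pretend it never existed, and clean it up on the way out
    cache.delete(key);
    return null;
  }
  return entry.value;
}

put('session', 'abc123', 60);
get('session');                         // → 'abc123'
get('session', Date.now() + 61 * 1000); // → null, and the entry is deleted
```

The point of doing the deletion inside get is that stores without auto-expiry still get cleaned up gradually, without a separate sweep, although entries you never read again will need occasional manual housekeeping.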

Fingerprint optimization

Since it’s possible that an item will spread across multiple physical records, we want a way of avoiding rewriting (or decompressing) them if nothing has changed. Crusher keeps a fingerprint of the contents of the compressed item. When you write something and it detects that the data has the same fingerprint as what’s already stored, it doesn’t bother to rewrite the item. However, if you’ve specified an expiry time, then it will be rewritten so as to update its expiry. There’s a catch, though: if your chosen store supports its own automatic expiration (as the CacheService does), then the new expiration won’t be applied. Sometimes this behavior is what you want, but it does mean a subtle difference between stores.
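The skip logic can be sketched like this. As an assumption for illustration, JSON.stringify stands in for the fingerprint (a real implementation would use a proper hash), and the store is an in-memory Map:

```javascript
// Illustrative sketch: skip the expensive rewrite when the fingerprint of
// the incoming data matches what's already stored, unless an expiry was
// given, in which case we rewrite anyway so the expiry gets refreshed.
const digestStore = new Map();

// stand-in for a real digest of the compressed item
const fingerprint = (data) => JSON.stringify(data);

function put(key, data, expiry) {
  const digest = fingerprint(data);
  const existing = digestStore.get(key);
  if (existing && existing.digest === digest && !expiry) {
    return false; // unchanged, and no expiry to refresh: skip the rewrite
  }
  digestStore.set(key, { digest, data, expiry });
  return true; // wrote (or rewrote) the item
}

put('a', { x: 1 });       // → true  (first write)
put('a', { x: 1 });       // → false (same fingerprint, rewrite skipped)
put('a', { x: 1 }, 3600); // → true  (rewritten to refresh its expiry)
put('a', { x: 2 });       // → true  (data changed)
```

Note this sketch rewrites whenever an expiry is passed, matching the behavior described above for stores without auto-expiration; with an auto-expiring store like CacheService, that refresh would not take effect.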

You can disable this behavior altogether when you initialize the crusher.

  const crusher = new bmCrusher.CrusherPluginPropertyService().init ({
    store:PropertiesService.getScriptProperties(),
    respectDigest: false
  })
Always rewrite store even if the data has not changed

To follow

In subsequent articles I’ll show you how to create plugins for a series of backends.

Links

library id: 1nbx8f-kt1rw53qbwn4SO2nKaw9hLYl5OI3xeBgkBC7bpEdWKIPBDkVG0

Github: https://github.com/brucemcpherson/bmCrusher

Scrviz: https://scrviz.web.app/?repo=brucemcpherson/bmCrusher

3 Favourite things in one article- redis, apps script and graphQl

Caching across multiple Apps Script and Node projects using a selection of back end platforms