This is part of the writeup on Using Google Cloud Storage with Apps Script. By now you should have completed all the steps in Setting up or creating a console project, Enabling APIs and OAuth2, and Using the service account to enable access to cloud storage, and should now have a project and credentials you can use to create your storage buckets.

If you prefer to look at some examples first, see GcsStore examples.

Libraries

You should already have the cGoa library from when you set up your credentials in Using the service account to enable access to cloud storage. See also Using multiple service accounts to work across projects and lock down permissions with Apps Script and Goa.

You’ll also need the cGcsStore library, which you’ll find on GitHub or at this key:

1w0dgijlIMA_o5p63ajzcaa_LJeUMYnrrSgfOzLKHesKZJqDCzw36qorl


It’s not strictly necessary, but I usually also include the cUseful library, as it has lots of shortcuts and I may use some of them in the examples throughout this post. You’ll find it on GitHub or at this key:

1EbLSESpiGkI3PYmJqWh3-rmLkYKAtCNPi1L2YCtMgo2Ut8xMThfJ41Ex


So my final resource manifest looks like this.
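In list form, the manifest amounts to the following (the two keys are the ones quoted above; cGoa’s key comes from the earlier setup post):

```
cGoa       - see Using the service account to enable access to cloud storage
cGcsStore  - 1w0dgijlIMA_o5p63ajzcaa_LJeUMYnrrSgfOzLKHesKZJqDCzw36qorl
cUseful    - 1EbLSESpiGkI3PYmJqWh3-rmLkYKAtCNPi1L2YCtMgo2Ut8xMThfJ41Ex
```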

Setting up your data buckets

You can use the Cloud Storage browser, which you can get to via the console.

Here the browser is showing my buckets

However, GcsStore already knows how to create buckets, so unless you have some special setup in mind, you can leave the creation up to it.

Deciding on what buckets to use

It’s worth thinking about how you are going to use cloud storage with respect to Apps Script. In my case, I’m planning to set up three buckets.

  • xliberation-cache – an alternative to the CacheService
  • xliberation-store – a general-purpose place to put permanent data
  • xliberation-properties – an alternative to the Properties Service

Of course you can just lump them all into one, but if you are planning a cache-type usage, then you can set up GcsStore (or use the browser) to apply lifecycle management to clear out objects after a period of time. Lifecycle management applies at the bucket level, which is why you’d need a separate bucket.

Setting up access token and handle

You will have set up OAuth2 in Using the service account to enable access to cloud storage, or perhaps you obtained an access token some other way without cGoa. In any case, here’s the pattern for setting up any app that plans to use GcsStore.
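A minimal sketch of that pattern follows. The goa credential name ('gcs_serviceaccount') is whatever name you used when storing your credentials, and the cGcsStore constructor and setAccessToken method are assumed names — only put, get, setLifetime and cleaner are confirmed by this post, so check the rest against the library source.

```javascript
// Sketch only: cGcsStore.GcsStore and setAccessToken are assumed names
function getGcsHandle() {
  // get a goa object for the service account credentials set up earlier
  var goa = cGoa.GoaApp.createGoa('gcs_serviceaccount',
      PropertiesService.getScriptProperties()).execute();

  if (!goa.hasToken()) throw 'no token for cloud storage';

  // a store handle that will sign its requests with that token
  return new cGcsStore.GcsStore()
      .setAccessToken(goa.getToken());
}
```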

If you want to use GcsStore to create a bucket, you can use this pattern. Note that creating a bucket requires the project ID. Goa knows it (as in this pattern), but if you are not using Goa, you’ll need to provide the project ID yourself.

Finally, we just need to assign the bucket to the handle.
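Putting those two steps together, a hedged sketch might look like this — createBucket, setBucket, and the way the project ID is retrieved from goa are all assumptions, not confirmed API:

```javascript
var goa = cGoa.GoaApp.createGoa('gcs_serviceaccount',
    PropertiesService.getScriptProperties()).execute();

var gcs = new cGcsStore.GcsStore()
    .setAccessToken(goa.getToken());

// creating a bucket needs the project id -- goa knows it from the
// service account file; if you're not using goa, supply your own
gcs.createBucket('xliberation-store', goa.getProjectId());

// finally, assign the bucket to the handle
gcs.setBucket('xliberation-store');
```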

Lifetime

If you are using cache, you can set expiry times for entries, just like the CacheService. However, the Cloud Storage platform does not have this capability natively. It does have lifecycle management, where you can ask it to clean up items after a certain number of days. GcsStore respects expiry times by ignoring items that have expired, and the lifecycle mechanism (along with the cleaner method, which we’ll look at later) can be used to keep the storage clean.

By default, the bucket will not have lifecycle management enabled, which means that items written are permanent. You can change this either via the browser or with GcsStore:

gcs.setLifetime(1);

Visibility

One of the problems with the Properties and Cache services is that you can only see their values from the same script. If you want to share values across scripts, you have to do something else. Although cloud storage is actually flat, you can simulate folders, and using this technique you can make particular data visible to any sharing community simply by giving it a folder key.

For example, you could simulate the visibility of UserProperties with this,
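A sketch of the idea, assuming a hypothetical setFolderKey method (the name is not confirmed by this post) and using the active user’s email as the folder key:

```javascript
// Assumed API: setFolderKey. Keying the folder by email scopes the
// data per user, much as UserProperties used to
gcs.setFolderKey(Session.getActiveUser().getEmail());
```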

which would look like this in the browser


and any puts and gets would show up in the folder and be visible to any script using this project and the same folder key.
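For example (the key and value here are purely illustrative):

```javascript
// writes land under the current folder key
gcs.put('settings', { theme: 'dark', rows: 10 });

// and any script sharing the project and folder key can read them back
var settings = gcs.get('settings');
```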

The default folder key is “globals” as in
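Something like this, assuming the same hypothetical setFolderKey method:

```javascript
gcs.setFolderKey('globals');   // setFolderKey is an assumed method name
```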

Expiry

Objects can be expired by setting a default expiration time (in seconds) for the store

and/or by passing one in a put operation.
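A minimal sketch of both options — setDefaultExpiry and the third put argument are assumed names and signature, not confirmed by this post:

```javascript
// Assumed API: setDefaultExpiry and a per-put expiry argument (seconds)
gcs.setDefaultExpiry(60 * 60);            // expire entries after an hour by default
gcs.put('tempResult', result, 60 * 5);    // or override for this item: 5 minutes
```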

GcsStore uses object metadata to decide whether an item has expired, and if it has, gcs.get will act as if it doesn’t exist.
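Conceptually, that check is just a timestamp comparison against the object’s metadata. A plain JavaScript sketch of the idea — the field name here is illustrative, not cGcsStore’s actual metadata schema:

```javascript
// Illustrative only: shows the idea of metadata-based expiry,
// not the library's actual metadata layout
function isExpired(meta, nowMs) {
  // no expiry recorded means the item never expires
  if (!meta.expiresAt) return false;
  return nowMs > meta.expiresAt;
}

// an item written at t=0 with a 60 second expiry
var meta = { expiresAt: 60 * 1000 };
console.log(isExpired(meta, 30 * 1000));  // false - still valid
console.log(isExpired(meta, 61 * 1000));  // true - get() would treat it as missing
```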

Expiring an item does not remove it, as there is no mechanism in the Google Cloud Platform to make that happen, but you can use gcs.setLifetime as discussed earlier to automatically delete all items in a bucket after some number of days.

In addition, GcsStore has a cleaner function, invoked by gcs.cleaner(), which will remove any expired items.

If necessary, you can trigger something like the code below to run every now and again.
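Something along these lines, run on a timed trigger — the handle setup mirrors the earlier pattern and only gcs.cleaner() itself is confirmed by this post, so treat the rest as assumptions:

```javascript
// Sketch: run this on a timed trigger to physically delete expired items
function gcsCleanerJob() {
  var goa = cGoa.GoaApp.createGoa('gcs_serviceaccount',
      PropertiesService.getScriptProperties()).execute();

  var gcs = new cGcsStore.GcsStore()
      .setAccessToken(goa.getToken())
      .setBucket('xliberation-cache');

  // remove any expired items from the bucket
  gcs.cleaner();
}
```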

You’ll notice that when it’s finished, it writes a report to the store,

which looks like this

Data types

The value in gcs.put(key, value) can be a string, an object, or a blob. If it’s an object, it will be stringified on the way in and automatically parsed when read back with gcs.get(key). The content type of a blob is preserved through to cloud storage, and if a value was written as a blob, the same blob will be returned by gcs.get(), as in this example.
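A sketch of the blob round trip, assuming a gcs handle set up as earlier (Utilities.newBlob is standard Apps Script; the key and content are illustrative):

```javascript
// write a blob -- its content type travels with it
var blob = Utilities.newBlob('hello cloud storage', 'text/plain', 'hello.txt');
gcs.put('greeting', blob);

// reading it back returns a blob with the same content type
var back = gcs.get('greeting');
Logger.log(back.getContentType());    // text/plain
Logger.log(back.getDataAsString());   // hello cloud storage
```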

Compression

GcsStore can automatically compress items. If they were compressed by gcs.put(), they will be uncompressed by gcs.get().

Objects can be compressed by setting a default for the store

and/or by passing a flag in a put operation.
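A sketch of the store-wide option — setDefaultCompress is an assumed method name, not confirmed by this post:

```javascript
// Assumed API name
gcs.setDefaultCompress(true);            // zip everything written via this handle
gcs.put('bigReport', someLargeObject);   // stored zipped, unzipped again on get
```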

Compressed items are stored as zip files, so they can be downloaded as normal through the browser if required.

There are some examples of all this in GcsStore examples.


For more like this, see Google Apps Scripts snippets. Why not join our forum, follow the blog or follow me on Twitter to ensure you get updates when they are available.