In Blistering fast file streaming between Drive and Cloud Storage using Cloud Run I showed how you can use Cloud Run to hand off transfers between Drive and Cloud Storage, making them faster and more reliable. This is particularly pertinent for Apps Script, where large file transfers can cause timeouts and unexpected runtime errors. In this article I’ll show how to use those Cloud Run transfers directly from Apps Script.

Prerequisites

Before you can do this you should set up your Cloud Run proxy transfer service, as described in Blistering fast file streaming between Drive and Cloud Storage using Cloud Run, and test it out using curl. You’ll need the service accounts that you set up there to give to Apps Script, and of course the endpoint of your deployed Cloud Run service.

We’ll assume that you’ve put those service accounts somewhere on Drive so that Apps Script can get at them. If you get stuck setting up your own Cloud Run service, I may be able to give you limited access to a running service, but you’ll still need to create your own service accounts to access your own Drive and/or Cloud Storage first. Ping me at bruce@mcpher.com.

Setup

Just to make it easier to work with folders on Drive, I’m using A handier way of accessing Google Drive folders and files from Apps Scripts. If you’re using these snippets you’ll need that library (bmFolderFun; see the links at the end of this article).

Apps Script test

This test copies a bunch of random files between GCS and Drive, in both directions.

/**
 * use cloud run to transfer files between drive and cloud storage
 */
const testCopy = () => {
  // get the service accounts
  const sa = getServiceAccounts()

  // get the work package describing the files to transfer
  const work = getWork()

  // now we can fire that at cloud run to take care of the transfers
  const cloudRun = 'https://xxxxxxxxxx.run.app'
  const response = UrlFetchApp.fetch(cloudRun, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify({
      work,
      sa
    })
  })
  console.log(response.getContentText())
}
copying files test
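
In a real script you’ll probably want to inspect the HTTP status yourself rather than letting UrlFetchApp throw on failure. Here’s a minimal defensive variant of the same call, a sketch using the same placeholder endpoint:

/**
 * as testCopy, but checking the response code ourselves
 * (a sketch - endpoint placeholder as above)
 */
const testCopySafely = () => {
  const sa = getServiceAccounts()
  const work = getWork()
  const cloudRun = 'https://xxxxxxxxxx.run.app'

  // muteHttpExceptions stops UrlFetchApp throwing on non-2xx responses
  const response = UrlFetchApp.fetch(cloudRun, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify({ work, sa }),
    muteHttpExceptions: true
  })

  const code = response.getResponseCode()
  if (code !== 200) {
    throw new Error(`cloud run transfer failed: ${code} ${response.getContentText()}`)
  }
  return JSON.parse(response.getContentText())
}
a defensive variant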

Getting the service accounts

The first step is to get the content of the downloaded service account JSON files so we can pass them to Cloud Run, enabling it to access your Drive and GCS files. Since I’m planning to use both Drive and Cloud Storage, and I have a separate service account for each, I need to get and combine their content.

/**
 * we'll need to get the service accounts to access drive and gcs
 * these are on Drive - I can get them by path for convenience in case I change them
 */
const getServiceAccounts = () => {
  // force a permission dialog with this comment
  // DriveApp.getRootFolder()
  const ff = bmFolderFun.paths(DriveApp)
  const getParsedFile = (path) => {
    const file = ff.getFile(path)
    if (!file) throw new Error(`failed to get file from ${path}`)
    return JSON.parse(file.getBlob().getDataAsString())
  }
  // get the 2 service accounts & combine them
  return [{
    name: 'drive',
    content: getParsedFile("/serviceaccounts/sa/drive.json")
  }, {
    name: 'gcs',
    content: getParsedFile("/serviceaccounts/sa/gcs.json")
  }]
}
get service accounts content

That will generate the first part of the parameter definition to pass over to the Cloud Run service.
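
For reference, here’s roughly what that combined structure looks like. The content is just the parsed key file, so the exact fields come from the JSON you downloaded from the cloud console (key material elided here):

// a sketch of the sa parameter shape - values elided
const saShape = [{
  name: 'drive',
  content: {
    type: 'service_account',
    project_id: 'your-project-id',
    private_key: '-----BEGIN PRIVATE KEY-----\n...',
    client_email: 'drive-sa@your-project-id.iam.gserviceaccount.com'
    // ...plus the other fields from the downloaded key file
  }
}, {
  name: 'gcs',
  content: {
    // ...same shape, from the gcs key file
  }
}]
sa parameter shape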

Getting the work package

To give it a good test, I’m going to send over a mixture of file types and sizes in each direction. The Cloud Run service supports both ID and path specification for Drive files, so I’m going to send over both versions.

/**
 * make up a random work package
 */
const getWork = () => {

  const ff = bmFolderFun.paths(DriveApp)
  // we'll get all the files in some random drive folder
  // and copy them over to some other path on cloud storage

  const cloudBucket = 'bmcrusher-test-bucket-store'
  const cloudPath = `${cloudBucket}/dump/csvs/`
  const drivePath = "/csvs/"
  const filesToCopy = Array.from(ff.iteratorFromFolderPath(drivePath))

  // this will be a load of from/to pairs, one for each file to be copied
  const filesByIds = filesToCopy.map(f => ({
    from: f.getId(),
    to: cloudPath + f.getName()
  }))
  // let's do the same again, this time passing the drive path rather than the id
  // but copy them to somewhere else on cloud storage
  const filesByName = filesToCopy.map(f => ({
    from: drivePath + f.getName(),
    to: cloudPath + "byname/" + f.getName()
  }))

  // now copy something from storage to drive at the same time
  const images = ["/images/a.png", "/images/c.png"].map(f => ({
    from: cloudBucket + f,
    to: f
  }))

  // package all that up into a single work package
  const drive = {
    type: 'drive',
    sa: 'drive',
    subject: 'bruce@mcpher.com'
  }
  const gcs = {
    type: 'gcs',
    sa: 'gcs'
  }
  return [{
    op: 'cp',
    from: drive,
    to: gcs,
    files: filesByIds.concat(filesByName)
  }, {
    op: 'cp',
    from: gcs,
    to: drive,
    files: images
  }]
}
make a work package

This will format two sets of transfers to try out both directions. Remember that the service will do all the transfers in a work package simultaneously, so we should expect this to all run very quickly.

Note that if any of the specified folders don’t already exist, the Cloud Run service will create them.

files from Drive

Here are the files being transferred from Drive. I’m making two copies from Drive to GCS: one using the file IDs, and another using the file paths, leaving it to the Cloud Run service to figure out the IDs. The results should be the same.

files from GCS

Here are the files being transferred from GCS to Drive.

The copied files

On Drive

copied to drive

On GCS

copied to gcs

The log file

We can check progress in the Cloud Console, under Cloud Run / Logs.

cloud run log file

The total run time was about 12 seconds to transfer about 250MB across 10 files. Not bad!
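
That works out at around 20MB/s of aggregate throughput. Looking at the individual timings in the response below, each ~30MB file took 8–9 seconds (roughly 3.5MB/s per stream), but because the transfers run in parallel the whole batch completes in little more than the time of the slowest one.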

The response

The service returns a summary of what was transferred and how long each transfer took, so if you’ve copied files from GCS to Drive you can pick up the generated fileIds from here.

[
  [{
    "size": 30606016,
    "took": 8123,
    "to": {
      "pathName": "bmcrusher-test-bucket-store/dump/csvs/20210624.csv",
      "type": "gcs",
      "mimeType": "text/csv",
      "fileId": "dump/csvs/20210624.csv"
    },
    "from": {
      "pathName": "1YrODoKLFdaMc0G7c3aW7DQQxErAJNs1Y",
      "type": "drive",
      "mimeType": false,
      "fileId": "1YrODoKLFdaMc0G7c3aW7DQQxErAJNs1Y"
    }
  }, {
    "size": 33811410,
    "took": 8878,
    "to": {
      "pathName": "bmcrusher-test-bucket-store/dump/csvs/20210622.csv",
      "type": "gcs",
      "mimeType": "text/csv",
      "fileId": "dump/csvs/20210622.csv"
    },
    "from": {
      "pathName": "16tRi45LIXVNbvh3NvTKcaiDH4ik23d5H",
      "type": "drive",
      "mimeType": false,
      "fileId": "16tRi45LIXVNbvh3NvTKcaiDH4ik23d5H"
    }
  }, {
    "size": 29189802,
    "took": 8423,
    "to": {
      "pathName": "bmcrusher-test-bucket-store/dump/csvs/20210616.csv",
      "type": "gcs",
      "mimeType": "text/csv",
      "fileId": "dump/csvs/20210616.csv"
    },
    "from": {
      "pathName": "1oxWHnvxcbBanQ6lTOOqy5snn6767d_a8",
      "type": "drive",
      "mimeType": false,
      "fileId": "1oxWHnvxcbBanQ6lTOOqy5snn6767d_a8"
    }
  }, {
    "size": 33027954,
    "took": 9216,
    "to": {
      "pathName": "bmcrusher-test-bucket-store/dump/csvs/20210614.csv",
      "type": "gcs",
      "mimeType": "text/csv",
      "fileId": "dump/csvs/20210614.csv"
    },
    "from": {
      "pathName": "1LS1YWlN0MtLqgo95iBwmO7gk0G8vX7M5",
      "type": "drive",
      "mimeType": false,
      "fileId": "1LS1YWlN0MtLqgo95iBwmO7gk0G8vX7M5"
    }
  }, {
    "size": 30606016,
    "took": 9021,
    "to": {
      "pathName": "bmcrusher-test-bucket-store/dump/csvs/byname/20210624.csv",
      "type": "gcs",
      "mimeType": "text/csv",
      "fileId": "dump/csvs/byname/20210624.csv"
    },
    "from": {
      "pathName": "/csvs/20210624.csv",
      "type": "drive",
      "mimeType": "text/csv",
      "fileId": "1YrODoKLFdaMc0G7c3aW7DQQxErAJNs1Y"
    }
  }, {
    "size": 33811410,
    "took": 8620,
    "to": {
      "pathName": "bmcrusher-test-bucket-store/dump/csvs/byname/20210622.csv",
      "type": "gcs",
      "mimeType": "text/csv",
      "fileId": "dump/csvs/byname/20210622.csv"
    },
    "from": {
      "pathName": "/csvs/20210622.csv",
      "type": "drive",
      "mimeType": "text/csv",
      "fileId": "16tRi45LIXVNbvh3NvTKcaiDH4ik23d5H"
    }
  }, {
    "size": 29189802,
    "took": 8608,
    "to": {
      "pathName": "bmcrusher-test-bucket-store/dump/csvs/byname/20210616.csv",
      "type": "gcs",
      "mimeType": "text/csv",
      "fileId": "dump/csvs/byname/20210616.csv"
    },
    "from": {
      "pathName": "/csvs/20210616.csv",
      "type": "drive",
      "mimeType": "text/csv",
      "fileId": "1oxWHnvxcbBanQ6lTOOqy5snn6767d_a8"
    }
  }, {
    "size": 33027954,
    "took": 9091,
    "to": {
      "pathName": "bmcrusher-test-bucket-store/dump/csvs/byname/20210614.csv",
      "type": "gcs",
      "mimeType": "text/csv",
      "fileId": "dump/csvs/byname/20210614.csv"
    },
    "from": {
      "pathName": "/csvs/20210614.csv",
      "type": "drive",
      "mimeType": "text/csv",
      "fileId": "1LS1YWlN0MtLqgo95iBwmO7gk0G8vX7M5"
    }
  }],
  [{
    "size": 1405036,
    "took": 2004,
    "to": {
      "pathName": "/images/a.png",
      "type": "drive",
      "mimeType": "image/png",
      "fileId": "1sssLa86RmeEzC3rEaKtHdoYRYY3oVEMN"
    },
    "from": {
      "pathName": "bmcrusher-test-bucket-store/images/a.png",
      "type": "gcs",
      "mimeType": "image/png",
      "fileId": "bmcrusher-test-bucket-store/images/a.png/1626168317604308"
    }
  }, {
    "size": 40005,
    "took": 1513,
    "to": {
      "pathName": "/images/c.png",
      "type": "drive",
      "mimeType": "image/png",
      "fileId": "17EILQgZtDodIVxmBY7a3ZmcqcAjZP2Mu"
    },
    "from": {
      "pathName": "bmcrusher-test-bucket-store/images/c.png",
      "type": "gcs",
      "mimeType": "image/png",
      "fileId": "bmcrusher-test-bucket-store/images/c.png/1626261391213214"
    }
  }]
]
service response
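
Since the response is plain JSON, picking out the generated Drive fileIds is straightforward. Here’s a small sketch, assuming the response shape shown above:

/**
 * pull the generated drive fileIds out of a transfer response
 * (a sketch - assumes the response shape shown above)
 */
const getDriveFileIds = (response) =>
  response
    .flat()
    .filter(result => result.to.type === 'drive')
    .map(result => result.to.fileId)

// with the response above, this gives
// ['1sssLa86RmeEzC3rEaKtHdoYRYY3oVEMN', '17EILQgZtDodIVxmBY7a3ZmcqcAjZP2Mu']
picking up generated fileIds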

Links

bmFolderFun

GitHub: https://github.com/brucemcpherson/bmFolderFun

IDE

library: 16NWIRmwJJY_wN4erx_QZA36_ssaB2GiKDPYebj7fjBU1SpVQlo9N_RA7

scrviz: https://scrviz.web.app?manifest=brucemcpherson%2FbmFolderFun%2Fappsscript.json

Setting up Cloud Run service

Blistering fast file streaming between Drive and Cloud Storage using Cloud Run