Like most Google services, ScriptDb has various quotas. One of these is the maximum object size. Here's how to split up large arrays to help get around that.
In this case, I'm not using the mcpher library (Using the mcpher library in your code) functions to write to ScriptDb, so this code has no dependencies, though you may have to make some adaptations.

The data

I'm assuming your data has some static properties, plus a big array. Something like this:
{static:{a:"something a",b:"something b"},bigArray:[{something:"xxx",another:"yyy"},{}…]}
The objective is to write the static data only once, but to split the array into multiple parts, each smaller than the maximum size permitted. The maximum object size is 4k, but it's hard to measure the size of an object directly, so I'll use the length of its stringified version as an estimate. You may want to tweak the limit down a bit to leave some room for estimating error.
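As a sketch of that estimating idea, here's a small helper that tests whether an object will fit in a single record by its stringified length. The names, the 4000-character figure, and the headroom factor are my assumptions, to be tuned against the real quota:

// Estimate whether an object will fit in one ScriptDb record by the
// length of its JSON string, keeping some headroom below the limit.
var MAX_OBJECT_SIZE = 4000;   // assumed limit in characters
var HEADROOM = 0.9;           // leave 10% slack, since this is only an estimate

function fitsInOneRecord(obj) {
  return JSON.stringify(obj).length < MAX_OBJECT_SIZE * HEADROOM;
}

console.log(fitsInOneRecord({a: 1}));                            // true
console.log(fitsInOneRecord({big: new Array(5000).join('x')}));  // false

Stringified length slightly understates what ScriptDb actually stores, which is why the headroom is there.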

The code

In this example, I'm generating some random data of random length to populate an array that will likely be more than the maximum object size. I read it back in again later and compare it against the original to make sure it worked. Notice that I'm using a siloId property to identify each object. Every chunk of data associated with this object will have this siloId, plus an index, so that it can be reconstructed.
The 2 functions of interest are
putBatch(db, siloId, stat, bigArray);
where db is your scriptDb, siloId is some unique key to identify this object, stat is the static part of your object, and bigArray is the array that is too big to fit. putBatch() will split up the array into multiple chunks.
getBatch(db, siloId);
getBatch will find all the records associated with siloId and reconstitute them into a single object, the same as the one that was passed to putBatch().
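To make the storage scheme concrete, here is the shape of the records this approach writes. The field names match the code below; the values themselves are made up for illustration:

// Header record: written once per silo, idx 0 marks it as the header.
var header = {siloId: "batchTest", idx: 0, static: {someObject: "somevalue"}, len: 5};

// Chunk records: idx > 0, each carrying a slice of the big array.
// The idx values increase but need not be consecutive.
var chunk1 = {siloId: "batchTest", idx: 3, bigArray: [{data: "aaa"}, {data: "bbb"}, {data: "ccc"}]};
var chunk2 = {siloId: "batchTest", idx: 6, bigArray: [{data: "ddd"}, {data: "eee"}]};

// The element counts across chunks add up to the header's len.
console.log(chunk1.bigArray.length + chunk2.bigArray.length === header.len); // true

The header's len field is what lets getBatch() verify that no chunk went missing.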
function test() {

  var db = ScriptDb.getMyDb(), siloId = "batchTest", TESTSIZE = 100, bigArray = [];
  // make a big array
  for (var i = 0; i < TESTSIZE; i++) {
    // add some random data of random size
    bigArray.push({data: arbritraryString(randBetween(100, 2000)), moreData: arbritraryString(randBetween(100, 400))});
  }
  // let's say the data item consists of some static data, plus an array, and we'll provide a siloId to find it again
  var stat = {someObject: "somevalue", another: "somethingelse"};
  putBatch(db, siloId, stat, bigArray);
  // get it back
  var results = getBatch(db, siloId);
  // check it matches the original
  for (var i = 0; i < bigArray.length; i++) {
    for (var k in bigArray[i]) {
      if (bigArray[i][k] != results.bigArray[i][k]) throw ("oops");
    }
  }
}

function putBatch (db, siloId, stat, bigArray) {
  var small = [];
  var maxSize = 4000 - siloId.length;
  // TODO some error handling
  bigArray.forEach (function (e, idx) {
    if (JSON.stringify(e).length + JSON.stringify(small).length >= maxSize) {
      if (!small.length) throw ("array element too big");
      db.save ({siloId: siloId, idx: idx, bigArray: small});
      small = [];
    }
    small.push (e);
  });
  if (small.length) db.save ({siloId: siloId, idx: bigArray.length + 1, bigArray: small});
  // now the static part, with idx 0 marking it as the header
  db.save ({siloId: siloId, idx: 0, static: stat, len: bigArray.length});
}

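The splitting logic above can be sketched on its own, separated from ScriptDb, which makes it easy to test. splitArray is my name for this extraction, not part of the original code:

// Split an array into chunks whose stringified size stays under maxSize.
// Throws if a single element is itself too big to fit in any chunk.
function splitArray(bigArray, maxSize) {
  var chunks = [], small = [];
  bigArray.forEach(function (e) {
    if (JSON.stringify(e).length + JSON.stringify(small).length >= maxSize) {
      if (!small.length) throw ("array element too big");
      chunks.push(small);
      small = [];
    }
    small.push(e);
  });
  if (small.length) chunks.push(small);
  return chunks;
}

// demo: 50 padded elements split with a 1000-character budget
var data = [];
for (var i = 0; i < 50; i++) data.push({n: i, pad: new Array(200).join('p')});
var chunks = splitArray(data, 1000);

Every chunk stringifies to no more than maxSize characters, and concatenating the chunks in order gives back the original array.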
function getBatch (db, siloId) {
  var bigArray = [];
  var result = db.query ({siloId: siloId, idx: 0});
  if (result.getSize() != 1) throw ("wrong number of header records - should be 1 but was " + result.getSize());
  var o = result.next();
  // now get the array portion
  if (o.len) {
    var arrayResult = db.query ({siloId: siloId, idx: db.greaterThan(0)}).sortBy('idx', db.ASCENDING);
    while (arrayResult.hasNext()) {
      // append this chunk's slice of the array
      bigArray = bigArray.concat(arrayResult.next().bigArray);
    }
  }
  if (bigArray.length != o.len) throw ("didn't get the right number of elements " + bigArray.length + " " + o.len);
  o.bigArray = bigArray;
  return o;
}

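The reassembly step can also be sketched without ScriptDb: given the saved records, drop the header, sort the chunks by idx, and concatenate. The record literals here are made-up test data and joinChunks is my name for the extraction:

// Reconstitute a big array from chunk records, mirroring what getBatch
// does with the ScriptDb query result: sort by idx, then concatenate.
function joinChunks(records) {
  return records
    .filter(function (r) { return r.idx > 0; })       // skip the header
    .sort(function (a, b) { return a.idx - b.idx; })  // ascending idx
    .reduce(function (all, r) { return all.concat(r.bigArray); }, []);
}

// made-up records, deliberately out of order
var records = [
  {siloId: "batchTest", idx: 6, bigArray: [{n: 4}, {n: 5}]},
  {siloId: "batchTest", idx: 0, static: {a: 1}, len: 6},
  {siloId: "batchTest", idx: 3, bigArray: [{n: 1}, {n: 2}, {n: 3}]},
  {siloId: "batchTest", idx: 9, bigArray: [{n: 6}]}
];
var big = joinChunks(records);   // [{n:1},{n:2},{n:3},{n:4},{n:5},{n:6}]

In the real code the sort is done by ScriptDb via sortBy('idx', db.ASCENDING), so getBatch only needs the concatenation.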
function arbritraryString (length) {
  var s = '';
  for (var i = 0; i < length; i++) {
    s += String.fromCharCode(randBetween (33, 125));
  }
  return s;
}

function randBetween(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}
For more like this, see Using scriptDB. Why not join our forum, follow the blog, or follow me on Twitter to ensure you get updates when they are available.