Like most Google services, ScriptDb is subject to various quotas. One of these is the maximum object size. Here’s how to split up large arrays to help get round that.
In this case, I’m not using the mcpher library (Using the mcpher library in your code) functions to write to ScriptDb, so this code has no dependencies, although you may need to make some adaptations for your own setup.

The data

I’m assuming your data has some static properties, plus a big array. Something like this:
{static:{a:"something a",b:"something b"},bigArray:[{something:"xxx",another:"yyy"},{}…]}
The objective is to write the static data only once, but to split the array into multiple parts, each less than the maximum size permitted. The maximum object size is 4k, but it’s hard to measure the size of an object directly, so I’ll use the length of its stringified version to judge its size for now. You may want to tweak the limit down a bit to leave some headroom, since this is only an estimate.
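As a rough guide, that stringified-length check could look like this (the helper names and the 3500-character working limit are my own illustration, chosen to sit a little under the 4k quota):

```javascript
// ScriptDb has no API to report an object's stored size, so the length
// of its JSON string serves as a proxy. MAX_SIZE is an assumed working
// limit below the 4k quota, leaving headroom for estimation error.
var MAX_SIZE = 3500;

function estimateSize(obj) {
  return JSON.stringify(obj).length;
}

function fitsInScriptDb(obj) {
  return estimateSize(obj) <= MAX_SIZE;
}
```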

The code

In this example, I’m generating some random data of random length to populate an array that will likely be more than the maximum object size. I read it back in again later and compare against the original to make sure it worked. Notice that I’m using a siloId property to identify each object. Every chunk of data associated with this object will have this siloId, plus an index, so that it can be reconstructed.
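Random test data along those lines could be generated like this (the property names and sizes are just for illustration):

```javascript
// Build a test object with a static part and a bigArray of random
// length, so that the whole thing will likely exceed the maximum
// object size and force a split.
function makeTestData(siloId) {
  var bigArray = [];
  var n = 50 + Math.floor(Math.random() * 100);   // random number of items
  for (var i = 0; i < n; i++) {
    bigArray.push({
      something: Math.random().toString(36).slice(2),  // random string payload
      another: Math.random().toString(36).slice(2)
    });
  }
  return {
    siloId: siloId,
    static: { a: "something a", b: "something b" },
    bigArray: bigArray
  };
}
```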
The two functions of interest are putBatch() and getBatch(). For putBatch(db, siloId, stat, bigArray): db is your ScriptDb, siloId is some unique key to identify this object, stat is the static part of your object, and bigArray is the array that is too big to fit. putBatch() will split up the array into multiple chunks.
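I don’t have the original listing to hand, so here is a sketch of how putBatch() might work, assuming the standard ScriptDb db.save() call; the recordType/data record layout and the 3500-character working limit are my own choices, not part of the original:

```javascript
// Assumed sketch of putBatch: writes the static part once, then splits
// bigArray into chunks whose stringified size stays under a working limit.
function putBatch(db, siloId, stat, bigArray) {
  var MAX_SIZE = 3500;   // assumed headroom under the 4k quota

  // write the static part once, tagged with the siloId
  db.save({ siloId: siloId, recordType: "static", data: stat });

  var chunk = [], index = 0;
  for (var i = 0; i < bigArray.length; i++) {
    chunk.push(bigArray[i]);
    // if adding this item pushed the chunk over the limit, flush without it
    // (a single oversize item is saved as-is in this sketch)
    if (JSON.stringify(chunk).length > MAX_SIZE && chunk.length > 1) {
      chunk.pop();
      db.save({ siloId: siloId, recordType: "chunk", index: index++, data: chunk });
      chunk = [bigArray[i]];
    }
  }
  if (chunk.length) {
    db.save({ siloId: siloId, recordType: "chunk", index: index++, data: chunk });
  }
  return index;   // number of chunks written
}
```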
getBatch(db, siloId) finds all the records associated with siloId and reconstitutes them into a single object, the same as the one that was passed to putBatch().
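Again as a sketch only, getBatch() could be reassembled like this, assuming ScriptDb’s db.query() call returning a result iterator with hasNext()/next(), and the same assumed recordType/data layout as above:

```javascript
// Assumed sketch of getBatch: finds all records for a siloId and
// reassembles them into { static: ..., bigArray: [...] }.
function getBatch(db, siloId) {
  var stat = null, chunks = [];
  var results = db.query({ siloId: siloId });   // ScriptDbResult-style iterator
  while (results.hasNext()) {
    var record = results.next();
    if (record.recordType === "static") {
      stat = record.data;
    } else if (record.recordType === "chunk") {
      chunks.push(record);
    }
  }
  // query order is not guaranteed, so sort chunks by their index
  chunks.sort(function (a, b) { return a.index - b.index; });
  var bigArray = [];
  chunks.forEach(function (c) { bigArray = bigArray.concat(c.data); });
  return { static: stat, bigArray: bigArray };
}
```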
For more like this, see  Using scriptDB. Why not join our forum, follow the blog or follow me on Twitter to ensure you get updates when they are available.