Update Nov 2017
UPDATE: parse.com has now been shut down and moved to parseplatform.org. I will update the content of this page and move the back-end to this platform.

Writing to parse.com was fairly complex, with all the asynchronicity and rate limiting. Writing the workbook to scriptDB should be much easier. We'll use exactly the same approach as for updating parse.com, namely: get the data, get the existing contents of the DB, update with any additions or modifications, and delete nothing.
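That approach can be sketched in plain JavaScript. The function and parameter names below are illustrative only, not part of the silo or cache classes:

```javascript
// A minimal sketch of the sync-without-delete approach, assuming plain
// arrays of objects each keyed by a unique "key" property.
function syncNoDelete(sourceRows, existingRows, writeBatch) {
  // index the current db contents by key for quick lookup
  var existing = {};
  existingRows.forEach(function (row) { existing[row.key] = row; });

  // write only rows that are new or have changed; delete nothing
  var toWrite = sourceRows.filter(function (row) {
    var old = existing[row.key];
    if (!old) return true; // an addition
    // a modification if any property differs
    return Object.keys(row).some(function (k) { return row[k] !== old[k]; });
  });
  toWrite.forEach(writeBatch);
  return toWrite.length;
}
```

Rows that already exist unchanged cost nothing, which is what makes re-running the load after a timeout cheap.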

I already have a silo class for scriptDB partitioning, which is roughly analogous to the parse Class, and caching to abstract getting data from sheets, like the Google Visualization data fetching, so the structure can be more or less the same.
Here’s the whole application – a whole lot simpler than the parse version, but it takes a long time to run. I’ll do some benchmarks of all the variations at the end of this, but as usual I hit the dreaded “execution time exceeded” error, so I have to run it a couple of times to get all the rows done.
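Because unchanged rows are skipped, simply re-running gets further each time. A more deliberate way to survive the execution-time limit is to process a chunk per run and persist where you got to. This is a sketch only: getProp/setProp stand in for whatever persistence you use (in Apps Script, something like ScriptProperties), and the names are illustrative:

```javascript
// Sketch: process at most maxRows sheet rows per invocation, remembering
// the next start row between runs via simple get/set property callbacks.
function processChunk(totalRows, maxRows, getProp, setProp, handleRow) {
  var start = Number(getProp("nextRow") || 2);        // data starts at row 2
  var end = Math.min(start + maxRows - 1, totalRows); // stop at chunk or sheet end
  for (var i = start; i <= end; i++) handleRow(i);
  // remember where to resume, or reset once the whole sheet is done
  setProp("nextRow", end < totalRows ? String(end + 1) : "2");
  return end - start + 1; // rows handled this run
}
```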
function loadToScriptDB () {
// get the color map
// duplicate the process as in https://ramblings.mcpher.com/parse-com/updating-parse-com/ for parse.com
  var maxLoad = 0, batchSize = 1000, wcount = 0;

// get the sheet data, the silo to store it in, and the current silo contents
  var data = mcpher.sheetCache("colorTable");
  var silo = mcpher.scriptDbSilo("colorSchemes", colorschemer.publicStuffDb());
  var all = silo.queryArray({}, 0);
  Logger.log (all.length + " loaded from scheme");

// find the column headers - note that row/column numbering starts at 1
  var nr = maxLoad || data.getRowCount() - 1, nc = data.getColumnCount();
  Logger.log (data.getRowCount() - 1 + " data rows loaded from workbook");
  var colHeaders = {};
  for (var i = 0; i < nc; i++) {
    colHeaders[mcpher.LCase(data.getValue(1, i + 1))] = i + 1;
  }

// now traverse the data and write it to the silo
  for (var i = 2; i <= nr; i++) {
    var key = data.getValue(i, colHeaders.name).toString();
    var ob =
      {key: key,
       scheme: data.getValue(i, colHeaders.scheme).toString(),
       label: data.getValue(i, colHeaders.label).toString(),
       code: data.getValue(i, colHeaders.code).toString(),
       hex: data.getValue(i, colHeaders.hex).toString()};

  // is it already in the db?
    var a = findInDb (all, ob);
    var dirty = true;
    if (a) {
    // has it changed?
      dirty = false;
      for (var k in ob) {
        dirty = dirty || ob[k] != a[k];
      }
    }

  // write or update? queue changed or new objects for a batch write
    if (dirty) {
      silo.save(ob);
      wcount++;
      // flush the buffer when it fills up
      if (silo.batchLength() >= batchSize) {
        silo.saveBatch();
      }
    }
  }

// flush anything left in the buffer
  if (silo.batchLength()) silo.saveBatch();
  Logger.log ("written " + wcount + " rows to db");
}

function findInDb (all, ob) {
  return mcpher.binarySearch(all, ob, 0, all.length - 1, obCompare);
}

function obCompare(a, b) {
  if (a.key < b.key)
    return -1;
  if (a.key > b.key)
    return 1;
  return 0;
}
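Note that findInDb only works because queryArray returns the silo contents sorted by key. Here is a sketch of what a comparator-driven binary search does; the real mcpher.binarySearch may differ in signature and return value:

```javascript
// Sketch of a comparator-based binary search over a sorted array,
// returning the matching element, or null when nothing matches.
function binarySearch(arr, target, low, high, compare) {
  while (low <= high) {
    var mid = Math.floor((low + high) / 2);
    var c = compare(arr[mid], target);
    if (c === 0) return arr[mid]; // found it
    if (c < 0) low = mid + 1;     // look in the upper half
    else high = mid - 1;          // look in the lower half
  }
  return null; // not in the db - this will be an addition
}
```

With several hundred existing rows, this turns each lookup into a handful of comparisons instead of a scan of the whole array.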


I am using the batching built in to the silo class for scriptDB, and buffering to the same level as in the parse version. I haven’t bothered with throttling, since it’s not required for scriptDB.
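The batch-and-flush pattern the silo class provides can be sketched like this; BatchWriter and flush are illustrative names, not the silo API:

```javascript
// Sketch of batching: buffer writes and flush them in one call once the
// buffer reaches batchSize, with a final explicit flush for the remainder.
function BatchWriter(batchSize, flush) {
  var buffer = [];
  this.save = function (ob) {
    buffer.push(ob);
    if (buffer.length >= batchSize) this.saveBatch();
  };
  this.saveBatch = function () {
    // hand the whole buffer to the underlying store, then empty it
    if (buffer.length) flush(buffer.splice(0, buffer.length));
  };
  this.batchLength = function () { return buffer.length; };
}
```

This is why the main loop ends with a final saveBatch() call: the last partial buffer would otherwise never be written.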

For more on parse.com see Parse.com

For help and more information join our forum, follow the blog, or follow me on Twitter.