This series of articles will work through how to create a connector for Data Studio in Apps Script. For an introduction to the structure I'll be following, see Creating a connector for Data Studio.

This article will go through coding up the functions required to feed Data Studio, but we won't actually connect to Data Studio until a later article. The source data will be the GitHub Apps Script project catalog from Every Google Apps Script project on Github visualized. We'll be using the Apps Script library that knows how to get the data from Apps Script projects on GitHub. The live scrviz app can be found here. Creating a Data Studio connector will allow a more detailed, customizable analysis of the 4000+ projects scrviz has cataloged.

Example report

What I’m after initially is a simple Data Studio report by owner on the scrviz data, like this, but first we’ll code up the connector and get it working as an Apps Script library.

scrviz datastudio report

Project layout

Starting with the main namespaces, exporting the mandatory functions that'll be needed by Data Studio; these will be implemented in the Connector namespace. I'll build up the detail of this code as we go through the article.

// generic test to see if we're allowed to use cache
const _cacheService = CacheService.getScriptCache();

// This namespace is all about getting and formatting data
const dataManip = (() => {
//....
})();

// this namespace defines and exports all the required methods for a datastudio connector
var Connector = (() => {
//.....
})();

// export these globally so that datastudio can see them
var getConfig = () => Connector.getConfig(),
isAdminUser = () => Connector.isAdminUser(),
getSchema = () => Connector.getSchema(),
getAuthType = () => Connector.getAuthType(),
getData = (request) => Connector.getData(request),
getVizzy = () => Connector.getVizzy()
basic structure of main code file

We’ll also need some code to reduce the data coming from the scrviz API to rows at the level required for my Data Studio report.

const flattenVizzyOwners = (data) => {
//....
}
reduce to owner level

And finally, since we're supporting caching using the Apps Script CacheService, we'll need a way of compressing the data and spreading it over multiple cache entries, as it'll be bigger than the maximum size allowed for a single entry.

var Digestive = (() => {

})()
handling cache

Why sometimes var and sometimes const?

Since this will be a library, some functions might need to be exposed from the library, whereas others are purely local. Apps Script has no module support, so we need to use var for exposable functions (var declarations are still 'hoisted' in V8, which guarantees they'll be visible in the correct order via the library), and const for locally accessed functions.
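
Here's a minimal sketch of the difference (the names are hypothetical):

// exposed - var declarations are visible to scripts using this project as a library
var exposedFunction = () => 'callable by library consumers'

// local - const declarations are not part of the library interface
const localFunction = () => 'only callable inside this project'
var versus const in a library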

Why structure like this?

The next time I create a Data Studio connector, much of this code can be reused with just field names changed, and even the dataManip namespace will remain the same shape, but with the details of data access and manipulation tailored for the specific data source. So what I’m after is a template that can be used by me (and possibly others) to plug in to the next time. That’s the intention in any case – I’ll let you know if it worked when I create my next connector.

Connector namespace

This contains the data-specific details for the connector and implements the mandatory functions required by Data Studio. This namespace should be reusable between connectors with only minimal changes.

Connector local variables and functions

To generalize the Connector namespace, I'm importing from the dataManip namespace the various functions that are specific to this project (though they'll probably take the same shape for other similar projects).

  const { fetchIt, getVizzy, cacheStudioSetter, cacheStudioGetter } = dataManip;
const cc = DataStudioApp.createCommunityConnector();
const _fromCacheStudio = (request) => !(!request || !_cacheService || (request.scriptParams && request.scriptParams.noCacheStudio))
Connector local vars
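
To illustrate how the caching decision works, here are a few hypothetical requests and what _fromCacheStudio returns for them (assuming _cacheService is available):

// illustrative cases only
_fromCacheStudio(null) // false - no request
_fromCacheStudio({ scriptParams: { noCacheStudio: true } }) // false - user has disabled caching
_fromCacheStudio({ scriptParams: {} }) // true - ok to try the cache
_fromCacheStudio behavior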

Connector.getConfig

I've implemented various levels of caching in this connector, as the source data set is pretty big – so if we can avoid reprocessing, it's going to help. There's also a pretty strict rate limit on unauthenticated GitHub access, so I need to minimize how many times we hit that API. These user parameters will allow modification of caching behavior for unusual circumstances.

  const getConfig = () => {
const config = cc.getConfig();

config
.newCheckbox()
.setId('noCacheStudio')
.setName('disable formatted data caching')
.setHelpText('Data may already be available from recently run report')


config
.newCheckbox()
.setId('noCache')
.setName('disable catalog caching')
.setHelpText('Data may be available from recently used scrviz access')

return config.build();
};
getConfig

Connector.getFields

These are all the fields I’m planning to present from this connector.

  const getFields = () => {
var fields = cc.getFields();
var types = cc.FieldType;
var aggregations = cc.AggregationType;

fields
.newDimension()
.setId("ownerName")
.setName("Developer")
.setType(types.TEXT);

fields
.newDimension()
.setId("ownerHireable")
.setName("Hireable")
.setType(types.BOOLEAN);

fields
.newDimension()
.setId("ownerLocation")
.setName("Location")
.setType(types.TEXT);

fields
.newDimension()
.setId("ownerId")
.setName("Owner Id")
.setType(types.NUMBER);

fields
.newMetric()
.setId("ownerFollowers")
.setName("Followers")
.setType(types.NUMBER)
.setAggregation(aggregations.MAX);

fields
.newMetric()
.setId("ownerLibraries")
.setName("Libraries")
.setType(types.NUMBER)
.setAggregation(aggregations.MAX);

fields
.newMetric()
.setId("ownerLibraryReferences")
.setName("All References")
.setType(types.NUMBER)
.setAggregation(aggregations.MAX);

fields
.newMetric()
.setId("ownerLibraryDependencies")
.setName("Library dependencies")
.setType(types.NUMBER)
.setAggregation(aggregations.MAX);


fields
.newMetric()
.setId("ownerProjects")
.setName("Projects")
.setType(types.NUMBER)
.setAggregation(aggregations.MAX);

fields
.newMetric()
.setId("ownerAppsScriptRepos")
.setName("GAS repos")
.setType(types.NUMBER)
.setAggregation(aggregations.MAX);

fields
.newMetric()
.setId("ownerPublicRepos")
.setName("Public repos")
.setType(types.NUMBER)
.setAggregation(aggregations.MAX);

fields
.newMetric()
.setId("ownerClaspProjects")
.setName("Clasp projects")
.setType(types.NUMBER)
.setAggregation(aggregations.MAX);

fields
.newMetric()
.setId("ownerLibrariesUnknown")
.setName("Libraries not on github")
.setType(types.NUMBER)
.setAggregation(aggregations.MAX);


fields
.newDimension()
.setId("ownerTwitter")
.setName("Twitter handle")
.setType(types.TEXT);

fields
.newDimension()
.setId("ownerEmail")
.setName("Email")
.setType(types.TEXT);

fields
.newDimension()
.setId("ownerGithub")
.setName("Github handle")
.setType(types.TEXT);

fields
.newDimension()
.setId("ownerBlog")
.setName("Blog")
.setType(types.TEXT);
return fields;
};
getFields

Connector.getData

This function will be called by Data Studio to get the rows of data, populated with the fields mentioned in getFields.

  const getData = (request) => {

// whether to cache is passed in the request from datastudio
const c = _fromCacheStudio(request) && cacheStudioGetter(request)
if (c) {
console.log('Studio data was from cache ', new Date().getTime() - c.timestamp)
return c.data
}

// need to calculate it all
const requestedFields = getFields().forIds(
request.fields.map(field => {
return field.name
})
);

try {
const schema = requestedFields.build()
const data = fetchIt(request, requestedFields, schema);
const response = {
schema,
rows: data,
};
cacheStudioSetter(response, request)
return response
} catch (e) {
console.log(e)
cc.newUserError()
.setDebugText("Error fetching data from API. Exception details: " + e)
.setText(
"The connector has encountered an unrecoverable error. Please try again later, or file an issue if this error persists."
)
.throwException();
}
};
getData
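
For reference, here's a sketch of the kind of response getData builds – the schema entries come from requestedFields.build() and the values here are made up:

// illustrative response shape only
const exampleResponse = {
  schema: [
    { name: 'ownerName', dataType: 'STRING' },
    { name: 'ownerFollowers', dataType: 'NUMBER' }
  ],
  rows: [
    { values: ['some developer', 42] },
    { values: ['another developer', 7] }
  ]
}
getData response shape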

Connector exports

These functions are all exposed by the Connector namespace. They're not all required by Data Studio, but some may be useful when the connector is being used as a library.

  // these are called by datastudio
return {
// https://developers.google.com/datastudio/connector/reference#getdata
getData,

// https://developers.google.com/datastudio/connector/reference#getconfig
getConfig,

// https://developers.google.com/datastudio/connector/reference#getauthtype
getAuthType: () =>
cc.newAuthTypeResponse().setAuthType(cc.AuthType.NONE).build(),

// https://developers.google.com/datastudio/connector/reference#getschema
getSchema: () => ({ schema: getFields().build() }),

// https://developers.google.com/datastudio/connector/reference#isadminuser
isAdminUser: () => true,

getVizzy
};
Connector exports

dataManip namespace

This contains code that is specific to this API and to converting its data into a format usable by Data Studio. I'll just reproduce the entire namespace here, but won't go into the detail, as by definition it's specific to this dataset. However, it might provide some guidance on how to format data for use by getData() and on using the caching algorithms in the Digestive namespace. Much of this will be reusable, with only the API access and the specific data wrangling and formatting needing attention.

// This namespace is all about getting and formatting data
const dataManip = (() => {

// we should cache as there will be lots of accesses when setting up datastudio report
// and scrviz doesn't run very often

const EXPIRE = 3000
const CACHE_KEYS = ['bmScrviz', 'items']
const ITEMS = ['types', 'owners', 'repos', 'shaxs', 'files']
const CACHE_STUDIO_KEYS = ['bmScrviz', 'studio']
const MANIFEST_ITEMS = ['libraries', 'timeZones', 'webApps', 'runtimeVersions', 'addOns', 'oauthScopes', 'dataStudios']
const EXPIRE_STUDIO = 100
const _fromCache = (request) => !(!request || !_cacheService || (request.scriptParams && request.scriptParams.noCache))

/**
* cache handling/crushing etc is all delegated to Digestive namespace
*/
const cacheGetter = () => Digestive.cacheGetHandler(_cacheService, CACHE_KEYS)
const cacheSetter = (data) => Digestive.cacheSetHandler(_cacheService, data, EXPIRE, CACHE_KEYS)
const cacheStudioGetter = (request) => Digestive.cacheGetHandler(_cacheService, CACHE_STUDIO_KEYS, request)
const cacheStudioSetter = (data, request) => Digestive.cacheSetHandler(_cacheService, data, EXPIRE_STUDIO, CACHE_STUDIO_KEYS, request)

const _compareLabels = (a,b) => {
// ignore case
const alab = a.toLowerCase();
const blab = b.toLowerCase();
return alab === blab ? 0 : (alab > blab ? 1 : -1)
}

const _compare = (a, b) => _compareLabels (a.label, b.label)

const _looserCompare = (a, b) => {
// ignore case and '-'
const alab = a.toLowerCase().replaceAll('-','');
const blab = b.toLowerCase().replaceAll('-','');
return alab === blab ? 0 : (alab > blab ? 1 : -1)
}


/**
* try to sort out the libraries
*/
const sortOutLibraries = (data) => {

// we need to optimize mapping shaxs to files to do this only once
const msf = new Map (data.shaxs.map(f=>[
f.fields.sha,
data.files.filter(g=>f.fields.sha === g.fields.sha)
]))

// we also need to know which shaxs have lib dependencies multiple times
const s = new Map (data.shaxs.map(f=>[
f.fields.sha,
f.fields.content &&
f.fields.content.dependencies &&
f.fields.content.dependencies.libraries &&
f.fields.content.dependencies.libraries.map(g=> g.libraryId)
]).filter(([k,v])=>v && v.length))

// ssf is a map of the shaxs which reference a given libraryId
const ssf = Array.from(s).reduce((p,[k,v])=> {
v.forEach(g=>{
if(!p.has(g)) p.set(g,[])
p.get(g).push(k)
})
return p
}, new Map())

// special clues from those with multiple projects in a repo
const mReps = data.repos.map(g=>({
repo: g,
multiples:data.files.filter(h=>h.fields.repositoryId === g.fields.id).map(h=> ({
repo: h,
projectName: h.fields.path
.replace('src/appscript.json','appsscript.json')
.replace('dist/appscript.json','appsscript.json')
.replace(/.*\/(.*)\/appsscript.json$/,"$1")
}))
})).filter(g=>g.multiples.length>1)

// now we look at all the known libraries
// libraries only have an id, a list of versions in use, and a label
// we have to try to see if we can somehow match them up to known files
// however we don't have a scriptId for each file
return data.libraries.sort(_compare)

.map(f => {

const file = data.files.find(g=>f.id === g.fields.scriptId)

// otherwise it's all a bit flaky
let repo = data.repos.find(g => file && g.fields.id === file.fields.repositoryId)
const owner = repo && data.owners.find(g => g.fields.id === repo.fields.ownerId)
const referencedBy = ssf.get(f.id)
const ob = {
...f,
repoId: repo && repo.fields.id,
ownerId: owner && owner.fields.id,
repo: repo && repo.fields.name,
repoLink: repo && repo.fields.html_url,
owner: owner && owner.fields.name,
claspProject: (file && file.fields.claspHtmlUrl && file.fields.claspHtmlUrl.replace('/.clasp.json', '')) || false,
referencedBy
}
return ob
})
}

/**
* gets the stats from the scrviz repo
*/
const getVizzy = (request) => {

// whether to cache is passed in the request from datastudio
const c = _fromCache(request) && cacheGetter()

if (c) {
console.log('Scrviz data was from cache ', new Date().getTime() - c.timestamp)
return c.data
} else {
const { gd, mf } = bmVizzyCache.VizzyCache.fetch(UrlFetchApp.fetch)
const data = ITEMS.reduce((p, c) => {
p[c] = gd.items(c)
return p
}, {})

// assumption: mf._maps is a Map of Maps keyed by manifest item name
MANIFEST_ITEMS.reduce((p, c) => {
if (mf._maps && mf._maps.get(c)) p[c] = Array.from(mf._maps.get(c).values())
return p
}, data)

// now let's see if we can find the libraries referred to
data.libraries = (data.libraries && sortOutLibraries(data)) || []
cacheSetter(data)
return data
}
}

/**
* Gets the scrviz data via the vizzycache library.
*
* @param {Object} request Data request parameters.
* @returns {object} Response from vizzycache library
*/
const fetchDataFromApi = (request) => {
return getVizzy(request)
};

// selects all the fields required for the connector
const normalizeResponse = (data) => flattenVizzyOwners(data)

// formats the selected fields
const getFormattedData = (response, requestedFields, schema) =>
response.map(item => formatData(requestedFields, item, schema))



/**
* Formats a single row of data into the required format.
*
* @param {Object} requestedFields Fields requested in the getData request.
* @param {Object} item
* @returns {Object} Contains values for requested fields in predefined format.
*/
const formatData = (requestedFields, item, schema) => {

var row = requestedFields.asArray().map((requestedField, i) => {
const v = item[requestedField.getId()]

// no formatting required, except to clean up nulls/undefined in boolean values
switch (schema[i].dataType) {
case "BOOLEAN":
return Boolean(v)
case "STRING":
return v === null || typeof v === typeof undefined ? '' : v.toString()
default:
return v
}
})
return { values: row };
};

return {
/**
* fetchIt just combines the getting and formatting of the Data Studio response
*/
fetchIt: (request, requestedFields, schema) => {
const apiResponse = fetchDataFromApi(request);
const normalizedResponse = normalizeResponse(apiResponse);
return getFormattedData(normalizedResponse.result, requestedFields, schema);
},
getVizzy,
cacheStudioSetter,
cacheStudioGetter
};
})();
dataManip namespace

FlattenVizzy namespace

I’ve kept this separate from the dataManip namespace because it’s about reducing the formatted data to a particular level – in this case aggregation by owner. If I add other aggregations, then this is the only change that’s needed other than to specify the fields for the schema. This namespace is specific to the data source and the level at which it will be consumed.

const flattenVizzyOwners = (data) => {

const { owners, repos, files, libraries } = data

const result = owners.map(({ fields }) => {
const { id } = fields
const ownedFiles = files.filter(file => file.fields.ownerId === id)
const ownedRepos = repos.filter(repo => repo.fields.ownerId === id)
const ownedClaspFiles = ownedFiles.filter(file => file.fields.claspHtmlUrl)
const ownedLibraries = libraries.filter(library => library.ownerId === id)

return {
ownerName: fields.name,
ownerLocation: fields.location,
ownerHireable: fields.hireable,
ownerPublicRepos: fields.public_repos,
ownerFollowers: fields.followers,
ownerId: id,
ownerAppsScriptRepos: ownedRepos.length,
ownerTwitter: fields.twitter_userName,
ownerEmail: fields.email,
ownerGithub: fields.login,
ownerBlog: fields.blog,
ownerProjects: ownedFiles.length,
ownerLibraries: ownedLibraries.length,
ownerLibraryReferences: ownedLibraries.reduce((p, c) => p + ((c.referencedBy && c.referencedBy.length) || 0), 0),
ownerClaspProjects: ownedClaspFiles.length,
ownerLibraryDependencies: libraries.reduce((p, c) => {
return (c.referencedBy || []).reduce((xp, xc) => ownedFiles.filter(g => g.fields.sha === xc.sha).length + xp, p)
}, 0)
}
})
// unknown libraries, where the library hasn't been found on scrviz

const unknownLibraries = libraries.map((f,i)=>({
...f,
index:i
})).filter(f=>!f.ownerId)

return {
result: result.map(f=>{
f.ownerLibrariesUnknown = unknownLibraries.length
return f
}),
unknownLibraries
}
}
FlattenVizzy
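
As a usage sketch, the flattened rows can be produced directly from the scrviz data like this:

// minimal usage sketch - one result row per owner
const { result, unknownLibraries } = flattenVizzyOwners(dataManip.getVizzy())
console.log(result.length, 'owners -', unknownLibraries.length, 'libraries not found on github')
using flattenVizzyOwners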

Digestive namespace

I've written about getting more out of cache services elsewhere. This is an implementation using the Apps Script CacheService, along with zip to compress the data, and various techniques to circumvent the size limit on CacheService items. This namespace should be reusable with little or no changes.

var Digestive = (() => {

const DIGEST_PREFIX = '@mild@'
const MAX_CACHE_SIZE = 100 * 1024


const digest = (...args) => {
// convert the args to a string and digest them
const t = args.concat([DIGEST_PREFIX]).map(d => {
return (Object(d) === d) ? JSON.stringify(d) : (typeof d === typeof undefined ? 'undefined' : d.toString());
}).join("-")

const s = Utilities.computeDigest(Utilities.DigestAlgorithm.SHA_1, t, Utilities.Charset.UTF_8)
return Utilities.base64EncodeWebSafe(s)
};

/**
* zip some content - for this use case - it's for cache, we're expecting string input/output
* @param {string} crushThis the thing to be crushed
* @returns {string} the zipped contents as base64
*/
const crush = (crushThis) => {
return Utilities.base64Encode(Utilities.zip([Utilities.newBlob(crushThis)]).getBytes());
}

/**
* unzip some content - for this use case - it's for cache, we're expecting string input/output
* @param {string} crushed the thing to be uncrushed - this will be base64 string
* @returns {string} the unzipped and decoded contents
*/
const uncrush = (crushed) => {
return Utilities.unzip(Utilities.newBlob(Utilities.base64Decode(crushed), 'application/zip'))[0].getDataAsString();
}

/**
* gets and reconstitutes cache from a series of compressed entries
*/
const cacheGetHandler = (cacheService, ...args) => {
// call the cache get function and make the keys
const d = digest.apply(null, args)
const h = cacheService.get(d)
if (!h) return null;
const header = JSON.parse(h)

// we have to reconstitute all the entries
const str = header.subs.reduce((p, c) => {
// if any entry has disappeared, give up
if (p === null) return null
const e = cacheService.get(c)
if (!e) return null
return p + e
}, '')

// an entry expired before the header did - treat it as a cache miss
if (str === null) return null

return {
...header,
data: JSON.parse(uncrush(str))
}

}

const chunker = (str, len) => {
const chunks = [];
let i = 0
const n = str.length;
while (i < n) {
chunks.push(str.slice(i, i += len));
}
return chunks;
}

/**
* this will not only compress, but also spread the result across multiple cache entries
*/
const cacheSetHandler = (cacheService,...args) => {
const [data, expiry, ...keys] = args
const d = digest.apply(null, keys)
const strif = JSON.stringify(data)
const crushed = crush(strif)
const subs = chunker(crushed, MAX_CACHE_SIZE).map((f, i) => {
const key = digest(d, i)
cacheService.put(key, f, expiry)
return key
})

const pack = {
timestamp: new Date().getTime(),
digest: d,
subs
}
// always want the header to expire before the trailers
cacheService.put(d, JSON.stringify(pack), Math.max(0, expiry - 1))
return pack
}
return {
cacheGetHandler,
cacheSetHandler
}
})()
Digestive namespace
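
Here's a minimal sketch of using the Digestive handlers on their own – the keys and data are made up:

// write something that may be bigger than a single cache entry allows
const cache = CacheService.getScriptCache()
const pack = Digestive.cacheSetHandler(cache, { lots: 'of data' }, 300, 'example', 'keys')

// read it back - null on a miss, or a pack with the reconstituted data on a hit
const hit = Digestive.cacheGetHandler(cache, 'example', 'keys')
if (hit) console.log('age ms:', new Date().getTime() - hit.timestamp, hit.data)
using Digestive directly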

Exposing required functions

Finally, we need to expose and hoist some functions from the Connector namespace.

// export these globally so that datastudio can see them
var getConfig = () => Connector.getConfig(),
isAdminUser = () => Connector.isAdminUser(),
getSchema = () => Connector.getSchema(),
getAuthType = () => Connector.getAuthType(),
getData = (request) => Connector.getData(request),
getVizzy = () => Connector.getVizzy()
exposing connector functions

Testing

I find the simplest way to test the connector is to use it as a library from another script that creates a spreadsheet from the data served up by getData(). For examples of this, see Creating a connector for Data Studio.
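
A hypothetical test from a consuming script might look something like this, assuming the library has been included with the identifier bmScrvizConnector:

// minimal test sketch - request a couple of fields and log what comes back
const testConnector = () => {
  const request = {
    fields: [{ name: 'ownerName' }, { name: 'ownerFollowers' }]
  }
  const { schema, rows } = bmScrvizConnector.getData(request)
  console.log(schema.length, 'fields,', rows.length, 'rows')
  // the rows could then be written to a sheet with setValues for inspection
}
testing the connector as a library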

What’s next

This article was an introduction to the coding of a connector. Next, we'll go through plugging it in to Data Studio.

Links

bmScrvizConnector

github: https://github.com/brucemcpherson/bmScrvizConnector

library key: 1sEEcPeh7GZ6QoGIRFP6rbbFU89SIM9DxPTO_bKbDIYWNFD1cZ5n6T3tK

consumeScrvizconnector

github: https://github.com/brucemcpherson/consumeScrvizconnector

Every Google Apps Script project on Github visualized
