Twt is a SuperFetch plugin that gives easy access to the Twitter v2 API.  SuperFetch is a proxy for UrlFetchApp with additional features – see SuperFetch – a proxy enhancement to Apps Script UrlFetch for how it works and what it does.

This is another in my series on SuperFetch plugins.


Motivation

I’m planning to create a SuperFetch plugin for all the APIs I use regularly. The Twitter v2 API is a great improvement on the original one, and now uses OAuth2 for authorization. See Apps Script Oauth2 – a Goa Library refresher for how this works.

I wanted to keep the SuperFetch structure and approach so although this uses the Twitter REST API, the call structure is a bit different to, for example, the TypeScript Twitter client.

The usual SuperFetch goodies like rate limiting, depaging and caching are all built in. This article will cover a subset of the methods available – there are many. I’ll post a few more articles in due course to cover the others.

Script and Twitter Preparation

You’ll need the bmSuperFetch and cGoa libraries and if you want to run some tests, the bmUnitTest library – details at the end of the article.

You’ll also need to set up a Twitter application in the Twitter developer console, and use the Goa library to handle OAuth2 authentication and authorization for you. This article lays out how to do that in detail. I suggest you go there when you are ready to start coding.

Instantiation

There are a few things you need to initialize before you can start running your script.

Goa

First you’ll need a Goa to provide a token service – the detail is in this article, or for a simpler ‘app only’ kind of OAuth see OAuth2 and Twitter API – App only flow for Apps Script.

  // we'll use this token service
  const goa = makeTwitterGoa()
get a Goa

I’ll do a very quick recap on Goa for Twitter later in the article.

SuperFetch

Next you need to import the plugins you need, and instantiate a SuperFetch instance. If you don’t need caching, just omit the cacheService property (but I highly recommend caching with this API).

  // import required modules
  const { Plugins, SuperFetch } = bmSuperFetch
  const { Twt } = Plugins

  const superFetch = new SuperFetch({
    fetcherApp: UrlFetchApp,
    tokenService: goa.getToken,
    cacheService: CacheService.getUserCache()
  })
superfetch instance

 

Twt instance

This will be the handle for accessing Twitter.

    
  const twt = new Twt({
    superFetch,
    max: 200,
    showUrl: false,
    noCache: true
  })
twt instance

There are various other standard SuperFetch Plugin parameters that I’ll deal with later, but the ones shown are of special interest to this article.

superFetch (required)

The instance you created earlier

noCache property (optional)

In this instance, I’m turning caching off for now.

max property (optional)

This is the default maximum number of items to return for a query. You can change this for individual queries of course. Note that the Twitter API insists on a minimum of 10, and that max should be a multiple of 10.
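As a sketch of what that constraint means in practice (normalizeMax is a hypothetical helper for illustration, not part of the plugin), a requested max can be rounded up to satisfy the API:

```javascript
// illustrative helper only - not part of the Twt plugin
// normalize a requested max to the Twitter API constraints:
// a minimum of 10, and a multiple of 10
const normalizeMax = (max) => {
  // Infinity means "no limit" and passes through untouched
  if (!isFinite(max)) return max
  // round up to the nearest multiple of 10, with a floor of 10
  return Math.max(10, Math.ceil(max / 10) * 10)
}

console.log(normalizeMax(7))   // 10
console.log(normalizeMax(95))  // 100
console.log(normalizeMax(200)) // 200
```
normalizing max to the API constraints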

showUrl property (optional)

Normally this would be false, but if you want to see how your calls to the twt client are translated to native REST calls, you can add this and set it to true.

Calling structure

The twt plugin uses closures heavily. A closure is a function which carries the references to its state (the lexical environment) with it.  This encapsulation is very handy for creating reusable short cuts to standard queries, as we’ll cover later.

Domains

The Plugin is divided up into ‘domains’. This article will cover searching and getting tweets and users. Here’s how you’d create a shortcut to each of the domains referenced in this article.

  // domain shortcuts
  const {tweets, users} = twt
domain shortcuts

Tweets

The tweets domain returns 1 item for each qualifying tweet, plus various expansions which I’ll cover as we go through the article.

Searching

Here’s how you’d get the most recent set of tweets. The query is constructed using the same syntax as you’d use in the normal web client. The full details are here.

  // get latest tweets about apps script
  const bunch = tweets.search('Google Apps Script')
simple search
Responses

All responses have exactly the same format. You’ll get back a standard SuperFetch Response, which looks like this

/**
 * This is the response from the apply proxied function
 * @typedef PackResponse
 * @property {boolean} cached whether item was retrieved from cache
 * @property {object|null} data parsed data
 * @property {number} age age in ms of the cached data
 * @property {Blob|null} blob the recreated blob if there was one
 * @property {boolean} parsed whether the data was parsed
 * @property {HttpResponse} response fetch response
 * @property {function} throw a function to throw on error
 * @property {Error|string|null} error the error if there was one
 * @property {number} responseCode the http response code
 * @property {string} url the url that provoked this response
 * @property {string} pageToken optional restart point passed back in the page parameter
 */
 console.log(Object.keys(bunch))
[ 'response',
  'data',
  'error',
  'parsed',
  'blob',
  'age',
  'cached',
  'responseCode',
  'url',
  'throw',
  'pageToken' ]
Pack response
Twitter API data

The response from the Twitter API will be in the data property of the SuperFetch response.

console.log(Object.keys(bunch.data))	
[ 'items', 'expansions' ]
results from twitter API

Actually the Twitter response will have been massaged a bit so that every API call returns exactly the same format. There are 2 properties of interest

  • items – an array of standard data from the Twitter API
  • expansions – some API methods return enhanced data – we’ll look at that later
Data items

The items are an array of basic data matching the search criteria

  // typical item
  console.log(bunch.data.items[0])
/*
{ id: '1541787265432752128',
  text: 'Did you know that you can get data from an #API with Google Apps Script and write it into a #GoogleSheets?\n\nLearn how to do so in our latest video tutorial: https://t.co/RnQGlY20tA\n\nAnd make sure to read @benlcollins blog post I mention in the video!\n\n#GoogleAppsScript #saperis https://t.co/dStfN5ZcQO' }
*/
  // how many items
  console.log(bunch.data.items.length) // 200
  
typical basic tweet data
Paging

The Twitter API supports paging (the default page size is 10, and the maximum is 100). You’ll notice though that we got 200 items without bothering to handle paging at all. For a web app you’ll probably want paging to render results, but in Apps Script you’ll seldom want to bother with that – you just want all the results (up to ‘max’) in one shot.

SuperFetch ‘depages’ queries like this so that you get the ‘max’ number of results consolidated into a single list of items. It will automatically optimize the pageSizes it requests from the Twitter API to minimize the number of external API fetches it does.
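To illustrate the idea only (this is not the actual SuperFetch implementation, just a sketch of the principle), a depager can plan its fetches by requesting the API maximum page size until ‘max’ items are collected:

```javascript
// illustrative sketch only - not the real SuperFetch code
// plan the page sizes needed to collect 'max' items,
// given the Twitter API maximum page size of 100
const planPages = (max, apiMax = 100) => {
  const pages = []
  let remaining = max
  while (remaining > 0) {
    // always ask for as much as possible to minimize fetches
    const size = Math.min(remaining, apiMax)
    pages.push(size)
    remaining -= size
  }
  return pages
}

console.log(planPages(200)) // [ 100, 100 ] - 2 fetches instead of 20 default pages
console.log(planPages(250)) // [ 100, 100, 50 ]
```
depaging sketch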

If you do need to handle paging yourself, then there is a paging mechanism too, which we’ll cover later.

Fields

You can enhance the results from the plugin by specifying fields. The complete list of fields available as parameters is here.

The Plugin provides a number of ways of specifying fields – the most reusable one being to create a query closure. Let’s say we want to enhance the tweets data with the user data for the author and the tweet created time

  const fields = {
    'tweet.fields': 'created_at',
    expansions: 'author_id'
  }
  const queryString = "Google Apps Script"
  const fielded = tweets.query({fields}).search(queryString)

  console.log(fielded.data.items[0])
 
  
/*
 { author_id: '1710909806',
  created_at: '2022-06-28T14:15:03.000Z',
  text: 'Did you know that you can get data from an #API with Google Apps Script and write it into a #GoogleSheets?\n\nLearn how to do so in our latest video tutorial: https://t.co/RnQGlY20tA\n\nAnd make sure to read @benlcollins blog post I mention in the video!\n\n#GoogleAppsScript #saperis https://t.co/dStfN5ZcQO',
  id: '1541787265432752128' }
*/
  
 console.log(fielded.data.expansions.includes.users[0])
/*
{ id: '1710909806',
  name: 'Chanel Greco',
  username: 'ChanelGreco' }
*/
fields and expansions

You’ll notice that the expansions property is now populated with that extra data about the author, and we have an extra field, created_at in the tweet data.

Expansions

The reason that expansions are separate from the basic data is to avoid repetition. For example, there may be 100 tweets by the same author. In that case we’d have 100 tweet data items, but only 1 expansions.includes.users entry. These can be matched on author_id if you need to do that.
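Here’s a sketch of that matching, using sample shapes like those shown earlier (not live API data):

```javascript
// sample data in the shapes returned by the plugin (not live API results)
const items = [
  { id: 't1', author_id: 'u1', text: 'first tweet' },
  { id: 't2', author_id: 'u1', text: 'second tweet by the same author' }
]
const expansions = {
  includes: { users: [{ id: 'u1', username: 'someuser', name: 'Some User' }] }
}

// join each tweet to its author by matching author_id against the users expansion
const withAuthors = items.map(item => ({
  ...item,
  author: expansions.includes.users.find(u => u.id === item.author_id)
}))

console.log(withAuthors[0].author.username) // 'someuser'
```
matching items to expansions on author_id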

Field closure shortcuts

You’ll notice that we created a closure for the query in the previous example. If you are doing multiple queries and want to keep the same fields definition, you can reuse the closure like this, and you’ll get the same shaped results for each query

  const fieldClosure = tweets.query({fields})
  const aResult = fieldClosure.search("JavaScript")
  const bResult = fieldClosure.search("Google Apps Script")
  const cResult = fieldClosure.search("Java")
using closure shortcuts
Query and Field closure shortcuts

You can also add the query into the closure like this

  const queryClosure = tweets.query({query: "Google Apps Script", fields})
  const dResult = queryClosure.search()
query and field closure
Compound Queries

You can combine a query closure with a regular search query string like this. The query strings from both places are combined, so both forms give the same result.

  // compound queries
  const eResult = fieldClosure.search("Google Apps Script VBA")
  // is the same as
  const fResult = queryClosure.search("VBA")
compound query
Fields in the search method

Finally you can do all of it in the search method, which may seem the most straightforward, but misses out on some of the reusability of the other approaches

  // all in the search method
  const gResult = tweets.search("Google Apps Script VBA", fields)
all in the search method
Using .throw

We haven’t handled errors in the examples up to now. If SuperFetch encounters an error it’ll be in the error property of the response

  • you can handle this yourself by checking the response error property.
  • The plugin can automatically throw an error on your behalf if you add the .throw() method to any request. All SuperFetch plugins have the same error handling approach. Think of it as a built in try/catch.
  // error handling
  // data will be in result.data
  const hResult = tweets.search (queryString)
  if (hResult.error) {
    // handle it yourself
    console.log(hResult.error)
  }
  // throw an error if there is one
  // data will still be in result.data
  const iResult = tweets.search (queryString).throw()
throw() error handling

Ref

SuperFetch plugins always have a ref method – this allows you to create a new instance of the plugin which inherits all the characteristics of the source instance. You can pass any constructor option changes via .ref(), but the most usual one would be to create a noCache instance.

If your instance has caching enabled, you may want to do some queries avoiding cache.

  // uncached tweets ref
  const utweets = twt.ref({noCache: true}).tweets
create a ref instance

Paging

As mentioned previously, you usually don’t need to bother handling paging, but you may want to. Twt will return up to ‘max’ items. A pageToken in the response indicates that there are more results available. If handling paging yourself, it’s best to use a noCache version of the Plugin – old pageTokens would not be valid, so they are not cached.



    // paging
  const jResult = utweets.search("JavaScript").throw()
  console.log(jResult.data.items.length) // 200
  console.log(jResult.pageToken) // b26v89c19zqg8o3fpz2krt14q3trzy352wc757ma6nptp
  
  const kResult = utweets.query({query:"JavaScript"})
    .page({startToken:jResult.pageToken, max: 100})
    .search().throw()
  console.log(kResult.data.items.length) // 100
pageToken if there's more

Provide the previous response’s pageToken as the startToken, and optionally a different max, using the page method to get the next set of results.

Using closures, this can be better written as

  // paging closures
  const pQuery = utweets.query({query:"JavaScript"})
  const lResult = pQuery.search().throw()
  if (lResult.pageToken) {
    const mResult = pQuery.page({startToken:lResult.pageToken, max: 100}).search().throw()
  }
paging closures
max

You’ve seen how the .page method can be used with the pageToken property to start a search at a particular point. You can use the max property on its own to control how many items you want. By default, max is set to 100 – I decided on a low amount because the Twitter API has a bunch of rate limits and caps, so it’s best to be cautious. I’ll deal with how SuperFetch can help with these rate limits in another article.

Here’s an individual query with no limit on the number of items

  // query with infinite number of items
  const qResult = queryClosure.page({max:Infinity}).search().throw().data
  console.log(qResult.items.length) // 303
no limit on number of items

You can also create a twt instance that has no limit on searches using it.

  const rResult = twt.ref({max: Infinity})
    .tweets.query({query: "Google Apps Script", fields})
    .search()
    .throw()
    .data
  console.log(rResult.items.length) // 303
instance with infinite max

Caching

Caching is built into SuperFetch (for details see SuperFetch – a proxy enhancement to Apps Script UrlFetch) so you get caching out of the box.

However, there are a couple of general things to be aware of

  • Caching is particularly important with this API, as currency is probably less important than with more transactional APIs, especially given Twitter caps and rate limits. Caching is also faster than hitting the API
  • You’ll have seen there are a number of ways of constructing the same query. However you construct it, the caching normalization mechanism will recognize that two queries accessing the same data are equivalent
  • Searches that return more than 100 items (the twitter API maximum page size), will automatically make multiple API calls and consolidate the results to the maximum number of items you have set. If such a query is found in cache, it only has to make 1 cache transaction to get all of it. However if you handle paging yourself (using pageToken), it will have to hit the API directly and bypass cache.
  • An uncached query clears the cached version of that search. The logic here is that if you are deliberately making an uncached call, it follows that any cached version will probably be stale
Forced decaching

It’s possible (unlikely, but perhaps for repeatable testing) that you’ll want to specifically clear any cached entries for a given query. To do this just repeat the query exactly, and replace the search method with deCache

queryClosure.page({max:Infinity}).deCache()
remove a query from cache
Changing cache parameters

Cache parameters are set when you create the superFetch instance – here’s a typical one

  const superFetch = new SuperFetch({
    fetcherApp: UrlFetchApp,
    tokenService: goa.getToken,
    cacheService: CacheService.getUserCache()
  })
basic superfetch instantiation

There are 3 cache parameters available when you create the superFetch instance.

  • cacheService – Usually this would be the user cache, but if you want to share cached queries across all users of your script you could use the script cache (CacheService.getScriptCache())
  • prefix – This can be used to partition cache into groups – if one set of queries should somehow be completely separate from another set. The prefix can be any string you want and the cache entries will be shared with any superFetch instance sharing the same prefix
  • expiry – this is the number of seconds to keep cache entries in place. The default is 1 hour, after which they expire

Here’s an example of initializing a superFetch with different values

  const superFetch2 = new SuperFetch({
    fetcherApp: UrlFetchApp,
    tokenService: goa.getToken,
    cacheService: CacheService.getUserCache(),
    expiry: 120,
    prefix: 'Group A'
  })
more complex superFetch

We can then get a ref of the existing twt instance, but with this new superFetch

  const twt2 = twt.ref({
    superFetch: superFetch2
  })
change the superFetch

Notice that the new and old twt instances are unaware of each other’s cache entries

  // seed the new query
  const query2 = twt2.tweets.query({query: "Google Apps Script", fields}).page({max:10})
  const tResult = query2.search().throw()
  // it's not cached because it's in prefix Group A
  console.log(tResult.cached, tResult.age,tResult.data.items.length) // false, null , 10
  
  // repeat the query
  const uResult = query2.search().throw()  
  
  // this time it is cached, and it was written to cache 150ms ago
  console.log(uResult.cached, uResult.age,uResult.data.items.length) // true, 156 , 10
  
  // using the same query with the original twt instance doesn't see the new cache entry as it's in a different cache group
  const vResult = twt.tweets.query({query: "Google Apps Script", fields}).page({max:10}).search()
  console.log(vResult.cached, vResult.age,vResult.data.items.length) // false, null , 10
separate cache groups
Recent and All
There are 2 levels of searching available depending on your account permissions
  • recent – tweets from the last 7 days
  • all – all the twitter archive

The plugin supports both, but note this from the API docs regarding the ‘all’ endpoint:

This endpoint is only available to those users who have been approved for Academic Research access.

The full-archive search endpoint returns the complete history of public Tweets matching a search query; since the first Tweet was created March 26, 2006.

Recent

This is the default, and you never need to specify it. For completeness it looks like this

 console.log(twt.tweets.recent.search("Google Apps Script").throw().data.items.length) // 200
.recent is the default
All

Only Academic researchers have access to this, and most of us will get this error

 

 console.log(twt.tweets.all.search("Google Apps Script").throw().data.items.length) // error
/*
Error: {
  "title": "Unsupported Authentication",
  "detail": "Authenticating with OAuth 2.0 User Context is forbidden for this endpoint.  Supported authentication types are [OAuth 2.0 Application-Only].",
  "type": "https://api.twitter.com/2/problems/unsupported-authentication",
  "status": 403
}
*/
all returns an error for non researcher permission accounts

Getting

Tweets can also be retrieved by a single id or a list of ids. The response format and fields are exactly the same as for searching, but this time instead of providing a search query you provide an array of ids. There is no paging for getting by id, and the maximum number that can be retrieved in one shot is the Twitter API maximum – 100.

Getting a single basic tweet
 
  // we'll use a previous closure to get some ids
  const seed = queryClosure.page({max:20}).search().data
  const tweetIds = seed.items.map(f=>f.id)

  // get a single tweet
  const nResult = tweets.get(tweetIds[0]).throw()
  console.log(nResult.data.items[0])
  
  /*
{ id: '1541817311593582599',
  text: '@codingfess Kalo script sederhana kyk ajax trs datanya disimpan + cron job, coba pake google apps script (+ google sheet utk penyimpanannya)' }
  */
get a single tweet by id
Get a list of tweets by id
  // get a list of ids
  const oResult = tweets.get(tweetIds).throw()
  console.log(oResult.data.items.length) // 20
a list of tweets by id
Closures

Get can have all the same field and query closures as search – for example

  // query closure with ids
  const idsClosure = tweets.query({ids: tweetIds, fields})
  
  // these are all equivalent 
  // use whichever fits your reusability requirements
  idsClosure.get()
  tweets.get(tweetIds, fields)
  tweets.query({fields}).get(tweetIds)
  tweets.query({ids: tweetIds, fields}).get()
id closure
compound queries

Get also supports compound queries, where the lists of ids are concatenated

  // compound get query
  const zResult = tweets.query({ids: tweetIds.slice(0,10), fields}).get(tweetIds.slice(10))
  console.log(zResult.data.items.length) // 20
  
  // this will produce the same result as
  tweets.get(tweetIds, fields)
  tweets.query({ids: tweetIds , fields}).get()
  tweets.query({fields}).get(tweetIds)
  //... etc
compound get

 

 

Paging

Get takes a list of ids, so the maximum returned is the number in the list and there’s no paging support. If you need to get a list of more than 100, then split it and make several fetches. The 100 is the Twitter API limit – it’s probably driven by the maximum length of a url, since the ids are passed as url parameters.
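If you do need more than 100, a simple chunking helper (hypothetical, not part of the plugin) could split the list into batches:

```javascript
// illustrative helper only - split a long id list into batches of
// up to 100 (the Twitter API limit for a single get)
const chunk = (arr, size = 100) => {
  const chunks = []
  for (let i = 0; i < arr.length; i += size) {
    chunks.push(arr.slice(i, i + size))
  }
  return chunks
}

// example: 250 fake ids become 3 batches of 100, 100 and 50
const ids = Array.from({ length: 250 }, (_, i) => `id-${i}`)
const batches = chunk(ids)
console.log(batches.length)    // 3
console.log(batches[2].length) // 50

// each batch could then be fetched and the items concatenated, eg
// const items = batches.flatMap(b => tweets.get(b).throw().data.items)
```
splitting a long id list into batches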

Caching

Works exactly the same way as for searches.

Users

There is no search available in users – instead we get by id or by username. In the response, there will be one item for each id, plus some expansion data if you’ve requested it.

get – Getting by user ids

The Users domain for getting by ids works exactly the same as the Tweets domain, so I won’t repeat it all. Everywhere you see twt.tweets, replace it with twt.users.

Of course the fields and expansions available from the users domains are different from the tweets domain. See the API documentation for which fields you can use.

Example

Here’s a complete example getting the pinned tweets and profile image url of a few users who have recently tweeted about Google Apps Script

const articleTwtUsers = () => {

  // get a Goa
  const goa = makeTwitterGoa()

  // Instantiate
  const { Plugins, SuperFetch } = bmSuperFetch
  const { Twt } = Plugins

  const superFetch = new SuperFetch({
    fetcherApp: UrlFetchApp,
    tokenService: goa.getToken,
    cacheService: CacheService.getUserCache()
  })

  // twt handle
  const twt = new Twt({
    superFetch,
    max: 200,
    showUrl: false,
    noCache: false
  })

  // domain shortcuts
  const { tweets, users } = twt


  // get some tweets as a baseline
  const tClosure = tweets.query({
    fields: {
      expansions: 'author_id'
    }
  })
  const { data: tweetsData } = tClosure.page({ max: 20 }).search("Google Apps Script").throw()

  // these are some user Ids
  const userIds = tweetsData.expansions.includes.users.map(f => f.id)

  // now get the users
  const uClosure = users.query({
    fields: {
      'user.fields': 'created_at,description,profile_image_url,pinned_tweet_id',
      expansions: 'pinned_tweet_id'
    }
  })
  const { data: usersData } = uClosure.get(userIds).throw()
  // these are the users
  console.log(usersData.items)
  /* eg...
{ description: 'Google Workspace Trainer, Founder & CEO of @saperis_io | @GoogleDevExpert in #GoogleAppsScript\nYouTube channel: https://t.co/EhZXNSp2Hz',
    created_at: '2013-08-29T20:21:43.000Z',
    username: 'ChanelGreco',
    id: '1710909806',
    name: 'Chanel Greco',
    profile_image_url: 'https://pbs.twimg.com/profile_images/1420076229365993473/lbM1w5pU_normal.jpg',
    pinned_tweet_id: '1490267940364640258' }
  */
  // these are the pinned tweets
  console.log(usersData.expansions.includes.tweets)
  /* eg...
  { id: '1490267940364640258',
    text: 'The most watched video on our YouTube channel is a beginner friendly Google Apps Script tutorial.\n\nKnow anyone starting out with Apps Script? Then make sure to share this video with them. \n\n#GoogleAppsScript #YouTube #GoogleWorkspace #saperis \n\nhttps://t.co/RT43sxFKn4' },
    */

  // they can be matched by the pinned tweet id
  const addPinned = usersData.items
    .map(u => ({
      ...u,
      pinnedTweet: usersData.expansions.includes.tweets.find(t => t.id === u.pinned_tweet_id)
    }))
  console.log(addPinned)
  /*
  eg
   { description: 'Google Workspace Trainer, Founder & CEO of @saperis_io | @GoogleDevExpert in #GoogleAppsScript\nYouTube channel: https://t.co/EhZXNSp2Hz',
    created_at: '2013-08-29T20:21:43.000Z',
    username: 'ChanelGreco',
    id: '1710909806',
    name: 'Chanel Greco',
    profile_image_url: 'https://pbs.twimg.com/profile_images/1420076229365993473/lbM1w5pU_normal.jpg',
    pinned_tweet_id: '1490267940364640258',
    pinnedTweet: 
     { id: '1490267940364640258',
       text: 'The most watched video on our YouTube channel is a beginner friendly Google Apps Script tutorial.\n\nKnow anyone starting out with Apps Script? Then make sure to share this video with them. \n\n#GoogleAppsScript #YouTube #GoogleWorkspace #saperis \n\nhttps://t.co/RT43sxFKn4' } }
   */
}
example look up user ids and find and match their pinned tweet

Getting by user names

I hesitated in naming this method, as getByUsernames is a bit of a mouthful. In the end I decided to stick with the name of the Twitter API endpoint, which is oddly, but simply, “by”

by – Getting by usernames

This works pretty much the same as get by id, except of course we replace all references to ‘ids’ with ‘usernames’. The usernames can be either a single username or an array of usernames, and are limited to 100 just as for id fetches.

Example

Let’s simply redo the Get by user example, except find the individuals by username.

In fact there are only 2 lines that need to be changed, and since we are using closures, the changes are trivial

    // change this
	const userIds = tweetsData.expansions.includes.users.map(f => f.id)
	// to
	const userNames = tweetsData.expansions.includes.users.map(f => f.username)
	
	// change this
	const { data: usersData } = uClosure.get(userIds).throw()
	// to
	const { data: usersData } = uClosure.by(userNames).throw()
	
changes between .get and .by

Getting your own account

Allows you to get your own user object

me – Getting your own user account

Although this will return only 1 item, the response is exactly the same as all other methods, ie. an array of items (with just 1 member) plus an expansions property.

 

console.log(twt.users.me().throw().data.items)
/*
	[ { id: '17517365',
    name: 'Bruce McPherson 🇪🇺🇫🇷🏴󠁧󠁢󠁳󠁣󠁴󠁿',
    username: 'brucemcpherson' } ]
*/
me

Goa

Goa is covered in detail elsewhere, and you’ll need to set that up before you can access the API – see Apps Script Oauth2- a Goa Library refresher

However, in this article I’ve used a couple of shortcuts for the token service. This is all the code you’ll need to implement and use goa as a twitter token service.

const SETTINGS = {

  twitter: {
    propertyService: PropertiesService.getUserProperties(),
    name: 'twitter',
    goaPackage: {
      clientId: "xxx",
      clientSecret: "xxx",
      scopes: ["tweet.read", "users.read"],
      service: 'twitter',
      packageName: 'twitter'
    }
  }
}
	
const makeTwitterGoa = (e) => {
  const { twitter } = SETTINGS
  return cGoa.make(
    twitter.name,
    twitter.propertyService,
    e
  )
}

const oneoffTwitter = () => {
  const { twitter } = SETTINGS
  cGoa.GoaApp.setPackage(twitter.propertyService, twitter.goaPackage)
}

function doGet(e) {
  return doGetTwitter(e)
}

// run this once off to get authorized
function doGetTwitter(e) {

  const goa = makeTwitterGoa(e)

  // it's possible that we need consent - this will cause a consent dialog
  if (goa.needsConsent()) {
    return goa.getConsent();
  }

  // get a token
  const token = goa.getToken()

  // if we get here it's time for your webapp to run and we should have a token, or have thrown an error somewhere
  if (!goa.hasToken()) throw 'something went wrong with goa - did you check if consent was needed?';

  // now we can use the token in a query or just leave it there registered for future server side use
  return HtmlService.createHtmlOutput(`Got this access token ${token}`)

}
	
goa

Most of the searching in this article applies to public data, so it’s possible that ‘app only’ oauth will suit you better if you don’t plan to do any user specific operations – it’s still enabled by Goa, but doesn’t need user consent. The detail is in  OAuth2 and Twitter API – App only flow for Apps Script

Unit testing

I’ll use the Simple but powerful Apps Script Unit Test library to demonstrate calls and responses. It should be straightforward to see how this works and the responses to expect from calls. These tests demonstrate in detail each of the topics mentioned in this article, plus a few others, and could serve as a useful crib sheet for the plugin

Warning – It’s a big read and many tests.



const testTwt = ({ force = false, unit } = {}) => {

  // manage skipping individual tests
  force = true
  const skipTest = {
    tweetsCache: true && !force, // ok
    tweetExpansions: true && !force, // ok
    tweetGet: true && !force, //
    tweetSearch: true && !force, // ok
    tweetPaging: true && !force, // ok
    userGet: false && !force
  }

  // get a testing instance (or use the one passed over)
  unit = unit || new bmUnitTester.Unit({
    showErrorsOnly: true,
    maxLog: 200,
    showValues: true
  })

  // we'll use this token service
  const goa = makeTwitterGoa()

  // import required modules
  const { Plugins, SuperFetch } = bmSuperFetch
  const { Twt } = Plugins

  const superFetch = new SuperFetch({
    fetcherApp: UrlFetchApp,
    tokenService: goa.getToken,
    cacheService: CacheService.getUserCache(),
    missingPropertyIsFatal: false
  })

  const twt = new Twt({
    superFetch,
    noCache: true,
    max: 30
  })


  unit.section(() => {

    const queryString = "Google Apps Script"
    // short cut to this query

    const fields = {
      'tweet.fields': 'created_at',
      expansions: 'author_id'
    }
    const t = twt.tweets
    const query = t.query({ fields })
    const { actual } = unit.not(null, query.search(queryString).throw().data, {
      description: 'default query isnt null'
    })

    unit.is(twt.max, actual.items.length, {
      description: `default api limit matches `
    })


    unit.is(actual,
      t.query({ fields, query: queryString }).page({ max: twt.max }).search().throw().data, {
      description: 'limit matches default'
    })


    unit.is(actual.items, t.search(queryString, fields).throw().data.items, {
      description: 'used fields as get parameters'
    })

    const { actual: actual20 } = unit.not(
      null,
      t.query({ fields, query: queryString }).page({ max: 20 }).search().throw().data, {
      description: 'limit worked with different pageSize'
    })


    unit.is(actual20.items,
      t.query({ fields, query: queryString }).page({ max: 20 }).search().throw().data.items, {
      description: 'limit worked again for items'
    })

    unit.is(t.recent.search(queryString).throw().data.items, t.search(queryString).throw().data.items, {
      description: 'check recent endpoint returns same result as default search endpoint',
    })

    unit.is('Unsupported Authentication',
      unit.threw(() => t.all.search(queryString).throw()).title, {
      description: 'should throw as we dont have a research account',
    })

    unit.is('string', typeof actual.items[0].author_id, {
      description: 'expansion field contains an author_id'
    })

    unit.is(true, actual.expansions.includes.users[0].hasOwnProperty('username'), {
      description: 'we got a users expansion'
    })

    unit.is(
      query.search('"Apps Script" VBA').throw().data,
      t.query({ query: '"Apps Script"', fields }).search("VBA").throw().data, {
      description: "split params ok"
    })

  }, {
    description: 'searching tweets',
    skip: skipTest.tweetSearch
  })


  unit.section(() => {
    const queryString = "Google Apps Script"
    const fields = {
      'tweet.fields': 'created_at',
      expansions: 'author_id'
    }

    // short cut to this query
    const t = twt.ref({ noCache: false }).tweets

    // the query has cache enabled
    const query = t.query({ fields })

    // use this fixture for testing paging
    const testLimit = {
      max: 20
    }

    // get rid of any previously cached results
    query.deCache(queryString)
    t.query({ fields }).page(testLimit).deCache(queryString)

    const { actual: baseline } =
      unit.not(null, twt.ref({ noCache: true }).tweets.query({ fields }).search(queryString).throw(), {
        description: 'get a baseline - not cached'
      })

    unit.is(false, baseline.cached, {
      description: 'initial was not cached'
    })

    unit.is(twt.max, baseline.data.items.length, {
      description: 'default api limit matches'
    })

    const actualMax = query.page({ max: twt.max }).search(queryString).throw()
    unit.is(false, actualMax.cached, {
      description: 'cache seeding was not cached'
    })

    unit.is(baseline.data.items.length, actualMax.data.items.length, {
      description: 'cache seeding correct length'
    })

    unit.is(baseline.data.items, actualMax.data.items, {
      description: 'limit matches default - seed cache'
    })

    const cached = query.page({ max: twt.max }).search(queryString).throw()

    unit.is(
      true, Reflect.has(actualMax, 'pageToken'), {
      description: 'non-cached result has a pageToken'
    })

    unit.not(
      true, Reflect.has(cached, 'pageToken'), {
      description: 'cached result has no pageToken'
    })

    unit.is(
      baseline.data.items, cached.data.items, {
      description: 'items matches default - cached version'
    })


    unit.is(true, cached.cached, {
      description: 'list came from cache'
    })

    const cached2 = query.page().search(queryString).throw()
    unit.is(true, cached2.cached, {
      description: 'default max still works from cache'
    })
    unit.is(cached.data, cached2.data, {
      description: 'default max data still matches from cache'
    })
    unit.is(
      baseline.data.items,
      twt.ref({ noCache: true }).tweets.query({ fields }).page({ max: twt.max }).search(queryString).throw().data.items, {
      description: 'uncached max matches baseline'
    })

    const limited = query.page(testLimit).search(queryString).throw()
    unit.is(testLimit.max, limited.data.items.length, {
      description: 'limited correct length'
    })

    unit.is(30, query.page({ max: 29 }).search(queryString).throw().data.items.length, {
      description: 'limited rounds up to nearest 10'
    })

    unit.is(true, t.query({ fields }).page(testLimit).search(queryString).throw().cached, {
      description: 'got it from cache with a different max'
    })

  }, {
    description: 'searching tweets with caching',
    skip: skipTest.tweetsCache
  })


  unit.section(() => {
    const queryString = "Google Apps Script"
    const fields = {
      'tweet.fields': 'created_at',
      expansions: 'author_id'
    }

    // short cut to cached version
    const t = twt.ref({ noCache: false }).tweets
    const u = twt.ref({ noCache: true }).tweets

    // the query has cache enabled
    const query = t.query({ fields })

    // this one doesn't
    const uQuery = u.query({ fields })

    // use this fixture for testing paging
    const testLimit = {
      max: 10
    }

    // get rid of any previously cached results
    query.deCache(queryString)
    query.page(testLimit).deCache(queryString)

    const { actual: baseline } =
      unit.not(null, uQuery.search(queryString).throw(), {
        description: 'get a baseline - not cached'
      })

    unit.is(false, baseline.cached, {
      description: 'initial was not cached'
    })

    unit.is(twt.max, baseline.data.items.length, {
      description: 'default api limit matches'
    })

    const actualMax = query.page({ max: twt.max }).search(queryString).throw()
    unit.is(false, actualMax.cached, {
      description: 'cache seeding was not cached'
    })

    unit.is(baseline.data.items.length, actualMax.data.items.length, {
      description: 'cache seeding correct length'
    })

    unit.is(baseline.data.items, actualMax.data.items, {
      description: 'limit matches default - seed cache'
    })

    const cached = query.page({ max: twt.max }).search(queryString).throw()

    unit.is(
      true, Reflect.has(actualMax, 'pageToken'), {
      description: 'non cache has a pagetoken'
    })

    unit.is(
      'string', typeof actualMax.pageToken, {
      description: 'the pageToken is a string'
    })

    unit.is(
      false, Reflect.has(cached, 'pageToken'), {
      description: 'cached has no pagetoken'
    })

    unit.is(
      baseline.data.items, cached.data.items, {
      description: 'items matches default - cached version'
    })

    const { actual: page1 } = unit.is(
      10, uQuery.page({ max: 10 }).search(queryString).throw(), {
      description: 'got a first page of 10',
      compare: ((e, a) => e === a.data.items.length)
    })

    unit.is(false, page1.cached, {
      description: 'page1 was not cached'
    })

    const { actual: page2 } = unit.is(
      10, uQuery.page({ max: 10, startToken: page1.pageToken }).search(queryString).throw(), {
      description: 'got a second page of 10',
      compare: ((e, a) => e === a.data.items.length)
    })

    const { actual: page1plus2 } = unit.is(
      20, uQuery.page({ max: 20 }).search(queryString).throw(), {
      description: 'got a first page of 20',
      compare: ((e, a) => e === a.data.items.length)
    })

    unit.is(page1plus2.data.items, page1.data.items.concat(page2.data.items), {
      description: 'startToken worked'
    })

  }, {
    description: 'paging',
    skip: skipTest.tweetPaging
  })

  unit.section(() => {
    const queryString = "Google Apps Script"
    const fields = {
      'tweet.fields': 'created_at',
      expansions: 'author_id,geo.place_id'
    }
    // short cut to cached version
    const t = twt.ref({ noCache: false }).tweets
    const u = twt.ref({ noCache: true }).tweets

    // the query has cache enabled
    const query = t.query({ fields })
    const page = query.page({ max: 200 })

    // this one doesnt
    const uQuery = u.query({ fields })

    const { actual: baseline } =
      unit.not(null, page.search(queryString).throw(), {
        description: 'get a baseline for expansion tests'
      })

    unit.is(true, Array.isArray(baseline.data.expansions.includes.users), {
      description: 'there is an expansion array of users'
    })

    const authorIds = baseline.data.items.map(f => f.author_id).filter((f, i, a) => a.indexOf(f) === i)
    const userIds = baseline.data.expansions.includes.users.map(f => f.id)
    const placeIds = (baseline.data.expansions.includes.places || []).map(f => f.id)
    const geos = baseline.data.items.map(f => f.geo && f.geo.place_id).filter((f, i, a) => f && a.indexOf(f) === i)

    unit.is(placeIds.length, placeIds.filter(f => geos.indexOf(f) !== -1).length, {
      description: 'have a tweet geo place_id for every included place'
    })
    unit.is(authorIds.length, authorIds.filter(f => userIds.indexOf(f) !== -1).length, {
      description: 'have a user id for every author'
    })



  }, {
    description: 'tweet expansion consistency',
    skip: skipTest.tweetExpansions
  })

  unit.section(() => {
    const queryString = "Google Apps Script"
    const fields = {
      'tweet.fields': 'created_at',
      expansions: 'author_id,geo.place_id'
    }
    // short cut to cached version
    const t = twt.ref({ noCache: false }).tweets
    const u = twt.ref({ noCache: true }).tweets

    const { actual: baseline } =
      unit.not(null, u.page({ max: 20 }).search(queryString).throw(), {
        description: 'get a baseline for get-by-id tests'
      })

    const { actual: baseFields } =
      unit.not(null, u.query({ fields }).page({ max: 20 }).search(queryString).throw(), {
        description: 'get a baseline with fields for get-by-id tests'
      })

    const ids = baseline.data.items.map(f => f.id)
    unit.is(ids, baseFields.data.items.map(f => f.id), {
      description: 'ids with fields match'
    })

    const { actual: firstGet } = unit.not(null, u.get(ids[0]).throw(), {
      description: 'get a single item by id'
    })

    unit.is(ids[0], firstGet.data.items[0].id, {
      description: 'got the correct item by id'
    })

    unit.is(
      baseline.data.items, u.get(ids).throw().data.items, {
      description: 'matched a bunch of ids'
    })
    const { data } = u.query({ fields }).get(ids).throw()

    unit.is(
      baseFields.data.items, data.items, {
      description: 'fields match by bunch of ids'
    })

    unit.is(
      baseFields.data, data, {
      description: 'expansions match'
    })

    const { actual: seed } =
      unit.is(baseFields, t.query({ fields }).get(ids).throw(), {
        description: 'seed cache',
        compare: (e, a) => unit.deepEquals(e.data.items, a.data.items)
      })

    const { actual: cached } =
      unit.is(seed, t.query({ fields }).get(ids).throw(), {
        description: 'check seeded cache',
        compare: (e, a) => unit.deepEquals(e.data, a.data)
      })

    unit.is(true, cached.cached, {
      description: 'cached as expected'
    })

    unit.is(false, seed.cached, {
      description: 'uncached as expected'
    })

    unit.is(seed, t.query({ fields, ids }).get().throw(), {
      description: 'ids as query parameter',
      compare: (e, a) => unit.deepEquals(e.data, a.data)
    })

    unit.is(seed, u.query({ fields, ids: ids.slice(0, 2), query: ids.slice(2, 4) }).get(ids.slice(4)).throw(), {
      description: 'combination of ids, query and get',
      compare: (e, a) => unit.deepEquals(e.data, a.data)
    })


  }, {
    description: 'get tweets by id',
    skip: skipTest.tweetGet
  })

  unit.section(() => {
    const queryString = "Google Apps Script"
    const fields = {
      'user.fields': 'created_at,description,profile_image_url,pinned_tweet_id',
      expansions: 'pinned_tweet_id'
    }
    const tweetFields = {
      'tweet.fields': 'created_at',
      expansions: 'author_id,geo.place_id'
    }
    // short cut to cached version
    const tweets = twt.ref({ noCache: true }).tweets
    const t = twt.ref({ noCache: false }).users
    const u = twt.ref({ noCache: true }).users
    const uQuery = u.query({ fields })


    const { actual: baseFields } =
      unit.not(null, tweets.query({ fields: tweetFields }).page({ max: 20 }).search(queryString).throw(), {
        description: 'get some tweets to extract author ids from'
      })

    const ids = baseFields.data.expansions.includes.users.map(f => f.id)

    const { actual: users } = unit.not(null, uQuery.get(ids).throw(), {
      description: 'user by ids'
    })

    unit.is(true, Reflect.has(users.data.items[0], 'username'), {
      description: 'username property exists'
    })

    unit.is(true, Reflect.has(users.data.expansions.includes, 'tweets'), {
      description: 'pinned_tweet_id expansion exists'
    })

    // dedup and not all users have a pinned tweet
    const pinned = Array.from(new Set(users.data.items.map(f => f.pinned_tweet_id).filter(f => f)))
    const includedTweets = users.data.expansions.includes.tweets.map(f => f.id)
    const missingTweets = pinned.filter(f=>!includedTweets.includes(f))


    if (missingTweets.length) {
      const getMissing = tweets.get(missingTweets).throw().data
      unit.is(missingTweets.length, getMissing, {
        compare: (e,a) => a.expansions.errors.length === e && !a.items.length,
        description: `${missingTweets.length} missing pinned tweets are deleted`
      })
      unit.is(missingTweets, getMissing.expansions.errors.map(f=>f.value), {
        description: 'all missing tweets were accounted for as deleted'
      })
    }
    unit.is(pinned.filter(f=>includedTweets.includes(f)), includedTweets, {
      description: 'all the pinned tweets are included in expansion - except deleted ones'
    })

    const usernames = users.data.items.map(f=>f.username)
    unit.is(usernames, u.by(usernames).throw().data.items.map(f=>f.username), {
      description: 'get by usernames matches by id'
    })

    
    unit.is("brucemcpherson", t.me().throw().data.items[0].username, {
      description: 'username me matches'
    })

    unit.is("#gde", t.query({fields}).me().throw().data, {
      compare: (e,a) => a.items[0].description.match(e),
      description: "me extra fields work"
    })


  }, {
    description: 'get users by id,username,me',
    skip: skipTest.userGet
  })
  unit.report()
}
tests
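The tests above lean on the plugin's fluent, immutable chaining: `ref()`, `query()` and `page()` each return a fresh instance, so a partial chain like `const query = t.query({ fields })` can be stored and reused as a fixture without later calls polluting it. Here's a minimal standalone sketch of that pattern (a mock for illustration only — `TwtMock`, its state shape and its fabricated search results are invented here, not part of the plugin; only the rounding-to-nearest-10 depager behavior is taken from the tests):

```javascript
// Mock of the immutable fluent-builder pattern the tests rely on:
// each chain step clones state rather than mutating it
class TwtMock {
  constructor(state = { fields: {}, max: 100, noCache: true }) {
    this.state = state
  }
  clone(patch) {
    return new TwtMock({ ...this.state, ...patch })
  }
  ref({ noCache = true } = {}) {
    return this.clone({ noCache })
  }
  query(fields = {}) {
    return this.clone({ fields: { ...this.state.fields, ...fields } })
  }
  page({ max } = {}) {
    // as the tests show, the depager rounds the limit up to the nearest 10
    return this.clone({ max: max ? Math.ceil(max / 10) * 10 : this.state.max })
  }
  search(queryString) {
    // a real search would hit the Twitter API via SuperFetch; fabricate items here
    const items = Array.from({ length: this.state.max }, (_, i) => ({
      id: `${queryString}-${i}`
    }))
    return { data: { items }, cached: !this.state.noCache }
  }
}

// reusable partial chains, in the same shape as the tests
const t = new TwtMock().ref({ noCache: false })
const query = t.query({ 'tweet.fields': 'created_at' })

console.log(query.page({ max: 29 }).search('Apps Script').data.items.length) // 30 - rounded up
console.log(query.page({ max: 20 }).search('Apps Script').cached) // true - this ref caches
```

Because every step clones, the same `query` fixture can be paged with different limits in successive assertions — which is exactly why the test sections can seed a cache with one chain and re-run a sibling chain against it.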

Links

bmSuperFetch: 1B2scq2fYEcfoGyt9aXxUdoUPuLy-qbUC2_8lboUEdnNlzpGGWldoVYg2 (IDE | GitHub)

bmUnitTester: 1zOlHMOpO89vqLPe5XpC-wzA9r5yaBkWt_qFjKqFNsIZtNJ-iUjBYDt-x (IDE | GitHub)

cGoa: 1v_l4xN3ICa0lAW315NQEzAHPSoNiFdWHsMEwj2qA5t9cgZ5VWci2Qxv2 (IDE | GitHub)

Related

Twitter API docs https://developer.twitter.com/en/docs/twitter-api

Twitter Developer profile https://developer.twitter.com/en/portal

SuperFetch plugin – Google Drive client for Apps Script – Part 1

SuperFetch – Twitter plugin for Apps Script – Get Follows, Mutes and blocks

SuperFetch plugin – Twitter client for Apps Script – Counts

OAuth2 and Twitter API – App only flow for Apps Script

SuperFetch plugin – Twitter client for Apps Script – Search and Get

Apps Script Oauth2 library Goa: tips, tricks and hacks

Apps Script Oauth2 – a Goa Library refresher

SuperFetch plugin – Firebase client for Apps Script

SuperFetch plugin – iam – how to authenticate to Cloud Run from Apps Script

SuperFetch – a proxy enhancement to Apps Script UrlFetch