This article is all about extending an object by proxying it and adding new features. As far as the user is concerned the interface is the same, but underneath we do lots of fancy things. We’ll use the redis client for this example, as a cache is a great help in defeating rate limits and other annoyances when working with APIs, especially in the context of my Gassypedia – public Google Apps Script on Github project, but the same principles can be applied to any such client.

The proxy is written so that other cache clients can be plugged in – in a later article I’ll write up an Apps Script CacheService version. Not all the code is given in this article – just the parts that need explanation. You can find the entire code on GitHub.

I’ll also spend a bit of time on how to manage secrets stored in GCP Secret Manager and how to consume them in your local shell and Node app.

Here’s how it works.

Objective

To create a proxy for the redis client which:

  • Accepts primitives or objects as a key
  • Stringifies payloads on set and parses them back to a result on get
  • Adds an optional prefix to the key to allow partitioning of the same database without key collisions
  • Keeps the client interface the same despite the additional functionality
  • Allows for spreading items across many physical records if the payload is bigger than the platform’s maximum
  • Automatically compresses the payload (unless it’s a very small payload)
  • Can optionally be initialized to not do any of that so the same code can be used natively if required
  • Limits only to a subset of redis commands that take (key[,value]) as args – mostly these are what I use anyway

Examples

Here are a few examples of what I’m aiming for in addition to the usual redis (key, stringValue) behavior. Typically you would want to key API values by such things as params, token and method – so I’d set up a redisProxy with a prefix of the API name, and a key made of the variables which identify the call.

// whatever can uniquely identify the item to cache
// the key will be hashed to a string on the way to redis
const key = {params: [{foo:1,bar:2}] , method: 'get', token: '123'}

// data will be stringified/parsed automatically
const data = {
id: 'xyz',
items: ['a','b']
}

// this returns "OK"
const p = await cProxy.set(key, data)

// this returns an object
// {value: the original data, hashedKey: the converted key, timestamp: when written}
const result = await cProxy.get(key)

// this returns 0 - nothing deleted or 1 - item was deleted
const d = await cProxy.del(key)
Some examples

Tip: see the test folder on GitHub for an extensive set of examples.

Cache Configuration

The idea is that you include this project as a folder in yours, and add any configuration data for your environment. For this article, I’ve simply extracted and published the caching pieces from my Gassypedia – public Google Apps Script on Github project.

You can of course supply configuration however you like, but here’s how I do it. Configuration is in two parts – secret and public. I’ll be using GCP Secret Manager to hold any secret info.

Cache Secrets

Typically these kinds of configuration objects are dynamically populated from secrets APIs such as Kubernetes, GCP or Doppler (see Sharing secrets between Doppler, GCP and Kubernetes) and are exposed only as environment variables. In this example, secrets such as redis passwords are held in GCP Secret Manager as a JSON string.

Here’s an example of a secrets configuration file, where I’ve set up multiple redis environments from which I can select. This data never exists as a file locally or on Github.

{
  "redisConfigs": {
    "databases": {
      "gitsplit": {
        "host": "my-rediscloud-host",
        "password": "my-rediscloud-password",
        "port": "my-rediscloud-port"
      },
      "local": {
        "host": "127.0.0.1",
        "password": "my-local-password",
        "port": 6379
      }
    }
  }
}
Configuration file – from gcp-secrets

Getting secrets into your environment

Once you are signed in to the GCP project hosting your secret in your local shell, the commands below will bring the values into your environment.

. ./shells/getsecrets.sh
# alternatively on some platforms,
# you might instead use
# source ./shells/getsecrets.sh
# to execute
shell to bring secrets into environment

The shell for this looks like this. You should edit getsecrets.sh with your project id and secret name.

PROJECT="my-gcp-project"
SECRET=redis_secrets
P=$(gcloud config get project)
if [ "$P" = "$PROJECT" ]; then
  REDIS_SECRETS=$(gcloud secrets versions access latest --secret=${SECRET})
  export REDIS_SECRETS
else
  echo "current project ${P} doesn't match required project ${PROJECT}"
fi

Consuming secrets

You can consume them via the process.env object in your node app.

export const getSecrets = ({ name }) => {
  const secrets = process.env[name]
  if (!secrets) {
    console.log('.. did you run . ./shells/getsecrets.sh in your shell first?')
    throw new Error(`${name} not set`)
  }
  return JSON.parse(secrets)
}
getsecrets.mjs

Platform specific configuration

Because we’re writing this to be transportable across platforms, we’ll need to configure things that are specific to a given platform. Here’s what we’ll need to support Node. An Apps Script version will have an entirely different way of providing these functions.

import gz from "node-gzip";
const { gzip, ungzip } = gz;
import hash from "object-hash";

export const platformSpecific = {
  // compression functions
  zipper: gzip,
  unzipper: ungzip,
  // buffer/base64 conversions
  toBase64: (buf) => buf.toString("base64"),
  fromBase64: (str) => Buffer.from(str, "base64"),
  // hash any key (object or primitive) to a base64 string
  makeKey: ({ prefix = "", key }) =>
    hash(
      { prefix, key },
      {
        encoding: "base64",
      }
    ),
}
platformspecific.mjs

Cache specific configuration

Similarly we’ll need configuration that is specific to the cache being used – in this case redis. A couple of notes:

  • The propMap maps specific commands to how the proxy will recognize them. For example, in Apps Script the property name for “set” is “put”, whereas in redis it’s “set”.
  • The zip parameters control whether or not to zip large objects. The threshold defines the size at which the overhead of compression becomes worthwhile – anything less than about 150 chars will actually take up more space once zipped. I set it a bit higher to avoid the zipping overhead if I’m not gaining much.
  • The expiration parameters are how long by default (in seconds) to keep items alive for. This is automatically applied to every “set” operation, unless an explicit expiration argument is provided.
  • maxChunk defines how large to allow a compressed item to be before splitting it into multiple items.
  • Different configurations are possible, as you can see from this example. For testing, I want to keep the size of cache items down with maxChunk to ensure that I test spreading data across multiple physical items, and of course I can set their default expiry to some smaller time.
import { platformSpecific } from './platformspecific.mjs'

// this could vary depending on platform
const redisPropMap = {
  set: 'set',
  get: 'get',
  del: 'del'
}

// configs specific to redis
const redisPlatform = {
  propMap: redisPropMap,
  ...platformSpecific,
  gzip: true,
  gzipThreshold: 800
}

export const cacheSettings = {
  redis: {
    expiration: 28 * 24 * 60 * 60,
    prefix: 'prod',
    maxChunk: Infinity,
    ...redisPlatform
  },

  test: {
    expiration: 2 * 60 * 60,
    prefix: 'test',
    maxChunk: 999,
    ...redisPlatform
  }
}
cachesettings.mjs

Getting a proxied client

Now we can get a proxy for the redis client. I also like to do a connectivity test when getting a client – so this configuration will set, get and delete a test item before returning the proxy.

// get a redis client and check connectivity
import { default as cache } from "../index.mjs";
const { getcProxy, fetchConfig } = cache;
import { getRedisConfigs } from "./helpers/redisconfig.mjs";

const cProxy = await getcProxy({
  database: "local",
  extraConfig: "test",
  testConnectivity: true,
  redisConfigs: getRedisConfigs()
});
Get a proxy client for database "local", configuration "test"

And getRedisConfigs sets up the secret data specific to the chosen environment.

import { getSecrets } from "./getsecrets.mjs"
export const getRedisConfigs = () => getSecrets({ name: "REDIS_SECRETS" }).redisConfigs
redisconfig.mjs

How proxying works

When you create a proxy for an object, you can ‘intercept’ calls to that object. For example, you could intercept access to a “get” property and return something different than the native object would. Similarly, you can intercept an ‘apply’ when an attempt is made to execute one of the object’s methods.

We intercept access to a number of properties which resolve to methods and, instead of the native function, return an alternative function which adds all the fancy enhancements to the basic method. When that property is then executed (applied), our alternative function runs instead.

Proxy code

Here’s the code for a proxied version of the client. Some notes:

  • The proxyExports set defines various potentially useful helper functions, exposed so they can be accessed directly outside the proxy if necessary – I won’t cover examples of that in this article, but may do in a later one.
  • hashProps contains the list of properties to intercept in order to create hashedKeys. Any other property accessed will use the key supplied without modification and will return the native version of the property/function. Attempts to access properties in the hashProps set will return a modified apply handler – in other words, an entirely different function than the one the redis client would normally return.
  // so this is the vanilla client
const client = new Redis({
  password: config.password,
  host: config.host,
  port: config.port,
});

// these functions can be exported as part of the proxy so more complex redis commands are available
const proxyExports = {
  proxyKey: makeKey,
  proxyUnpack: payUnpack,
  proxyPack: payPack,
  proxySetPack: setPack,
  proxyUnsetPack: unsetPack,
};

// we don't hash every property - just these for now
const hashProps = new Set([
  "set",
  "get",
  "exists",
  "expire",
  "ttl",
  "persist",
  "del",
]);

// make the proxy
const proxy = new Proxy(client, {
  // we'll be called here on every property get on the client
  get(target, prop, receiver) {
    // the caller is after some of the proxy functions to use them independently
    if (Reflect.has(proxyExports, prop)) return proxyExports[prop];

    // if it's one of the target methods, we'd like to send it back wrapped
    // so that when it's applied, it will execute my version of the function
    if (typeof target[prop] === "function" && hashProps.has(propMap[prop])) {
      return makeApplyHandler(target, prop);
    } else {
      // not a function we want to intercept
      return Reflect.get(target, prop, receiver);
    }
  },
});
get a proxy for a redis client

Returning the modified handler

You can see from the above that if one of the target properties is requested, an alternative handler is returned. Actually, what is returned is another proxy – this time a proxy for the requested method. In other words, each time one of the target methods is requested from the client, a proxy for that method is returned – so we can make it do whatever we want.

  // generates a proxy with an apply handler
const makeApplyHandler = (target, prop) => {
  return new Proxy(target[prop], {
    apply(func, thisArgs, args) {
      return applyHandler(prop, func, thisArgs, args);
    },
  });
};
makeApplyHandler

The apply handler

Now we can look at how these enhanced features are implemented. Some notes:

  • We’ll just look at the structure here. The full code is on Github.
  • If there are no arguments we can simply apply the native function unmodified.
  • The key will be the first argument to all our target functions. The first step is to hash the key (which could be pretty much anything) together with the prefix allocated for this instance of the client.
  • There could be a whole list of additional arguments that we need to preserve. The second argument will always be the data to write in a set operation, so we have to handle that by converting it to a string, compressing it if required, and even spreading it across several cache items if the resultant string is too long.
  • exArgs handles the default expiry time – if there is already an expiry time mentioned in the arguments, we use that; otherwise we add an expiry command to the argument list which applies the default expiry time.
  • The commit function is the native function with the modified arguments and any additional arguments preserved as a closure.
  • We need to handle set, get and del to compress/uncompress the payload. Since a single cache item may be spread over multiple cache records, the del method potentially needs to delete multiple records – so in addition to a del, it also needs a get handler to find them.
  • Any other methods just need the key to be hashed, plus any additional arguments preserved.
  /**
 * apply handler for fixing up the keys and data
 */
const applyHandler = async (prop, func, thisArg, args) => {
  // if there are no args, we'll just apply the function as is
  if (!args.length) return func.apply(thisArg, args);

  // the first arg for handled functions will always be the key
  // so we'll hash that to a b64 value
  const [key] = args;
  const hashedKey = makeKey(key);

  // the rest of the args will start with the value if we're doing a set
  const [value] = args.slice(1);
  const otherArgs = args.slice(2);

  // construct default expiration
  // if there's already an EX arg we don't need to specify it again
  const exArgs = getExArgs({ prop, otherArgs });

  // this applies the selected method
  const commit = async (hashedKey, packedValue) => {
    const fargs = [hashedKey]
      .concat(packedValue ? [packedValue] : [], exArgs, otherArgs)
      .slice(0, args.length + exArgs.length);
    return func.apply(thisArg, fargs);
  };

  // special handling for packing/unpacking
  switch (propMap[prop]) {
    case "set":
      // this will pack/zip/chunk etc as required
      return setPack(hashedKey, value, commit);

    // in this case we potentially need to get multiple items
    case "get":
      return unsetPack(hashedKey, commit);

    // delete may actually have to delete multiple recs so it needs a getter
    case "del": {
      const getProp = Reflect.ownKeys(propMap).find(
        (f) => propMap[f] === "get"
      );
      if (!getProp) throw new Error(`couldn't find the get prop in propMap`);
      const getter = async (hashedKey) =>
        client[getProp](hashedKey, ...args.slice(1));
      return delPack(hashedKey, commit, getter);
    }

    // everything else is vanilla
    default:
      return commit(hashedKey, value);
  }
};
applyHandler proxy

The rest of the code

This article is mainly about how to implement a proxy like this. To see how the compression and splitting across multiple items are handled, see redis.mjs and proxyUtils.mjs on GitHub.

Tests

There is a test folder with a series of caching tests. I use ava for testing, so you’ll need to install that if you want to run the tests.

Links

GitHub

Gassypedia – public Google Apps Script on Github