Editor’s Note: This blog post is authored by Blair Kutzman, who developed the Gmail Delay Send script. - Eric Koleda
Update: To start using this script, simply open this page and follow the instructions.
In today’s connected world, when you get your email can be just as important as what’s in it. In 2011, over 107 trillion emails were sent to 1.97 billion internet users. Studies have shown that the average person can only effectively process 50 emails in a day, which leaves roughly 100 emails per person per day that are not processed effectively. How can you be sure that the emails you send fall into the former category and not the latter?
Luckily, there are tools to assist with email overload. One such tool is Gmail Delay Send.
Gmail Delay Send is a Google Apps Script that allows you to schedule emails to be delivered on a specified date and time. Using this tool, you can ensure your email reaches its destination at a time when you can capture your recipient’s full attention. For example, an email received at 4:59 PM on Friday might not get the same attention as one received on Monday at 10:00 AM.
A primary requirement of Gmail Delay Send was that it needed to work everywhere Gmail is available. There are already many browser add-ons and services available to enhance Gmail with similar functionality, so the purpose was not to duplicate that work. In order for the service to be available on all platforms, it needed to utilize native Gmail features.
We needed a native way that Gmail could:
1. Store emails that should be sent at a later time.
2. Store the metadata describing when each email should be sent.
Gmail already contains a 'Drafts' folder, which is exactly what is required for item 1. The real problem was where and how to store the metadata for item 2 without adding any new Gmail functionality. I chose to encode the metadata in the subject of the message because the subject contains less text, which means a smaller chance of mis-parsing. Text such as “5 hours” and “next tuesday” is turned into a date-time using an open source library called datejs, plus a few modifications. See below for details of how this potentially cumbersome process was improved.
The script works as follows: you compose your email as usual, append the desired send time to the subject, and save the message as a draft. A time-driven trigger then periodically scans your drafts, and any message whose scheduled time has passed is cleaned up and sent, as sketched below.
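A rough sketch of that flow (the helpers parseSubjectDate_ and cleanSubject_ stand in for the script’s actual parsing code):

function checkDrafts() {
  var drafts = GmailApp.getDrafts();
  for (var i = 0; i < drafts.length; i++) {
    var msg = drafts[i].getMessage();
    // Parse the scheduling text that the user appended to the subject.
    var sendAt = parseSubjectDate_(msg.getSubject());
    if (sendAt && sendAt.getTime() <= new Date().getTime()) {
      // Strip the scheduling text from the subject, then send.
      GmailApp.sendEmail(msg.getTo(), cleanSubject_(msg.getSubject()),
          msg.getPlainBody());
      drafts[i].deleteDraft();
    }
  }
}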
Although using datejs to parse the dates from the subject line was easy to implement, it introduced some usability issues. First, how would a user know whether a certain string can be parsed by datejs (e.g., is “5 minutes before 4PM” parsable)? To assist the user, the script offers a mechanism to test a given string directly in the spreadsheet that Gmail Delay Send is installed in. In this way, a user can test various strings to see if they are valid and, if so, when the email would be sent. A wiki page is dedicated to helping people through this process.
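Such a test helper can be tiny. A minimal sketch, assuming datejs has been loaded into the script project (datejs overrides Date.parse to understand natural-language strings):

function testDateString(text) {
  // datejs's Date.parse returns a Date on success and null on failure.
  var parsed = Date.parse(text);
  return (parsed === null)
      ? 'Could not parse "' + text + '"'
      : 'Would send at: ' + parsed.toString();
}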
Another potentially confusing part of using Gmail Delay Send was setting up triggers. Thanks to some recent improvements to the Script services, this is now done automatically for users when they install.
Adding retry logic to the script was another important step in improving its reliability and user experience. Occasionally, users were getting emails from their trigger informing them that a certain Google Script API could not be contacted. Some retry logic was required to make things work correctly. As shown in the snippet below, the function executeCommand() takes any function and will try to execute it a specified number of times, retrying if an error is thrown:
// NUM_RETRIES, SLEEP_TIME and debug_logs are script globals.
function executeCommand(fp) {
  var msg;
  var ret_val;
  var last_error;
  // Try the function up to NUM_RETRIES times, sleeping between attempts.
  for (var retries = NUM_RETRIES; retries > 0; retries -= 1) {
    try {
      ret_val = fp();
      break;
    } catch (err) {
      last_error = err;
      msg = "Exception:" + err + " thrown executing function:" + fp;
      debug_logs.push(msg);
      Logger.log(msg);
      Utilities.sleep(SLEEP_TIME);
    }
  }
  if (retries == 0) {
    msg = "Attempted to execute command:" + fp + " " + NUM_RETRIES +
        " times without success. Error message: " + last_error +
        ". Aborting :-(";
    Logger.log(msg);
    throw(msg);
  }
  return ret_val;
}
Using this method, statements like those below will automatically retry if the service is not available.
executeCommand( function() { GmailApp.sendEmail( … ); } );
executeCommand( function() { return UrlFetchApp.fetch(url); } );
Gmail Delay Send was a fantastic project for learning about Google Apps Script, and I hope that it will continue to prove useful to its users. If you’re interested in using Gmail Delay Send, or just curious about the development process, please check out the homepage or source.
Have you ever written a particular piece of code over and over again? Or used scripts to do something that you thought others might want to do as well? Starting today, you’ll be able to share and reuse those scripts as libraries, right from inside Google Apps Script.
I often write scripts which check the National Weather Service for relevant weather-related information. This allows me to send myself an email if it’s going to rain, reminding me to bring an umbrella to work, or to annotate my spreadsheet of running workouts with the temperature of the day.
Remembering how to query the National Weather Service every time I write a script is a daunting task, however. They have a complicated XML format that is tricky to parse. As a result, I end up just copying and pasting code each time. This is not only error-prone, but also has the big disadvantage that I have to fix all of my scripts one by one whenever the Weather Service’s XML format changes.
The code I use to query the National Weather Service is a perfect use case for a library. By using a library, I no longer have to copy and paste code in my script project. Since logic is centralized, updates need to be applied just once. And now I am able to share my library with other developers who can benefit from the work I’ve already done.
Libraries are written just like any other Apps Script project. A good library has a clean API which is also well documented. Here’s a code snippet from my WeatherService library:
/**
 * Queries the National Weather Service for the weather
 * forecast of the given address. Example:
 *
 * <pre>
 * var chances = WeatherService
 *     .getPrecipitation("New York, NY");
 * var fridayChance = chances["Friday"];
 * Logger.log(fridayChance + "% chance of rain on Friday!");
 * </pre>
 *
 * @param {String} address The address to query the
 *     temperature for, in any format accepted by
 *     Google Maps (can be a street address, zip
 *     code, city and state, etc)
 *
 * @returns {JsonObject} The precipitation forecast, as a
 *     map of period to percentage chance of
 *     precipitation. Example:
 *
 * <pre>
 * { Tonight: 50, Friday: 30, "Friday Night": 40, ... }
 * </pre>
 */
function getPrecipitation(address) {
  // Code for querying weather goes here...
}
Notice how detailed the documentation is. We know that good documentation makes for a great library, so for every library, Apps Script will auto-generate a documentation page based on the code comments, using the JSDoc format. If you don’t want a method in your code to be exposed to users, simply end its name with an underscore.
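For example (a minimal sketch; buildQueryUrl_ and the endpoint are illustrative, not part of the WeatherService library):

// Exposed to library users and included in the generated documentation:
function getForecastUrl(address) {
  return buildQueryUrl_(address);
}

// Hidden from library users, because its name ends with an underscore:
function buildQueryUrl_(address) {
  // Hypothetical endpoint, for illustration only.
  return 'http://forecast.example.com/?address=' + encodeURIComponent(address);
}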
Before code can be used as a library, a version of it needs to be saved. Versions are a new concept in Apps Script: they represent a snapshot of your project that won’t change even as changes are made to the script code. Versions are useful because they allow you to change your library code without breaking existing users. Once you’re happy with the changes you’ve made, you can save a new version. Saving a version and sharing your code as a library is easy; please see the user guide for details.
Using a library only takes a few steps. First, the owner of the library must share the library and its project key with you; you can then follow these instructions to include it in your own project. To use the National Weather Service library above, please visit this page for the project key.
Script Libraries come with three interesting features: versioning, so changes to your code don’t break existing users; auto-generated documentation, as described above; and a development mode that lets users with edit access to a library run against your latest code instead of a saved version.
To get started with Script Libraries, you can find a list of useful libraries contributed by two of our top contributors, James Ferreira and Romain Vialard. You can also find a detailed user guide on managing versions and libraries. We hope you enjoy using libraries.
Editor’s note: This is a guest post by Laura Bârlădeanu, lead programmer at MindMeister. -- Steve Bazyl
MindMeister is a market innovator in collaborative online mind mapping. Launched in May 2007, our site has since attracted hundreds of thousands of businesses, academic institutions and creative consumers, who have mapped over 100 million ideas online. We were one of a few web applications invited to take part in the Google Drive launch earlier this year.
The goal was to provide users with an intuitive integration between Google Drive and MindMeister, one that would cover all the cases described by the Google Drive guidelines at the time.
Aside from these main integration points, we wanted to make use of the SDK to provide many more useful Google Drive features, so we added a few requirements of our own to the list.
Google Drive applications are required to use OAuth 2.0 for authorization, and are recommended to use OpenID Connect for login. The authorization scope for Drive files is added by default for all registered Drive apps. Additionally, an application can request extra scopes to fit its needs. For our requirements, we needed the following scopes:
https://www.googleapis.com/auth/drive.file
https://www.google.com/m8/feeds/
https://www.googleapis.com/auth/userinfo.profile
https://www.googleapis.com/auth/userinfo.email
However, we didn’t want to turn users away from our application by asking for too many scopes right at the beginning. Instead, we defined sets of actions, each requiring only a subset of these scopes:
['drive', 'profile', 'email']
['profile', 'email']
['contacts', 'profile', 'email']
Whenever the user wanted to execute an action that would require more scopes than they initially provided, we redirected them to a Google authorization dialog that requested the extra scope. Upon authorization, we stored the individual refresh tokens for each combination of scopes in a separate model (UserAppTokens).
Whenever the application needed the refresh token for a set of scopes (e.g. for ['profile', 'email']), it would fetch from the database the refresh token that corresponded to a superset of the required scopes (e.g. ['drive', 'profile', 'email'] would fit the required ['profile', 'email']). The access token would then be obtained from Google and stored in the session for future requests.
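A minimal sketch of that lookup, assuming each stored token record carries the list of scopes it was granted for (the names are illustrative, not MindMeister’s actual code):

// Return the first stored refresh token whose granted scopes cover
// every required scope, or null if the user must re-authorize.
function findRefreshToken(storedTokens, requiredScopes) {
  for (var i = 0; i < storedTokens.length; i++) {
    var granted = storedTokens[i].scopes;
    var covers = requiredScopes.every(function(scope) {
      return granted.indexOf(scope) !== -1;
    });
    if (covers) {
      return storedTokens[i].refreshToken;
    }
  }
  return null;
}

With this in place, a request for ['profile', 'email'] is satisfied by a token stored for ['drive', 'profile', 'email'].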
The main challenge we encountered during design and implementation was dealing with the special cases of multiple users (Google users or internal users) editing the same map, which is a Google Drive file, as well as the map being edited in multiple editors. We also had to find a way to map a Google Drive user’s permissions (owner, reader, or writer) onto MindMeister’s existing permission mechanism.
The MindMeister application is registered to open four types of files: our own .mind format, MindManager’s .mmap format, Freemind’s .mm format, and .xmind. However, since these formats are not 100% compatible with each other, there is always a chance of losing more advanced features when opening a file in a format other than .mind. We wanted to give the user the choice of whether an opened file should be saved back in its original format, at the risk of some feature loss, or saved in the MindMeister format. This option should be per user and per file, with the possibility of remembering it for future files.
After analyzing the requirements and the use cases, we designed the architecture described below.
Using the revision fields in both the Map and DriveData models, we always know if the map has been edited on MindMeister’s side without being synced to the corresponding file on Google Drive. The token field in DriveData, on the other hand, holds the file’s MD5 checksum at the moment of the last update, as supplied by the Google Drive SDK. So if the file is edited externally by an application other than MindMeister, we have a mechanism in place for detecting this and presenting the user with a few courses of action.
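A minimal sketch of that detection logic, assuming revisions are monotonically increasing counters (names are illustrative):

function checkSyncState(map, driveData, remoteMd5) {
  // Edited in MindMeister since the last sync to Drive?
  var dirtyLocal = map.revision > driveData.revision;
  // Edited on Drive by another application since the last sync?
  var dirtyRemote = remoteMd5 !== driveData.token;
  if (dirtyLocal && dirtyRemote) return 'CONFLICT';  // ask the user
  if (dirtyLocal) return 'PUSH_TO_DRIVE';
  if (dirtyRemote) return 'PULL_FROM_DRIVE';
  return 'IN_SYNC';
}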
Upon opening a file in a format other than .mind, the user is prompted with a dialog where they can choose whether the file should be saved back in the same format or in MindMeister’s own format. These choices are remembered for the current session, and the per-map settings are stored in the extension (the original format) and save_extension (the format to save back in) fields of the DriveData model.
A map on MindMeister can always be shared with other MindMeister users, and collaborators can have read or write access to it. However, only some of these users will have a corresponding Google account with access to the MindMeister Google Drive application, and furthermore, only some of them will have write access to the same file on Google Drive. This is why it is important for us to know which users can write back to the file; we handle these access levels with the permission field in the DriveDataRight model.
Now, more than two weeks on from the Google Drive launch, we can confidently say that our integration was successful, with more than 14,000 new users using Google login and over 7,000 users who have enabled the Google Drive integration. All in all, the Google Drive SDK was very easy to use and well documented. The developer support team, especially, was always there to help, and our contacts were open to our suggestions.
My role in Google Docs is to help manage many projects across Google Docs/Drive. As part of my job, I ask for a fair amount of data from all of those teams and generate reports on project/feature status. To make this much simpler for everyone involved, I have created a lot of simple tools (as well as some complex ones) using Google Spreadsheets and Apps Script that make it easier for collaborators to enter data and for me to collect that data and create reports. Below are two foundational techniques that I include in nearly every Spreadsheet/Apps Script tool I build.
I have dozens of scripts generating reports. Rather than hard-coding settings, I set up a dedicated sheet for script configuration and read values from that sheet during script execution. A simple configuration sheet makes this much more straightforward.
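For illustration, a config sheet might look something like this (the values are hypothetical; the three columns match the VARIABLE, ISARRAY, VALUE(S) structure that the loader below expects):

VARIABLE      ISARRAY   VALUE(S)
toList        ARRAY     alice@example.com,bob@example.com
reportcols    ARRAY     feature,desc,prod
reporttitle             Weekly Feature Report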
With a globally accessible array, globals, you can then load the “settings” from the configuration (sheet SHT_CONFIG, here) at any entrypoint to the script.
// globally accessible variables
var SHT_CONFIG = 'Config';
var globals = null;

function entryPoint() {
  // Load the settings once; a string-keyed array always reports
  // length 0, so test for null rather than for an empty array.
  globals = globals || LoadGlobals_(SpreadsheetApp.getActive(), SHT_CONFIG);
  // your code goes here
}
The LoadGlobals_ function, below, parses the data in the first three columns of the named config sheet in the spreadsheet passed to it. You can even include a fourth column (or more!) explaining what the variables do; extra columns are simply ignored - though hopefully not by your users!
// Generate global variables to be loaded into the globals array
function LoadGlobals_(wb, configSheet) {
  var configsheet = wb.getSheetByName(configSheet);
  var tGlobals = new Array();
  // Config data is structured as VARIABLE, ISARRAY, VALUE(S)
  // and includes that as the header row
  var cfgdata = configsheet.getDataRange().getValues();
  for (var i = 1; i < cfgdata.length; i++) {
    switch (cfgdata[i][1]) {
      case 'ARRAY':
        // treat as an array - javascript puts a null value in the
        // array if you split an empty string...
        if (cfgdata[i][2].length == 0) {
          tGlobals[cfgdata[i][0]] = new Array();
        } else {
          tGlobals[cfgdata[i][0]] = cfgdata[i][2].split(',');
        }
        break;
      // Define your own YOURDATATYPE using your customTreatment function (or
      // just perform the treatment here)
      case 'YOURDATATYPE':
        tGlobals[cfgdata[i][0]] = customTreatment(cfgdata[i][2]);
        break;
      default:
        // treat as generic data (string)
        tGlobals[cfgdata[i][0]] = cfgdata[i][2];
    }
  }
  return tGlobals;
}
Once the global values are loaded during script execution, you can refer to any of them with globals.toList or globals['toList']. For instance:
function getToList() {
  return globals.toList.join(','); // or return globals['toList'].join(',');
}
Asking colleagues to enter tracking data so that they don’t have to write their own status reports is one thing. Asking them to enter tracking data in a specific format, within a specific column layout, in a way that doesn’t mesh with their existing processes is another thing entirely. So I use the following technique, relying on column names rather than column ordering. The code below lets me do just that by building a key-value object mapping column headings to their positions in a worksheet.
// Returns a key-value object for column headings and their column number.
// Note that these are retrieved based on the array index, which starts at 0;
// the columns themselves start at 1...
// Pass the header row of data (array) and an array of variables/column names,
// e.g.: BUG_COL_ARRAY['id'] = 'Id';
function ColNumbers(hArray, colArray) {
  for (var oname in colArray) {
    this[oname] = getColIndex(hArray, colArray[oname]);
  }
}

// -----------------------------------------------------------------------------
function getColIndex(arr, val) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i].toLowerCase() == val.toLowerCase()) {
      return i;
    }
  }
  return -1;
}
With the associative array defined below, I can ask Apps product managers to add (or rename) columns in their feature tracking sheets and then extract features from every Apps product team in one fell swoop (a future post). Each product team can set their columns up in whatever order works best for them.
// key columns in the app feature sheets
var COLS_KEYAPPCOLS = new Array();
COLS_KEYAPPCOLS['feature'] = 'Feature Title';
COLS_KEYAPPCOLS['desc'] = 'Description';
COLS_KEYAPPCOLS['visible'] = 'Visible';
COLS_KEYAPPCOLS['corp'] = 'Corp Date';
COLS_KEYAPPCOLS['prod'] = 'Prod Date';
What does this do for me, really? I reuse this code for every project of this sort. To reuse it, I define the column-name map (like COLS_KEYAPPCOLS above), read the sheet’s data, and build a ColNumbers object from the header row:
var curFeatures = curSheet.getDataRange().getValues();
var curCols = new ColNumbers(curFeatures[0], COLS_KEYAPPCOLS);
I can, from now on, refer to the Description column using something like curCols.desc when referencing any of the products’ data. The Spreadsheets team may list new feature descriptions in the second column, and the Documents team may list new feature descriptions in the fourth column. I no longer worry about that.
As a bonus, I can define the columns and ordering to be used in a report in my config sheet (see above). If I’ve defined reportcols as feature, desc, prod in my config sheet, I can generate a report very simply:
// Iterate through the rows of data, beginning with 1 (0 is the header)
for (var fnum = 1; fnum < curFeatures.length; fnum++) {
  // Iterate through each of the fields defined in reportcols
  for (var cnum = 0; cnum < globals.reportcols.length; cnum++) {
    var outputvalue = curFeatures[fnum][curCols[globals.reportcols[cnum]]];
    // outputvalue is what you want to put in your report.
  }
}
You could write that loop more simply by hard-coding the columns, but then if I wanted to report on ‘Corp Date’ instead, you’d have to change the code - this way, I only need to change the value in the config sheet to feature, desc, corp and I’m done.
Collecting and crunching data in a Google Spreadsheet becomes a lot simpler if you use Apps Script. Trust me, it makes your life easier. Try it now by copying this spreadsheet.
Editor’s note: This is a guest post by Ben Dilts, CTO & Co-founder of Lucidchart. -- Steve Bazyl
The release of the Drive SDK, which allows deep integration with Google Drive, shows how serious Google is about making Drive a great platform for third-party developers.
There are a handful of obvious ways to use the SDK, such as allowing your users to open files from Drive in your application, edit them, and save them back. Today, I'd like to quickly cover some less-obvious uses of the Drive API that we’re using at Lucidchart.
Applications have the ability to create new files on Google Drive. This is typically used for content the application itself creates. For example, an online painting application may save a new PNG or JPG to a user's Drive account for later editing.
One feature that Lucidchart has long provided to its users is the ability to download their entire account's content in a ZIP file, in case they (or we!) later mess up that data in some way. These backups can be restored quickly into a new folder by uploading the ZIP file back to our servers. (Note: we’ve never yet had to restore a user account this way, but we provided it because customers said it was important to them.)
The problem with this arrangement is that users have to remember to do regular backups, since there's no way for us to automatically force them to download a backup frequently and put it in a safe place. With Google Drive, we now have access to a reliable, redundant storage mechanism that we can push data to as often as we would like.
Lucidchart now provides automated backups of these ZIP files to Google Drive on a daily or weekly basis, using the API for creating new files on Drive.
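As a rough sketch of what pushing a backup to Drive can look like (Lucidchart’s production code runs server-side; this illustrative snippet uses the Drive v2 simple-upload endpoint, and backupZipToDrive is a hypothetical helper):

function backupZipToDrive(zipBlob, accessToken) {
  // Simple media upload: the ZIP bytes become a new file in the
  // user's Drive account.
  var response = UrlFetchApp.fetch(
      'https://www.googleapis.com/upload/drive/v2/files?uploadType=media', {
        method: 'post',
        contentType: 'application/zip',
        payload: zipBlob.getBytes(),
        headers: { Authorization: 'Bearer ' + accessToken }
      });
  return JSON.parse(response.getContentText()).id; // ID of the new file
}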
Another use for the files.create call is to publish finished content. Lucidchart, like most applications, stores its editable files in a custom format. When a user completes a diagram or drawing, they often download it as a vector PDF, image, or Microsoft Visio file to share with others.
Lucidchart is now using the create file API to export content in any supported format directly to a user's Google Drive account, making it easy to sync to multiple devices and later share those files.
Google Drive can't automatically index content created by Lucidchart, or any other application that saves data in a custom format, for full-text search. However, applications now have the ability to explicitly provide HTML content to Google Drive that it can then index.
Indexable text provided to the Drive API is always interpreted as HTML, so it is important to escape HTML entities. And if your text is separated into distinct pieces (like the text in each shape in Lucidchart), you can improve full-text phrase searching by dividing your indexable text into one div or paragraph element per piece. Both the files.create and files.update calls provide the ability to set indexable text.
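A minimal sketch of setting that text on an update (Drive v2 exposes it as the indexableText.text property of the file resource; the function and variable names here are illustrative):

function setIndexableText(fileId, shapeTexts, accessToken) {
  // One div per shape improves phrase search within each piece of text.
  var html = shapeTexts.map(function(text) {
    var escaped = text.replace(/&/g, '&amp;')
                      .replace(/</g, '&lt;')
                      .replace(/>/g, '&gt;');  // escape HTML entities
    return '<div>' + escaped + '</div>';
  }).join('');
  UrlFetchApp.fetch('https://www.googleapis.com/drive/v2/files/' + fileId, {
    method: 'patch',
    contentType: 'application/json',
    payload: JSON.stringify({ indexableText: { text: html } }),
    headers: { Authorization: 'Bearer ' + accessToken }
  });
}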
We hope that this overview helps other developers implement better integrations into the Google Drive environment. Integrating with Drive lets us provide and improve a lot of functionality that users have asked for, and makes accessing and using Lucidchart easier overall. We think this is a great result both for users and web application developers and urge you to check it out.
Apps Script developers have consistently expressed the need to monitor the health of various Apps Script services. Additionally, at every forum, event, hackathon or hangout, we have heard you express a need to know and understand the quota limits in Apps Script.
We heard your message loud and clear, so we started working on a dashboard for Apps Script. Today, we are launching the Google Apps Script Dashboard. This experimental dashboard can be used to monitor the health of 10 major services. It also provides a detailed view into the quota restrictions in Apps Script.
Did you know that consumer accounts (for example, @gmail.com accounts) have a quota of 1 hour of CPU time per day for executing triggers? Imagine the extent of automation that can happen for each user with triggers. And how about 20,000 calls per day to external APIs? That packs in a lot of third-party integration with the likes of Salesforce.com, Flickr, Twitter and other APIs. So if you are thinking of building extensions in Google Apps for your product, don’t forget to leverage the UrlFetch Service, which has OAuth built in. Event managers can create 5,000 calendar events per day, and SQL aficionados get 10,000 JDBC calls a day.
Check out the Dashboard for more.
Editor’s note: This is a guest post by Martin Böhringer, Co-Founder and CEO of Hojoki. -- Steve Bazyl
Hojoki integrates productivity cloud apps into one newsfeed and enables sharing and discussions on top of the feed. We’ve integrated 17 apps now and counting, so it’s safe to say that we’re API addicts. Now it’s time to share with you what we learned about the Google Apps APIs!
Our initial reason for building Hojoki was the fragmentation we experienced across all of our cloud apps. And all those emails. Still, there was this feeling of “I don’t know what’s going on” in our distributed teamwork. So we decided to build something like a Google+, where streams get automatically filled by activities in the apps you use.
This leads to a comprehensive stream of everything that’s going on in your team combined with comments and microblogging. You can organize your stream into workspaces, which are basically places for discussions and collaboration with your team.
To build this, we first needed some kind of information on recent events. Since we wanted to be able to aggregate similar activities, provide search, and split up the stream into workspaces, we also had to be able to tie events to unique objects like files, calendar entries and contacts.
Further, it’s crucial to know not only what has changed, but also who changed it. So providing unique identities is important for building federated feeds.
Google’s APIs share some basic architecture and structure, described in the Google Data Protocol. Based on that, application-specific APIs provide access to each application’s data. For Hojoki, we use the Documents List, Calendar, and Contacts APIs.
The basic call for Google Contacts, for example, looks like this:
https://www.google.com/m8/feeds/contacts/default/full
This responds with a complete list of your contacts. Once we have this list, all we have to do is ask for the delta against our existing knowledge. For such use cases, Google’s APIs support query parameters as well as sorting parameters, so we can set “orderby” to “lastmodified” and “updated-min” to the timestamp of our last call. This way we keep the traffic low and get quick results by only asking for things we might have missed.
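For example, a delta query against the Contacts feed might look like this (the timestamp is illustrative):

https://www.google.com/m8/feeds/contacts/default/full?orderby=lastmodified&updated-min=2012-06-01T00:00:00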
If you want to develop against those APIs, you should definitely have a look at their SDKs. We used the Google Docs SDK for an early prototype and loved it. Today, Hojoki uses its own generic connection handler for all of our integrated systems, so we no longer use the SDKs.
If you’re into API development, you’ve probably already realized that our information needs don’t fit many of the APIs out there. Most APIs are object-centric. They can tell you what objects are in a certain folder, but they can’t tell you which object in that folder has been changed recently. They just aren’t built with newsfeeds in mind.
Google Apps APIs support most of our information needs. Complete support of OAuth and very responsive APIs definitely make our lives easier.
However, the APIs are not built with Hojoki-like newsfeeds in mind. For example, ETags may change even if nothing happened to an object, because of asynchronous processing on Google’s side (see Google’s comment on this). For us this means that, once we detect an altered ETag, in some cases we still have to check against our existing data whether there really have been relevant activities. Furthermore, we often have trouble with missing actors in our activities. For example, we currently know when somebody changed a calendar event, but there is no way to find out who it was.
Another issue is the classification of updates. Google’s APIs tell us that something changed with an object. But to build a nice newsfeed you also want to know what exactly has been changed. So you’re looking for a verb like created, updated, shared, commented, moved or deleted. While Hojoki calls itself an aggregator for activities, technically we’re primarily an activity detector.
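In a much-simplified sketch, that detection amounts to diffing metadata snapshots (the field names here are hypothetical; the real logic is considerably more involved):

// Infer an activity verb by comparing the previous and current
// metadata snapshots of an object.
function detectVerb(oldMeta, newMeta) {
  if (!oldMeta) return 'created';
  if (!newMeta) return 'deleted';
  if (oldMeta.parentId !== newMeta.parentId) return 'moved';
  if (oldMeta.aclHash !== newMeta.aclHash) return 'shared';
  return 'updated';
}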
You can think of Hojoki as a multi-layer platform. First of all, we try to get a complete overview of the metadata in each connected app. For Google Docs, this means files and collections, and we retrieve the URI and name as well as some additional information (not the content itself). This information fills a graph-based data store (we use RDF; read more about it here).
We then subscribe to events in the integrated apps. Detected events create a changeset against the existing data graph. Each changeset is an activity for our newsfeed and is related to the object representation. This allows us to provide very flexible aggregation and filtering on the client side: you can filter the stream for a certain collection (“Analytics”), for the history of a single file, or for the Hojoki workspace where the file has been added (“Hojoki Marketing”).
What’s really important with such heavy API processing is to use asynchronous calls. We use the great open source project async-http-client for this task.
When I wrote that “we subscribe to events,” that was a very nice euphemism for “we poll every 30 seconds to see if something changed.” This is not really optimal, and we’d love to change it. If the Google Apps APIs supported a feed of user events modelled in a common standard like ActivityStrea.ms, combined with reliable ETags and maybe even a push API (e.g. webhooks), this would make life easier for lots of developers syncing their local data with Google, and would help reduce traffic on both sides.