Editor’s Note: Kevin Winter is a guest author from the AdWords API Developer Relations team. He implemented these features in his 20% time. - Jan Kleinert
Google Spreadsheets provides powerful charting functionality to let you analyze your data many different ways. We recently added the ability to programmatically create, modify and delete Spreadsheet charts using Google Apps Script.
Charts that a user creates in a Google Spreadsheet are called embedded charts, so named because they are embedded onto a Sheet within the spreadsheet. Apps Script can be used to manage these charts: each EmbeddedChart belongs to a Sheet, and aligns to the upper left hand corner of the specified column and row in the spreadsheet.
Let’s say I want to track my gas mileage using Google Spreadsheets. First, I create a Spreadsheet, and set up columns and formulas. Next, I add a form interface to make it easy to add new entries and a chart to visualize my mileage. Now I can start recording data - but each time I add enough entries to go past the ranges the chart uses, I need to manually increase them. How about we use some Apps Script magic to solve this?
The script below iterates through all charts on this sheet and determines whether any of the ranges need to be expanded (i.e. if there are more rows with data to display). It then updates the title, builds the new EmbeddedChart object and saves it to the sheet. You could also add a menu interface or a trigger to execute this periodically or when the spreadsheet is edited.
function expandCharts() {
  var sheet = SpreadsheetApp.getActiveSheet();
  // Get a list of all charts on this Sheet.
  var charts = sheet.getCharts();
  for (var i in charts) {
    var chart = charts[i];
    var ranges = chart.getRanges();
    // Returns an EmbeddedChartBuilder with this chart's settings.
    var builder = chart.modify();
    for (var j in ranges) {
      var range = ranges[j];
      // rangeShouldExpand_ is defined later.
      if (rangeShouldExpand_(range)) {
        // Removes the old range and substitutes the new one.
        builder.removeRange(range);
        var newRange = expandRange_(range);
        builder.addRange(newRange);
      }
    }
    // Update title.
    builder.setOption('title', 'Last updated ' + new Date().toString());
    // Must be called to save changes.
    sheet.updateChart(builder.build());
  }
}

function rangeShouldExpand_(range) {
  var s = range.getSheet();
  var numColumns = range.getNumColumns();
  var values = s.getSheetValues(range.getLastRow(), range.getColumn(), 2, numColumns);
  for (var i = 0; i < numColumns; i++) {
    // If the next row has the same pattern of values,
    // it's probably the same type of data and should be expanded.
    if (!values[0][i] && !values[1][i] || !!values[0][i] && !!values[1][i]) {
      continue;
    } else {
      return false;
    }
  }
  return true;
}

function expandRange_(range) {
  var s = range.getSheet();
  var startRow = range.getRow();
  var startCol = range.getColumn();
  var numRows = range.getNumRows();
  var numCols = range.getNumColumns();
  while (rangeShouldExpand_(range)) {
    numRows++;
    range = s.getRange(startRow, startCol, numRows, numCols);
  }
  return range;
}
What if you wanted to create a new chart from scratch? You can do that too!
var sheet = SpreadsheetApp.getActiveSheet();
var chart = sheet.newChart()
    .setPosition(5, 6, 5, 5)
    .setChartType(Charts.ChartType.AREA)
    .addRange(sheet.getActiveRange())
    .build();
sheet.insertChart(chart);
In the code example above, we get a reference to an EmbeddedChartBuilder, set its position within the sheet, change the chart type, add the currently selected range, and insert the new chart.
The EmbeddedChartBuilder allows you to modify the chart in several ways:
addRange
removeRange
setOption
setChartType
setPosition
Embedded charts use more options than regular Apps Script charts, so options are handled slightly differently. Developers can pass a dotted field path to change options. For example, to change the animation duration, you can do this:
builder.setOption("animation.duration", 1000);
Now that we’ve created a bunch of charts, your spreadsheet is probably pretty cluttered. Want to clear it and start afresh with charts? Just remove them all:
var sheet = SpreadsheetApp.getActiveSheet();
var charts = sheet.getCharts();
for (var i in charts) {
  sheet.removeChart(charts[i]);
}
Just like standalone charts, you can use embedded charts elsewhere. You can add them to a UiApp or a Google Sites page, as well as send them as email attachments:
// Attach all charts from the current sheet to an email.
var charts = SpreadsheetApp.getActiveSheet().getCharts();
MailApp.sendEmail(
    "recipient@example.com",
    "Income Charts",                       // Subject
    "Here are the latest income charts",   // Content
    {attachments: charts});
We hope you found this blog post useful. Enjoy editing embedded charts using Google Apps Script!
At the end of last year we launched the UserManager Apps Script service, allowing Google Apps domain administrators to write scripts to programmatically create, delete and edit their user accounts.
We are now extending the family of Domain services with two new additions: NicknameManager and GroupsManager.
The NicknameManager service allows domain administrators to define alternate email addresses (i.e. “nicknames”) with a single line of code, as in the following example:
var nickname = NicknameManager.createNickname("user", "nick");
With the GroupsManager service, Google Apps domain administrators can create and delete groups, and manage their members and owners. The following example shows how to list all members of a group given its unique groupId:
function listMembers(groupId) {
  var members = GroupsManager.getGroup(groupId).getAllMembers();
  for (var i in members) {
    var member = members[i];
    Logger.log(i + ": " + member);
  }
}
With the same service, one line of code is enough to add a member to a group:
GroupsManager.getGroup(groupId).addMember(memberId);
If you want to know more about the new NicknameManager and GroupsManager services, please check our documentation, and don’t hesitate to get in touch with us if you have questions or suggestions.
The Google Picker API provides developers with an easy-to-use file dialog that can be used to open Google Drive files directly from their web app independently of the Drive UI. The Drive SDK documentation includes an example showing how to incorporate the Google Picker with just a few lines of JavaScript.
Another powerful use case for the Picker API is to allow users to upload files to Drive with the same consistent UI. A single Picker dialog can incorporate multiple views and users can switch from one to another by clicking on a tab on the left:
The following code sample opens the Picker dialog and registers a simple callback function to handle the completed upload event:
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <meta http-equiv="content-type" content="text/html; charset=utf-8"/>
  <title>Google Picker Example</title>
  <!-- The standard Google Loader script. -->
  <script src="http://www.google.com/jsapi"></script>
  <script type="text/javascript">
    // Use the Google Loader script to load the google.picker script.
    google.setOnLoadCallback(createPicker);
    google.load('picker', '1');

    // Create and render a Picker object for searching images
    // and uploading files.
    function createPicker() {
      // Create a view to search images.
      var view = new google.picker.View(google.picker.ViewId.DOCS);
      view.setMimeTypes('image/png,image/jpeg');
      // Use DocsUploadView to upload documents to Google Drive.
      var uploadView = new google.picker.DocsUploadView();
      var picker = new google.picker.PickerBuilder().
          addView(view).
          addView(uploadView).
          setAppId(YOUR_APP_ID).
          setCallback(pickerCallback).
          build();
      picker.setVisible(true);
    }

    // A simple callback implementation.
    function pickerCallback(data) {
      if (data.action == google.picker.Action.PICKED) {
        var fileId = data.docs[0].id;
        alert('The user selected: ' + fileId);
      }
    }
  </script>
</head>
<body>
</body>
</html>
There is an important difference between this upload example and the code used to open files: in addition to the standard view, an instance of DocsUploadView is added to the Picker object, thus providing upload capability.
For more information about this component and all other available views, please refer to the Google Picker Reference Guide.
Updated to add links to the #gappschallenge hashtag and to Google Apps Script.
In the past year, the Google team has been engaging with local developers by running various Google conferences and Google+ Hackathons, showcasing creative applications, and supporting Tech Hubs. Since we are always looking for opportunities to encourage (and challenge!) you, we are looking forward to giving developers the opportunity to take on this year’s challenge, which will focus on Google Apps Script, Google Apps and Google Drive APIs.
With the Google Apps Developer Challenge, we hope developers across the globe will find new and innovative ways to use Apps Script, Apps and Drive APIs to build cool apps. This challenge is unique in that the APIs are available to a large community of developers who code in a variety of languages, including Java, PHP, Python, and .NET.
We will be working in collaboration with our Google Developer Groups (also known as GTUGs) and Google Business Groups to organize events and prepare for this challenge. Make sure to join your local community so that you are aware of meetups.
How familiar are you with the various Google Apps and Drive APIs? If you aren’t familiar, make sure to read up on Google Apps Script, Google Apps and Drive APIs on Google Developers, and use the Chrome Web Store as a source of inspiration. Create an innovative application using Google Apps Script, Google Apps, and Drive APIs. If your application is the best in your region within one of the three categories defined below, you could win a prize of $20,000! Google is also committed to nurturing the next generation of computer scientists and to encouraging more women to get into coding, so there are special $1,000 prizes for all-student or all-female teams that make the second round.
The first round of submissions will start on the 24th of August 2012. The categories are
Make sure you read all the details about the competition on the Google Apps Developer Challenge page and follow the hashtag #gappschallenge on Google+ for any additional updates.
What are you waiting for? Get coding!
Editor’s Note: This blog post is authored by Blair Kutzman, who developed the Gmail Delay Send script. - Eric Koleda
Update: To start using this script simply open this page and follow the instructions.
In today’s connected world, when you get your email could be just as important as what’s in it. In 2011, over 107 trillion emails were sent to 1.97 billion internet users, roughly 150 emails per person per day. Studies have shown that the average person can only effectively process 50 emails in a day. That leaves about 100 emails per person per day that are not processed effectively. How can you be sure that the emails you send fall into the former category and not the latter?
Luckily, there are tools to assist with email overload. One such tool is Gmail Delay Send.
Gmail Delay Send is a Google Apps Script that allows you to schedule emails to be delivered on a specified date and time. Using this tool you can ensure your email is sent to its destination at a time when you can capture your recipient’s full attention. For example, an email received at 4:59 PM on Friday might not get the same attention as one received on Monday at 10:00 AM.
A primary requirement of Gmail Delay Send was that it needed to work everywhere Gmail is available. There are already many browser add-ons and services available to enhance Gmail with similar functionality, so the purpose was not to duplicate that work. In order for the service to be available on all platforms, it needed to utilize native Gmail features.
We needed a native way that Gmail could:
1. Store messages that are waiting to be sent.
2. Store metadata describing when each message should be sent.
Gmail already contains a 'Draft' folder, which is exactly what is required for item 1. The real problem was where and how to store the metadata for item 2, without any new Gmail functions. I chose to encode the metadata in the subject of the message because the subject contains less text than the body, which means a smaller chance of mis-parsing. Strings such as “5 hours” and “next Tuesday” were turned into a date-time using an open source library called datejs, plus a few modifications. See below for details of how this potentially cumbersome process was improved.
The script works as follows: a time-driven trigger periodically scans your Gmail drafts, parses the scheduling text encoded in each subject line into a date-time, and sends any message whose scheduled time has passed.
Although using datejs to parse the dates from the subject line was easy to implement, it introduced some usability issues. First, how would a user know if a certain string can be parsed by datejs (e.g. is “5 minutes before 4PM” parsable)? To assist the user in knowing which dates datejs supports, the script offers a mechanism to test a given string directly in the spreadsheet that Gmail Delay Send is installed in. In this way a user can test various strings to see if they are valid and, if so, when they would be sent. A wiki page is dedicated to helping people through this process.
Another possibly confusing part of using Gmail Delay Send was setting up triggers. Thanks to some recent improvements of the Script Services, this is now done automatically for users as they install.
Adding retry logic to the script was another important step in improving its reliability and user experience. Occasionally, users were getting emails from their trigger informing them that a certain Google Script API could not be contacted. As shown in the snippet below, the function executeCommand() takes any function and will try to execute it a specified number of times, retrying if an error is thrown:
function executeCommand(fp) {
  var msg;
  var ret_val;
  var last_error;
  for (var retries = NUM_RETRIES; retries > 0; retries -= 1) {
    try {
      ret_val = fp();
      break;
    } catch (err) {
      last_error = err;
      msg = "Exception:" + err + " thrown executing function:" + fp;
      debug_logs.push(msg);
      Logger.log(msg);
      Utilities.sleep(SLEEP_TIME);
    }
  }
  if (retries == 0) {
    msg = "Attempted to execute command:" + fp + " " + NUM_RETRIES +
        " times without success. Error message: " + last_error +
        ". Aborting :-(";
    Logger.log(msg);
    throw(msg);
  }
  return ret_val;
}
Using this method, statements like those below will automatically retry if the service is not available.
executeCommand(function() { GmailApp.sendEmail( … ); });
executeCommand(function() { UrlFetchApp.fetch(url); });
Gmail Delay Send was a fantastic project for learning about Google Apps Script and I hope that it will continue to prove useful to its users. If you’re interested in using Gmail Delay Send or just interested in the development process please check out the homepage or source.
Have you ever written a particular piece of code over and over again? Or used scripts to do something that you thought others might want to do as well? Starting today, you’ll be able to share and reuse those scripts as libraries, right from inside Google Apps Script.
I often write scripts which check the National Weather Service for relevant weather-related information. This allows me to send myself an email if it’s going to rain, reminding me to bring an umbrella to work, or to annotate my spreadsheet of running workouts with the temperature of the day.
Remembering how to query the National Weather Service every time I write a script is a daunting task, however. They have a complicated XML format that is tricky to parse. As a result, I end up just copying and pasting code each time. This is not only error-prone, but also has the big disadvantage that I have to fix all of my scripts one by one whenever the Weather Service’s XML format changes.
The code I use to query the National Weather Service is a perfect use case for a library. By using a library, I no longer have to copy and paste code in my script project. Since logic is centralized, updates need to be applied just once. And now I am able to share my library with other developers who can benefit from the work I’ve already done.
Libraries are written just like any other Apps Script project. A good library has a clean API which is also well documented. Here’s a code snippet from my WeatherService library:
/**
 * Queries the National Weather Service for the weather
 * forecast of the given address. Example:
 *
 * <pre>
 * var chances = WeatherService
 *     .getPrecipitation("New York, NY");
 * var fridayChance = chances["Friday"];
 * Logger.log(fridayChance + "% chance of rain on Friday!");
 * </pre>
 *
 * @param {String} address The address to query the
 *     temperature for, in any format accepted by
 *     Google Maps (can be a street address, zip
 *     code, city and state, etc)
 *
 * @returns {JsonObject} The precipitation forecast, as a
 *     map of period to percentage chance of
 *     precipitation. Example:
 *
 * <pre>
 * { Tonight: 50, Friday: 30, "Friday Night": 40, ... }
 * </pre>
 */
function getPrecipitation(address) {
  // Code for querying weather goes
  // here...
}
Notice how detailed the documentation is. We know that good documentation makes for a great library. So, for every library Apps Script will also auto-generate a documentation page based on the code comments using the JSDoc format. If you want a method in your code to not be exposed to users, simply end its name with an underscore.
Before code can be used as a library, a version of it needs to be saved. Versions are a new concept in Apps Script, and they represent a snapshot of your project which won’t change even as changes are made to the script code. Versions are useful because they allow you to change your library code without breaking existing users. Once you’re happy with the changes you’ve made, you can then save a new version. Sharing your code as a library is easy; please see the user guide for details on saving a version.
Using a library takes only a few steps: the library’s owner shares the library and its project key with you, and then you follow these instructions to include it in your project. To use this National Weather Service library, please visit this page for its project key.
Script Libraries come with three interesting features.
To get started with Script Libraries, you can find a list of useful libraries contributed by two of our top contributors, James Ferreira and Romain Vialard. You can also find a detailed user guide on managing versions and libraries. We hope you enjoy using libraries.
Editor’s note: This is a guest post by Laura Bârlădeanu, lead programmer at MindMeister. -- Steve Bazyl
MindMeister is a market innovator for providing collaborative online mind mapping solutions. Launched in May 2007, our site has since attracted hundreds of thousands of businesses, academic institutions and creative consumers who have mapped over 100 million ideas online. We were one of a few web applications invited to take part in the Google Drive launch earlier this year.
The goal was to provide users with an intuitive integration between Google Drive and MindMeister that would cover all the cases provided by the Google Drive guidelines at that time:
Aside from these main integration points, we wanted to make use of the SDK and provide many useful Google Drive features, so we added a few more requirements to the list:
Google Drive applications are required to use OAuth 2.0 as an authorization mechanism, and are recommended to use OpenID Connect for login. The authorization scope for Drive files is added by default for all registered drive apps. Additionally, the application can require extra scopes that would fit its needs. For our requirements, we needed the following scopes:
https://www.googleapis.com/auth/drive.file
https://www.google.com/m8/feeds/
https://www.googleapis.com/auth/userinfo.profile
https://www.googleapis.com/auth/userinfo.email
However, we didn’t want to turn users away from our application by asking for too many scopes straight from the beginning. Instead, we defined sets of actions that each required only a subset of these scopes:
['drive', 'profile', 'email']
['profile', 'email']
['contacts', 'profile', 'email']
Whenever the user wanted to execute an action that would require more scopes than they initially provided, we redirected them to a Google authorization dialog that requested the extra scope. Upon authorization, we stored the individual refresh tokens for each combination of scopes in a separate model (UserAppTokens).
Whenever the application needed the refresh token for a set of scopes (e.g. for ['profile', 'email']), it would fetch from the database the refresh token corresponding to a superset of the required scopes (e.g. ['drive', 'profile', 'email'] would fit the required ['profile', 'email']). The access token would then be obtained from Google and stored in the session for future requests.
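Setting the database layer aside, the superset matching itself can be sketched in plain JavaScript (function and variable names here are illustrative, not MindMeister's actual code):

```javascript
// Each stored token records the scopes it was granted for.
var storedTokens = [
  {scopes: ['drive', 'profile', 'email'], refreshToken: 'rt-1'},
  {scopes: ['contacts', 'profile', 'email'], refreshToken: 'rt-2'}
];

// Returns the first stored refresh token whose granted scopes are a
// superset of the required scopes, or null if none qualifies.
function findRefreshToken(tokens, requiredScopes) {
  for (var i = 0; i < tokens.length; i++) {
    var granted = tokens[i].scopes;
    var covered = requiredScopes.every(function(scope) {
      return granted.indexOf(scope) !== -1;
    });
    if (covered) {
      return tokens[i].refreshToken;
    }
  }
  return null;
}
```

For example, findRefreshToken(storedTokens, ['profile', 'email']) matches the first token, while a request for a scope no stored token covers returns null, signaling that the user must be sent through the Google authorization dialog again.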
The main challenge we encountered during design and implementation was dealing with the special cases of multiple users (Google users or internal users) editing the same map, which is a Google Drive file, as well as the special cases of the map being edited in multiple editors. We also had to find a solution for mapping the Google Drive user’s permissions (owner, reader, or writer) to MindMeister’s existing permission mechanism.
The MindMeister application is registered for opening four types of files: our own .mind format, MindManager’s .mmap format, Freemind’s .mm format, as well as .xmind. However, since these formats are not 100% compatible with each other, there is always a chance of losing more advanced features when opening a file in a format other than .mind. We wanted to give the user the choice of whether the opened file would be saved back in its original format, thus risking some feature loss, or saved in the MindMeister format. This option should be per user and per file, with the possibility of being remembered for future files.
After analyzing the requirements and the use cases, we designed the following architecture:
Using the revision fields in both Map and DriveData, we always know if the map has been edited on MindMeister’s side without being synced with the corresponding file on Google Drive. On the other hand, the token field from DriveData holds the file’s MD5 checksum at the moment of the last update, as supplied by the Google Drive SDK. So if the file is edited externally using an application other than MindMeister, we have a mechanism in place for detecting this and presenting the user with a few courses of action.
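Stripped of the persistence layer, those two checks can be sketched in plain JavaScript (the field names follow the description above; the real models hold many more fields):

```javascript
// The map was edited in MindMeister after the last Drive sync
// if its revision is ahead of the one recorded in DriveData.
function mapNeedsSync(map, driveData) {
  return map.revision > driveData.revision;
}

// The file was edited externally (by another application) if its
// current MD5 checksum on Drive differs from the checksum stored
// in DriveData's token field at the last update.
function fileEditedExternally(driveData, currentMd5) {
  return driveData.token !== currentMd5;
}
```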
Upon opening a file that has a format other than .mind, the user is prompted with an option dialog where they can choose whether they want the file to be saved back in the same format or in MindMeister’s own format. These options are then remembered in the current session, and the per-map settings are stored in the extension (the original format) and save_extension (the format to save back in) fields present in the DriveData model.
A map on MindMeister can always be shared with other MindMeister users and the collaborators can have reading or writing access to the map. However, only some of these users will have a corresponding Google account with access to the MindMeister Google Drive application and furthermore, only some of them will have access to the same file on Google Drive with writing permission. This is why it is important for us to know which users can write back to the file and the solution for these access levels was achieved with the help of the permission field in the DriveDataRight model.
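As a simplified illustration of that mapping (not MindMeister's actual code), a Drive role can be collapsed to an access level with a small lookup:

```javascript
// Map a Google Drive role to a simplified access level.
// 'owner' and 'writer' can write back to the Drive file;
// 'reader' can only view it; anything else gets no access.
function driveRoleToAccess(role) {
  var mapping = {
    owner: 'write',
    writer: 'write',
    reader: 'read'
  };
  return mapping[role] || 'none';
}
```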
More than two weeks on from the Google Drive launch, we can confidently say that our integration was successful, with more than 14,000 new users using Google login and over 7,000 users who have enabled the Google Drive integration. All in all, the Google Drive SDK was very easy to use and well documented. The developer support, especially, was always there to help, and our contacts were open to our suggestions.
My role in Google Docs is to help manage many projects across Google Docs/Drive. As a part of my job, I ask for a fair amount of data from all of those teams and generate reports on project/feature status. To make this much simpler for everyone involved, I have created a lot of simple tools using Google Spreadsheets and Apps Script (as well as a lot of complex tools) that make it easier for collaborators to enter data and for me to collect that data and create reports. Below is a pair of foundational techniques that I include in nearly every Spreadsheet/Apps Script tool I build.
I have dozens of scripts generating reports, and I use a technique where I set up a dedicated sheet for script configuration and read values from that sheet during script execution. A simple configuration sheet makes this much more straightforward.
With a globally accessible array, globals, you can then load the “settings” from the configuration (sheet SHT_CONFIG, here) at any entrypoint to the script.
// globally accessible variables
var SHT_CONFIG = 'Config';
var globals = new Array();

function entryPoint() {
  globals = (globals.length == 0) ?
      LoadGlobals_(SpreadsheetApp.getActive(), SHT_CONFIG) : globals;
  // your code goes here
}
The LoadGlobals_ function, below, parses the data in the first three columns of the workbook and sheet name passed to it. You can even include a fourth column (or more!) explaining what the variables do, and they’ll just be ignored - though hopefully not by your users!
// Generate global variables to be loaded into globals array
function LoadGlobals_(wb, configSheet) {
  var configsheet = wb.getSheetByName(configSheet);
  var tGlobals = new Array();
  // Config data is structured as VARIABLE, ISARRAY, VALUE(S)
  // and includes that as the header row
  var cfgdata = configsheet.getDataRange().getValues();
  for (var i = 1; i < cfgdata.length; i++) {
    switch (cfgdata[i][1]) {
      case 'ARRAY':
        // treat as an array - javascript puts a null value in the
        // array if you split an empty string...
        if (cfgdata[i][2].length == 0) {
          tGlobals[cfgdata[i][0]] = new Array();
        } else {
          tGlobals[cfgdata[i][0]] = cfgdata[i][2].split(',');
        }
        break;
      // Define your own YOURDATATYPE using your customTreatment function
      // (or just perform the treatment here)
      case 'YOURDATATYPE':
        tGlobals[cfgdata[i][0]] = customTreatment(cfgdata[i][2]);
        break;
      default:
        // treat as generic data (string)
        tGlobals[cfgdata[i][0]] = cfgdata[i][2];
    }
  }
  return tGlobals;
}
As long as you’ve loaded the global values during the script execution, you can refer to any of the values as, for example, globals.toList:
function getToList() {
  return globals.toList.join(',');
  // or return globals['toList'].join(',');
}
Asking colleagues to enter tracking data so they don’t have to report their own statuses is one thing. Asking them to enter tracking data in a specific format, within a specific column layout, in a way that doesn’t mesh with their existing processes is entirely different. So, I use the following technique, where I rely on column names and not column ordering. The code below lets me do just that by fetching a key-value object for column headings and their position in a worksheet.
// Returns key-value object for column headings and their column number.
// Note that these are retrieved based on the array index, which starts at 0;
// the columns themselves start at 1...
// Pass the header row of data (array) and an array of variables/column names,
// e.g.: BUG_COL_ARRAY['id'] = 'Id';
function ColNumbers(hArray, colArray) {
  for (var oname in colArray) {
    this[oname] = getColIndex(hArray, colArray[oname]);
  }
}

// -----------------------------------------------------------------------------
function getColIndex(arr, val) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i].toLowerCase() == val.toLowerCase()) {
      return i;
    }
  }
  return -1;
}
With the associative array defined below, I can ask Apps product managers to add (or rename) columns in their feature tracking sheets and then extract features from every Apps product team in one fell swoop (a future post). Each product team can set their columns up in whatever order works best for them.
// key columns in the app feature sheets
var COLS_KEYAPPCOLS = new Array();
COLS_KEYAPPCOLS['feature'] = 'Feature Title';
COLS_KEYAPPCOLS['desc'] = 'Description';
COLS_KEYAPPCOLS['visible'] = 'Visible';
COLS_KEYAPPCOLS['corp'] = 'Corp Date';
COLS_KEYAPPCOLS['prod'] = 'Prod Date';
What does this do for me, really? I reuse this code for every project of this sort. The steps to reuse are then:
var curFeatures = curSheet.getDataRange().getValues();
var curCols = new ColNumbers(curFeatures[0], COLS_KEYAPPCOLS);
I can, from now on, refer to the Description column using something like curCols.desc when referencing any of the products’ data. The Spreadsheets team may list new feature descriptions in the second column, and the Documents team may list new feature descriptions in the fourth column. I no longer worry about that.
As a bonus, I can define the columns and ordering to be used in a report in my config sheet (see above). If I’ve defined reportcols as feature, desc, prod in my config sheet, I can generate a report very simply:
// Iterate through the rows of data, beginning with 1 (0 is the header)
for (var fnum = 1; fnum < curFeatures.length; fnum++) {
  // Iterate through each of the fields defined in reportcols
  for (var cnum = 0; cnum < globals.reportcols.length; cnum++) {
    outputvalue = curFeatures[fnum][curCols[globals.reportcols[cnum]]];
    // outputvalue is what you want to put in your report.
  }
}
You could do that a lot more simply with hard-coded columns, but then switching the report to use the 'Corp Date' column would mean changing the code; with this technique, I only need to change the value in the config sheet to feature, desc, corp and I’m done.
Collecting and crunching data in a Google Spreadsheet becomes a lot easier if you use Apps Script. Trust me, it makes your life a lot easier. Try it now by copying this spreadsheet.
Editor’s note: This is a guest post by Ben Dilts, CTO & Co-founder of Lucidchart. -- Steve Bazyl
The release of the Drive SDK, allowing deep integration with Google Drive, shows how serious Google is about making Drive a great platform for third parties to develop on.
There are a handful of obvious ways to use the SDK, such as allowing your users to open files from Drive in your application, edit them, and save them back. Today, I'd like to quickly cover some less-obvious uses of the Drive API that we’re using at Lucidchart.
Applications have the ability to create new files on Google Drive. This is typically used for content created by applications. For example, an online painting application may save a new PNG or JPG to a user's Drive account for later editing.
One feature that Lucidchart has long provided to its users is the ability to download their entire account's content in a ZIP file, in case they (or we!) later mess up that data in some way. These backups can be restored quickly into a new folder by uploading the ZIP file back to our servers. (Note: we’ve never yet had to restore a user account this way, but we provided it because customers said it was important to them.)
The problem with this arrangement is that users have to remember to do regular backups, since there's no way for us to automatically force them to download a backup frequently and put it in a safe place. With Google Drive, we now have access to a reliable, redundant storage mechanism that we can push data to as often as we would like.
Lucidchart now provides automated backups of these ZIP files to Google Drive on a daily or weekly basis, using the API for creating new files on Drive.
Another use for the files.create call is to publish finished content. Lucidchart, like most applications, stores its editable files in a custom format. When a user completes a diagram or drawing, they often download it as a vector PDF, image, or Microsoft Visio file to share with others.
Lucidchart is now using the create file API to export content in any supported format directly to a user's Google Drive account, making it easy to sync to multiple devices and later share those files.
Google Drive can't automatically index content created by Lucidchart, or any other application that saves data in a custom format, for full-text search. However, applications now have the ability to explicitly provide HTML content to Google Drive that it can then index.
Indexable text provided to the Drive API is always interpreted as HTML, so it is important to escape HTML entities. And if your text is separated into distinct pieces (like the text in each shape in Lucidchart), you can improve full-text phrase searching by dividing your indexable text into one div or paragraph element per piece. Both the files.create and files.update calls provide the ability to set indexable text.
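As a rough sketch of that advice (the helper below is hypothetical, not part of the Drive API), escaping each piece and wrapping it in its own element might look like this:

```javascript
// Hypothetical helper: build an indexable-text HTML string for Drive from
// an array of text pieces (e.g. the text of each shape in a diagram).
function buildIndexableText(pieces) {
  // Indexable text is interpreted as HTML, so escape entities first.
  function escapeHtml(s) {
    return s.replace(/&/g, '&amp;')
            .replace(/</g, '&lt;')
            .replace(/>/g, '&gt;');
  }
  // One div per piece improves full-text phrase searching.
  return pieces.map(function (piece) {
    return '<div>' + escapeHtml(piece) + '</div>';
  }).join('');
}
```

The resulting string is what you would pass as the indexable text in a `files.create` or `files.update` call.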
We hope that this overview helps other developers implement better integrations into the Google Drive environment. Integrating with Drive lets us provide and improve a lot of functionality that users have asked for, and makes accessing and using Lucidchart easier overall. We think this is a great result both for users and web application developers and urge you to check it out.
Apps Script developers have consistently expressed the need to monitor the health of various Apps Script services. Additionally, at every forum, event, hackathon or hangout, we have heard you express a need to know and understand the quota limits in Apps Script.
We heard your message loud and clear, so we started working on a dashboard for Apps Script. Today, we are launching the Google Apps Script Dashboard. This experimental dashboard can be used to monitor the health of 10 major services. It also provides a detailed view into the quota restrictions in Apps Script.
Did you know that consumer accounts (for example, @gmail.com accounts) have a quota of 1 hour of CPU time per day for executing triggers? Imagine the extent of automation that can happen for each user with triggers. And how about 20,000 calls per day to external APIs? That packs in a lot of third-party integration with the likes of Salesforce.com, Flickr, Twitter and other APIs. So, if you are thinking of building extensions in Google Apps for your product, don’t forget to leverage the UrlFetch Service, which has OAuth built in. Event managers can create 5,000 calendar events per day, and SQL aficionados get 10,000 JDBC calls a day.
Check out the Dashboard for more.
Editor’s note: This is a guest post by Martin Böhringer, Co-Founder and CEO of Hojoki. -- Steve Bazyl
Hojoki integrates productivity cloud apps into one newsfeed and enables sharing and discussions on top of the feed. We’ve integrated 17 apps now and counting, so it’s safe to say that we’re API addicts. Now it’s time to share with you what we learned about the Google Apps APIs!
We originally built Hojoki because of the fragmentation we experienced across all of our cloud apps. And all those emails. Still, there was this feeling of “I don’t know what’s going on” in our distributed teamwork. So we decided to build something like a Google+ where streams get automatically filled by activities in the apps you use.
This leads to a comprehensive stream of everything that’s going on in your team combined with comments and microblogging. You can organize your stream into workspaces, which are basically places for discussions and collaboration with your team.
To build this, we first needed some kind of information on recent events. Since we wanted to aggregate similar activities, provide search, and split the stream into workspaces, we also had to be able to store events against unique objects like files, calendar entries and contacts.
Further, it’s crucial to know not only what has changed, but who changed it. So providing unique identities is important for building federated feeds.
Google’s APIs share some basic architecture and structure, described in their Google Data Protocol. Based on that, application-specific APIs provide access to the application’s data. What we use is the following:
The basic call for Google Contacts for example looks like this:
https://www.google.com/m8/feeds/contacts/default/full
This responds with a complete list of your contacts. Once we have this list all we have to do is to ask for the delta to our existing knowledge. For such use cases, Google’s APIs support query parameters as well as sorting parameters. So we can set “orderby” to “lastmodified” as well as “updated-min” to the timestamp of our last call. This way we are able to keep the traffic low and get quick results by only asking for things we might have missed.
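A minimal sketch of building such a delta query (the helper name is my own; the parameters are the `orderby` and `updated-min` values mentioned above):

```javascript
// Build a Contacts feed URL that asks only for entries modified since the
// last sync. lastSync is an RFC 3339 timestamp string.
function buildDeltaUrl(lastSync) {
  var base = 'https://www.google.com/m8/feeds/contacts/default/full';
  return base +
      '?orderby=lastmodified' +
      '&updated-min=' + encodeURIComponent(lastSync);
}
```

For example, `buildDeltaUrl('2012-04-01T00:00:00Z')` yields a feed URL that skips everything already seen.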
If you want to develop using those APIs you should definitely have a look at the SDKs for them. We used the Google Docs SDK for an early prototype and loved it. Today, Hojoki uses its own generic connection handler for all our integrated systems so we don’t leverage the SDKs anymore.
If you’re into API development, you’ve probably already realized that our information needs don’t fit into many of the APIs out there. Most of the APIs are object centric. They can tell you what objects are included in a certain folder, but they can’t tell you which object in this folder has been changed recently. They just aren’t built with newsfeeds in mind.
Google Apps APIs support most of our information needs. Complete support of OAuth and very responsive APIs definitely make our lives easier.
However, the APIs are not built with Hojoki-like newsfeeds in mind. For example, ETags may change even if nothing happened to an object because of asynchronous processing on Google’s side (see Google’s comment on this). For us this means that, once we detect an altered ETag, in some cases we still have to check based on our existing data if there really have been relevant activities. Furthermore, we often have trouble with missing actors in our activities. For example, up to now we know when somebody changed a calendar event, but there is no way to find out who this was.
Another issue is the classification of updates. Google’s APIs tell us that something changed with an object. But to build a nice newsfeed you also want to know what exactly has been changed. So you’re looking for a verb like created, updated, shared, commented, moved or deleted. While Hojoki calls itself an aggregator for activities, technically we’re primarily an activity detector.
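A hedged sketch of that "activity detection", comparing a previous and a current metadata snapshot to derive a verb (the field names here are illustrative assumptions, not Google's or Hojoki's actual schema):

```javascript
// Derive an activity verb by diffing two metadata snapshots of an object.
// A null snapshot means the object did not exist at that point.
function detectVerb(prev, curr) {
  if (!prev && curr) return 'created';
  if (prev && !curr) return 'deleted';
  if (prev.parent !== curr.parent) return 'moved';
  if (prev.title !== curr.title || prev.etag !== curr.etag) return 'updated';
  return null;  // ETag unchanged and nothing relevant happened
}
```

In practice, per the ETag caveat above, an `'updated'` result may still need a deeper content comparison before it is surfaced as an activity.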
You can think of Hojoki as a multi-layer platform. First of all, we try to get a complete overview on your meta-data of the connected app. In Google Docs, this means files and collections, and we retrieve URI and name as well as some additional information (not the content itself). This information fills a graph-based data storage (we use RDF, read more about it here).
At the moment, we subscribe to events in the integrated apps. If detected, they create a changeset for the existing data graph. This changeset is an activity for our newsfeed and related to the object representation. This allows us to provide a very flexible aggregation and filtering on the client side. See the following screenshot. You can filter the stream for a certain collection (“Analytics”) or only for the file history or for the Hojoki workspace where this file is added (“Hojoki Marketing”).
What’s really important in terms of such heavy API processing is to use asynchronous calls. We use the great Open Source project async-http-client for this task.
When I wrote that “we subscribe to events”, this was a very nice euphemism for “we’re polling every 30s to see if something changed”. This is not really optimal and we’d love to change it. If the Google Apps APIs supported a feed of user events modelled in a common standard like ActivityStrea.ms, combined with reliable ETags and maybe even a push API (e.g. Webhooks), it would make life easier for lots of developers syncing their local files with Google and help reduce traffic on both sides.
Editor's note: This post is cross-posted from the Google Developers Blog.
Editor’s Note: This blog post is authored by Dito’s Steve Webster who is a Google Apps Script Top Contributor - Saurabh Gupta
Recently, a company that operates retail stores throughout a few states reached out to Dito. When their associates conduct business discussions with vendors or customers where monetary exchanges are involved, the company's Ethics Disclosure policy requires a log for each event, a supervisor approval, and a committee review.
The customer’s existing disclosure review process was cumbersome and time consuming. The employees would add a disclosure review request to a spreadsheet with 29 columns. The new review request would then be reviewed by their supervisor. The supervisor and the employee would go back and forth exchanging emails and making changes to a dense spreadsheet until an approval was granted. It was clear that the customer needed a workflow solution. They decided to hire Dito to build a workflow solution based on Google Apps Script.
To make the process more user friendly and productive, Dito decided to build a user interface to collect ethics disclosure events, make updates, and automate the routing of email notifications. Using Google Apps Script to create a user interface (UI) enabled associates to select their supervisor’s email address from their contacts and simplified data collection with list boxes. The script sends approval emails with HTML form radio buttons, a text box, approve/decline buttons, and a “Post” command to invoke other workflow scripts. Below are some of the main design points for this approval workflow script.
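As an illustration of that email design (the form fields and URL below are made up for this sketch, not Dito's actual code), the approval email body could be assembled like this:

```javascript
// Build an HTML email body whose approve/decline form posts back to the
// workflow script. requestId and postUrl are hypothetical parameters.
function buildApprovalEmail(requestId, postUrl) {
  return '<form method="post" action="' + postUrl + '">' +
      '<input type="hidden" name="requestId" value="' + requestId + '">' +
      '<label><input type="radio" name="decision" value="approve" checked>' +
      ' Approve</label>' +
      '<label><input type="radio" name="decision" value="decline">' +
      ' Decline</label>' +
      '<br><textarea name="comment"></textarea>' +
      '<br><input type="submit" value="Post">' +
      '</form>';
}
```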
The Disclosure Review workflow requires (a) Associates to fill out the Ethics Disclosure form. (b) Supervisor to either approve or decline the submission. (c) If supervisor approves, the Ethics Disclosure Committee is notified. (d) If supervisor declines, the associate is notified to make corrections. (e) After improving the submission, the workflow repeats itself.
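Steps (a) through (e) amount to a small state machine; a sketch (the state and event names are my own labels, not Dito's):

```javascript
// Advance a review request through the workflow states described above.
function nextState(state, event) {
  if (state === 'SUBMITTED' && event === 'approve') return 'COMMITTEE_REVIEW';
  if (state === 'SUBMITTED' && event === 'decline') return 'RETURNED';
  if (state === 'RETURNED' && event === 'resubmit') return 'SUBMITTED';
  return state;  // ignore events that don't apply to the current state
}
```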
Dito developed a custom review request form using Google Apps Script’s UI services. The form can look up data to populate widgets such as a drop-down list box, and it allows real-time email look-ups through Apps Script’s Contacts service. First efforts included per-character look-ups in a list box, but since the company has over 1,000 users, it was best to enter the supervisor's first and/or last name before initiating the look-up (see the code snippet below).
var byName = ContactsApp.getContactsByName(searchKey);
for (var i in byName) {
  var emailStr = byName[i].getPrimaryEmail();
  // If there is no 'primary' email, fall back to the contact's
  // other listed addresses.
  if (emailStr == null) {
    var emails = byName[i].getEmails();
    if (emails.length > 0) {
      emailStr = emails[0].getAddress();
    }
  }
}
Another dynamic field was the “activity type”. Depending on the selection more form fields are displayed. For example, if the activity type is “Meals”, display a list box to select lunch or dinner.
When an associate submits a review request using the custom form within a spreadsheet, their supervisor receives an email with easy-to-read, HTML-formatted results. The approval decision, along with a comment (e.g. a decline reason), is made directly within the email. This is more productive and prevents unnecessary back and forth in the spreadsheet.
If the request is declined by the supervisor, the associate who submitted the review request receives an email and can review the details. The email also contains a “Continue” button which opens the form in a new browser tab. After corrections are submitted, the supervisor receives another email and the workflow repeats itself.
When approved, the Ethics Disclosure Committee is notified by sending a group email within the script.
Since history records existed in their original spreadsheet and the company wanted to insert these records into the new workflow spreadsheet as a one-time task, an Apps Script was used to copy the data. Of course, their columns did not match the new spreadsheet's. By using a mapping approach and a “read once, write once” technique, the script quickly made the changes.
function myFunction() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getSheetByName('Sheet 1');
  var rowsWithData = sheet.getRange(2, 1, sheet.getLastRow(),
      sheet.getLastColumn()).getValues();
  var sheet1Data = [];
  var sheet2Data = [];
  for (var i = 0; i < rowsWithData.length; i++) {
    switch (rowsWithData[i][4]) { // This is the activity type
      ...
      case "Gift":
        sheet1Data.push([rowsWithData[i][12], rowsWithData[i][13],
            rowsWithData[i][14]]);
        sheet2Data.push([rowsWithData[i][15]]);
        continue;
      ...
      default:
        continue;
    }
  }
  sheet.getRange(2, 6, sheet1Data.length, 3).setValues(sheet1Data);
  sheet.getRange(2, 12, sheet2Data.length, 1).setValues(sheet2Data);
}
Google Apps Script is very powerful, and Dito uses it to build interesting solutions for its customers. If you are using Google Apps, be sure to try Google Apps Script. You’ll be amazed at what you can build with it.
Crowd sourcing has been growing substantially in popularity. More and more businesses and individuals are interested in gathering data from the general public for real-time data analysis and visualization. The concept is being adopted in several fields, including journalism, public health and safety, and business development. During this election year, for example, a journalist might be interested in learning what candidate his or her readers support, and the reasons why they support this candidate.
Google Forms, Fusion Tables, and Apps Script make both data collection and analysis super simple! Using Google Forms, a journalist can quickly create an HTML form for readers to submit their opinions and feedback. Fusion Tables make data analysis easy with several cool data visualization options. Apps Script acts as the glue between Google Forms and Fusion Tables, enabling the Form to send data directly to Fusion Tables.
Let’s take a look at how our journalist friend would use all these tools to collect her readers’ candidate preferences.
Google Forms provides a simple UI tool to develop forms perfect for collecting data from readers. Here’s an example of a simple form the journalist can create to get information from her readers:
Once the form has been created, it can be embedded directly into the journalist’s website or blog using the embeddable HTML code provided by Google Forms.
Google Fusion Tables makes data analysis simple with its visualization capabilities. Using Fusion Tables, the journalist can create maps and charts of the collected data with just a few clicks of the mouse!
Using some fake data as an example, here’s a pie chart that can be created using Fusion Tables to show the results of the survey:
With Fusion Tables, it’s also easy to filter data and create a pie chart visualization showing why people like Mitt Romney:
These visualizations can also be embedded in the journalist’s website or blog, as Fusion Tables provides embeddable HTML code for all its visualizations. Now, any time someone visits the webpage with the embedded visualization, they will see the current poll result!
Finally, Apps Script acts as the glue between the Google Form and the Fusion Table, since there is currently no direct way to send Google Form submissions to a Fusion Table. During a hack event last year, I took some time to write an Apps Script script that submits the form data to Fusion Tables. The script uses the onFormSubmit Apps Script functionality as described in this blog post. The Fusion Tables code is based on the code described in this blog post.
To learn how to set up your own Google Form to collect data and save that data in a Fusion Table, please see these instructions.
Triggers in Apps Script allow you to schedule your script’s execution in response to an event, at a specific time and date, or at given time intervals. Publishing scripts as services allows you to deploy your web apps with a click of a button. The new Script service lets you perform both of these tasks programmatically. With the Script service at your disposal, you can create smooth install flows, chain your scripts’ executions, generate summaries of current triggers, and even programmatically publish the scripts as services.
You can now set up your triggers from a menu. Here is an example of how you would schedule function foo() to run one year from now via a custom menu in Google Spreadsheets:
function onOpen() {
  SpreadsheetApp.getActive()
      .addMenu("Setup",
               [{name: "Create Trigger", functionName: "createTrigger"}]);
}

function createTrigger() {
  var now = new Date();
  ScriptApp.newTrigger("foo")
      .timeBased()
      .atDate(now.getFullYear() + 1, now.getMonth(), now.getDate())
      .create();
}
By programmatically setting triggers, you are able to schedule a future script’s executions from the script that is currently running. Here is an example of a function that schedules another function to execute two hours after it completes:
// This function is run to send the initial reminder and schedule a
// subsequent one.
function sendRequest() {
  GmailApp.sendEmail("class@example.edu",
      "Assignment #1 is due in 2 and a half hours!",
      "Your first assignment is due in 2 and a half hours!");
  var now = new Date();
  var later = new Date(now.getTime() + 2 * 60 * 60 * 1000);
  ScriptApp.newTrigger("sendReminder").timeBased().at(later).create();
}

// This function will execute two hours later, sending another reminder.
function sendReminder() {
  GmailApp.sendEmail("class@example.edu",
      "Assignment #1 is due in 30 minutes!",
      "Your first assignment is due in half an hour!");
}
With the ability to schedule triggers programmatically, it is important to keep track of the existing ones. The function below logs all the triggers that are associated with the current script. Alternatively, you could also write this summary to a spreadsheet or email it to yourself.
function logMyTriggers() {
  var triggers = ScriptApp.getScriptTriggers();
  for (var i in triggers) {
    Logger.log("Trigger ID: " + triggers[i].getUniqueId() +
        "\nTrigger handler function: " + triggers[i].getHandlerFunction() +
        "\nTrigger type: " + triggers[i].getEventType() +
        "\n----------------------");
  }
}
You can now let your users publish your scripts through a Google Spreadsheets custom menu as a part of the script’s set up process. Below is an illustration of this process using an example of a simple note-taking web app. Once a user clicks the menu item, the script becomes published as a service and the URL is presented back to the user via a dialog box.
function doGet(e) {
  var app = UiApp.createApplication();
  createApp(app, false);
  return app;
}

function doPost(e) {
  var now = new Date();
  MailApp.sendEmail(Session.getEffectiveUser(),
      "Notes taken at " + now, e.parameter.notes);
  var app = UiApp.getActiveApplication();
  createApp(app, true);
  return app;
}

function createApp(app, isSubmitted) {
  app.setWidth(550)
     .setHeight(700);
  var vp = app.createVerticalPanel();
  var title = app.createHTML("Enter your notes into the text area below." +
      "<br>Hit submit to email them to yourself!")
      .setStyleAttribute("fontSize", "25px");
  var lbl = app.createLabel("Notes submitted!")
      .setStyleAttribute("fontSize", "17px");
  var form = app.createFormPanel();
  var notes = app.createTextArea()
      .setWidth("500px")
      .setHeight("500px")
      .setName("notes");
  var sb = app.createSubmitButton("Submit");
  vp.add(title)
    .add(notes)
    .add(sb);
  if (isSubmitted) {
    vp.add(lbl);
  }
  form.add(vp);
  return app.add(form);
}

function onInstall() {
  onOpen();
}

function onOpen() {
  SpreadsheetApp.getActive()
      .addMenu("Setup",
               [{name: "Publish as a service.", functionName: "setup"}]);
}

function setup() {
  var url = "";
  try {
    ScriptApp.getService()
        .enable(ScriptApp.getService().Restriction.MYSELF);
    url = ScriptApp.getService().getUrl();
    Browser.msgBox("Your web app is now accessible at the following URL:\n" +
        url);
  } catch (e) {
    Browser.msgBox("Script authorization expired.\nPlease run it again.");
    ScriptApp.invalidateAuth();
  }
}
The Script service provides you with the means of creating even more powerful applications and makes the development and deployment process smooth.
For more information, please visit the Script service Reference Documentation and User Guide.
The Apps Script team held a hackathon in Washington DC on March 7. Over 80 developers attended and we had some great demos at the end of the evening. One of the demos was from Rusty Mellinger, who explains his script in this blog post. If you missed the DC hackathon, sign up for our next one in Chicago on April 19. -Jan Kleinert
I was lucky enough to attend Google’s Apps Script Hackathon at their DC office recently, and got a chance to play with Apps Script. After a quick walk-through tutorial, Jan gave us a couple of hours to hack around with it.
Scripts in Apps Script are written in JavaScript and stored, edited, and run on Google's servers, interfacing with a big list of included services. You can call the scripts from spreadsheets, Google Sites, or from hits to a generated URL.
My roommates and I keep a spreadsheet on Google Docs to track who owes what, but since we’re a house full of software engineers, it’s gotten pretty complicated. Each row records the details of a single transaction: who paid, the total, and what percentages of the payment are on behalf of which roommates. All these interpersonal debts are added up into the (J5:M8) matrix, cancelled out across the diagonal into (P5:S8) to get a single debt for each roommate pairing, and then those are totalled into the final "Shake Out", (F4:F7), which says whether you owe or are owed. Maybe Apps Script could make my life simpler here?
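The arithmetic behind that shake-out can be sketched in plain JavaScript (a simplified model of the sheet's math, not the actual formulas):

```javascript
// debts[i][j] is how much roommate i owes roommate j. Netting each pairing
// across the diagonal and summing gives each person's shake-out: positive
// means they owe money overall, negative means they are owed.
function shakeOut(debts) {
  var totals = [];
  for (var i = 0; i < debts.length; i++) {
    var t = 0;
    for (var j = 0; j < debts.length; j++) {
      t += debts[i][j] - debts[j][i];
    }
    totals.push(t);
  }
  return totals;
}
```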
First, I’m currently owed a fair amount of money, so I set up automated reminder emails to the roommates who are behind:
// Send emails to everybody with their current status.
function emailDebtors() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var results = ss.getRange("G4:G7").getValues();
  var emails = ss.getRange("O3:R3").getValues();
  var numUsers = 4;
  for (var i = 0; i != numUsers; i++) {
    var val = Math.round(results[i][0]);
    if (val > 0) {
      // This guy owes money in the shake-out.
      MailApp.sendEmail(emails[0][i],
          "You're a deadbeat!",
          "You owe $" + val + ". Please pay it!");
    }
  }
}
This just pulls the current totals from the (G4:G7) "Shake Out", as well as their respective email addresses from (O3:R3). When this function is called, if any of them owe more than $0, they get a friendly reminder!
I could set that up to trigger daily or weekly, but it only really needs to happen when somebody needs to collect what they’re owed, so I’ve added it as an option to the sheet’s menu on start-up.
function onOpen() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var menuEntries = [{name: "Email debtors", functionName: "emailDebtors"}];
  ss.addMenu("SkyCastle", menuEntries);
}
Easy! Now when somebody wants to collect, they just click the “SkyCastle -> Email debtors” option and the appropriate reminder emails are sent out, from their own Gmail address.
I still had a couple of hours, and wanted to play with the UI and Google Charts services, so I decided to chart the “Shake Out” values over the history of the spreadsheet. The existing cells are hard-coded to operate on the total sums from the full sheet, so I had to re-implement the math to track it line-by-line. (This isn’t all bad, because I can use it to double check the existing arithmetic, which was sorely needed.)
The basic sketch is as follows:
var data = Charts.newDataTable()
    .addColumn(Charts.ColumnType.NUMBER, "Row");
for (var i = 0; i != 4; i++) {
  data.addColumn(Charts.ColumnType.NUMBER, names[i]);
}
for (var i = 0; i != NUMROWS; i++) {
  var row = Array(5);
  // …
  // Process the current line here, and compute the shake-out.
  // …
  data.addRow(row);
}
data.build();
I’ve omitted the actual calculation, because it’s just a bunch of hacks specific to our spreadsheet formulas. Each row contains the row number, and the accumulated shake-out thus far, and gets added to the `data` table. I break out of the loop once I go off the end of my data and start hitting `NaNs`.
To create the line chart and add it to a new UI window:
var chart = Charts.newLineChart()
    .setDataTable(data)
    .setDimensions(700, 400)
    .setTitle("Debt History")
    .build();
var uiApp = UiApp.createApplication()
    .setWidth(700)
    .setHeight(400)
    .setTitle("Payment History");
uiApp.add(chart);
ss.show(uiApp);
return uiApp;
After adding this function as another option in our custom `SkyCastle` menu and clicking it, we see a nice graph. (I’m almost always on the bottom, but that’s because I make the actual rent and utility payments.) The final entries are equal to the original "Shake Out" cells, so our old arithmetic seems correct, too.
The built-in debugger isn’t bad; use the `Select function` dropdown and click the bug icon. I also used Logger.log() liberally while trying to get things working right. (Go to `View -> Logs` in the Script Editor to view that output.)
Apps Script seems to work well, overall, and hooks into a nice and expanding array of Google products and data sources. The GWT-backed UI service is a clever idea, though I barely had a chance to touch it.
Thanks again to Jan and Google for hosting this Hackathon; I can’t wait for the next one!
PATCH requests allow you to perform partial updates on many of our REST APIs and in most cases can save bandwidth.
If you have ever tried to make a PATCH request from an App Engine application, you probably realized that it is not possible: the allowed HTTP methods are whitelisted to GET, POST, HEAD, PUT and DELETE only. Trying to perform a PATCH request raises the following exception:
java.net.ProtocolException: PATCH is not one of the supported http methods: [GET, POST, HEAD, PUT, DELETE]
There is a workaround to this. Most of our APIs support the X-HTTP-Method-Override header. This header can be used in a POST request to “fake” other HTTP methods. Simply set the value of the X-HTTP-Method-Override header to the HTTP method you would like to actually perform.
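In plain JavaScript terms (the request-object shape here is an illustration for this sketch, not a specific library's API), the rewrite amounts to:

```javascript
// Rewrite an intended PATCH request as a POST carrying the
// X-HTTP-Method-Override header; other methods pass through untouched.
function asMethodOverride(req) {
  if (req.method !== 'PATCH') return req;
  var headers = Object.assign({}, req.headers,
      {'X-HTTP-Method-Override': 'PATCH'});
  return {method: 'POST', headers: headers, body: req.body};
}
```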
For example, to make a PATCH request to the Google Tasks API to update only the Notes field of a particular task you could use the following HTTP request:
POST /tasks/v1/lists/@default/tasks/TASK_ID HTTP/1.1
Host: www.googleapis.com
X-HTTP-Method-Override: PATCH
Authorization: Bearer
Content-Type: application/json
Content-Length: 31

{"Notes" : "Patch is working!"}
Which would be equivalent to this HTTP Request, which is not supported on App Engine:
PATCH /tasks/v1/lists/@default/tasks/TASK_ID HTTP/1.1
Host: www.googleapis.com
Authorization: Bearer
Content-Type: application/json
Content-Length: 31

{"Notes" : "Patch is working!"}
For instance, in an App Engine Java environment you could construct and execute this request this way:
URL url = new URL("https://www.googleapis.com/tasks/v1/" +
    "lists/@default/tasks/" + TASK_ID);
HTTPRequest request = new HTTPRequest(url, HTTPMethod.POST);
request.addHeader(new HTTPHeader("X-HTTP-Method-Override", "PATCH"));
request.addHeader(new HTTPHeader("Authorization", "Bearer " + ACCESS_TOKEN));
request.setPayload("{\"Notes\" : \"Patch is working!\"}".getBytes());
URLFetchService fetchService = URLFetchServiceFactory.getURLFetchService();
HTTPResponse response = fetchService.fetch(request);
This trick can also be used if your application server is behind a firewall or a proxy server, or in any other environment where HTTP methods other than POST might not be allowed. In that case, you can use the X-HTTP-Method-Override header the same way to work around these limitations.
You may also use our Google APIs Client library for Java or our Google APIs Client library for Python, both of which have support for PATCH requests and use the X-HTTP-Method-Override header when appropriate.