The Google Picker API provides developers with an easy-to-use file dialog that can be used to open Google Drive files directly from their web app independently of the Drive UI. The Drive SDK documentation includes an example showing how to incorporate the Google Picker with just a few lines of JavaScript.
Another powerful use case for the Picker API is to allow users to upload files to Drive with the same consistent UI. A single Picker dialog can incorporate multiple views and users can switch from one to another by clicking on a tab on the left:
The following code sample opens the Picker dialog and registers a simple callback function to handle the completed upload event:
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <meta http-equiv="content-type" content="text/html; charset=utf-8"/>
    <title>Google Picker Example</title>
    <!-- The standard Google Loader script. -->
    <script src="http://www.google.com/jsapi"></script>
    <script type="text/javascript">
      // Use the Google Loader script to load the google.picker script.
      google.setOnLoadCallback(createPicker);
      google.load('picker', '1');

      // Create and render a Picker object for searching images
      // and uploading files.
      function createPicker() {
        // Create a view to search images.
        var view = new google.picker.View(google.picker.ViewId.DOCS);
        view.setMimeTypes('image/png,image/jpeg');

        // Use DocsUploadView to upload documents to Google Drive.
        var uploadView = new google.picker.DocsUploadView();

        var picker = new google.picker.PickerBuilder().
            addView(view).
            addView(uploadView).
            setAppId(YOUR_APP_ID).
            setCallback(pickerCallback).
            build();
        picker.setVisible(true);
      }

      // A simple callback implementation.
      function pickerCallback(data) {
        if (data.action == google.picker.Action.PICKED) {
          var fileId = data.docs[0].id;
          alert('The user selected: ' + fileId);
        }
      }
    </script>
  </head>
  <body>
  </body>
</html>
There is an important difference between this upload example and the code used to open files: in addition to the standard view, an instance of DocsUploadView is added to the Picker object, thus providing upload capability.
For more information about this component and all other available views, please refer to the Google Picker Reference Guide.
Updated to add links to the #gappschallenge hashtag and to Google Apps Script.
In the past year, the Google team has been engaging with local developers by running various Google conferences and Google+ Hackathons, showcasing creative applications, and supporting Tech Hubs. Since we are always looking for opportunities to encourage (and challenge!) you, we are looking forward to giving developers the opportunity to take on this year’s challenge, which will focus on Google Apps Script, Google Apps and Google Drive APIs.
With the Google Apps Developer Challenge, we hope developers across the globe will find new and innovative ways to use Apps Script, Apps and Drive APIs to build cool apps. This challenge is particularly unique as the APIs are available to a large community of developers who code in a variety of languages that include Java, PHP, Python, and .Net.
We will be working in collaboration with our Google Developer Groups (also known as GTUGs) and Google Business Groups to organize events and prepare for this challenge. Make sure to join your local community so that you hear about meetups.
How familiar are you with the various Google Apps and Drive APIs? If you aren’t familiar, make sure to read up about Google Apps Script, Google Apps and Drive APIs on Google Developers. Use the Chrome Web Store as a source of inspiration. Create an innovative application using Google Apps Script, Google Apps, and Drive APIs. If your application is the best within one of the three categories defined below in your region, you could win a prize of $20,000! Google is also committed to nurturing the next generation of computer scientists and to encouraging more women to get into coding, so there is a special $1,000 prize for all-student or all-female teams that make the second round.
The first round of submissions will start on the 24th of August 2012. The categories are
Make sure you read all the details about the competition on the Google Apps Developer Challenge page and follow the hashtag #gappschallenge on Google+ for any additional updates.
What are you waiting for? Get coding!
Editor’s Note: This blog post is authored by Blair Kutzman, who developed the Gmail Delay Send script. - Eric Koleda
Update: To start using this script simply open this page and follow the instructions.
In today’s connected world, when you get your email can be just as important as what’s in it. In 2011, over 107 trillion emails were sent to 1.97 billion internet users, which works out to roughly 150 emails per person per day. Studies have shown that the average person can only effectively process 50 emails in a day. That leaves 100 emails per person per day that are not processed effectively. How can you be sure that the emails you send fall into the former category and not the latter?
Luckily, there are tools to assist with email overload. One such tool is Gmail Delay Send.
Gmail Delay Send is a Google Apps Script that allows you to schedule emails to be delivered on a specified date and time. Using this tool you can ensure your email reaches its destination at a time when you can capture your recipient’s full attention. For example, an email received at 4:59 PM on Friday might not get the same attention as one received at 10:00 AM on Monday.
A primary requirement of Gmail Delay Send was that it needed to work everywhere Gmail is available. There are already many browser add-ons and services available to enhance Gmail with similar functionality, so the purpose was not to duplicate that work. In order for the service to be available on all platforms, it needed to utilize native Gmail features.
We needed a native way that Gmail could:
Gmail already contains a 'Drafts' folder, which is exactly what is required for item 1. The real problem was where and how to store the metadata for item 2 without any new Gmail functionality. I chose to encode the metadata in the subject of the message because the subject contains less text, which means a smaller chance of mis-parsing. Text such as “5 hours” and “next Tuesday” is turned into a date and time using an open source library called datejs, plus a few modifications. See below for details of how this potentially cumbersome process was improved.
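As a rough illustration of that parsing step, here is a simplified, hypothetical stand-in for what datejs does (the real script hands the subject text to datejs, which understands far richer phrases; this sketch only handles simple relative phrases):

```javascript
// Simplified stand-in for the datejs parsing step: turn a relative phrase
// like "5 hours" or "2 days" into an absolute Date, measured from "now".
function parseDelay(text, now) {
  var match = text.match(/^(\d+)\s+(minute|hour|day)s?$/i);
  if (!match) {
    return null; // not a phrase this simple parser understands
  }
  var amount = parseInt(match[1], 10);
  var unitMillis = {
    minute: 60 * 1000,
    hour: 60 * 60 * 1000,
    day: 24 * 60 * 60 * 1000
  }[match[2].toLowerCase()];
  return new Date(now.getTime() + amount * unitMillis);
}
```

For example, parseDelay("5 hours", new Date()) yields a Date five hours in the future, while an unparsable string yields null so the caller can report the problem to the user.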
The script works as follows:
Although using datejs to parse the dates from the subject line was easy to implement, it introduced some usability issues. First, how would a user know if a certain string can be parsed by datejs (e.g. is “5 minutes before 4PM” parsable)? To help the user know which dates datejs supports, the script offers a mechanism to test a given string directly in the spreadsheet that Gmail Delay Send is installed in. In this way a user can test various strings to see if they are valid and, if so, when the email would be sent. A wiki page is dedicated to helping people through this process.
Another possibly confusing part of using Gmail Delay Send was setting up triggers. Thanks to some recent improvements of the Script Services, this is now done automatically for users as they install.
Adding retry logic to the script was another important step in improving its reliability and user experience. Occasionally, users were getting emails from their trigger informing them that a certain Google Script API could not be contacted. Some retry logic was required to make things work correctly. As shown in the snippet below, the function executeCommand() takes any function and will try to execute it a specified number of times, retrying if an error is thrown:
function executeCommand(fp) {
  var msg;
  var ret_val;
  var last_error;

  for (var retries = NUM_RETRIES; retries > 0; retries -= 1) {
    try {
      ret_val = fp();
      break;
    } catch (err) {
      last_error = err;
      msg = "Exception:" + err + " thrown executing function:" + fp;
      debug_logs.push(msg);
      Logger.log(msg);
      Utilities.sleep(SLEEP_TIME);
    }
  }

  if (retries == 0) {
    msg = "Attempted to execute command:" + fp + " " + NUM_RETRIES +
        " times without success. Error message: " + last_error +
        ". Aborting :-(";
    Logger.log(msg);
    throw(msg);
  }
  return ret_val;
}
Using this method, statements like those below will automatically retry if the service is not available.
executeCommand(function() { GmailApp.sendEmail( … ); });
executeCommand(function() { UrlFetchApp.fetch(url); });
Gmail Delay Send was a fantastic project for learning about Google Apps Script and I hope that it will continue to prove useful to its users. If you’re interested in using Gmail Delay Send or just interested in the development process please check out the homepage or source.
Have you ever written a particular piece of code over and over again? Or used scripts to do something that you thought others might want to do as well? Starting today, you’ll be able to share and reuse those scripts as libraries, right from inside Google Apps Script.
I often write scripts which check the National Weather Service for relevant weather-related information. This allows me to send myself an email if it’s going to rain, reminding me to bring an umbrella to work, or to annotate my spreadsheet of running workouts with the temperature of the day.
Remembering how to query the National Weather Service every time I write a script is a daunting task, however. They have a complicated XML format that is tricky to parse. As a result, I end up just copying and pasting code each time. This is not only error-prone, but also has the big disadvantage that I have to fix all of my scripts one by one whenever the Weather Service’s XML format changes.
The code I use to query the National Weather Service is a perfect use case for a library. By using a library, I no longer have to copy and paste code in my script project. Since logic is centralized, updates need to be applied just once. And now I am able to share my library with other developers who can benefit from the work I’ve already done.
Libraries are written just like any other Apps Script project. A good library has a clean API which is also well documented. Here’s a code snippet from my WeatherService library:
/**
 * Queries the National Weather Service for the weather
 * forecast of the given address. Example:
 *
 * <pre>
 *   var chances = WeatherService
 *       .getPrecipitation("New York, NY");
 *   var fridayChance = chances["Friday"];
 *   Logger.log(fridayChance + "% chance of rain on Friday!");
 * </pre>
 *
 * @param {String} address The address to query the
 *     temperature for, in any format accepted by
 *     Google Maps (can be a street address, zip
 *     code, city and state, etc)
 *
 * @returns {JsonObject} The precipitation forecast, as
 *     a map of period to percentage chance of
 *     precipitation. Example:
 *
 * <pre>
 *   { Tonight: 50, Friday: 30, Friday Night: 40, ... }
 * </pre>
 */
function getPrecipitation(address) {
  // Code for querying weather goes
  // here...
}
Notice how detailed the documentation is. We know that good documentation makes for a great library. So, for every library Apps Script will also auto-generate a documentation page based on the code comments using the JSDoc format. If you want a method in your code to not be exposed to users, simply end its name with an underscore.
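As a tiny sketch of that convention (the function names here are illustrative, not taken from the actual WeatherService library), only the first function would appear in the generated documentation and be callable by library users; the helper stays private because its name ends with an underscore:

```javascript
// Public: visible to library users and included in the auto-generated docs.
function getForecastSummary(period, chance) {
  return buildSummary_(period, chance);
}

// Private: the trailing underscore hides this helper from library users.
function buildSummary_(period, chance) {
  return chance + '% chance of rain on ' + period + '!';
}
```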
Before code can be used as a library, a version of it needs to be saved. Versions are a new concept in Apps Script: they represent a snapshot of your project that won’t change even as you continue to edit the script code. Versions are useful because they allow you to change your library code without breaking existing users. Once you’re happy with the changes you’ve made, you can save a new version. Saving a version and sharing your code as a library is easy; please see the user guide for details.
Using a library takes only a few steps. First, the owner of the library must share the library and its project key with you. You can then follow these instructions to include the library in your own project. To use the National Weather Service library described here, please visit this page for the project key.
Script Libraries come with three interesting features.
To get started on using Script Libraries, you can find a list of useful libraries contributed by two of our top contributors - James Ferreira and Romain Vialard. You can also find a detailed user guide on managing versions and libraries. We hope you enjoy using libraries.
Editor’s note: This is a guest post by Laura Bârlădeanu, lead programmer at MindMeister. -- Steve Bazyl
MindMeister is a market innovator for providing collaborative online mind mapping solutions. Launched in May 2007, our site has since attracted hundreds of thousands of businesses, academic institutions and creative consumers who have mapped over 100 million ideas online. We were one of a few web applications invited to take part in the Google Drive launch earlier this year.
The goal was to provide users with an intuitive integration between Google Drive and MindMeister that would cover all the cases provided by the Google Drive guidelines at that time:
Aside from these main integration points, we wanted to make use of the SDK and provide many useful Google Drive features, so we added a few more requirements to the list:
Google Drive applications are required to use OAuth 2.0 as an authorization mechanism, and are recommended to use OpenID Connect for login. The authorization scope for Drive files is added by default for all registered drive apps. Additionally, the application can require extra scopes that would fit its needs. For our requirements, we needed the following scopes:
https://www.googleapis.com/auth/drive.file
https://www.google.com/m8/feeds/
https://www.googleapis.com/auth/userinfo.profile
https://www.googleapis.com/auth/userinfo.email
However, we didn’t want the user to turn away from our application by being asked for too many scopes straight from the beginning. Instead, we defined sets of actions that required a subset of these scopes:
[‘drive’, ‘profile’, ‘email’]
[‘profile’, ‘email’]
[‘contacts’, ‘profile’, ‘email’]
Whenever the user wanted to execute an action that would require more scopes than they initially provided, we redirected them to a Google authorization dialog that requested the extra scope. Upon authorization, we stored the individual refresh tokens for each combination of scopes in a separate model (UserAppTokens).
Whenever the application needed the refresh token for a set of scopes (e.g. for [‘profile’, ‘email’]), it would fetch from the database the refresh token corresponding to a superset of the required scopes (e.g. [‘drive’, ‘profile’, ‘email’] would fit the required [‘profile’, ‘email’]). The access token would then be obtained from Google and stored in the session for future requests.
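The superset lookup can be sketched like this (a simplified illustration; the real MindMeister code reads from the UserAppTokens model rather than an in-memory list, and the field names here are assumptions):

```javascript
// Given the scope sets we already hold refresh tokens for, find one whose
// granted scopes cover every scope the current action requires.
function findTokenForScopes(storedTokens, requiredScopes) {
  for (var i = 0; i < storedTokens.length; i++) {
    var granted = storedTokens[i].scopes;
    var coversAll = requiredScopes.every(function(scope) {
      return granted.indexOf(scope) !== -1;
    });
    if (coversAll) {
      return storedTokens[i].refreshToken;
    }
  }
  return null; // no stored grant is broad enough; re-authorize the user
}
```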
The main challenge we encountered during design and implementation was dealing with the special cases of multiple users (Google users or internal users) editing on the same map which is a Google Drive file, as well as dealing with the special cases of the map being edited in multiple editors. We also had to find a solution for mapping the Google Drive user’s permissions (owner, reader, or writer) to the MindMeister’s existing permission mechanism.
The MindMeister application is registered for opening four types of files: our own .mind format, MindManager’s .mmap format, Freemind’s .mm format, as well as .xmind. However, since these formats are not 100% compatible with each other, there is always a chance of losing more advanced features when opening a file in a format other than .mind. We wanted to give the user the choice of whether the opened file should be saved back in its original format, thus risking some loss of features, or saved in the MindMeister format. This option should be per user and per file, with the possibility to be remembered for future files.
After analyzing the requirements and the use cases, we designed the following architecture:
Using the revision fields in both Map and DriveData, we always know if the map has been edited on MindMeister’s side without being synced to the corresponding file on Google Drive. On the other hand, the token field in DriveData holds the file’s MD5 checksum at the moment of the last update, as supplied by the Google Drive SDK. So if the file is edited externally by an application other than MindMeister, we have a mechanism in place for detecting this and presenting the user with a few courses of action.
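The detection logic boils down to two comparisons, sketched here with hypothetical field names modeled on the description above (not MindMeister’s actual code):

```javascript
// Decide how a map and its Drive file relate, using the revision counters
// and the MD5 checksum ("token") that Drive reports for the file.
function syncState(map, driveData, currentDriveMd5) {
  if (driveData.token !== currentDriveMd5) {
    // The file changed on Drive through some other application.
    return 'EDITED_EXTERNALLY';
  }
  if (map.revision > driveData.revision) {
    // MindMeister has edits that were never pushed to Drive.
    return 'NEEDS_SYNC';
  }
  return 'IN_SYNC';
}
```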
Upon opening a file that has a format other than .mind, the user is prompted with an options dialog where they can choose whether the file should be saved back in the same format or in MindMeister’s own format. These choices are then remembered for the current session, and the per-map settings are stored in the extension (the original format) and save_extension (the format to save back in) fields of the DriveData model.
A map on MindMeister can always be shared with other MindMeister users, and collaborators can have reading or writing access to the map. However, only some of these users will have a corresponding Google account with access to the MindMeister Google Drive application, and furthermore, only some of them will have access to the same file on Google Drive with writing permission. This is why it is important for us to know which users can write back to the file; the solution for these access levels was achieved with the help of the permission field in the DriveDataRight model.
More than two weeks on from the Google Drive launch, we can confidently say that our integration was successful, with more than 14,000 new users signing in with Google login and over 7,000 users who have enabled the Google Drive integration. All in all, the Google Drive SDK was very easy to use and well documented. The developer support, especially, was always there to help, and our contacts were open to our suggestions.
My role in Google Docs is to help manage many projects across Google Docs/Drive. As a part of my job, I ask for a fair amount of data from all of those teams and generate reports on project/feature status. To make this much simpler for everyone involved, I have created a lot of simple tools using Google Spreadsheets and Apps Script (as well as a lot of complex tools) that make it easier for collaborators to enter data and for me to collect that data and create reports. Below is a pair of foundational techniques that I include in nearly every Spreadsheet/Apps Script tool I build.
I have dozens of scripts generating reports. I use a technique where I set up a dedicated sheet for script configuration and read values from that sheet during script execution. A simple configuration sheet makes this much more straightforward.
With a globally accessible array, globals, you can then load the “settings” from the configuration (sheet SHT_CONFIG, here) at any entrypoint to the script.
// globally accessible variables
var SHT_CONFIG = 'Config';
var globals = new Array();

function entryPoint() {
  globals = (globals.length == 0) ?
      LoadGlobals_(SpreadsheetApp.getActive(), SHT_CONFIG) : globals;
  // your code goes here
}
The LoadGlobals_ function, below, parses the data in the first three columns of the workbook and sheet name passed to it. You can even include a fourth column (or more!) explaining what the variables do, and they’ll just be ignored - though hopefully not by your users!
// Generate global variables to be loaded into the globals array
function LoadGlobals_(wb, configSheet) {
  var configsheet = wb.getSheetByName(configSheet);
  var tGlobals = new Array();

  // Config data is structured as VARIABLE, ISARRAY, VALUE(S)
  // and includes that as the header row
  var cfgdata = configsheet.getDataRange().getValues();
  for (var i = 1; i < cfgdata.length; i++) {
    switch (cfgdata[i][1]) {
      case 'ARRAY':
        // treat as an array - javascript puts a null value in the
        // array if you split an empty string...
        if (cfgdata[i][2].length == 0) {
          tGlobals[cfgdata[i][0]] = new Array();
        } else {
          tGlobals[cfgdata[i][0]] = cfgdata[i][2].split(',');
        }
        break;
      // Define your own YOURDATATYPE using your customTreatment function (or
      // just perform the treatment here)
      case 'YOURDATATYPE':
        tGlobals[cfgdata[i][0]] = customTreatment(cfgdata[i][2]);
        break;
      default:
        // treat as generic data (string)
        tGlobals[cfgdata[i][0]] = cfgdata[i][2];
    }
  }
  return tGlobals;
}
As long as you’ve loaded the global values during the script execution, you can refer to any of the values with, for example, globals.toList. For instance:
function getToList() {
  return globals.toList.join(',');
  // or return globals['toList'].join(',');
}
Asking colleagues to enter tracking data so they don’t have to report their own statuses is one thing. Asking them to enter tracking data in a specific format, within a specific column layout, in a way that doesn’t mesh with their existing processes is entirely different. So, I use the following technique, where I rely on column names and not column ordering. The code below lets me do just that by fetching a key-value object for column headings and their position in a worksheet.
// Returns key-value object for column headings and their column number.
// Note that these are retrieved based on the array index, which starts at 0;
// the columns themselves start at 1...
// pass header row of data (array) and an array of variables/column names:
// eg: BUG_COL_ARRAY['id'] = 'Id';
function ColNumbers(hArray, colArray) {
  for (var oname in colArray) {
    this[oname] = getColIndex(hArray, colArray[oname]);
  }
}

// ---------------------------------------------------------------------------
function getColIndex(arr, val) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i].toLowerCase() == val.toLowerCase()) {
      return i;
    }
  }
  return -1;
}
With the associative array defined below, I can ask Apps product managers to add (or rename) columns in their feature tracking sheets and then extract features from every Apps product team in one fell swoop (a future post). Each product team can set up their columns in whatever order works best for them.
// key columns in the app feature sheets
var COLS_KEYAPPCOLS = new Array();
COLS_KEYAPPCOLS['feature'] = 'Feature Title';
COLS_KEYAPPCOLS['desc'] = 'Description';
COLS_KEYAPPCOLS['visible'] = 'Visible';
COLS_KEYAPPCOLS['corp'] = 'Corp Date';
COLS_KEYAPPCOLS['prod'] = 'Prod Date';
What does this do for me, really? I reuse this code for every project of this sort. The steps to reuse are then:
var curFeatures = curSheet.getDataRange().getValues();
var curCols = new ColNumbers(curFeatures[0], COLS_KEYAPPCOLS);
I can, from now on, refer to the Description column using something like curCols.desc when referencing any of the products’ data. The Spreadsheets team may list new feature descriptions in the second column, and the Documents team may list new feature descriptions in the fourth column. I no longer worry about that.
As a bonus, I can define the columns and ordering to be used in a report in my config sheet (see above). If I’ve defined reportcols as feature, desc, prod in my config sheet, I can generate a report very simply:
// Iterate through the rows of data, beginning with 1 (0 is the header)
for (var fnum = 1; fnum < curFeatures.length; fnum++) {
  // Iterate through each of the fields defined in reportcols
  for (var cnum = 0; cnum < globals.reportcols.length; cnum++) {
    outputvalue = curFeatures[fnum][curCols[globals.reportcols[cnum]]];
    // outputvalue is what you want to put in your report.
  }
}
You could do that a lot more simply, but if I later want to report the ‘Corp Date’ column instead, I only need to change the value in the config sheet to feature, desc, corp and I’m done; with a hardcoded report, you’d have to change the code.
Collecting and crunching data in a Google Spreadsheet becomes a lot easier if you use Apps Script. Trust me, it makes your life simpler. Try it now by copying this spreadsheet.
Editor’s note: This is a guest post by Ben Dilts, CTO & Co-founder of Lucidchart. -- Steve Bazyl
The release of the Drive SDK, which allows deep integration with Google Drive, shows how serious Google is about making Drive a great platform for third-party developers.
There are a handful of obvious ways to use the SDK, such as allowing your users to open files from Drive in your application, edit them, and save them back. Today, I'd like to quickly cover some less-obvious uses of the Drive API that we’re using at Lucidchart.
Applications have the ability to create new files on Google Drive. This is typically used for content created by applications. For example, an online painting application may save a new PNG or JPG to a user's Drive account for later editing.
One feature that Lucidchart has long provided to its users is the ability to download their entire account's content in a ZIP file, in case they (or we!) later mess up that data in some way. These backups can be restored quickly into a new folder by uploading the ZIP file back to our servers. (Note: we’ve never yet had to restore a user account this way, but we provided it because customers said it was important to them.)
The problem with this arrangement is that users have to remember to do regular backups, since there's no way for us to automatically force them to download a backup frequently and put it in a safe place. With Google Drive, we now have access to a reliable, redundant storage mechanism that we can push data to as often as we would like.
Lucidchart now provides automated backups of these ZIP files to Google Drive on a daily or weekly basis, using the API for creating new files on Drive.
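For context on what a file-creation call involves at the HTTP level, here is a sketch of building the multipart/related request body (metadata plus content) that the Drive API expects for an upload. This only constructs the string; performing the actual authorized HTTP request, and the exact endpoint used by Lucidchart, are outside this sketch:

```javascript
// Build a multipart/related body combining JSON file metadata and raw
// file content, separated by the given boundary string.
function buildMultipartBody(boundary, metadata, contentType, content) {
  var delimiter = '--' + boundary + '\r\n';
  return delimiter +
      'Content-Type: application/json; charset=UTF-8\r\n\r\n' +
      JSON.stringify(metadata) + '\r\n' +
      delimiter +
      'Content-Type: ' + contentType + '\r\n\r\n' +
      content + '\r\n' +
      '--' + boundary + '--';
}
```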
Another use for the files.create call is to publish finished content. Lucidchart, like most applications, stores its editable files in a custom format. When a user completes a diagram or drawing, they often download it as a vector PDF, image, or Microsoft Visio file to share with others.
Lucidchart is now using the create file API to export content in any supported format directly to a user's Google Drive account, making it easy to sync to multiple devices and later share those files.
Google Drive can't automatically index content created by Lucidchart, or any other application that saves data in a custom format, for full-text search. However, applications now have the ability to explicitly provide HTML content to Google Drive that it can then index.
Indexable text provided to the Drive API is always interpreted as HTML, so it is important to escape HTML entities. And if your text is separated into distinct pieces (like the text in each shape in Lucidchart), you can improve full-text phrase searching by dividing your indexable text into one div or paragraph element per piece. Both the files.create and files.update calls provide the ability to set indexable text.
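A minimal sketch of preparing such indexable text (the helper name and the div-per-piece layout are my own illustration, not Lucidchart’s actual code):

```javascript
// Escape HTML entities in each text piece and wrap each piece in its own
// <div>, so Drive's full-text indexer treats the pieces as separate phrases.
function buildIndexableText(pieces) {
  return pieces.map(function(piece) {
    var escaped = piece
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;');
    return '<div>' + escaped + '</div>';
  }).join('');
}
```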
We hope that this overview helps other developers implement better integrations into the Google Drive environment. Integrating with Drive lets us provide and improve a lot of functionality that users have asked for, and makes accessing and using Lucidchart easier overall. We think this is a great result both for users and web application developers and urge you to check it out.