Editor’s Note: This blog post is authored by Naveen Venkat, from Zoho. We've welcomed many of Zoho's apps in the Google Apps Marketplace and are happy to see them join the Drive developer community as well. -- Steven Bazyl
Zoho is a suite of online applications targeted at small and medium sized businesses. We offer over 25 services ranging from the basic productivity suite all the way up to business applications like CRM, project management, invoicing, custom app building platform and much more. We’ve just rolled out Zoho Office integration with Google Drive.
Zoho's applications can be broadly classified into 3 categories: Productivity, Business, and Collaboration apps. Productivity apps cover the basic needs of an office environment and include the likes of a word processor, spreadsheet, presentation tool, and calendar. Business apps include CRM, project management, a custom application building platform, and invoicing and bookkeeping services. Collaboration tools facilitate real-time collaboration across geographical locations; email, chat, discussions, docs, and wiki round out this category of services.
We have many Google and Google Apps customers using Zoho applications through the Google Apps Marketplace, the Google Chrome Web Store, or directly via "Sign in using Google" on our login page, so tightly integrating Zoho's applications with Google's services is crucial for us. When Google Drive launched, Google announced an SDK and the related Google Drive APIs that let third-party apps integrate with the service. We took this opportunity and initially started integrating Zoho Show (our presentation tool) with Google Drive. We found the APIs extensive and straightforward, which inspired us to integrate our entire Zoho Office Suite with Google Drive. In just a couple of weeks, we were able to integrate Zoho Writer (word processor), Zoho Sheet (spreadsheet tool), and Zoho Show (presentation tool) with Google Drive.
Some of the goals we had in mind when we started the integration were to let users create, open, edit, and save documents on Google Drive with Zoho's editors, and to let them invite their Google contacts to collaborate. Above all, we focused on enabling SSO so users can log into Zoho with their Google accounts without having to provide any extra information.
The first step in integrating an app with Google Drive is to register it in the Google APIs Console. Because we have three different apps, we created three different projects - Writer, Sheet and Show - so that they would be listed separately in the contextual menu of Google Drive. In the console, we also had to enable the Drive API and SDK to get Drive API access for each App ID. The console then prompts you to create an OAuth 2.0 client ID for authenticating the user with the app. Once this was done, we provided Google some basic information about each app, such as brand icons, default and secondary MIME types, and authorization scopes.
The next step is to create a Chrome Web Store listing, which allows users to install your app in Google Drive. Since we already had Zoho apps listed in the Chrome Web Store, it was easy for us to make a few changes to our existing listings. We added the following entries to the manifest files to enable Drive for our apps:
"container" : "GOOGLE_DRIVE", "api_console_project_id" : "<APP_ID>",
APP_ID is the application’s ID which can be found in the Google APIs console.
This makes the Chrome Web Store listing Drive enabled, so now users simply have to install our Chrome Web Store apps and the Drive features will be enabled:
Google Drive apps are required to use OAuth 2.0 for authorization and are strongly encouraged to use OpenID Connect for authentication. Our Zoho Office Suite already supported OpenID authentication, so it was easy for us to implement it for the Zoho Writer, Sheet and Show apps for Google Drive.
To enable authentication for Drive users on our Zoho app we’ve added the following scopes to our Drive apps’ configuration page in the Google APIs Console:
https://www.googleapis.com/auth/userinfo.email
https://www.googleapis.com/auth/userinfo.profile
This ensures that access to the user's authentication information is granted alongside access to the Drive API, allowing us to use Google's OpenID Connect endpoints.
Our previous work with the GData APIs had introduced us to OAuth 2.0. OAuth 2.0 takes care of the authorization flow for a new user and smoothly transitions the user into the Zoho application environment.
Users can create a new document from Google Drive's UI or edit their existing Google Drive documents with Zoho's editors. When the user hits save, the modified document is saved to Google Drive as a new document, leaving the original untouched. This new document gets an extension recognized by Zoho's apps. For example, a text document, Test.doc, from Google Drive, when edited in Zoho Writer and saved, will be saved as Test.zdoc in Google Drive.
Users can invite their Google contacts to collaborate on their Google Drive documents and edit them using Zoho editors.
One of the main challenges we faced during the integration was supporting multiple users collaborating on the same document in Zoho's editors.
The recently launched Drive API v2 supports a permissions feed to manage resource sharing. Based on the document's permissions, we allow the user to access the document for collaboration. Thus the Google users with whom the document has been shared can collaborate with each other in Zoho's editor without having to sign in to Zoho explicitly. Zoho apps support inline collaboration, which lets the shared users collaborate seamlessly.
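For readers curious about the mechanics, here is a minimal Apps Script-style sketch (not Zoho's actual code, and the function name is ours) of reading a file's permissions feed with the Drive API v2; fileId and accessToken are assumed to come from the app's own OAuth 2.0 flow:

// Hypothetical sketch: list who can access a Drive file via the
// v2 permissions feed, given a valid OAuth 2.0 access token.
function listFilePermissions(fileId, accessToken) {
  var url = 'https://www.googleapis.com/drive/v2/files/' + fileId +
      '/permissions';
  var response = UrlFetchApp.fetch(url, {
    headers: {Authorization: 'Bearer ' + accessToken}
  });
  var permissions = JSON.parse(response.getContentText()).items;
  for (var i = 0; i < permissions.length; i++) {
    // Each permission carries a role such as 'owner', 'writer' or 'reader'.
    Logger.log(permissions[i].name + ': ' + permissions[i].role);
  }
}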
Thanks to the Google team for their continued support right through the integration.
Editor’s Note: This blog post is authored by Peldi Guilizzoni, from Balsamiq. As a user of Balsamiq myself, it was great to see them join as one of the first Drive apps! -- Steven Bazyl
Hi there! My name is Peldi and I am the founder of Balsamiq, a small group of passionate individuals who believe work should be fun and that life's too short for bad software.
We make Balsamiq Mockups, a rapid wireframing tool that reproduces the experience of sketching interfaces on a whiteboard, but using your computer, so they’re easier to share, modify, and get honest feedback on. Mockups look like sketches, so stakeholders won’t get distracted by little details, and can focus on what’s important instead.
We sell Mockups as a Desktop application, a web application and as a plugin to a few different platforms. An iPad version is also in the works.
We believe that tools should adapt to the way people like to work, not the other way around. That's why when Google Drive came out, we jumped at the chance to integrate Mockups with it. This is the story of how the integration happened.
First of all, a little disclaimer. Although my job these days is to be CEO and all that, I come from a programming background. I started coding at 12, worked at Macromedia/Adobe for 6 years as a programmer. I'd say I'm a pretty good programmer…just a bit rusty, ok? I realize that the decision to write the code for Mockups for Google Drive myself instead of asking one of Balsamiq's better programmers to do it might have been a bit foolish, but we really wanted to be a launch partner and the programmers were already busy with lots of other stuff, plus I didn't want to pass up on the chance to work on something cool after dinner for a while. ;) OK so now you know the background, let's get started.
Once I got access to the Google Drive API documentation and looked around a bit, I started by following the detailed "sample application" tutorial.
The sample was written in python, used OAuth, the Google API Console, and ran on Google App Engine, all technologies I hadn't been exposed to before.
Following along brought me back to my childhood days of copying programs line by line from PC Magazine, not really understanding what I was doing but loving it nonetheless. :)
The trickiest part was figuring out how OAuth worked: it's a bit of a mess, but after you play with it a little and read a few docs, it starts making sense. Stick with it, it's the future! ;) Plus the downloadable sample app hid all that stuff in a neat little library, so you don't have to worry about it too much.
Setting up the sample application took around 2-3 hours, easy peasy. Once that was done, I just had to convert it into Balsamiq Mockups for Google Drive. Because I had done this before for other platforms, this was finally something I was comfortable doing. The bulk of our code is encapsulated in our Flash-based Mockups editor, so all I had to do was write a few functions to show the editor to the user and set it up using our internal APIs. Then I had to repurpose the "open with" and "edit" APIs from the sample app to work with the Mockups editor. All in all, this took maybe a day of work. Yay!
Once the proof of concept was up, I started turning the code into a real app. I cleaned up the code, added some comments, created a code repository for it in our bazaar server, set up a staging environment (a parallel Google App Engine application and unpublished Chrome Web Store listing) and integrated the build and deployment into our Jenkins server.
One tricky bit worth mentioning about integrating with Jenkins: the Google App Engine deployment script appcfg.py asks for a password interactively, which is a problem if you want to deploy automatically. The solution was to use the echo pwd | appcfg.py trick found here.
After some more testing and refinements, shipping day came and Balsamiq Mockups for Google Drive was live.
It was a very exciting day. Getting mentioned on the official Google blog was quite awesome. The only stressful moment came because for some reason my Google App Engine account was not set up for payments (I could have sworn I had done it in advance), so our app went over its bandwidth quota an hour after launch, and people got a blank white screen instead of the app. Two people even gave us bad reviews because of it. Boo! :(
In the days that followed, things went pretty well. People started trying it out, and only a few bug reports came in. One very useful Google App Engine feature is the "errors per second" chart in the dashboard, which gives you insight into how your app is doing.
I noticed that we had a few errors, but couldn't figure out why. With the help of the docs and our main Developer Relations contact at Google, we narrowed them down to a couple of OAuth issues: one was that the library I was using didn't save the refresh_token properly, and another that had to do with sessions timing out when people use the editor for over an hour and then go to save their work.
Fixing these bugs took way longer than what I wanted, mostly due to the fact that I'm a total OAuth and Python n00b.
After a few particularly frustrating bug hunting sessions, I decided to rewrite the backend in Java. The benefits of this approach are that a) we get static type checking and b) I can get help from some of our programmers, since Java is a language we're all already familiar with here.
Since by now the Java section of the Google Drive SDK website had been beefed up, the rewrite only took a day, and it felt awesome. Sorry python, I guess I'm too old for you.
The hardest part of the Java rewrite was the Jenkins integration, since the echo pwd trick doesn't work with the Java version of appcfg. To get around that, I had to write an Expect script, based on this Fábio Uechi blog post. By the way, I recommend reading the Expect README; it has an awesome 1995 retro feel to it.
Overall, integrating Balsamiq Mockups with Google Drive was a breeze. Google is a technology company employing some of the brightest people in our industry, and it shows. The APIs are clean and extremely well tested. The people at Google are very responsive whenever I have an issue and have been instrumental in making us successful.
While the application is still pretty young - we are working on adding support for Drive images, linking, symbols… - we are very happy with the results we're getting already. The Drive application netted around $2,500 in its first full month of operation, and sales are growing fast.
Alright, back to coding for me, yay! :)
Peldi
Today at Google I/O, we announced many enhancements to Google Apps Script to help you build richer applications and share your apps with users. Apps Script began as a tool for helping users get more done with their Google spreadsheets. Over time, Apps Script has grown to handle much more. It's a platform to extend Google spreadsheets and Sites, and a convenient way to create web applications.
We launched script.google.com as a destination for Apps Script developers. You can now create scripts from script.google.com or from Google Drive. Plus, your projects are now stored in Google Drive, and you can share them just like a Google document or spreadsheet.
HtmlService can help you create beautiful interfaces using Apps Script. HtmlService allows you to create web apps using HTML, client-side JavaScript, and CSS. You can also use jQuery to build web apps. HtmlService uses Google Caja to ensure that the HTML content served by your web apps is safer for you and your users.
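As a quick illustration (a minimal sketch of our own, not code from a shipping app), a web app served with HtmlService needs little more than a doGet function:

// A tiny HtmlService web app; publish the script as a web app to try it.
function doGet() {
  return HtmlService.createHtmlOutput(
      '<h1>Hello from Apps Script!</h1>' +
      '<p>This HTML is sanitized by Caja before it reaches the user.</p>');
}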
We also launched a better way to store your application's data: ScriptDb. Every script now has an embedded JSON database, so you no longer have to rely on a spreadsheet to store your data. Using ScriptDb, you can store a larger volume of data and search it easily. We designed ScriptDb to be easy to use: it doesn't need connection strings or special passwords, and you can directly store and search your JavaScript objects without converting them to a different format. You can learn more about ScriptDb on the Google Apps Script Developers page.
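For example, here is a minimal sketch of storing and querying objects with ScriptDb (the record fields are purely illustrative):

// Save a JavaScript object directly - no connection strings, no schema.
var db = ScriptDb.getMyDb();
db.save({type: 'employee', name: 'Alice', department: 'Sales'});

// Query by example: returns every stored object matching these fields.
var results = db.query({type: 'employee', department: 'Sales'});
while (results.hasNext()) {
  Logger.log(results.next().name);
}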
There are also now more options for deploying your web app. Your apps can now, with authorization, run as the user behind the keyboard, not just the script owner. This brings a new level of versatility to web apps built with Apps Script.
Finally, we wanted to make it easy to distribute your apps. You can now publish your apps in the Chrome Web Store. Register and package your app directly from the Publish menu in Google Apps Script. Then customize your listing from the Chrome Web Store and publish your app to the world.
We added a lot of new functionality to Apps Script, so in addition to our developer reference documentation, we've also created a new user guide. If you need further help, you can reach out to us on Stack Overflow. You can also make feature requests and report issues on the Apps Script page on Google Code.
Building with Google Apps Script has become a lot easier and more powerful. We can't wait to see what you build. Happy scripting!
Editor’s Note: Kevin Winter is a guest author from the AdWords API Developer Relations team. He implemented these features in his 20% time. - Jan Kleinert
Google Spreadsheets provides powerful charting functionality to let you analyze your data many different ways. We recently added the ability to programmatically create, modify and delete Spreadsheet charts using Google Apps Script.
Charts that a user creates in a Google Spreadsheet are called embedded charts, because they are embedded onto a Sheet within the spreadsheet. Apps Script can be used to manage these charts: each EmbeddedChart belongs to a Sheet, and it aligns to the upper left hand corner of the specified column and row in that sheet.
Let’s say I want to track my gas mileage using Google Spreadsheets. First, I create a Spreadsheet, and set up columns and formulas. Next, I add a form interface to make it easy to add new entries and a chart to visualize my mileage. Now I can start recording data - but each time I add enough entries to go past the ranges the chart uses, I need to manually increase them. How about we use some Apps Script magic to solve this?
The script below iterates through all charts on the sheet and determines whether any of their ranges need to be expanded (i.e. whether there are more rows with data to display). It then updates the title, builds the new EmbeddedChart object, and saves it to the sheet. You could also add a menu interface, or a trigger to execute it periodically or whenever the spreadsheet is edited.
function expandCharts() {
  var sheet = SpreadsheetApp.getActiveSheet();
  // Get a list of all charts on this Sheet.
  var charts = sheet.getCharts();
  for (var i in charts) {
    var chart = charts[i];
    var ranges = chart.getRanges();
    // Returns an EmbeddedChartBuilder with this chart's settings.
    var builder = chart.modify();
    for (var j in ranges) {
      var range = ranges[j];
      // rangeShouldExpand_ is defined later.
      if (rangeShouldExpand_(range)) {
        // Removes the old range and substitutes the new one.
        builder.removeRange(range);
        var newRange = expandRange_(range);
        builder.addRange(newRange);
      }
    }
    // Update title.
    builder.setOption('title', 'Last updated ' + new Date().toString());
    // Must be called to save changes.
    sheet.updateChart(builder.build());
  }
}

function rangeShouldExpand_(range) {
  var s = range.getSheet();
  var numColumns = range.getNumColumns();
  var values = s.getSheetValues(range.getLastRow(), range.getColumn(),
      2, numColumns);
  for (var i = 0; i < numColumns; i++) {
    // If the next row has the same pattern of values,
    // it's probably the same type of data and should be expanded.
    if (!values[0][i] && !values[1][i] || !!values[0][i] && !!values[1][i]) {
      continue;
    } else {
      return false;
    }
  }
  return true;
}

function expandRange_(range) {
  var s = range.getSheet();
  var startRow = range.getRow();
  var startCol = range.getColumn();
  var numRows = range.getNumRows();
  var numCols = range.getNumColumns();
  while (rangeShouldExpand_(range)) {
    numRows++;
    range = s.getRange(startRow, startCol, numRows, numCols);
  }
  return range;
}
What if you wanted to create a new chart from scratch? You can do that too!
var sheet = SpreadsheetApp.getActiveSheet();
var chart = sheet.newChart()
    .setPosition(5, 6, 5, 5)
    .setChartType(Charts.ChartType.AREA)
    .addRange(sheet.getActiveRange())
    .build();
sheet.insertChart(chart);
In the above code example, we get a reference to an EmbeddedChartBuilder, set its position within the sheet, set the chart type, add the currently selected range, and insert the new chart.
The EmbeddedChartBuilder allows you to modify the chart in a couple of ways:
- addRange - adds a data range to the chart
- removeRange - removes a data range from the chart
- setOption - sets a chart option, such as the title
- setChartType - changes the chart type
- setPosition - moves the chart within the sheet
Embedded charts use more options than regular Apps Script charts, so options are handled slightly differently. Developers can pass a dotted field path to change options. For example, to change the animation duration, you can do this:
builder.setOption("animation.duration", 1000);
Now that you've created a bunch of charts, your spreadsheet is probably pretty cluttered. Want to clear it out and start afresh? Just remove them all:
var sheet = SpreadsheetApp.getActiveSheet();
var charts = sheet.getCharts();
for (var i in charts) {
  sheet.removeChart(charts[i]);
}
Just like standalone charts, you can use embedded charts elsewhere. You can add them to a UiApp or a Sites page, as well as send them as an email attachment:
// Attach all charts from the current sheet to an email.
var charts = SpreadsheetApp.getActiveSheet().getCharts();
MailApp.sendEmail(
    "recipient@example.com",
    "Income Charts",                     // Subject
    "Here's the latest income charts",   // Content
    {attachments: charts});
We hope you found this blog post useful. Enjoy editing embedded charts using Google Apps Script!
At the end of last year we launched the UserManager Apps Script service, allowing Google Apps domain administrators to write scripts to programmatically create, delete and edit their user accounts.
We are now extending the family of Domain services with two new additions: NicknameManager and GroupsManager.
The NicknameManager service allows domain administrators to define alternate email addresses (i.e. “nicknames”) with a single line of code, as in the following example:
var nickname = NicknameManager.createNickname("user", "nick");
With the GroupsManager service, Google Apps domain administrators can create and delete groups, and manage their members and owners. The following example shows how to list all members of a group given its unique groupId:
function listMembers(groupId) {
  var members = GroupsManager.getGroup(groupId).getAllMembers();
  for (var i in members) {
    var member = members[i];
    Logger.log(i + ": " + member);
  }
}
With the same service, one line of code is enough to add a member to a group:
GroupsManager.getGroup(groupId).addMember(memberId);
If you want to know more about the new NicknameManager and GroupsManager services, please check our documentation, and don’t hesitate to get in touch with us if you have questions or suggestions.
The Google Picker API provides developers with an easy-to-use file dialog that can be used to open Google Drive files directly from their web app independently of the Drive UI. The Drive SDK documentation includes an example showing how to incorporate the Google Picker with just a few lines of JavaScript.
Another powerful use case for the Picker API is to allow users to upload files to Drive with the same consistent UI. A single Picker dialog can incorporate multiple views and users can switch from one to another by clicking on a tab on the left:
The following code sample opens the Picker dialog and registers a simple callback function to handle the completed upload event:
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <meta http-equiv="content-type" content="text/html; charset=utf-8"/>
  <title>Google Picker Example</title>
  <!-- The standard Google Loader script. -->
  <script src="http://www.google.com/jsapi"></script>
  <script type="text/javascript">
    // Use the Google Loader script to load the google.picker script.
    google.setOnLoadCallback(createPicker);
    google.load('picker', '1');

    // Create and render a Picker object for searching images
    // and uploading files.
    function createPicker() {
      // Create a view to search images.
      var view = new google.picker.View(google.picker.ViewId.DOCS);
      view.setMimeTypes('image/png,image/jpeg');
      // Use DocsUploadView to upload documents to Google Drive.
      var uploadView = new google.picker.DocsUploadView();
      var picker = new google.picker.PickerBuilder().
          addView(view).
          addView(uploadView).
          setAppId(YOUR_APP_ID).
          setCallback(pickerCallback).
          build();
      picker.setVisible(true);
    }

    // A simple callback implementation.
    function pickerCallback(data) {
      if (data.action == google.picker.Action.PICKED) {
        var fileId = data.docs[0].id;
        alert('The user selected: ' + fileId);
      }
    }
  </script>
</head>
<body>
</body>
</html>
There is an important difference between this upload example and the code used to open files: in addition to the standard view, an instance of DocsUploadView is added to the Picker object, thus providing upload capability.
For more information about this component and all other available views, please refer to the Google Picker Reference Guide.
Updated to add links to the #gappschallenge hashtag and to Google Apps Script.
In the past year, the Google team has been engaging with local developers by running various Google conferences and Google+ Hackathons, showcasing creative applications, and supporting Tech Hubs. Since we are always looking for ways to encourage (and challenge!) you, we are excited to give developers the chance to take on this year's challenge, which focuses on Google Apps Script, Google Apps, and the Google Drive APIs.
With the Google Apps Developer Challenge, we hope developers across the globe will find new and innovative ways to use Apps Script, Google Apps, and the Drive APIs to build cool apps. What makes this challenge special is that the APIs are available to a large community of developers who code in a variety of languages, including Java, PHP, Python, and .NET.
We will be working in collaboration with our Google Developer Groups (also known as GTUGs) and Google Business Groups to organize events and prepare for this challenge. Make sure to join your local community so that you are aware of meet ups.
How familiar are you with the various Google Apps and Drive APIs? If you aren't familiar, read up on Google Apps Script, Google Apps, and the Drive APIs on Google Developers, and use the Chrome Web Store as a source of inspiration. Create an innovative application using Google Apps Script, Google Apps, and the Drive APIs. If your application is the best in one of the three categories in your region, you could win a prize of $20,000! Google is also committed to nurturing the next generation of computer scientists and to encouraging more women to get into coding, so there is a special $1,000 prize for all-student or all-female teams that make the second round.
The first round of submissions will start on the 24th of August 2012; the categories are described on the Google Apps Developer Challenge page.
Make sure you read all the details about the competition on the Google Apps Developer Challenge page and follow the hashtag #gappschallenge on Google+ for any additional updates.
What are you waiting for? Get coding!
Editor’s Note: This blog post is authored by Blair Kutzman, who developed the Gmail Delay Send script. - Eric Koleda
Update: To start using this script simply open this page and follow the instructions.
In today's connected world, when you get your email can be just as important as what's in it. In 2011, over 107 trillion emails were sent to 1.97 billion internet users. Studies have shown that the average person can only effectively process 50 emails in a day. That leaves roughly 100 emails per person per day that are not processed effectively. How can you be sure that the emails you send fall into the former category and not the latter?
Luckily, there are tools to assist with email overload. One such tool is Gmail Delay Send.
Gmail Delay Send is a Google Apps Script that allows you to schedule emails to be delivered at a specified date and time. Using this tool, you can ensure your email reaches its destination at a time when you can capture your recipient's full attention. For example, an email received at 4:59 PM on Friday might not get the same attention as one received on Monday at 10:00 AM.
A primary requirement of Gmail Delay Send was that it needed to work everywhere Gmail is available. There are already many browser add-ons and services available to enhance Gmail with similar functionality, so the purpose was not to duplicate that work. In order for the service to be available on all platforms, it needed to utilize native Gmail features.
We needed a native way for Gmail to: (1) hold the emails to be sent later, and (2) store the scheduling metadata for each of them.
Gmail already has a 'Drafts' folder, which is exactly what item 1 requires. The real problem was where and how to store the metadata for item 2 without any new Gmail functions. I chose to encode the metadata in the subject of the message because it contains less text, which means a smaller chance of mis-parsing. Strings such as "5 hours" and "next tuesday" are turned into a date-time using an open source library called datejs, with a few modifications. See below for details of how this potentially cumbersome process was improved.
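To give a feel for that parsing step, with the datejs library loaded, converting a natural-language phrase is roughly a one-liner (a sketch, not the script's exact code):

// datejs replaces Date.parse with a natural-language parser.
var sendAt = Date.parse('next tuesday');  // returns a Date object, or null
if (sendAt !== null) {
  Logger.log('Will send at: ' + sendAt);
} else {
  Logger.log('Could not understand the scheduling text.');
}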
The script works as follows: it periodically scans the saved drafts for messages whose subjects contain scheduling text, and sends each one once its requested delivery time has passed.
Although using datejs to parse the dates from the subject line was easy to implement, it introduced some usability issues. First, how would a user know whether a certain string can be parsed by datejs (e.g. is "5 minutes before 4PM" parsable)? To assist the user in knowing which dates datejs supports, the script offers a mechanism to test a given string directly in the spreadsheet where Gmail Delay Send is installed. In this way, a user can test various strings to see if they are valid and, if so, when the email would be sent. A wiki page is dedicated to helping people through this process.
Another possibly confusing part of using Gmail Delay Send was setting up triggers. Thanks to some recent improvements to the Script services, this is now done automatically for users as they install.
Adding retry logic to the script was another important step in improving its reliability and user experience. Occasionally, users were getting emails from their trigger informing them that a certain Google Script API could not be contacted. Some retry logic was required to make things work correctly. As shown in the snippet below, the function executeCommand() takes any function and will try to execute it a specified number of times, retrying if an error is thrown:
function executeCommand(fp) {
  var msg;
  var ret_val;
  var last_error;
  for (var retries = NUM_RETRIES; retries > 0; retries -= 1) {
    try {
      ret_val = fp();
      break;
    } catch(err) {
      last_error = err;
      msg = "Exception:" + err + " thrown executing function:" + fp;
      debug_logs.push(msg);
      Logger.log(msg);
      Utilities.sleep(SLEEP_TIME);
    }
  }
  if (retries == 0) {
    msg = "Attempted to execute command:" + fp + " " + NUM_RETRIES +
        " times without success. Error message: " + last_error +
        ". Aborting :-(";
    Logger.log(msg);
    throw(msg);
  }
  return ret_val;
}
Using this method, statements like those below will automatically retry if the service is not available.
executeCommand( function() { GmailApp.send( … ) });
executeCommand( function() { UrlFetchApp.urlFetch(url) } );
Gmail Delay Send was a fantastic project for learning about Google Apps Script and I hope that it will continue to prove useful to its users. If you’re interested in using Gmail Delay Send or just interested in the development process please check out the homepage or source.
Have you ever written a particular piece of code over and over again? Or used scripts to do something that you thought others might want to do as well? Starting today, you’ll be able to share and reuse those scripts as libraries, right from inside Google Apps Script.
I often write scripts which check the National Weather Service for relevant weather-related information. This allows me to send myself an email if it’s going to rain, reminding me to bring an umbrella to work, or to annotate my spreadsheet of running workouts with the temperature of the day.
Remembering how to query the National Weather Service every time I write a script is a daunting task, however. They have a complicated XML format that is tricky to parse. As a result, I end up just copying and pasting code each time. This is not only error-prone, but also has the big disadvantage that I have to fix all of my scripts one by one whenever the Weather Service’s XML format changes.
The code I use to query the National Weather Service is a perfect use case for a library. By using a library, I no longer have to copy and paste code in my script project. Since logic is centralized, updates need to be applied just once. And now I am able to share my library with other developers who can benefit from the work I’ve already done.
Libraries are written just like any other Apps Script project. A good library has a clean API which is also well documented. Here’s a code snippet from my WeatherService library:
/**
 * Queries the National Weather Service for the weather
 * forecast of the given address. Example:
 *
 * <pre>
 * var chances = WeatherService
 *     .getPrecipitation("New York, NY");
 * var fridayChance = chances["Friday"];
 * Logger.log(fridayChance + "% chance of rain on Friday!");
 * </pre>
 *
 * @param {String} address The address to query the
 *     temperature for, in any format accepted by
 *     Google Maps (can be a street address, zip
 *     code, city and state, etc)
 *
 * @returns {JsonObject} The precipitation forecast, as a
 *     map of period to percentage chance of
 *     precipitation. Example:
 *
 * <pre>
 * { Tonight: 50, Friday: 30, Friday Night: 40, ... }
 * </pre>
 */
function getPrecipitation(address) {
  // Code for querying weather goes here...
}
Notice how detailed the documentation is. We know that good documentation makes for a great library, so for every library, Apps Script auto-generates a documentation page based on the code comments, using the JSDoc format. If you don't want a method in your code to be exposed to users, simply end its name with an underscore.
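For example, in a library laid out like the sketch below (fetchForecast_ is a hypothetical helper), only getPrecipitation would show up in the generated documentation:

// Public: listed in the library's auto-generated documentation.
function getPrecipitation(address) {
  return fetchForecast_(address);
}

// Private: the trailing underscore hides it from library users.
function fetchForecast_(address) {
  // ... query the National Weather Service here ...
}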
Before code can be used as a library, a version of it needs to be saved. Versions are a new concept in Apps Script: they represent a snapshot of your project which won't change even as changes are made to the script code. Versions are useful because they allow you to change your library code without breaking existing users. Once you're happy with the changes you've made, you can save a new version. Saving a version and sharing your code as a library is easy; please see the user guide for details.
Using a library takes only a few steps. To use a library, its owner must share the library and its project key with you; you can then follow these instructions to include it. To use the National Weather Service library, please visit this page for the project key.
Script Libraries come with three interesting features.
To get started on using Script Libraries, you can find a list of useful libraries contributed by two of our top contributors - James Ferreira and Romain Vialard. You can also find a detailed user guide on managing versions and libraries. We hope you enjoy using libraries.
Editor’s note: This is a guest post by Laura Bârlădeanu, lead programmer at MindMeister. -- Steve Bazyl
MindMeister is a market innovator for providing collaborative online mind mapping solutions. Launched in May 2007, our site has since attracted hundreds of thousands of businesses, academic institutions and creative consumers who have mapped over 100 million ideas online. We were one of a few web applications invited to take part in the Google Drive launch earlier this year.
The goal was to provide users with an intuitive integration between Google Drive and MindMeister that would cover all the cases described in the Google Drive guidelines at that time.
Aside from these main integration points, we wanted to make full use of the SDK and provide many useful Google Drive features, so we added a few more requirements of our own.
Google Drive applications are required to use OAuth 2.0 as an authorization mechanism, and are recommended to use OpenID Connect for login. The authorization scope for Drive files is added by default for all registered drive apps. Additionally, the application can require extra scopes that would fit its needs. For our requirements, we needed the following scopes:
https://www.googleapis.com/auth/drive.file
https://www.google.com/m8/feeds/
https://www.googleapis.com/auth/userinfo.profile
https://www.googleapis.com/auth/userinfo.email
However, we didn't want to turn users away from our application by asking for too many scopes right from the beginning. Instead, we defined sets of actions, each requiring only a subset of these scopes:
[‘drive’, ‘profile’, ‘email’]
[‘profile’, ‘email’]
[‘contacts’, ‘profile’, ‘email’]
Whenever the user wanted to execute an action that would require more scopes than they initially provided, we redirected them to a Google authorization dialog that requested the extra scope. Upon authorization, we stored the individual refresh tokens for each combination of scopes in a separate model (UserAppTokens).
Whenever the application needed the refresh token for a set of scopes (e.g. for ['profile', 'email']), it would fetch from the database the refresh token corresponding to a superset of the required scopes (e.g. ['drive', 'profile', 'email'] would fit the required ['profile', 'email']). The access token would then be obtained from Google and stored in the session for future requests.
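A simplified sketch of that superset check (the names are illustrative, not MindMeister's actual code):

// Hypothetical sketch: pick a stored refresh token whose granted scopes
// cover everything the current action requires.
function findRefreshToken(storedTokens, requiredScopes) {
  for (var i = 0; i < storedTokens.length; i++) {
    var granted = storedTokens[i].scopes;  // e.g. ['drive', 'profile', 'email']
    var coversAll = requiredScopes.every(function(scope) {
      return granted.indexOf(scope) !== -1;
    });
    if (coversAll) {
      return storedTokens[i].refreshToken;
    }
  }
  return null;  // No match: send the user through authorization again.
}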
The main challenge we encountered during design and implementation was dealing with multiple users (Google users or internal users) editing the same map, which is a Google Drive file, as well as with the map being edited in multiple editors. We also had to map the Google Drive user's permissions (owner, reader, or writer) onto MindMeister's existing permission mechanism.
The MindMeister application is registered to open four types of files: our own .mind format, MindManager's .mmap format, Freemind's .mm format, and .xmind. However, since these formats are not 100% compatible with each other, there is always a chance of losing more advanced features when opening a file in a format other than .mind. We wanted to let the user choose whether an opened file is saved back in its original format, risking some feature loss, or saved in the MindMeister format. This option needed to be per user and per file, with the possibility of remembering it for future files.
After analyzing the requirements and the use cases, we designed the architecture described below.
Using the revision fields in both the Map and DriveData models, we always know if the map has been edited on MindMeister's side without being synced to the corresponding file on Google Drive. The token field in DriveData, on the other hand, holds the file's MD5 checksum at the moment of the last update, as supplied by the Google Drive SDK. So if the file is edited externally by an application other than MindMeister, we have a mechanism for detecting this and presenting the user with a few courses of action.
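Reduced to a sketch, the external-edit check is a simple comparison (assuming the stored token and the file's current md5Checksum from the Drive API are both at hand; the function name is ours):

// Hypothetical sketch: was the Drive file modified by another
// application since our last sync?
function wasEditedExternally(driveData, fileMetadata) {
  // driveData.token is the MD5 checksum recorded at the last sync;
  // fileMetadata.md5Checksum is the current value reported by Drive.
  return driveData.token !== fileMetadata.md5Checksum;
}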
Upon opening a file in a format other than .mind, the user is prompted with an option dialog where they can choose whether the file should be saved back in the same format or in MindMeister's own format. These choices are remembered in the current session, and the per-map settings are stored in the extension (the original format) and save_extension (the format to save back in) fields of the DriveData model.
A map on MindMeister can always be shared with other MindMeister users, and collaborators can have read or write access to the map. However, only some of these users will have a corresponding Google account with access to the MindMeister Google Drive application, and furthermore, only some of them will have write access to the same file on Google Drive. This is why it is important for us to know which users can write back to the file; we modeled these access levels with the help of the permission field in the DriveDataRight model.
Now, more than two weeks after the Google Drive launch, we can confidently say that our integration was successful, with more than 14,000 new users using Google login and over 7,000 users having enabled the Google Drive integration. All in all, the Google Drive SDK was easy to use and well documented. The developer support, especially, was always there to help, and our contacts were open to our suggestions.
My role in Google Docs is to help manage many projects across Google Docs/Drive. As part of my job, I ask for a fair amount of data from all of those teams and generate reports on project/feature status. To make this much simpler for everyone involved, I have created a lot of simple tools (as well as a lot of complex ones) using Google Spreadsheets and Apps Script that make it easier for collaborators to enter data and for me to collect that data and create reports. Below is a pair of foundational techniques that I include in nearly every Spreadsheet/Apps Script tool I build.
I have dozens of scripts generating reports. I use a technique where I set up a dedicated sheet for script configuration and read values from that sheet during script execution. A simple configuration sheet makes this much more straightforward.
With a globally accessible array, globals, you can then load the "settings" from the configuration sheet (SHT_CONFIG, here) at any entry point to the script.
// Globally accessible variables.
var SHT_CONFIG = 'Config';
var globals = null;

function entryPoint() {
  // Load the settings once per execution. (Note: string keys don't
  // change an Array's length, so test for null rather than length == 0.)
  if (globals == null) {
    globals = LoadGlobals_(SpreadsheetApp.getActive(), SHT_CONFIG);
  }
  // your code goes here
}
The LoadGlobals_ function, below, parses the data in the first three columns of the config sheet in the workbook passed to it. You can even include a fourth column (or more!) explaining what the variables do; extra columns are simply ignored - though hopefully not by your users!
// Generate global variables to be loaded into the globals array.
function LoadGlobals_(wb, configSheet) {
  var configsheet = wb.getSheetByName(configSheet);
  var tGlobals = new Array();
  // Config data is structured as VARIABLE, ISARRAY, VALUE(S)
  // and includes that as the header row.
  var cfgdata = configsheet.getDataRange().getValues();
  for (var i = 1; i < cfgdata.length; i++) {
    switch (cfgdata[i][1]) {
      case 'ARRAY':
        // Treat as an array - JavaScript puts a null value in the
        // array if you split an empty string...
        if (cfgdata[i][2].length == 0) {
          tGlobals[cfgdata[i][0]] = new Array();
        } else {
          tGlobals[cfgdata[i][0]] = cfgdata[i][2].split(',');
        }
        break;
      // Define your own YOURDATATYPE using your customTreatment function
      // (or just perform the treatment here).
      case 'YOURDATATYPE':
        tGlobals[cfgdata[i][0]] = customTreatment(cfgdata[i][2]);
        break;
      default:
        // Treat as generic data (string).
        tGlobals[cfgdata[i][0]] = cfgdata[i][2];
    }
  }
  return tGlobals;
}
As long as you’ve loaded the global values during the script execution, you can refer to any of the values with, for example, globals.toList. For instance:
function getToList() {
  return globals.toList.join(',');
  // or return globals['toList'].join(',');
}
Asking colleagues to enter tracking data so they don’t have to report their own statuses is one thing. Asking them to enter tracking data in a specific format, within a specific column layout, in a way that doesn’t mesh with their existing processes is entirely different. So, I use the following technique, where I rely on column names and not column ordering. The code below lets me do just that by fetching a key-value object for column headings and their position in a worksheet.
// Returns a key-value object mapping column headings to their column number.
// Note that these are retrieved based on the array index, which starts at 0;
// the columns themselves start at 1...
// Pass the header row of data (an array) and an array of variables/column
// names, e.g.: BUG_COL_ARRAY['id'] = 'Id';
function ColNumbers(hArray, colArray) {
  for (var oname in colArray) {
    this[oname] = getColIndex(hArray, colArray[oname]);
  }
}

// -----------------------------------------------------------------------------
function getColIndex(arr, val) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i].toLowerCase() == val.toLowerCase()) {
      return i;
    }
  }
  return -1;
}
With the associative array defined below, I can ask Apps product managers to add (or rename) columns in their feature tracking sheets and then extract features from every Apps product team in one fell swoop (a future post). Each product team can set their columns up in whatever order works best for them.
// Key columns in the app feature sheets.
var COLS_KEYAPPCOLS = new Array();
COLS_KEYAPPCOLS['feature'] = 'Feature Title';
COLS_KEYAPPCOLS['desc'] = 'Description';
COLS_KEYAPPCOLS['visible'] = 'Visible';
COLS_KEYAPPCOLS['corp'] = 'Corp Date';
COLS_KEYAPPCOLS['prod'] = 'Prod Date';
What does this do for me, really? I reuse this code for every project of this sort. Reusing it is then as simple as grabbing the sheet's data and building a ColNumbers object:
var curFeatures = curSheet.getDataRange().getValues();
var curCols = new ColNumbers(curFeatures[0], COLS_KEYAPPCOLS);
I can, from now on, refer to the Description column using something like curCols.desc when referencing any of the products’ data. The Spreadsheets team may list new feature descriptions in the second column, and the Documents team may list new feature descriptions in the fourth column. I no longer worry about that.
As a bonus, I can define the columns and ordering to be used in a report in my config sheet (see above). If I’ve defined reportcols as feature, desc, prod in my config sheet, I can generate a report very simply:
// Iterate through the rows of data, beginning with 1 (0 is the header).
for (var fnum = 1; fnum < curFeatures.length; fnum++) {
  // Iterate through each of the fields defined in reportcols.
  for (var cnum = 0; cnum < globals.reportcols.length; cnum++) {
    outputvalue = curFeatures[fnum][curCols[globals.reportcols[cnum]]];
    // outputvalue is what you want to put in your report.
  }
}
You could do that a lot more simply with hard-coded columns, but then every change is a code change. This way, if we want to use the 'Corp Date' instead, I only need to change the value in the config sheet to feature, desc, corp and I'm done.
Collecting and crunching data in a Google Spreadsheet becomes a lot easier if you use Apps Script. Trust me, it makes your life a lot easier. Try it now by copying this spreadsheet.
Editor’s note: This is a guest post by Ben Dilts, CTO & Co-founder of Lucidchart. -- Steve Bazyl
The release of the Drive SDK, allowing deep integration with Google Drive, shows how serious Google is about making Drive a great platform for third-party developers.
There are a handful of obvious ways to use the SDK, such as allowing your users to open files from Drive in your application, edit them, and save them back. Today, I'd like to quickly cover some less-obvious uses of the Drive API that we’re using at Lucidchart.
Applications have the ability to create new files on Google Drive. This is typically used for content created within the application itself. For example, an online painting application may save a new PNG or JPG to a user's Drive account for later editing.
One feature that Lucidchart has long provided to its users is the ability to download their entire account's content in a ZIP file, in case they (or we!) later mess up that data in some way. These backups can be restored quickly into a new folder by uploading the ZIP file back to our servers. (Note: we’ve never yet had to restore a user account this way, but we provided it because customers said it was important to them.)
The problem with this arrangement is that users have to remember to do regular backups, since there's no way for us to automatically force them to download a backup frequently and put it in a safe place. With Google Drive, we now have access to a reliable, redundant storage mechanism that we can push data to as often as we would like.
Lucidchart now provides automated backups of these ZIP files to Google Drive on a daily or weekly basis, using the API for creating new files on Drive.
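To sketch the mechanics in Apps Script style (an illustration under our own assumptions, not Lucidchart's production code), creating such a backup file boils down to one multipart POST against the Drive v2 upload endpoint:

// Hypothetical sketch: upload a backup ZIP to Drive with metadata and
// content in a single multipart request.
function uploadBackup(zipBlob, accessToken) {
  var boundary = 'backup_boundary';
  var metadata = {title: 'backup.zip', mimeType: 'application/zip'};
  var body = '--' + boundary + '\r\n' +
      'Content-Type: application/json\r\n\r\n' +
      JSON.stringify(metadata) + '\r\n' +
      '--' + boundary + '\r\n' +
      'Content-Type: application/zip\r\n' +
      'Content-Transfer-Encoding: base64\r\n\r\n' +
      Utilities.base64Encode(zipBlob.getBytes()) + '\r\n' +
      '--' + boundary + '--';
  var response = UrlFetchApp.fetch(
      'https://www.googleapis.com/upload/drive/v2/files?uploadType=multipart',
      {
        method: 'post',
        contentType: 'multipart/related; boundary=' + boundary,
        payload: body,
        headers: {Authorization: 'Bearer ' + accessToken}
      });
  return JSON.parse(response.getContentText()).id;  // The new file's ID.
}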
Another use for the files.create call is to publish finished content. Lucidchart, like most applications, stores its editable files in a custom format. When a user completes a diagram or drawing, they often download it as a vector PDF, image, or Microsoft Visio file to share with others.
Lucidchart is now using the create file API to export content in any supported format directly to a user's Google Drive account, making it easy to sync to multiple devices and later share those files.
Google Drive can't automatically index content created by Lucidchart, or any other application that saves data in a custom format, for full-text search. However, applications now have the ability to explicitly provide HTML content to Google Drive that it can then index.
Indexable text provided to the Drive API is always interpreted as HTML, so it is important to escape HTML entities. And if your text is separated into distinct pieces (like the text in each shape in Lucidchart), you can improve full-text phrase searching by dividing your indexable text into one div or paragraph element per piece. Both the files.create and files.update calls provide the ability to set indexable text.
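As an illustration (a sketch in Apps Script style under our own assumptions, not Lucidchart's actual code), the indexable text rides along on the file resource in a files.update call:

// Hypothetical sketch: attach searchable HTML to a custom-format file,
// one div per piece of text to help phrase search.
function setIndexableText(fileId, shapeTexts, accessToken) {
  var html = shapeTexts.map(function(text) {
    // Escape HTML entities, since indexable text is parsed as HTML.
    return '<div>' + text.replace(/&/g, '&amp;')
        .replace(/</g, '&lt;').replace(/>/g, '&gt;') + '</div>';
  }).join('');
  UrlFetchApp.fetch('https://www.googleapis.com/drive/v2/files/' + fileId, {
    method: 'put',  // files.update
    contentType: 'application/json',
    payload: JSON.stringify({indexableText: {text: html}}),
    headers: {Authorization: 'Bearer ' + accessToken}
  });
}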
We hope that this overview helps other developers implement better integrations into the Google Drive environment. Integrating with Drive lets us provide and improve a lot of functionality that users have asked for, and makes accessing and using Lucidchart easier overall. We think this is a great result both for users and web application developers and urge you to check it out.
Apps Script developers have consistently expressed the need to monitor the health of various Apps Script services. Additionally, at every forum, event, hackathon or hangout, we have heard you express a need to know and understand the quota limits in Apps Script.
We heard your message loud and clear, so we started working on a dashboard for Apps Script. Today, we are launching the Google Apps Script Dashboard. This experimental dashboard can be used to monitor the health of 10 major services. It also provides a detailed view into the quota restrictions in Apps Script.
Did you know that consumer accounts (for example, @gmail.com accounts) have a quota of 1 hour of CPU time per day for executing triggers? Imagine the extent of automation that can happen for each user with triggers. And how about 20,000 calls per day to external APIs? That packs in a lot of third-party integration with the likes of Salesforce.com, Flickr, Twitter and other APIs. So, if you are thinking of building extensions to Google Apps for your product, don't forget to leverage the UrlFetch service, which has OAuth built in. Event managers can create 5,000 calendar events per day, and SQL aficionados get 10,000 JDBC calls a day.
Check out the Dashboard for more.
Editor’s note: This is a guest post by Martin Böhringer, Co-Founder and CEO of Hojoki. -- Steve Bazyl
Hojoki integrates productivity cloud apps into one newsfeed and enables sharing and discussions on top of the feed. We've integrated 17 apps now and counting, so it's safe to say that we're API addicts. Now it's time to share what we learned about the Google Apps APIs!
Our initial reason for building Hojoki was the fragmentation we experienced across all of our cloud apps. And all those emails. Still, there was this feeling of "I don't know what's going on" in our distributed teamwork. So we decided to build something like a Google+ where streams get automatically filled by activities in the apps you use.
This leads to a comprehensive stream of everything that’s going on in your team combined with comments and microblogging. You can organize your stream into workspaces, which are basically places for discussions and collaboration with your team.
To build this, we first needed some kind of information on recent events. As we wanted to be able to aggregate similar activities and provide search, as well as split the stream into workspaces, we also had to be able to tie events to unique objects like files, calendar entries and contacts.
Further, it’s crucial to not only know what has changed, but who did it. So providing unique identities is important for building federated feeds.
Google's APIs share some basic architecture and structure, described in the Google Data Protocol. Based on that, application-specific APIs provide access to each application's data. We use the Google Contacts, Google Calendar, and Google Documents List APIs.
The basic call for Google Contacts for example looks like this:
https://www.google.com/m8/feeds/contacts/default/full
This responds with a complete list of your contacts. Once we have this list, all we have to do is ask for the delta against our existing knowledge. For such use cases, Google's APIs support query and sorting parameters, so we can set "orderby" to "lastmodified" and "updated-min" to the timestamp of our last call. This way we keep the traffic low and get quick results by asking only for things we might have missed.
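For example, a delta request for contacts modified since the last poll might look like this (the timestamp is illustrative):

https://www.google.com/m8/feeds/contacts/default/full?orderby=lastmodified&updated-min=2012-06-20T00:00:00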
If you want to develop using those APIs you should definitely have a look at the SDKs for them. We used the Google Docs SDK for an early prototype and loved it. Today, Hojoki uses its own generic connection handler for all our integrated systems so we don’t leverage the SDKs anymore.
If you’re into API development, you’ve probably already realized that our information needs don’t fit into many of the APIs out there. Most of the APIs are object centric. They can tell you what objects are included in a certain folder, but they can’t tell you which object in this folder has been changed recently. They just aren’t built with newsfeeds in mind.
Google Apps APIs support most of our information needs. Complete support of OAuth and very responsive APIs definitely make our lives easier.
However, the APIs are not built with Hojoki-like newsfeeds in mind. For example, ETags may change even if nothing happened to an object because of asynchronous processing on Google’s side (see Google’s comment on this). For us this means that, once we detect an altered ETag, in some cases we still have to check based on our existing data if there really have been relevant activities. Furthermore, we often have trouble with missing actors in our activities. For example, up to now we know when somebody changed a calendar event, but there is no way to find out who this was.
Another issue is the classification of updates. Google’s APIs tell us that something changed with an object. But to build a nice newsfeed you also want to know what exactly has been changed. So you’re looking for a verb like created, updated, shared, commented, moved or deleted. While Hojoki calls itself an aggregator for activities, technically we’re primarily an activity detector.
You can think of Hojoki as a multi-layer platform. First of all, we try to get a complete overview of your metadata in each connected app. In Google Docs, this means files and collections, and we retrieve the URI and name as well as some additional information (not the content itself). This information fills a graph-based data store (we use RDF; read more about it here).
At the moment, we subscribe to events in the integrated apps. When an event is detected, it creates a changeset for the existing data graph. This changeset is an activity for our newsfeed and is related to the object representation, which allows us to provide very flexible aggregation and filtering on the client side. For example, you can filter the stream for a certain collection ("Analytics"), for just the file's history, or for the Hojoki workspace the file is added to ("Hojoki Marketing").
What’s really important in terms of such heavy API processing is to use asynchronous calls. We use the great Open Source project async-http-client for this task.
When I wrote that "we subscribe to events", that was a very nice euphemism for "we're polling every 30s to see if something changed". This is not really optimal, and we'd love to change it. If the Google Apps APIs supported a feed of user events modelled on a common standard like ActivityStrea.ms, combined with reliable ETags and maybe even a push API (e.g. webhooks), it would make life easier for lots of developers syncing their local files with Google and help reduce traffic on both sides.
Editor's note: This post is cross-posted from the Google Developers Blog.
Editor’s Note: This blog post is authored by Dito’s Steve Webster who is a Google Apps Script Top Contributor - Saurabh Gupta
Recently, a company that operates retail stores across a few states reached out to Dito. When their associates conduct business discussions with vendors or customers involving monetary exchanges, the company's ethics disclosure policy requires a log of each event, a supervisor approval, and a committee review.
The customer's existing disclosure review process was cumbersome and time consuming. Employees would add a disclosure review request to a spreadsheet with 29 columns. The new request would then be reviewed by their supervisor. The supervisor and the employee would go back and forth, exchanging emails and making changes in a dense spreadsheet, until an approval was granted. It was clear that the customer needed a workflow solution, and they hired Dito to build one based on Google Apps Script.
To make the process more user friendly and productive, Dito decided to build a user interface to collect ethics disclosure events, make updates, and automate the routing of email notifications. Writing a Google Apps Script to create a user interface (UI) enabled associates to select their supervisor's email address from their contacts and simplified data collection with list boxes. The script sends approval emails with HTML form radio buttons, a text box, approve/decline buttons, and a "Post" command to invoke other workflow scripts. Below are some of the main design points for this approval workflow script.
The Disclosure Review workflow requires (a) Associates to fill out the Ethics Disclosure form. (b) Supervisor to either approve or decline the submission. (c) If supervisor approves, the Ethics Disclosure Committee is notified. (d) If supervisor declines, the associate is notified to make corrections. (e) After improving the submission, the workflow repeats itself.
Dito developed a custom review request form using Google Apps Script's UI services. The form can look up data to populate elements such as a drop-down list box, and it allows real-time email look-ups via Apps Script's Contacts service. First efforts included per-character look-ups in a list box, but since the customer has over 1,000 users, it proved best to have the associate enter the first and/or last name of their supervisor before initiating the look-up (see the code snippet below).
var byName = ContactsApp.getContactsByName(searchKey);
for (var i in byName) {
  var emailStr = byName[i].getPrimaryEmail();
  // If there is no 'primary' email, try the first listed email instead.
  if (emailStr == null) emailStr = byName[i].getEmails()[0].getAddress();
  // If emailStr is still null, try the next email.
  if (emailStr == null) emailStr = byName[i].getEmails()[1].getAddress();
}
Another dynamic field was the "activity type". Depending on the selection, more form fields are displayed. For example, if the activity type is "Meals", a list box is displayed to select lunch or dinner.
When an associate submits a review request using the custom form within the spreadsheet, their supervisor receives an email with easy-to-read, HTML-formatted results. The approval decision, as well as a comment (e.g. a decline reason), is made from within the email. This is more productive and prevents unnecessary back and forth in the spreadsheet.
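A stripped-down sketch of such an email (the handler URL and field names are placeholders, not Dito's actual implementation) shows the idea:

// Hypothetical sketch: an HTML approval email whose form posts the
// decision back to a handler, e.g. a published Apps Script web app.
function sendApprovalEmail(supervisorEmail, requestId, summaryHtml) {
  var html = summaryHtml +
      '<form method="post" action="https://example.com/approvalHandler">' +
      '<input type="hidden" name="requestId" value="' + requestId + '"/>' +
      '<input type="radio" name="decision" value="approve"/> Approve ' +
      '<input type="radio" name="decision" value="decline"/> Decline<br/>' +
      'Comment: <input type="text" name="comment"/><br/>' +
      '<input type="submit" value="Post"/>' +
      '</form>';
  MailApp.sendEmail(supervisorEmail, 'Ethics Disclosure Review', '',
      {htmlBody: html});
}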
If the request is declined by the supervisor, the associate who submitted the review request receives an email and can review the details. The email also contains a “Continue” button which opens the form in a new browser tab. After corrections are submitted, the supervisor receives another email and the workflow repeats itself.
When approved, the Ethics Disclosure Committee is notified by sending a group email within the script.
History records existed in the customer's original spreadsheet, and they wanted to insert these records into the new workflow spreadsheet as a one-time task, so an Apps Script was used to copy the data. Of course, their columns did not match the new spreadsheet. Using a mapping approach and a read-once, write-once technique, the Apps Script quickly made the changes.
function myFunction() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getSheetByName('Sheet 1');
  var rowsWithData = sheet.getRange(2, 1, sheet.getLastRow(),
      sheet.getLastColumn()).getValues();
  var sheet1Data = [];
  var sheet2Data = [];
  for (var i = 0; i < rowsWithData.length; i++) {
    switch (rowsWithData[i][4]) {  // This is the activity type.
      // ...
      case "Gift":
        sheet1Data.push([rowsWithData[i][12], rowsWithData[i][13],
            rowsWithData[i][14]]);
        sheet2Data.push([rowsWithData[i][15]]);
        continue;
      // ...
      default:
        continue;
    }
  }
  sheet.getRange(2, 6, sheet1Data.length, 3).setValues(sheet1Data);
  sheet.getRange(2, 12, sheet2Data.length, 1).setValues(sheet2Data);
}
Google Apps Script is very powerful, and Dito uses it to build interesting solutions for its customers. If you are using Google Apps, be sure to try Google Apps Script. You'll be amazed at what you can build with it.