The Google Apps Marketplace is a storefront for Google Apps customers to discover, purchase, deploy and manage web applications which are integrated with Google Apps. These applications are typically used from desktops and laptops, but many vendors on the Apps Marketplace have also optimized the experience for their users who are on-the-go. There are several different strategies for enabling a mobile workforce, and each requires a different approach to authentication and authorization.
Google has written applications and synchronization clients to help ensure that the core Google Apps data is available to users on their mobile devices, whether they’re on their mobile phones or tablets. By storing contacts, dates and documents from your application in Google Apps using the application APIs, you can leverage these features to provide a mobile view for your users.
Since you’re only accessing the application APIs on your web application’s server, and the user has already linked up their mobile device to their Google account, there are no special techniques for authentication and authorization when using this lightweight approach.
With the latest advances in HTML5 web technologies such as offline and local storage, it’s possible to build mobile interfaces for business apps which are full-featured and accessible to users on many devices. The primary goal in building the mobile web application is to optimize the user experience for different input devices, form factors and limitations in network availability and bandwidth.
Because the application runs in a web browser, most of the changes are in the frontend -- HTML, JavaScript and CSS. User authentication and data authorization continue to use the same OpenID and OAuth technologies as the desktop/laptop version of the application.
Does your application need access to hardware-specific APIs which are not available in a web browser, or do you feel a great user experience can only be achieved using native code? Several Apps Marketplace vendors have built native applications for popular mobile platforms like Android and iOS. Although it takes considerably more effort to build multiple native applications to cover the major platforms, these vendors can also take advantage of the additional distribution channels offered by mobile stores.
Authentication and authorization are often challenging for developers building native mobile applications because they cannot simply ask users for a password if their app supports single-sign on to Google with OpenID. We recently published an article describing a technique using an embedded webview for accomplishing OpenID authentication in mobile apps. This article includes references to sample code for Android and iOS.
Editor’s Note: This post written by Ferris Argyle. Ferris is a Sales Engineer with the Enterprise team at Google, and had written fewer than 200 lines of JavaScript before beginning this application. --Ryan Boyd
I started with Apps Script in the same way many of you probably did: writing extensions to spreadsheets. When it was made available in Sites, I wondered whether it could meet our needs for gathering roadmap input from our sales engineering and enterprise deployment teams.
At Google, teams like Enterprise Sales Engineering and Apps Deployment interact with customers and need to share product roadmap ideas with Product Managers, who use this input to iterate and make sound roadmap decisions. We needed a tool to support this: an application for gathering roadmap input from the enterprise sales engineering and deployment teams, providing a unified way of prioritizing customer requirements and supporting product management roadmap decisions. We also needed a way to share the actual customer use cases from which these requirements originated.
This required bringing together the capabilities of Google Forms, Spreadsheets and Moderator in a single application: form-based user input, dynamically generated structured lists, and ranking.
This sounds like a fairly typical online transaction processing (OLTP) application, and Apps Script provides rich and evolving UI services, including the ability to create grids, event handlers, and now a WYSIWYG GUI Builder; all we needed was a secure, scalable SQL database backend.
One of my geospatial colleagues had done some great work on a demo using a Fusion Tables backend, so I did a little digging and found this example of how to use the APIs in Apps Script (thank you, Fusion Tables Developer Relations).
Full sample code for this app is available and includes a test harness, required global variables, additional CRUD wrappers, and authorization and Fusion REST calls. It has been published to the Script Gallery under the title "Using Fusion Tables with Apps Script."
/**
 * Read records
 * @param {string} tableId The Id of the Fusion Table in which the record will be created
 * @param {string} selectColumn The Fusion Table columns which will be returned by the read
 * @param {string} whereColumn The Fusion Table column which will be searched to determine whether the record already exists
 * @param {string} whereValue The value to search for in the Fusion Table selectColumn; can be '*'
 * @return {Array|string} An array containing the read records if no error; the bubbled return code from the Fusion query API if error
 */
function readRecords_(tableId, selectColumn, whereColumn, whereValue) {
  var query = '';
  var foundRecords = [];
  var returnVal = false;
  var tableList = [];
  var row = [];
  var columns = [];

  if (whereValue == '*') {
    query = 'SELECT ' + selectColumn + ' FROM ' + tableId;
  } else {
    query = 'SELECT ' + selectColumn + ' FROM ' + tableId +
            ' WHERE ' + whereColumn + ' = \'' + whereValue + '\'';
  }
  foundRecords = fusion_('get', query);

  if (typeof foundRecords == 'string' && foundRecords.search('>> Error') > -1) {
    // Bubble up the error string returned by the Fusion query API.
    returnVal = foundRecords;
  } else if (foundRecords.length > 1) {
    // The first row is the header, so use it to define the columns array.
    row = foundRecords[0];
    columns = [];
    for (var k = 0; k < row.length; k++) {
      columns[k] = row[k];
    }
    for (var i = 1; i < foundRecords.length; i++) {
      row = foundRecords[i];
      if (row.length > 0) {
        // Construct an object with the row fields.
        var rowObj = {};
        for (var k = 0; k < row.length; k++) {
          rowObj[columns[k]] = row[k];
        }
        // Start the new array at zero to conform with JavaScript conventions.
        tableList[i - 1] = rowObj;
      }
    }
    returnVal = tableList;
  }
  return returnVal;
}
Now all I needed were CRUD-type (Create, Read, Update, Delete) Apps Script wrappers for the Fusion Tables APIs, and I’d be in business. I started with wrappers specific to my application, and then generalized them to make them more reusable. The example above should give you a sense of how simple they are to implement.
The result is a dynamically scalable base layer for OLTP applications with the added benefit of powerful web-based visualization, particularly for geospatial data, and without the traditional overhead of managing tablespaces.
I’m a Fusion Tables beginner, so I can’t wait to see what you can build with Apps Script and Fusion Tables. You can get started here: Importing Data into Fusion Tables, and Writing a Fusion Tables API Application.
Google Docs supports sharing collections and their contents with others. This allows multiple Google Docs resources to be shared at once, and for additional resources added to the collection later to be automatically shared.
Class.io, an EDU application on the Google Apps Marketplace, uses this technique. When a professor creates a new course, the application automatically creates a Google Docs collection for that course and shares it with all the students. This gives the students and professor a single place to go in Google Docs to access and manage all of their course files.
A collection is a Google Docs resource that contains other resources, typically behaving like a folder on a file system.
A collection resource is created by making an HTTP POST to the feed link with the category element’s term set to http://schemas.google.com/docs/2007#folder, for example:
<?xml version='1.0' encoding='UTF-8'?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <category scheme="http://schemas.google.com/g/2005#kind"
      term="http://schemas.google.com/docs/2007#folder"/>
  <title>Example Collection</title>
</entry>
To achieve the same thing using the Python client library, use the following code:
from gdata.docs.data import Resource

collection = Resource('folder')
collection.title.text = 'Example Collection'

# client is an authorized client
collection = client.create_resource(collection)
The new collection returned has a content element indicating the URL to use to add new resources to the collection. Resources are added by making HTTP POST requests to this URL.
<content src="https://docs.google.com/feeds/default/private/full/folder%3A134acd/contents" type="application/atom+xml;type=feed" />
This process is simplified in the client libraries. For example, in the Python client library, resources can be added to the new collection by passing the collection into the create_resource method for creating resources, or the move_resource method for moving an existing resource into the collection, like so:
# Create a new resource of document type in the collection
new_resource = Resource(type='document', title='New Document')
client.create_resource(new_resource, collection=collection)

# Move an existing resource into the collection
client.move_resource(existing_resource, collection=collection)
Once resources have been added to the collection, the collection can be shared using ACL entries. For example, to add the user user@example.com as a writer to the collection and every resource in the collection, the client creates and adds the ACL entry like so:
from gdata.acl.data import AclScope, AclRole
from gdata.docs.data import AclEntry

acl = AclEntry(
    scope=AclScope(value='user@example.com', type='user'),
    role=AclRole(value='writer')
)

client.add_acl_entry(collection, acl)
The collection and its contents are now shared, and this can be verified in the Google Docs user interface:
Note: if the application is adding more than one ACL entry, we recommend using batching to combine multiple ACL entries into a single request. For more information on this best practice, see the latest blog post on the topic.
The examples shown here are using the raw protocol or the Python client library. The Java client library also supports managing and sharing collections.
For more information on how to use collections, see the Google Documents List API documentation. You can also find assistance in the Google Documents List API forum.
Since March of this year, Google has supported OAuth 2.0 for many APIs, including Google Data APIs such as Google Calendar, Google Contacts and Google Documents List. Google's implementation of OAuth 2.0 introduces many advantages compared to OAuth 1.0 such as simplicity for developers and a more polished user experience.
We’ve just added support for this authorization mechanism to the gdata-python-client library -- let’s take a look at how it works by retrieving an access token for the Google Calendar and Google Documents List APIs and listing protected data.
First, you will need to retrieve or sync the project from the repository using Mercurial:
hg clone https://code.google.com/p/gdata-python-client/
For more information about installing this library, please refer to the Getting Started With the Google Data Python Library article.
Now that the client library is installed, you can go to your APIs Console to either create a new project, or use information about an existing one from the API Access pane:
Your application will require the user to grant permission for it to access protected APIs on their behalf. It must redirect the user over to Google's authorization server and specify the scopes of the APIs it is requesting permission to access.
The available Google Data API scopes are listed in the Google Data FAQ.
Here's how your application can generate the appropriate URL and redirect the user:
import gdata.gauth

# The client id and secret can be found on your APIs Console.
CLIENT_ID = ''
CLIENT_SECRET = ''

# Authorization can be requested for multiple APIs at once by specifying
# multiple scopes separated by spaces.
SCOPES = ['https://docs.google.com/feeds/',
          'https://www.google.com/calendar/feeds/']
USER_AGENT = ''

# Save the token for later use.
token = gdata.gauth.OAuth2Token(
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
    scope=' '.join(SCOPES),
    user_agent=USER_AGENT)

# The "redirect_url" parameter needs to match the one you entered in the
# APIs Console and points to your callback handler.
self.redirect(
    token.generate_authorize_url(
        redirect_url='http://www.example.com/oauth2callback'))
If all the parameters match what has been provided by the API Console, the user will be shown this dialog:
When an action is taken (e.g. allowing or declining the access), Google’s authorization server redirects the user to the specified redirect URL and includes an authorization code as a query parameter. Your application then needs to call Google’s token endpoint to exchange this authorization code for an access token.
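Outside of any client library, handling the callback amounts to reading the code (or error) query parameter from the redirect URI. Here is a minimal, hedged sketch using only the standard library; the callback URL below is a placeholder, not a real Google response:

```python
from urllib.parse import urlparse, parse_qs

def extract_auth_code(redirect_uri):
    """Return the OAuth 2.0 authorization code, or None if the user declined."""
    query = parse_qs(urlparse(redirect_uri).query)
    if 'error' in query:
        return None  # the user declined the authorization request
    codes = query.get('code')
    return codes[0] if codes else None

# Example callback URLs of the shape the authorization server issues:
print(extract_auth_code('http://www.example.com/oauth2callback?code=4/abc123'))
# -> 4/abc123
print(extract_auth_code('http://www.example.com/oauth2callback?error=access_denied'))
# -> None
```

The library call in the next snippet does this parsing (and the token exchange) for you; the sketch only illustrates what travels in the redirect.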
import atom.http_core

url = atom.http_core.Uri.parse_uri(self.request.uri)
if 'error' in url.query:
    # The user declined the authorization request.
    # The application should handle this error appropriately.
    pass
else:
    # This is the token instantiated in the first section.
    token.get_access_token(url.query)
The redirect handler retrieves the authorization code that has been returned by Google’s authorization server and exchanges it for a short-lived access token and a long-lived refresh token that can be used to retrieve a new access token. Both access and refresh tokens are to be kept private to the application server and should never be revealed to other client applications or stored as a cookie.
To store the token object in a secure datastore or keystore, the gdata.gauth.token_to_blob() function can be used to serialize the token into a string. The gdata.gauth.token_from_blob() function does the opposite, instantiating a new token object from a string.
Now that an access token has been retrieved, it can be used to authorize calls to the protected APIs specified in the scope parameter.
import gdata.calendar.client
import gdata.docs.client

# Access the Google Calendar API.
calendar_client = gdata.calendar.client.CalendarClient(source=USER_AGENT)
# This is the token instantiated in the first section.
calendar_client = token.authorize(calendar_client)
calendars_feed = calendar_client.GetCalendarsFeed()
for entry in calendars_feed.entry:
    print entry.title.text

# Access the Google Documents List API.
docs_client = gdata.docs.client.DocsClient(source=USER_AGENT)
# This is the token instantiated in the first section.
docs_client = token.authorize(docs_client)
docs_feed = docs_client.GetDocumentListFeed()
for entry in docs_feed.entry:
    print entry.title.text
For more information about OAuth 2.0, please have a look at the developer’s guide and let us know if you have any questions by posting them in the support forums for the APIs you’re accessing.
Updated 9/30/2011 to fix a small typo in the code
There are a number of ways to add resources to your Google Documents List using the API. Most commonly, clients need to upload an existing resource, rather than create a new, empty one. Legacy clients may be doing this in an inefficient way. In this post, we’ll walk through why using resumable uploads makes your client more efficient.
The resumable upload process allows your client to send small segments of an upload over time, and confirm that each segment arrived intact. This has a number of advantages.
Since only one small segment of data is sent to the API at a time, clients can store less data in memory as they send data to the API. For example, consider a client uploading a PDF via a regular, non-resumable upload in a single request. The client might follow these steps:
But that 100,000 bytes isn’t a customizable value in most client libraries. In some environments, with limited memory, applications need to choose a custom chunk size that is either smaller or larger.
The resumable upload mechanism allows for a custom chunk size. That means that if your application only has 500KB of memory available, you can safely choose a chunk size of 256KB.
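To make the mechanics concrete, here is a hedged, transport-free sketch (no actual HTTP; function and variable names are illustrative, not part of any client library) of how a client slices a file into fixed-size chunks and labels each with the Content-Range header that resumable uploads use:

```python
def make_chunks(data, chunk_size):
    """Yield (chunk_bytes, content_range_header) pairs for a resumable-style upload."""
    total = len(data)
    for start in range(0, total, chunk_size):
        chunk = data[start:start + chunk_size]
        end = start + len(chunk) - 1
        # Each chunk is sent in its own request with a header like
        # "Content-Range: bytes 0-255/1000".
        yield chunk, 'bytes %d-%d/%d' % (start, end, total)

data = b'x' * 1000                     # stand-in for the file contents
chunks = list(make_chunks(data, 256))  # e.g. a 256-byte chunk size
print(len(chunks))                     # 4 chunks: 256 + 256 + 256 + 232 bytes
print(chunks[-1][1])                   # bytes 768-999/1000
```

If a chunk fails, the client re-sends only from the first byte the server has not confirmed, which is exactly the resume behavior described next.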
In the previous example, if any of the bytes fail to transmit, this non-resumable upload fails entirely. This often happens in mobile environments with unreliable connections. Uploading 99% of a file, failing, and restarting the entire upload creates a bad user experience. A better user experience is to resume and upload only the remaining 1%.
Traditional non-resumable uploads via HTTP have size limits depending on both the client and server systems. These limits are not applicable to resumable uploads with reasonable chunk sizes, as individual HTTP requests are sent for each chunk of a file. Since the Documents List API now supports file sizes up to 10GB, this is very important.
The Java, Python, Objective-C, and .NET Google Data API client libraries all include a mechanism by which you can initiate a resumable upload session. Examples of uploading a document with resumable upload using the client libraries are detailed in the documentation. Additionally, the new Documents List API Python client library now uses only the resumable upload mechanism. To use that version, make sure to follow these directions.
Editor’s note: This is a guest post by Cameron Henneke. Cameron is the founder and principal engineer of GQueues, a task management app on the Google Apps Marketplace. Cameron tells the story of his application and provides some tips for developers considering integrating with Google Apps and launching on the Marketplace -- Ryan Boyd
Google recently announced that over 4 million businesses now run on Google Apps, continuing its growth as enterprise software focused on collaboration. This is of course great news for Google Apps developers, since it means there are 4 million potential customers on the Google Apps Marketplace looking for complementary tools to enhance their productivity. As you know, listing an app requires just a few quick steps, and the Marketplace targets a growing audience of customers ready to purchase additional solutions.
So what kind of success might you see on the Marketplace, and how can you maximize revenue? As the founder of GQueues, an online task manager, I have listed the app on the Marketplace since its launch in March 2010. Over the past year and a half, I have found the Marketplace to be my most successful channel, and I have discovered a few tips along the way that proved key to this success.
Though this seems obvious, the first point is critical: make sure your app solves a real problem. This means you’ve identified actual people and businesses that have this problem and are actively looking for a solution. Perhaps they have already tried other tools or cobbled something together on their own. For example, I’ve verified that Google Apps users are looking for an intuitive way to organize their work and manage tasks with others. GQueues fills this need as a full-featured task manager that allows users to assign tasks, share lists, set reminders, create tasks from email and tag work for easy filtering. Google Apps users come to the Marketplace with a variety of needs; make sure your app addresses at least one of them.
As you solve a customer’s problem, make sure you integrate with their existing tools. For Marketplace customers, this means adding as many integration points with Google products as possible. This is important for several reasons.
First, it’s great for the user and facilitates adoption. If your service works seamlessly with other products they are already familiar with, they don’t have to spend time learning something new. For instance, GQueues has two-way syncing with Google Calendar. Since users already know how to drag events to new dates in Calendar, dragging GQueues tasks around the calendar is quite intuitive.
Secondly, more integration directly helps your app’s listing in the Marketplace. Each listing has a set of icons representing product integrations. GQueues integrates with Calendar, Mail, Contacts and Google Talk, which indicates to a customer that using this tool will allow their users to work more efficiently. Plus, customers can search based on integration points, so the more you have, the broader your presence in the Marketplace.
Lastly, integrating with existing products speeds development. Utilizing Google APIs allows you to innovate faster and respond to your customers’ growing needs. GQueues uses the XMPP service of Google App Engine, which eliminated the need to build a separate chat backend and makes it easy for users to add tasks by sending a chat message from anywhere.
Once you’ve listed your deeply integrated app that solves a real problem on the Marketplace, be sure to engage with your customers. The Marketplace allows users to rate your app and leave verified reviews, which not only impact the app’s listing position, but greatly influence potential customers’ willingness to test it out. I manage the GQueues Marketplace listing with a two-fold approach:
These actions are quite simple, but they immensely affect your app’s presence in the Marketplace.
Though each app is unique, I’ve found that following the tips mentioned above has helped the Google Apps Marketplace become GQueues’ top revenue channel.
GQueues is based on a freemium model, and the average conversion rate for a typical freemium product is 3-5%. Looking at all the regular channels, GQueues has a 6% average conversion rate from free users to paid subscribers - slightly higher than expected. However, the GQueues users from the Marketplace convert at an astonishing rate of 30%.
The Marketplace claims to target an audience ready to buy, and the data really backs this up.
Not only does the Marketplace have a substantially higher conversion rate, but it also drives a considerable amount of traffic. Looking at the data over the same period, 27% of all new GQueues users were acquired via the Marketplace.
Combining the acquisition rate with the conversion rate shows that the Marketplace is actually responsible for 63% of all paid GQueues users.
As Google Apps continues to grow worldwide, the need for deeply integrated, complementary business tools will also expand. Based on my experience with GQueues, I strongly recommend the Google Apps Marketplace as a rewarding channel for apps that integrate with Google Apps.
We are announcing the deprecation of SWF export functionality for presentations from the Google Documents List API. We are taking this action due to the limited demand for this feature, and in order to focus engineering efforts on other aspects of the API.
Clients currently making the following request to the API are affected by this change.
https://docs.google.com/feeds/download/presentations/Export?docID=1234&exportFormat=swf
We recommend clients currently using SWF exports switch to PDF exports, using the appropriate exportFormat value.
https://docs.google.com/feeds/download/presentations/Export?docID=1234&exportFormat=pdf
We are disabling SWF exports in the coming weeks. Clients attempting to export presentations as SWF after the exports are disabled will receive an HTTP 400 response.
For more information on exporting presentations, see the Google Documents List API documentation. If you have any questions, feel free to reach out in the forums.
We believe in the vision of “nothing but the web” -- where business applications are delivered over the Internet and accessed in a web browser. Why? We believe the web brings substantial benefits for companies that no other IT model can -- in simplicity, cost, security, flexibility and pace of innovation.
Of course, we recognize that some companies have substantial investments in legacy technology -- desktop applications or client/server applications which they’re using every day. We’d like to understand what it will take to move these apps to the web.
Do you build or maintain business applications -- either internally for your company or for sale to other companies? We’d love to hear more about your apps, tools and what types of challenges you face. Please fill out this short survey and let us know whether you’d be interested in a potential HTML5 training class.
We’d love to hear what apps you’re still using in your business which haven’t yet moved to the web and why. Please fill out this short survey.
Editor’s Note: This is a guest post by Saqib Ali. Saqib is a Google Apps evangelist at Seagate. He has used Apps Script to create a number of applications that leverage Google Apps. His other interests include the curation of Lewis Carroll’s letters and the translation of Urdu poetry into English. -- Ryan Boyd
Idea Banks are repositories for innovative ideas: Seagate employees can submit ideas, and others can vote on them. Before Google Apps Script, we had a custom-built Idea Bank on the LAMP stack. With the release of the UI Services in Google Apps Script, we wanted to port that Idea Bank to Google Apps so we could easily manage idea submissions in a Google Spreadsheet.
A typical Idea Bank consists of three basic functions:
A traditional application would probably use a Relational Database like MySQL to store the ideas. However we found that using Google Spreadsheet to store the ideas provides two inherent benefits:
The number of votes and the voters are tracked using cells in the spreadsheet. For voters, we used Session.getActiveUser().getEmail() to get the email address of the logged-in user and stored it in the spreadsheet.
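The bookkeeping itself is simple: the voters cell holds a delimited list of email addresses, and a user may vote only if their address is not already present. A language-agnostic sketch of that logic (shown here in Python for brevity; the app does the equivalent in Apps Script against the spreadsheet cell, and the addresses are made up):

```python
def cast_vote(voters_cell, email):
    """Append email to the comma-separated voters cell unless it already voted."""
    voters = [v for v in voters_cell.split(',') if v]
    if email in voters:
        return voters_cell, False  # already voted; leave the cell unchanged
    voters.append(email)
    return ','.join(voters), True

cell, ok = cast_vote('a@seagate.com', 'b@seagate.com')
print(cell, ok)   # a@seagate.com,b@seagate.com True
cell, ok = cast_vote(cell, 'b@seagate.com')
print(ok)         # False -- double votes are rejected
```

This mirrors the `indexOf(Session.getActiveUser().getEmail())` check the Apps Script code below uses to disable the vote button for users who have already voted.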
Since the Idea Bank is embedded in a Google Site, we were able to simply use a Google Sites page as a placeholder for each idea’s description and comments. Once an idea is submitted, a Google Sites page is created for it from a predefined template using the createPageFromTemplate() function. The submitter can then add a detailed description to the page, and others can add comments to it.
All the data is stored in a Google Spreadsheet, which makes it easy for the Idea Bank manager to manage (delete, remove, modify) the ideas using the Spreadsheets Editor.
Code snippet for adding new ideas to the spreadsheet:
var ss = SpreadsheetApp.openById("");  // Spreadsheet id goes here
SpreadsheetApp.setActiveSpreadsheet(ss);
var ideas_sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Ideas");
var last_row = ideas_sheet.getLastRow();
var next_empty_row = last_row + 1;

ideas_sheet.setActiveCell("A" + next_empty_row);
ideas_sheet.getActiveCell().setValue(e.parameter.ideadescription);
ideas_sheet.setActiveCell("B" + next_empty_row);
ideas_sheet.getActiveCell().setValue(Session.getActiveUser().getUserLoginId());
ideas_sheet.setActiveCell("E" + next_empty_row);
ideas_sheet.getActiveCell().setValue(Session.getActiveUser().getEmail());
Code snippet to read the ideas from the Spreadsheet and display them:
var ss = SpreadsheetApp.openById("");  // Spreadsheet id goes here
SpreadsheetApp.setActiveSpreadsheet(ss);
var ideas_sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Ideas");
var last_row = ideas_sheet.getLastRow();
var last_column = ideas_sheet.getLastColumn();
var sheet_array = ideas_sheet.getRange(2, 1, last_row, last_column).getValues();

var submitIdeaButton = app.createButton("I have another idea");
var submitIdeaButtonHandler = app.createServerClickHandler('showSubmitIdeaDialog');
submitIdeaButton.addClickHandler(submitIdeaButtonHandler);
applyCSS(submitIdeaButton, _submitideabutton);

var ideaContents = app.createGrid().resize(last_row, 3);
ideaContents.setId("ideacontents");
ideaContents.setWidth("100%");
ideaContents.setCellSpacing(0);
scrollPanel.add(ideaContents);
app.add(scrollPanel);

for (var row_i = 0; row_i < last_row - 1; row_i++) {
  var ideaDescriptionLabel = app.createLabel(sheet_array[row_i][0])
      .setStyleAttribute("font", "16px Sans-serif").setWordWrap(true);
  var submitter = sheet_array[row_i][1].split("@");
  var ideaAuthor = app.createLabel(submitter[0])
      .setStyleAttribute("font", "10px Courier New, Courier, monospace")
      .setStyleAttribute("color", "#CCC");
  ideaContents.setWidget(row_i, 0,
      app.createVerticalPanel().add(ideaDescriptionLabel).add(ideaAuthor))
      .setStyleAttribute("overflow", "visible")
      .setStyleAttribute("white-space", "normal !important");

  // Button to display the voters
  var numberOfVotesForm = app.createFormPanel().setId('numofvotesform');
  var numberOfVotesFormContent = app.createVerticalPanel();
  numberOfVotesForm.add(numberOfVotesFormContent);
  numberOfVotesFormContent.add(app.createTextBox().setName('ideaID')
      .setText(row_i + "").setVisible(false).setSize("0", "0"));
  numberOfVotesFormContent.add(app.createTextBox().setName('voters')
      .setText(sheet_array[row_i][4]).setVisible(false).setSize("0", "0"));
  var numberOfVotesButton = app.createButton(countVotes(sheet_array[row_i][4]) + " vote(s)")
      .setId("numberOfVotesButton" + row_i);
  applyCSS(numberOfVotesButton, _numofvotesbutton);
  var numberOfVotesButtonHandler = app.createServerClickHandler('showVotersDialog');
  numberOfVotesButtonHandler.addCallbackElement(numberOfVotesFormContent);
  numberOfVotesButton.addClickHandler(numberOfVotesButtonHandler);
  numberOfVotesFormContent.add(numberOfVotesButton);

  // Button to cast a vote
  var voteForm = app.createFormPanel().setId('voteform');
  var voteFormContent = app.createVerticalPanel();
  voteForm.add(voteFormContent);
  voteFormContent.add(app.createHidden('ideaID', row_i + "").setSize("0", "0"));

  // Identify the function casteVote as the server click handler
  var voteButton = app.createButton('I like this!').setId("voteButton" + row_i);
  var voteButtonHandler = app.createServerClickHandler('casteVote');
  voteButtonHandler.addCallbackElement(voteFormContent);
  voteButton.addClickHandler(voteButtonHandler);
  if (sheet_array[row_i][4].indexOf(Session.getActiveUser().getEmail()) > -1)
    voteFormContent.add(voteButton.setText("Thanks!").setEnabled(false));
  else
    voteFormContent.add(voteButton);

  ideaContents.setWidget(row_i, 1, numberOfVotesForm);
  ideaContents.setWidget(row_i, 2, voteForm);
}

app.add(submitIdeaButton);
The UI Service was used to build the frontend for the app. The UI Service is based on GWT, so it is a good idea to have a basic understanding of the GWT framework. The following were used in building this app:
A real live working example is available here. Full source code is available here.
So why did I choose Google Apps Script? For one, it comes at no extra cost with your Google Account; it runs in the cloud (i.e. no servers required); it integrates well with Google Sites and Spreadsheets; and, most importantly, it uses GWT UI widgets. Google Apps Script’s UI Services and the ability to easily integrate with any REST interface made it an easy choice.
Gmail servers support the standard IMAP and POP protocols for email retrieval, but sometimes you only need to know whether there are any new messages in your inbox. Using either of the two protocols mentioned above may seem like overkill in this scenario, which is why Gmail also exposes a read-only feed called the Gmail Inbox Feed that you can subscribe to for the list of unread messages in your inbox.
The Gmail Inbox Feed is easily accessible by pointing your browser to https://mail.google.com/mail/feed/atom and authenticating with your username and password if you are not already logged in.
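The feed itself is a small Atom document: a fullcount element carries the number of unread messages, and each entry describes one unread conversation. A hedged sketch of parsing it with the standard library (the feed below is a truncated, hand-made sample, not a live Gmail response, and the namespace is the pre-1.0 Atom namespace the feed uses):

```python
import xml.etree.ElementTree as ET

ATOM = '{http://purl.org/atom/ns#}'  # Atom 0.3 namespace used by the inbox feed

sample_feed = '''<?xml version="1.0" encoding="UTF-8"?>
<feed version="0.3" xmlns="http://purl.org/atom/ns#">
  <title>Gmail - Inbox for user@example.com</title>
  <fullcount>2</fullcount>
  <entry><title>Quarterly report</title></entry>
  <entry><title>Lunch?</title></entry>
</feed>'''

root = ET.fromstring(sample_feed)
unread = int(root.find(ATOM + 'fullcount').text)
titles = [e.find(ATOM + 'title').text for e in root.findall(ATOM + 'entry')]
print(unread)   # 2
print(titles)   # ['Quarterly report', 'Lunch?']
```

In a real client you would fetch the authenticated feed body over HTTPS first; the parsing step is the same.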
Using basic authentication to access the inbox feed doesn’t provide a very good user experience if we want delegated access. In that case, we should instead rely on the OAuth authorization standard, which is fully supported by the Gmail Inbox Feed.
OAuth supports two different flows. With 3-legged OAuth, a user can grant a third-party application access to a resource they own without sharing their credentials. The 2-legged flow, instead, resembles a client-server scenario in which the application is granted domain-wide access to a resource.
Let’s write a .NET application that uses 2-legged OAuth to access the Gmail Inbox Feed for a user in the domain and list unread emails. This authorization mechanism also suits Google Apps Marketplace developers who want to add inbox notifications to their applications.
There is no dedicated client library for this task and the Inbox Feed is not based on the Google Data API protocol but we’ll still use the .NET library for Google Data APIs for its OAuth implementation.
First, create a new C# project and add a reference to the Google.GData.Client.dll released with the client library. Then add the following using directives to your code:
using System;
using System.Linq;
using System.Net;
using System.Net.Mail;
using System.Xml;
using System.Xml.Linq;
using Google.GData.Client;
The next step is to use 2-legged OAuth to send an authorized GET request to the feed URL. In order to do this, we need our domain’s consumer key and secret and the username of the user account we want to access the inbox feed for.
string CONSUMER_KEY = "mydomain.com";
string CONSUMER_SECRET = "my_consumer_secret";
string TARGET_USER = "test_user";

OAuth2LeggedAuthenticator auth = new OAuth2LeggedAuthenticator(
    "GmailFeedReader", CONSUMER_KEY, CONSUMER_SECRET, TARGET_USER, CONSUMER_KEY);
HttpWebRequest request = auth.CreateHttpWebRequest(
    "GET", new Uri("https://mail.google.com/mail/feed/atom/"));
HttpWebResponse response = request.GetResponse() as HttpWebResponse;
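Under the hood, 2-legged OAuth signs every request with HMAC-SHA1 over a normalized "signature base string" built from the HTTP method, the URL and the request parameters. The sketch below shows the core of that computation in Python; it is a simplified illustration of OAuth 1.0 signing, not the client library's actual implementation:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def oauth_signature(method, url, params, consumer_secret, token_secret=""):
    """Simplified OAuth 1.0 HMAC-SHA1 signature (illustration only)."""
    # Percent-encode per RFC 3986; OAuth requires "~" to stay unescaped.
    enc = lambda s: quote(str(s), safe="~")
    # Normalize parameters: sorted key=value pairs joined with "&".
    normalized = "&".join(
        "%s=%s" % (enc(k), enc(v)) for k, v in sorted(params.items()))
    base_string = "&".join([method.upper(), enc(url), enc(normalized)])
    # Signing key is consumer secret + "&" + token secret (empty for 2LO).
    key = ("%s&%s" % (enc(consumer_secret), enc(token_secret))).encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

In a real request the library also adds `oauth_nonce`, `oauth_timestamp` and the other protocol parameters before signing; the authenticator in the C# snippet above takes care of all of this for you.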
The response is going to be a standard Atom 0.3 feed, i.e. an XML document that we can load into an XDocument using the standard XmlReader class:
XmlReader reader = XmlReader.Create(response.GetResponseStream());
XDocument xdoc = XDocument.Load(reader);
XNamespace xmlns = "http://purl.org/atom/ns#";
All the parsing can be done with a single LINQ to XML instruction, which iterates the entries and instantiates a new MailMessage object for each email, setting its Subject, Body and From properties with the corresponding values in the feed:
var messages = from entry in xdoc.Descendants(xmlns + "entry")
               from author in entry.Descendants(xmlns + "author")
               select new MailMessage()
               {
                   Subject = entry.Element(xmlns + "title").Value,
                   Body = entry.Element(xmlns + "summary").Value,
                   From = new MailAddress(
                       author.Element(xmlns + "email").Value,
                       author.Element(xmlns + "name").Value)
               };
At this point, messages will contain a collection of MailMessage instances that we can process or simply dump to the console as in the following snippet:
Console.WriteLine("Number of messages: " + messages.Count());
foreach (MailMessage entry in messages)
{
    Console.WriteLine();
    Console.WriteLine("Subject: " + entry.Subject);
    Console.WriteLine("Summary: " + entry.Body);
    Console.WriteLine("Author: " + entry.From);
}
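For comparison, the same parsing can be sketched in Python with the standard library. The sample below is a small, hypothetical excerpt of the feed; the real document is fetched from the URL above and has the same shape:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://purl.org/atom/ns#}"

# A trimmed, hypothetical sample of the Atom 0.3 feed Gmail returns.
SAMPLE_FEED = """<?xml version="1.0"?>
<feed xmlns="http://purl.org/atom/ns#">
  <title>Gmail - Inbox for test_user@mydomain.com</title>
  <fullcount>1</fullcount>
  <entry>
    <title>Quarterly report</title>
    <summary>Please review the attached numbers...</summary>
    <author><name>Jane Doe</name><email>jane@example.com</email></author>
  </entry>
</feed>"""


def parse_inbox_feed(xml_text):
    """Return a list of dicts mirroring the MailMessage fields above."""
    root = ET.fromstring(xml_text)
    messages = []
    for entry in root.findall(ATOM_NS + "entry"):
        author = entry.find(ATOM_NS + "author")
        messages.append({
            "subject": entry.findtext(ATOM_NS + "title"),
            "summary": entry.findtext(ATOM_NS + "summary"),
            "from": (author.findtext(ATOM_NS + "name"),
                     author.findtext(ATOM_NS + "email")),
        })
    return messages
```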
If you have any questions about how to use the Google Data APIs .NET Client Library to access the Gmail Inbox Feed, please post them in the client library discussion group.
ACL (Access Control List) entries control who can access Google Docs resources. This allows more specific control over resource privacy or permissions.
Many types of applications need to grant document access for several users at once. As an example: when a new user is added to a project in the Manymoon project management application, every user on the project needs to be granted access to all attached Google docs. If there are 10 users on the project and 10 shared documents, this means the app would typically need to perform 100 HTTP requests -- a lot of overhead. With batching of ACL requests, the application can reduce the number of requests to one per document, resulting in a 10x savings.
A typical ACL entry for a single user is created by making an HTTP POST to the ACL link provided with each resource entry. The POST body looks something like this:
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:gAcl='http://schemas.google.com/acl/2007'>
  <category scheme='http://schemas.google.com/g/2005#kind'
            term='http://schemas.google.com/acl/2007#accessRule'/>
  <gAcl:role value='writer'/>
  <gAcl:scope type='user' value='new_writer@example.com'/>
</entry>
The same entry can be created with the Python client library:

from gdata.acl.data import AclScope, AclRole
from gdata.docs.data import AclEntry

acl = AclEntry(
    scope=AclScope(value='user@example.com', type='user'),
    role=AclRole(value='writer')
)
Instead of submitting the requests separately, multiple ACL operations for a resource can be combined into a single batch request. This is done by POSTing a feed of ACL entries. Each ACL entry in the feed must have a special batch:operation element, describing the type of operation to perform on the ACL entry. Valid operations are query, insert, update, and delete.
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:gAcl='http://schemas.google.com/acl/2007'
      xmlns:batch='http://schemas.google.com/gdata/batch'>
  <category scheme='http://schemas.google.com/g/2005#kind'
            term='http://schemas.google.com/acl/2007#accessRule'/>
  <entry>
    <category scheme='http://schemas.google.com/g/2005#kind'
              term='http://schemas.google.com/acl/2007#accessRule'/>
    <gAcl:role value='reader'/>
    <gAcl:scope type='domain' value='example.com'/>
    <batch:operation type='insert'/>
  </entry>
  <entry>
    <category scheme='http://schemas.google.com/g/2005#kind'
              term='http://schemas.google.com/acl/2007#accessRule'/>
    <id>https://docs.google.com/feeds/default/private/full/document%3Adocument_id/acl/user%3Aold_writer%40example.com</id>
    <gAcl:role value='writer'/>
    <gAcl:scope type='user' value='new_writer@example.com'/>
    <batch:operation type='update'/>
  </entry>
</feed>
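If you are not using a client library, a batch feed like the one above can be assembled with Python's standard library. This is a rough sketch: it emits only the role, scope and batch:operation elements, omitting the category kind elements and entry ids for brevity:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
GACL = "http://schemas.google.com/acl/2007"
BATCH = "http://schemas.google.com/gdata/batch"


def make_batch_feed(operations):
    """operations: list of (op_type, role, scope_type, scope_value) tuples."""
    ET.register_namespace("", ATOM)
    ET.register_namespace("gAcl", GACL)
    ET.register_namespace("batch", BATCH)
    feed = ET.Element("{%s}feed" % ATOM)
    for op_type, role, scope_type, scope_value in operations:
        entry = ET.SubElement(feed, "{%s}entry" % ATOM)
        ET.SubElement(entry, "{%s}role" % GACL, {"value": role})
        ET.SubElement(entry, "{%s}scope" % GACL,
                      {"type": scope_type, "value": scope_value})
        # The batch:operation element tells the server what to do
        # with this entry: query, insert, update or delete.
        ET.SubElement(entry, "{%s}operation" % BATCH, {"type": op_type})
    return ET.tostring(feed, encoding="unicode")
```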
The following code represents the same operation in the Python client library:
from gdata.data import BatchOperation
from gdata.acl.data import AclScope, AclRole
from gdata.docs.data import AclEntry

acl1 = AclEntry(
    scope=AclScope(value='example.com', type='domain'),
    role=AclRole(value='reader'),
    batch_operation=BatchOperation(type='insert')
)

acl2 = client.get_acl_entry_by_self_link(
    ('https://docs.google.com/feeds/default/private/full/'
     'document%3Adocument_id/acl/user%3Aold_writer%40example.com'))
acl2.scope = AclScope(value='new_writer@example.com', type='user')
acl2.role = AclRole(value='writer')
acl2.batch_operation = BatchOperation(type='update')

entries = [acl1, acl2]
The feed of these entries can now be submitted together to apply to a resource:
results = client.batch_process_acl_entries(resource, entries)
The return value is an AclFeed with one AclEntry per operation; the status of each can be checked individually:
for result in results.entry:
    print result.title.text, result.batch_status.code
The examples shown here are using the raw protocol or the Python client library. The Java client library also supports batch operations on ACL entries.
For more information on how to use batch operations when managing ACLs, see the Google Documents List API documentation, and the Google Data APIs batch protocol reference guide. You can also find assistance in the Google Documents List API forum.
Like any Google service, we’re always working to refine the Google Apps Marketplace for our vendors and their customers. Normally we make small, incremental improvements and let the better experience speak for itself, but this week we think several of the new features are noteworthy enough to point out.
A frequently requested feature has been improved analytics. If you have Google Analytics configured for your listing, your analytics profile will now receive the search terms and category selected by your customers.
When a customer searches on the Marketplace, the search results will all have two parameters attached: query and category. You’ll want to add these terms to your Google Analytics Website Profile. The query parameter will include all of the search terms the user entered into the search box; it may be blank if the customer found your application through browsing. Similarly, the category parameter will be blank unless the customer narrowed their search or browse by picking the category you chose in your application listing, e.g. Accounting & Finance. Now you’ll have much better data about how a customer reached your application, whether through browsing, searching, or a combination.
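For example, given a hypothetical referral URL, the two parameters can be pulled out with the standard library (the URL below is illustrative; only the query and category parameter names come from the description above):

```python
from urllib.parse import urlparse, parse_qs


def marketplace_search_terms(url):
    """Extract the Marketplace 'query' and 'category' parameters, if present."""
    params = parse_qs(urlparse(url).query)
    return (params.get("query", [""])[0], params.get("category", [""])[0])
```

A browsing referral simply yields two empty strings, matching the "could be blank" cases described above.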
Another developer-focused enhancement we’ve made is adding a count of installs and uninstalls on each application’s listing when signed into the vendor account:
"Net install count" represents the number of current installs -- any uninstalls have already been deducted. If you add "Net install count" to "Uninstall count" you will have the total number of installations for the app since it launched. You are also able to retrieve this information from the Licensing API, but it is nice to have it easily accessible on your listing pages.
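The arithmetic is simple enough to write down (a trivial sketch; the field names are ours, not part of any Marketplace API):

```python
def total_installs(net_install_count, uninstall_count):
    # "Net install count" already has uninstalls deducted, so the total
    # number of installations since launch is the sum of the two.
    return net_install_count + uninstall_count
```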
On the Vendor Profile page you’ll also see that we’ve added the ability to hide unused applications (and show them again) so that you can better manage your applications and remove clutter from your dashboard. You still cannot delete applications, but you can sweep them under the rug! Note that hiding and unhiding are just for managing your list as a vendor -- they do not change the Publish/Unpublish settings for each app. You can hide a published app as easily as an unpublished one.
Since announcing the Staff Picks program in May, we’ve featured a number of especially well-integrated and innovative applications on the @GoogleAtWork twitter stream and here on the Google Apps Developer Blog. Now we also feature Staff Picks on the front page of the Marketplace.
You can find four Staff Picks on the Marketplace landing page, chosen from the pool of apps selected as staff picks. See the Staff Picks page on code.google.com for more information on how we choose these apps.
Editors note: This is a guest post by Alex Steshenko. Alex is a core software engineer for Solve360, a highly-rated CRM in the Google Apps Marketplace which was recently chosen as a Staff Pick. Solve360 offers many points of useful integration with Google Apps. Today, in contrast to the conventional Data API integrations, Alex will showcase how he extended Solve360 using Google Apps Script. --- Ryan Boyd
Solve360 CRM integrates with Google services to provide two-way contact & calendar sync, email sync and a comprehensive Gmail contextual gadget, all using the standard Google Data APIs. However, some of our use cases required us to work with Google documents and spreadsheets. Enter Apps Script! What drew our attention to Google Apps Script is that it lets you run your application code right within the Google Apps platform itself, where documents can be manipulated using a wide range of native Apps Script functions -- a real change of perspective.
Our first experience with Google Apps Script was writing a "Contact Us" form. We decided to use the power and flexibility of Apps Script again to generate different kinds of reports.
Google Spreadsheets can produce rich reports leveraging features such as filters, pivot tables, built-in functions and charts. But where’s the data to report on? Using Google Apps Script, users can integrate Google Spreadsheets with a valuable source of data - the Solve360 CRM - completing the solution.
Solve360 Google Apps Reporting script lets users configure the reporting criteria while pulling reports into a Google Spreadsheet.
Here's a video demonstrating a real use case for Solve360 Reporting:
For this script, we realized, simply providing spreadsheet functions would not be good enough. We needed a user interface to let users configure their account details and define what kind of data to fetch from the Solve360 CRM. Google Apps Script’s Ui Services came in handy. For instance, here is the function responsible for showing the “Solve360 account info” dialog:
/*
 * Creates new UI application and opens setting window
 */
function settingsUi() {
  var app = UiApp.createApplication();
  app.setTitle('Solve360 account info')
     .setWidth(260)
     .setHeight(205);
  var absolutePanel = app.createAbsolutePanel();
  absolutePanel.add(authenticationPanel_(app));
  app.add(absolutePanel);
  SpreadsheetApp.getActiveSpreadsheet().show(app);
}
Solve360 CRM has an external API so the system can be integrated with custom business applications and processes. The reporting script use case is a good example of what it can be used for.
One of the first tricks we learned was to create our own Google Apps-like “service” to encapsulate all the functions responsible for interacting with the Solve360 CRM API. Most interestingly, this service’s code isn’t part of the distributed Google Apps Script. Instead, the library is loaded by the script itself, directly from our servers. Why? Let’s say we found a bug or added new functions: if we had many copies of the service we would need to update them all, somehow notifying our clients, and so on. With one source, there’s no such problem. You can think of it as a way to distribute a Google Apps Script solution or, in our case, a part of one. The service is called Solve360Service and its usage looks like this:
var contact = Solve360Service.addContact(contactData);
There were two problems with getting such an external service to work: Google Apps Script permissions restrictions and actually including it in the script.
The issue with permissions is that the Google Apps Script environment can’t see which Apps Script services are used inside the external service, so it doesn’t ask you to grant the special permissions they require. To force the authorization request for those permissions, we added this to the onInstall function (called once, when the script is added to the spreadsheet):
function onInstall() {
  // to get parseJS permissions
  Xml.parseJS(['solve360', '1']);
  // to get base64Encode permissions
  Utilities.base64Encode('solve360');
  // ...
}
Here is the solution we used to load our centralized code into the script:
eval(UrlFetchApp.fetch("https://secure.solve360.com/gadgets/resources/js/Solve360Service.js").getContentText());
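The same lazy-load-and-evaluate pattern can be sketched in Python; here an inline string stands in for the source that would be fetched over HTTPS, and all names are illustrative:

```python
_service_cache = {}

# Stand-in for the JavaScript that would be fetched from the vendor's server.
REMOTE_SOURCE = """
def add_contact(contact_data):
    return dict(contact_data, id=1)
"""


def get_remote_service(name, source):
    """Evaluate remote source once and cache the resulting namespace,
    mirroring the eval(UrlFetchApp.fetch(...)) trick above."""
    if name not in _service_cache:
        namespace = {}
        exec(source, namespace)
        _service_cache[name] = namespace
    return _service_cache[name]
```

As in the Apps Script version, the obvious trade-off is trust: evaluating remotely hosted code means the server you fetch from can run anything in your script's context.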
The Solve360Service is loaded from a single source -- no copy-paste. All the functions for accessing the Solve360 API, aka “the plumbing,” are abstracted away and hidden inside this service, while the essentials of the reports script itself can be modified and tweaked for a particular client’s case. Inside Solve360Service we use UrlFetchApp:
/**
 * Request to the Solve360 API server
 * data should be an Array in Short Hand notation
 */
request : function(uri, restVerb, data) {
  if (this._credentials == null) {
    throw new Error('Solve360 credentials are not set');
  }
  if (typeof(data) != 'undefined') {
    if (restVerb.toLowerCase() == 'get') {
      var parameters = [];
      for each (var parameter in data) {
        parameters.push(encodeURIComponent(parameter[0]) + '=' +
            encodeURIComponent(parameter[1]));
      }
      uri += '?' + parameters.join('&');
      data = '';
    } else {
      data.unshift('request');
      data = Xml.parseJS(data).toXmlString();
    }
  } else {
    data = '';
  }
  var options = {
    "contentType" : "application/xml",
    "method" : restVerb.toLowerCase(),
    "payload" : data,
    "headers" : {"Authorization" : "Basic " + this._credentials}
  };
  return Xml.parse(UrlFetchApp.fetch(this._url + uri, options).getContentText()).getElement();
}
As the result is always XML, to avoid extra work we call Xml.parse() right inside the request function and always return an XmlElement, so you can iterate through it and access nodes and attributes. Here is a simplified version of how we load some items when building a report:
/*
 * Builds a search config from user preferences and loads a slice of data from Solve360
 * To configure how many items should be loaded at a time, change ITEMS_LOAD_REQUEST_LIMIT constant
 */
function retrieveItems_(parameter, offset) {
  initSolve360Service_();
  // ...
  var searchParameters = [
    ['layout', '1'],
    ['sortdir', 'ASC'],
    ['sortfield', 'name'],
    ['start', '' + offset],
    ['limit', '' + ITEMS_LOAD_REQUEST_LIMIT],
    ['filtermode', filtermode],
    ['filtervalue', filtervalue],
    ['searchmode', searchmode],
    ['searchvalue', searchvalue],
    ['special', special],
    ['categories', '1']
  ];
  if (parameter.showAllFieldsCheckbox != 'true' && fields.length > 0) {
    searchParameters.push(['fieldslist', fields.join(',')]);
  }
  // ...
  var items = Solve360Service.searchProjectBlogs(searchParameters);
  // ...
  return items;
}
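For illustration, the GET branch of the request() function shown earlier boils down to URL-encoding these [key, value] pairs into a query string. A Python equivalent of just that step:

```python
from urllib.parse import quote


def to_query_string(pairs):
    """Encode [key, value] pairs into a query string, mirroring the
    encodeURIComponent loop in the request() helper above."""
    return "&".join("%s=%s" % (quote(str(k), safe=""), quote(str(v), safe=""))
                    for k, v in pairs)
```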
To simplify the usage of the service we added another function which initializes the service object, named Solve360Service:
/*
 * Loads external Solve360Service
 * For the service functions available refer to the source code here:
 * https://secure.solve360.com/gadgets/resources/js/Solve360Service.js
 */
var Solve360Service = null;
function initSolve360Service_() {
  if (Solve360Service == null) {
    eval(UrlFetchApp.fetch("https://secure.solve360.com/gadgets/resources/js/Solve360Service.js").getContentText());
    var user = UserProperties.getProperty(USERPROPERTY_USER);
    var token = UserProperties.getProperty(USERPROPERTY_TOKEN);
    if (user == null || user.length == 0 || token == null || token.length == 0) {
      throw new Error('Use Solve360 spreadsheet menu to set email and token first');
    }
    Solve360Service.setCredentials(user, token);
  }
}
As you can see, it uses the email/token pair previously saved in the “Solve360 Account Info” dialog or signals an error if the credentials were not yet saved.
There are many use cases where you can apply Google Apps Script. The fact that you can implement solutions right from “inside” one of the greatest and most universal web applications available is amazing.
You can integrate your own software with Google Docs or even learn from us and build a reporting script for any other system accessible online. Try to look at solving business tasks from a different perspective, from the Google Apps point of view. We encourage it!
The code of the new script is available for use and study here: https://secure.solve360.com/docs/google-apps-reports.js.
Alex Steshenko is a core software engineer for Solve360, a CRM application on the Google Apps Marketplace.
We have released version 1.9 of the .NET Library for Google Data APIs and it is available for download.
This version adds the following new features:
This new version also removes the client library for the deprecated Google Base API and fixes 20 bugs.
For more details, please check the Release Notes and remember to file feature requests or bugs in the project issue tracker.
function doGet() {
  // Populate the DataTable. We'll have the data labels in
  // the first column, "Quarter", and then add two data columns,
  // for "Income" and "Expenses"
  var dataTable = Charts.newDataTable()
      .addColumn(Charts.ColumnType.STRING, "Quarter")
      .addColumn(Charts.ColumnType.NUMBER, "Income")
      .addColumn(Charts.ColumnType.NUMBER, "Expenses")
      .addRow(["Q1", 50, 60])
      .addRow(["Q2", 60, 55])
      .addRow(["Q3", 70, 60])
      .addRow(["Q4", 100, 50])
      .build();
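Conceptually, the DataTable builder just accumulates typed columns and rows, with each chained call returning the builder itself. A small Python stand-in for the pattern (illustrative only; the real builder is the Apps Script Charts service):

```python
class DataTableBuilder:
    def __init__(self):
        self.columns = []   # (type, label) pairs
        self.rows = []

    def add_column(self, col_type, label):
        self.columns.append((col_type, label))
        return self   # chainable, like Charts.newDataTable()

    def add_row(self, values):
        # Each row must supply one value per declared column.
        assert len(values) == len(self.columns), "row width must match columns"
        self.rows.append(list(values))
        return self

    def build(self):
        return {"columns": self.columns, "rows": self.rows}
```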
  // Build the chart. We'll make income green and expenses red
  // for good presentation.
  var chart = Charts.newColumnChart()
      .setDataTable(dataTable)
      .setColors(["green", "red"])
      .setDimensions(600, 400)
      .setXAxisTitle("Quarters")
      .setYAxisTitle("$")
      .setTitle("Income and Expenses per Quarter")
      .build();
  // Add our chart to the UI and return it so that we can publish
  // this UI as a service and access it via a URL.
  var ui = UiApp.createApplication();
  ui.add(chart);
  return ui;
}
// Save the chart to our Document List
var file = DocsList.createFile(chart);
file.rename("Income Chart");
file.addToFolder(DocsList.getFolder("Charts"));

// Attach the chart to the active sites page.
var page = SitesApp.getActivePage();
page.addHostedAttachment(chart, "Income Chart");

// Attach the chart to an email.
MailApp.sendEmail(
    "recipient@example.com",
    "Income Chart",                     // Subject
    "Here's the latest income chart",   // Content
    {attachments: chart});