Editor's note: This has been cross-posted from the Google Code blog -- Ryan Boyd
In March, we announced that all of the Google Web APIs had adopted support for OAuth 2.0, which is now the recommended authorization mechanism when using Google Web APIs.
Today, we are announcing the OAuth 2.0 Playground, which simplifies experimentation with the OAuth 2.0 protocol and APIs that use the protocol. Trying out some requests in the OAuth 2.0 playground can help you understand how the protocol functions and make life easier when the time comes to use OAuth in your own code.
Selecting the APIs to authorize
With the OAuth 2.0 Playground, you can walk through each step of the OAuth 2.0 flow for server-side web applications: authorizing API scopes (screen shot above), exchanging the authorization code for tokens (screen shot below), refreshing access tokens, and sending authorized requests to API endpoints. At each step, the Playground displays the full HTTP requests and responses.
The OAuth Playground can also use custom OAuth endpoints in order to test non-Google APIs that support OAuth 2.0 draft 10.
OAuth configuration screen
You can click the link button to generate a link to a specific Playground state. This allows quick access to replay specific requests at a later time.
Generating a deep link to the playground’s current state
Please feel free to try the OAuth 2.0 Playground. We are happy to receive any feedback, bugs, or questions in the OAuth Playground forum.
Two weeks ago, we had our inaugural Office Hours on Google+ Hangouts, bringing together Google Apps developers from the UK, Ireland, Russia, Brazil, Germany and the US to chat. Everyone asked great questions and provided feedback on many of the APIs. It was also exciting that Google+ for Google Apps was announced at the same time as our hangout.
Given the strong interest in these Office Hours, we’re going to continue doing Hangouts with the Google Apps developer community. Some will be general Hangouts where all types of questions related to the Google Apps APIs will be welcome. Others will be focused on individual products and include core software engineers and product managers who are building the APIs you love.
Here are the next couple:
Tomorrow, November 8th @ 11:30am PST (General Office Hours)
November 16th @ 10am PST (Google Apps Script team)
We’ll continue adding more Office Hours on the events calendar, and announce them on @GoogleAppsDev and our personal Google+ profiles.
Hope you’ll hang out with us soon!
The OAuth Playground is a great tool to learn how the OAuth flow works. But at the same time it can be used to generate a "long-lived" access token that can be stored, and used later by applications to access data through calls to APIs. These tokens can be used to make command line tools or to run batch jobs.
In this example, I will be using this token and making calls to the Google Provisioning API using the Python client library for Google Data APIs, but the following method can be used for any of the Google Data APIs. This method requires that the token be pushed onto the token_store, which is a list of all the tokens generated while using the Python client libraries. In general, the library takes care of this. But in cases where it's easier to request a token out of band, this can be a useful technique.
Step 1: Generate an access token using the OAuth Playground. On the OAuth Playground interface, choose the scopes you need and enter your domain's consumer_key and consumer_secret. After entering all the required details, press the Playground's buttons in sequence to complete the token flow: request a token, authorize it, then exchange it for an access token. After the last step, the text field captioned auth_token holds the required access token, and the field captioned access_token_secret holds the corresponding token secret to be used later.
Step 2: Use the above token when making calls to the API using a Python Client Library.
Here is an example in Python which uses the OAuth access token that was generated from OAuth Playground to retrieve data for a user.
import gdata.apps.service
import gdata.auth

CONSUMER_KEY = 'CONSUMER_KEY'
CONSUMER_SECRET = 'CONSUMER_SECRET'
SIG_METHOD = gdata.auth.OAuthSignatureMethod.HMAC_SHA1
TOKEN = 'GENERATED_TOKEN_FROM_PLAYGROUND'
TOKEN_SECRET = 'GENERATED_TOKEN_SECRET_FROM_PLAYGROUND'
DOMAIN = 'your_domain'

client = gdata.apps.service.AppsService(source='app', domain=DOMAIN)
client.SetOAuthInputParameters(SIG_METHOD, CONSUMER_KEY,
                               consumer_secret=CONSUMER_SECRET)

temp_token = gdata.auth.OAuthToken(key=TOKEN, secret=TOKEN_SECRET)
temp_token.oauth_input_params = client.GetOAuthInputParameters()
client.SetOAuthToken(temp_token)

# Make the API calls
user_info = client.RetrieveUser('username')
It is important to explicitly set the input parameters as shown above. Whenever you call SetOAuthToken, it creates a new token and pushes it onto the token_store, and that token becomes the current one. Even if you call SetOAuthToken and SetOAuthInputParameters back to back, the input params won't be set for the token you just set.
You can use the long-lived token to make command line requests, for example using cURL. This can be useful when you need to cross-check bugs in the client library, test new features, or try to reproduce issues. In most cases, though, developers should use the client libraries as they are designed, as in this example.
Sometimes you want to cache data in your script. For example, suppose there's an RSS feed you want to use and a UiApp that you've built to view and process the feed. Up until now, each operation on the feed would require re-fetching it, which can get slow.
Enter the newly launched CacheService which will allow for caching resources between script executions. Like the recently announced LockService, there are two kinds of caches: a public cache that is per-script, and a private cache which is per-user, per-script. The private cache should be used to store user-specific data, while the public cache is used to store strings that should be accessible no matter who calls the script.
So for our example feed viewer/processor, you’d already have a function to retrieve and process the feed. In order to use the CacheService, you’d wrap it like this:
function getFeed() {
  var cache = CacheService.getPublicCache();
  var value = cache.get("my rss feed");
  if (value == null) {
    // Code to fetch the contents of the feed and store it in value
    // here (assumes value is a string).
    // The cache entry will be good for around 3600 seconds (1 hour).
    cache.put("my rss feed", value, 3600);
  }
  return value;
}
The cache doesn't guarantee that you won't have to fetch the feed again sooner; it makes a best effort to retain the value for that long and expires it quickly after the time passes. Now you can call getFeed() often and it won't re-fetch the feed from the remote site on each script execution, resulting in improved performance.
Check out the CacheService documentation for more information.
Here's the scenario: you create a form, you have a script that triggers onFormSubmit, and all is well... until it gets popular. Occasionally, separate invocations of your script run at the same time and interlace their modifications to the spreadsheet. Clearly, this kind of interlacing is not what you intended the script to do. Up until now, there was no good solution to this problem -- except to remain unpopular or just be lucky. Neither is a great solution.
Now, my friend, you are in luck! We’ve just launched the LockService to deal with exactly this problem. The LockService allows you to have only one invocation of the script or portions thereof run at a time. Others that would’ve run at the same time can now be made to wait nicely in line for their turn. Just like the line at the checkout counter.
The LockService can provide two different kinds of locks-- one that locks for any invocation of your script, called a public lock, and another that locks only invocations by the same user on your script, called a private lock. If you’re not sure, using a public lock is the safest bet.
For example, in the scenario in the previous paragraph you would want something like this:
function onFormSubmit() {
  // We want a public lock, one that locks for all invocations.
  var lock = LockService.getPublicLock();
  lock.waitLock(30000); // wait 30 seconds before conceding defeat
  // Got the lock, you may now proceed.
  // ...whatever it used to do here....
  lock.releaseLock();
}
It's best to release the lock at the end, but if you don't, any locks you hold will be released when the script finishes executing. How long should you wait? That depends mainly on two things: how long the work you do while holding the lock takes, and how many concurrent executions you expect. Multiply those two and you'll get your timeout; for example, if each execution holds the lock for about two seconds and you expect up to ten concurrent form submissions, a 20-second timeout is a sensible starting point. A number like 30 seconds should handle a good number of cases. The other way to pick the number is, frankly, to take an educated guess; if you guess too short, the script will occasionally fail.
If you want to avoid total failure when you can't get the lock, you also have the option of trying to get the lock and doing something else in the event you can't:
function someFunction() {
  var lock = LockService.getPublicLock();
  if (lock.tryLock(30000)) {
    // I got the lock! Wo000t!!!11 Do whatever I was going to do!
  } else {
    // I couldn't get the lock, now for plan B :(
    GmailApp.sendEmail("admin@example.com", "epic fail", "lock acquisition fail!");
  }
}
So now your scripts can be as popular as they can get with no worries about messing up shared resources due to concurrent edits! Check out the LockService documentation for more information.
We are currently rolling out a change to the organization of existing resources in collections in Google Docs. This change is completely transparent to users of the Google Docs web user interface, but it is technically visible when using the Google Documents List API to make requests with the showroot=true query parameter or specifically querying the contents of the root collection. In order to understand this change, first read how Google Docs organizes resources.
The change involves Google removing those resources from a user’s root collection that already exist within another collection accessible to the given user. That is, if “My Presentation” is currently in root and in the “My Talks” collection, after this change it will only exist in the “My Talks” collection.
We are making this change in order to make the organization of resources less confusing for API developers. This change allows clients to know that a resource either exists in root or in some collection under root. Clients can still retrieve all resources, regardless of which collections they’re in, using the resources feed.
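As a quick illustration, here is a minimal sketch using the Documents List API Python client library; it assumes an already-authorized DocsClient, and the method name follows the v3 library's documentation:

import gdata.docs.client

client = gdata.docs.client.DocsClient(source='yourCo-yourAppName-v1')
# ... authorize the client first, e.g. with OAuth as described in other posts ...

# The resources feed returns every resource the user can access,
# regardless of which collection (if any) contains it.
for resource in client.GetAllResources():
    print resource.title.text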
The change is rolling out gradually to all Google Docs users over the next few months.
Developers with further questions about this change should post in the Google Documents List API forum.
Update (August 2014): Try the Yet Another Mail Merge add-on for Google Sheets.
Editor’s Note: This blog post is co-authored by James, Steve and Romain who are Google Apps Script top contributors. -- Ryan Boyd
The Google Apps Script team is on a roll and has implemented a ton of new features in the last few months. Some of us "Top Contributors" thought it would be a useful exercise to revisit the Mail Merge use case and discuss various ways we can do Mail Merge using Apps Script. Below are several techniques that tap into the power of Google Apps Script by utilizing Gmail, Documents and Sites to give your mailings some zing. Mail Merge is easy, and here is how it can be done.
The Simple Mail Merge tutorial shows an easy way to collect information from people in a Spreadsheet using Google Forms then generate and distribute personalized emails. In this tutorial we learn about using “keys,” like ${"First Name"}, in a template text document that is replaced by values from the spreadsheet. This Mail Merge uses HTML saved in the “template” cell of the spreadsheet as the content source.
${"First Name"}
The Gmail Service is now available in Google Apps Script, allowing you to create your template in Gmail and save it as a draft. This makes Mail Merge friendlier for the typical user who may not know or care to write HTML for their template. The mail merge script replaces the template keys in the draft with names and other information from the spreadsheet and automatically sends the email.
To use this mail merge, create a new spreadsheet and click on Tools > Script Gallery. Search for “Yet another Mail Merge” to locate the script, then click Install. You’ll get two authorization dialogs; click OK through them. Add your contact list to the spreadsheet, with a header for each column. Then compose a new mail in Gmail. Follow this syntax for the “keys” in your template: $%column header% (see above). Click Save now to save your draft. Go back to your spreadsheet and click on the Mail Merge menu. A dialog pops up; select your draft to start sending your emails.
You can add CCs, include attachments and format your text just as you would any email. People enjoy “Inserting” images in the body of their emails, so we made sure to keep this feature in our updated mail merge. To automate this process we use a new advanced parameter of the sendEmail method, inlineImages. When the script runs, it looks in the email template for images and makes sure they appear as inline images and not as attachments. Now your emails will look just as you intended, and the whole process of mail merge got a whole lot simpler.
The next Mail Merge will use a template that is written in a Google Document and sent as an attachment. Monthly reports, vacation requests and other business forms can use this technique. Even very complex documents like a newsletter or brochure can utilize the automation of Google Apps Script to add the personal touch of having your patron’s name appear as a salutation.
Like in the Mail Merge for Gmail, the Google Docs template will use “keys” as placeholders for names, addresses or any other information that needs to be merged. Google Apps Script can add dynamic elements as well. For example you may want to include a current stock quote using the Financial Service, a chart from the Charts Service, or a meeting agenda automatically fetched for you by the Calendar Service.
As the code sample below demonstrates, the Google Apps Script gets the document template, copies it in a new temporary document, opens the temp document, replaces the key placeholders with the form values, converts it to PDF format, composes the email, sends the email with the attached PDF and deletes the temp document.
Here is a code snippet example to get you started. To use this mail merge, create a new spreadsheet, and click on Tools > Script Gallery. Search for “Employee of the Week Award” and you will be able to locate the script.
// Global variables
var docTemplate = 'enter document ID here';
var docName = 'enter document name here';

function sendDocument() {
  // Full name and email address values come from the spreadsheet form.
  var full_name = 'from-spreadsheet-form';
  var email_address = 'from-spreadsheet-form';
  // Example subject and body for the outgoing message.
  var subject = docName + ' for ' + full_name;
  var body = 'Please see the attached document.';

  // Get document template, copy it as a new temp doc, and save the Doc's id.
  var copyId = DocsList.getFileById(docTemplate)
      .makeCopy(docName + ' for ' + full_name)
      .getId();
  var copyDoc = DocumentApp.openById(copyId);
  var copyBody = copyDoc.getActiveSection();

  // Replace placeholder keys.
  copyBody.replaceText('keyFullName', full_name);
  var todaysDate = Utilities.formatDate(new Date(), 'GMT', 'MM/dd/yyyy');
  copyBody.replaceText('keyTodaysDate', todaysDate);

  // Save and close the temporary document.
  copyDoc.saveAndClose();

  // Convert the temporary document to PDF by using the getAs blob conversion.
  var pdf = DocsList.getFileById(copyId).getAs('application/pdf');

  // Attach the PDF and send the email.
  MailApp.sendEmail(email_address, subject, body,
      {htmlBody: body, attachments: pdf});

  // Delete the temp file.
  DocsList.getFileById(copyId).setTrashed(true);
}
For the last example, let's assume you have a great Google Site where you create newsletters for your followers. However, some feedback suggests that while many users don't mind visiting your site, some would prefer to have the newsletter emailed to them. Normally this would require copying and pasting into an email or doc. Why not simply automate this with Google Apps Script?
The body section of a site, the part you edit, can be captured as HTML by the Sites Service and placed in the body of an email. Because the return value is HTML, the pictures and text formatting come through in the email.
Here is a simple example for you to try out:
function emailSiteBody() {
  var site = SitesApp.getPageByUrl('YourPageURL');
  var body = site.getHtmlContent();
  MailApp.sendEmail('you@example.com', 'Site Template', 'no html :( ',
      {htmlBody: body});
}
It really is that simple. Add a for loop with email values from a spreadsheet and this project is done.
Happy merging!
Updated 10/28: fixed instructions for accessing the complete script source for solution 3.
Editor's note: This post by Google Senior Product Manager Justin Smith has been cross-posted from the Google Code blog because we think it'll be of great interest to Google Apps developers. -- Ryan Boyd
In the coming weeks we will be making three changes to the experimental OAuth 2.0 endpoint. We expect the impact to be minimal, and we’re emailing developers who are most likely to be affected.
To illustrate the kinds of requests and responses affected: when a user declines an authorization request, the error is returned to your application on the redirect, either in the query string for the server-side flow, as in https://www.example.com/back?error=access_denied, or in the URL fragment for the client-side flow, as in https://www.example.com/back#error=access_denied.

The other changes concern the approval_prompt=force and access_type=offline parameters. An authorization request that looks like this:

https://accounts.google.com/o/oauth2/auth?
  client_id=21302922996.apps.googleusercontent.com&
  redirect_uri=https://www.example.com/back&
  scope=https://www.google.com/m8/feeds/&
  response_type=code

becomes the following for an application that needs a refresh token (offline access) and wants to force the approval prompt:

https://accounts.google.com/o/oauth2/auth?
  client_id=21302922996.apps.googleusercontent.com&
  redirect_uri=https://www.example.com/back&
  scope=https://www.google.com/m8/feeds/&
  response_type=code&
  access_type=offline&
  approval_prompt=force
Just a few weeks ago, several members of our Google Apps Developer Relations team returned from Buenos Aires, Sao Paulo, Hyderabad and Bangalore where they met with many enthusiastic developers as part of Google Developer Day and DevFest events. We're now headed to the skies again and looking forward to talking with amazing Russian, Polish, Czech and French developers.
Whether you're building integrations with Google Apps into your products to connect users with their data, helping customers integrate Google Apps with other parts of their Enterprise IT systems, or are simply customizing your own Google Apps environment-- we want to meet you. Drop us a line on Google+ or Twitter and let us know where you'll be.
The Google Apps Marketplace is a storefront for Google Apps customers to discover, purchase, deploy and manage web applications which are integrated with Google Apps. These applications are typically used from desktops and laptops, but many vendors on the Apps Marketplace have also optimized the experience for their users who are on-the-go. There are several different strategies for enabling a mobile workforce, and each requires a different approach to authentication and authorization.
Google has written applications and synchronization clients to help ensure that the core Google Apps data is available to users on their mobile devices, whether they’re on their mobile phones or tablets. By storing contacts, dates and documents from your application in Google Apps using the application APIs, you can leverage these features to provide a mobile view for your users.
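For example, a CRM-style app could push a customer record into the user's Google Contacts so it shows up in the phone's synced address book. This is only a sketch, assuming an already-authorized ContactsClient; the entry construction follows the Contacts API v3 Python samples:

import gdata.contacts.client
import gdata.contacts.data
import gdata.data

client = gdata.contacts.client.ContactsClient(source='yourCo-yourAppName-v1')
# ... authorize the client with OAuth before making requests ...

new_contact = gdata.contacts.data.ContactEntry()
new_contact.name = gdata.data.Name(
    given_name=gdata.data.GivenName(text='Elizabeth'),
    family_name=gdata.data.FamilyName(text='Bennet'),
    full_name=gdata.data.FullName(text='Elizabeth Bennet'))
new_contact.email.append(gdata.data.Email(
    address='liz@example.com', primary='true', rel=gdata.data.WORK_REL))

# The contact is stored in Google Contacts and will sync to mobile devices.
contact_entry = client.CreateContact(new_contact)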
Since you’re only accessing the application APIs on your web application’s server, and the user has already linked up their mobile device to their Google account, there are no special techniques for authentication and authorization when using this lightweight approach.
With the latest advances in HTML5 web technologies such as offline and local storage, it’s possible to build mobile interfaces for business apps which are full-featured and accessible to users on many devices. The primary goal in building the mobile web application is to optimize the user experience for different input devices, form factors and limitations in network availability and bandwidth.
Because the application is in a web browser, most of the changes to implement are in the frontend-- HTML, JavaScript and CSS. User authentication and data authorization continue to use the same OpenID and OAuth technologies as are used for the desktop/laptop version of the application.
Does your application need access to hardware-specific APIs which are not available in a web browser, or do you feel a great user experience can only be achieved using native code? Several Apps Marketplace vendors have built native applications for popular mobile platforms like Android and iOS. Although it takes considerably more effort to build multiple native applications to cover the major platforms, these vendors can also take advantage of the additional distribution channels offered by mobile stores.
Authentication and authorization are often challenging for developers building native mobile applications, because they cannot simply ask users for a password if their app supports single sign-on to Google with OpenID. We recently published an article describing a technique that uses an embedded webview for accomplishing OpenID authentication in mobile apps. The article includes references to sample code for Android and iOS.
Editor’s Note: This post written by Ferris Argyle. Ferris is a Sales Engineer with the Enterprise team at Google, and had written fewer than 200 lines of JavaScript before beginning this application. --Ryan Boyd
I started with Apps Script in the same way many of you probably did: writing extensions to spreadsheets. When it was made available in Sites, I wondered whether it could meet our needs for gathering roadmap input from our sales engineering and enterprise deployment teams.
At Google, teams like Enterprise Sales Engineering and Apps Deployment interact with customers and need to share product roadmap ideas with Product Managers, who use this input to iterate and make sound roadmap decisions. We needed to build a tool to support this requirement: an application for gathering roadmap input from the enterprise sales engineering and deployment teams, providing a unified way of prioritizing customer requirements and supporting product management roadmap decisions. We also needed a way to share the actual customer use cases from which these requirements originated.
This required bringing together the capabilities of Google Forms, Spreadsheets and Moderator in a single application: form-based user input, dynamically generated structured lists, and ranking.
This sounds like a fairly typical online transaction processing (OLTP) application, and Apps Script provides rich and evolving UI services, including the ability to create grids, event handlers, and now a WYSIWYG GUI Builder; all we needed was a secure, scalable SQL database backend.
One of my geospatial colleagues had done some great work on a demo using a Fusion Tables backend, so I did a little digging and found this example of how to use the APIs in Apps Script (thank you, Fusion Tables Developer Relations).
Full sample code for this app is available and includes a test harness, required global variables, additional CRUD wrappers, and authorization and Fusion REST calls. It has been published to the Script Gallery under the title "Using Fusion Tables with Apps Script."
/**
 * Read records
 * @param {string} tableId The Id of the Fusion Table in which the record will be created
 * @param {string} selectColumn The Fusion Table columns which will be returned by the read
 * @param {string} whereColumn The Fusion Table column which will be searched to determine whether the record already exists
 * @param {string} whereValue The value to search for in the Fusion Table selectColumn; can be '*'
 * @return {Array|string} An array containing the read records if no error; the bubbled return code from the Fusion query API if error
 */
function readRecords_(tableId, selectColumn, whereColumn, whereValue) {
  var query = '';
  var foundRecords = [];
  var returnVal = false;
  var tableList = [];
  var row = [];
  var columns = [];
  var rowObj = {};
  if (whereValue == '*') {
    query = 'SELECT ' + selectColumn + ' FROM ' + tableId;
  } else {
    query = 'SELECT ' + selectColumn + ' FROM ' + tableId +
        ' WHERE ' + whereColumn + ' = \'' + whereValue + '\'';
  }
  foundRecords = fusion_('get', query);
  if (typeof foundRecords == 'string' && foundRecords.search('>> Error') > -1) {
    // Bubble the error string returned by the Fusion query API.
    returnVal = foundRecords;
  } else if (foundRecords.length > 1) {
    // First row is the header, so use it to define the columns array.
    row = foundRecords[0];
    columns = [];
    for (var k = 0; k < row.length; k++) {
      columns[k] = row[k];
    }
    for (var i = 1; i < foundRecords.length; i++) {
      row = foundRecords[i];
      if (row.length > 0) {
        // Construct an object with the row fields.
        rowObj = {};
        for (var k = 0; k < row.length; k++) {
          rowObj[columns[k]] = row[k];
        }
        // Start the new array at zero to conform with JavaScript conventions.
        tableList[i - 1] = rowObj;
      }
    }
    returnVal = tableList;
  }
  return returnVal;
}
Now all I needed were CRUD-type (Create, Read, Update, Delete) Apps Script wrappers for the Fusion Tables APIs, and I’d be in business. I started with wrappers which were specific to my application, and then generalized them to make them more re-usable. I’ve provided examples above so you can get a sense of how simple they are to implement.
The result is a dynamically scalable base layer for OLTP applications with the added benefit of powerful web-based visualization, particularly for geospatial data, and without the traditional overhead of managing tablespaces.
I’m a Fusion tables beginner, so I can’t wait to see what you can build with Apps Script and Fusion Tables. You can get started here: Importing data into Fusion Tables, and Writing a Fusion Tables API Application.
Google Docs supports sharing collections and their contents with others. This allows multiple Google Docs resources to be shared at once, and for additional resources added to the collection later to be automatically shared.
Class.io, an EDU application on the Google Apps Marketplace, uses this technique. When a professor creates a new course, the application automatically creates a Google Docs collection for that course and shares it with all the students. This gives the students and professor a single place to go in Google Docs to access and manage all of their course files.
A collection is a Google Docs resource that contains other resources, typically behaving like a folder on a file system.
A collection resource is created by making an HTTP POST to the feed link with the category element’s term set to http://schemas.google.com/docs/2007#folder, for example:
<?xml version='1.0' encoding='UTF-8'?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <category scheme="http://schemas.google.com/g/2005#kind"
      term="http://schemas.google.com/docs/2007#folder"/>
  <title>Example Collection</title>
</entry>
To achieve the same thing using the Python client library, use the following code:
from gdata.docs.data import Resource

collection = Resource('folder')
collection.title.text = 'Example Collection'

# client is an authorized DocsClient
collection = client.create_resource(collection)
The new collection returned has a content element indicating the URL to use to add new resources to the collection. Resources are added by making HTTP POST requests to this URL.
<content src="https://docs.google.com/feeds/default/private/full/folder%3A134acd/contents" type="application/atom+xml;type=feed" />
This process is simplified in the client libraries. For example, in the Python client library, resources can be added to the new collection by passing the collection into the create_resource method for creating resources, or the move_resource method for moving an existing resource into the collection, like so:
# Create a new resource of document type in the collection
new_resource = Resource(type='document', title='New Document')
client.create_resource(new_resource, collection=collection)

# Move an existing resource into the collection
client.move_resource(existing_resource, collection=collection)
Once resources have been added to the collection, the collection can be shared using ACL entries. For example, to add the user user@example.com as a writer to the collection and every resource in the collection, the client creates and adds the ACL entry like so:
from gdata.acl.data import AclScope, AclRole
from gdata.docs.data import AclEntry

acl = AclEntry(
    scope=AclScope(value='user@example.com', type='user'),
    role=AclRole(value='writer')
)

client.add_acl_entry(collection, acl)
The collection and its contents are now shared, and this can be verified in the Google Docs user interface:
Note: if the application is adding more than one ACL entry, it is recommended to use batching to combine multiple ACL entries into a single request. For more information on this best practice, see the latest blog post on the topic.
The examples shown here are using the raw protocol or the Python client library. The Java client library also supports managing and sharing collections.
For more information on how to use collections, see the Google Documents List API documentation. You can also find assistance in the Google Documents List API forum.
Since March of this year, Google has supported OAuth 2.0 for many APIs, including Google Data APIs such as Google Calendar, Google Contacts and Google Documents List. Google's implementation of OAuth 2.0 introduces many advantages compared to OAuth 1.0 such as simplicity for developers and a more polished user experience.
We’ve just added support for this authorization mechanism to the gdata-python-client library-- let’s take a look at how it works by retrieving an access token for the Google Calendar and Google Documents List APIs and listing protected data.
First, you will need to retrieve or sync the project from the repository using Mercurial:
hg clone https://code.google.com/p/gdata-python-client/
For more information about installing this library, please refer to the Getting Started With the Google Data Python Library article.
Now that the client library is installed, you can go to your APIs Console to either create a new project, or use information about an existing one from the API Access pane:
Your application will require the user to grant permission for it to access protected APIs on their behalf. It must redirect the user over to Google's authorization server and specify the scopes of the APIs it is requesting permission to access.
Available Google Data API scopes are listed in the Google Data FAQ.
Here's how your application can generate the appropriate URL and redirect the user:
import gdata.gauth

# The client id and secret can be found on your API Console.
CLIENT_ID = ''
CLIENT_SECRET = ''

# Authorization can be requested for multiple APIs at once by specifying
# multiple scopes separated by spaces.
SCOPES = ['https://docs.google.com/feeds/',
          'https://www.google.com/calendar/feeds/']
USER_AGENT = ''

# Save the token for later use.
token = gdata.gauth.OAuth2Token(
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
    scope=' '.join(SCOPES),
    user_agent=USER_AGENT)

# The "redirect_url" parameter needs to match the one you entered in the
# API Console and points to your callback handler.
self.redirect(
    token.generate_authorize_url(
        redirect_url='http://www.example.com/oauth2callback'))
If all the parameters match what has been provided by the API Console, the user will be shown this dialog:
When an action is taken (e.g. allowing or declining the access), Google's authorization server will redirect the user to the specified redirect URL and include an authorization code as a query parameter. Your application then needs to make a call to Google's token endpoint to exchange this authorization code for an access token.
import atom.http_core

url = atom.http_core.Uri.parse_uri(self.request.uri)

if 'error' in url.query:
    # The user declined the authorization request.
    # Application should handle this error appropriately.
    pass
else:
    # This is the token instantiated in the first section.
    token.get_access_token(url.query)
The redirect handler retrieves the authorization code that has been returned by Google’s authorization server and exchanges it for a short-lived access token and a long-lived refresh token that can be used to retrieve a new access token. Both access and refresh tokens are to be kept private to the application server and should never be revealed to other client applications or stored as a cookie.
To store the token object in a secured datastore or keystore, the gdata.gauth.token_to_blob() function can be used to serialize the token into a string. The gdata.gauth.token_from_blob() function does the opposite and instantiates a new token object from a string.
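For instance (a minimal sketch, with the storage layer left out):

import gdata.gauth

# Serialize the token to a string so it can be stored, e.g. in a datastore.
token_string = gdata.gauth.token_to_blob(token)

# ... later, after loading token_string back from storage ...
restored_token = gdata.gauth.token_from_blob(token_string)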
Now that an access token has been retrieved, it can be used to authorize calls to the protected APIs specified in the scope parameter.
import gdata.calendar.client
import gdata.docs.client

# Access the Google Calendar API.
calendar_client = gdata.calendar.client.CalendarClient(source=USER_AGENT)
# This is the token instantiated in the first section.
calendar_client = token.authorize(calendar_client)

calendars_feed = calendar_client.GetCalendarsFeed()
for entry in calendars_feed.entry:
    print entry.title.text

# Access the Google Documents List API.
docs_client = gdata.docs.client.DocsClient(source=USER_AGENT)
# This is the token instantiated in the first section.
docs_client = token.authorize(docs_client)

docs_feed = docs_client.GetDocumentListFeed()
for entry in docs_feed.entry:
    print entry.title.text
For more information about OAuth 2.0, please have a look at the developer’s guide and let us know if you have any questions by posting them in the support forums for the APIs you’re accessing.
Updated 9/30/2011 to fix a small typo in the code
There are a number of ways to add resources to your Google Documents List using the API. Most commonly, clients need to upload an existing resource, rather than create a new, empty one. Legacy clients may be doing this in an inefficient way. In this post, we’ll walk through why using resumable uploads makes your client more efficient.
The resumable upload process allows your client to send small segments of an upload over time, and confirm that each segment arrived intact. This has a number of advantages.
Since only one small segment of data is sent to the API at a time, clients can store less data in memory as they send data to the API. For example, consider a client uploading a PDF via a regular, non-resumable upload in a single request: it typically reads the file into memory in fixed-size pieces, around 100,000 bytes each, appending each piece to the request body as it goes. But that 100,000 bytes isn't a customizable value in most client libraries. In some environments with limited memory, applications need to choose a custom chunk size that is either smaller or larger.
The resumable upload mechanism allows for a custom chunk size. That means that if your application only has 500KB of memory available, you can safely choose a chunk size of 256KB.
In the previous example, if any of the bytes fail to transmit, this non-resumable upload fails entirely. This often happens in mobile environments with unreliable connections. Uploading 99% of a file, failing, and restarting the entire upload creates a bad user experience. A better user experience is to resume and upload only the remaining 1%.
Traditional non-resumable uploads via HTTP have size limits depending on both the client and server systems. These limits are not applicable to resumable uploads with reasonable chunk sizes, as individual HTTP requests are sent for each chunk of a file. Since the Documents List API now supports file sizes up to 10GB, this is very important.
The Java, Python, Objective-C, and .NET Google Data API client libraries all include a mechanism by which you can initiate a resumable upload session. Examples of uploading a document with resumable upload using the client libraries are detailed in the documentation. Additionally, the new Documents List API Python client library now uses only the resumable upload mechanism. To use that version, make sure to follow these directions.
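As a rough sketch of what a resumable upload with a custom chunk size looks like using the generic ResumableUploader helper in the Python library (the constructor arguments and create-session URI here follow the library's sample code, so treat them as assumptions and check the documentation linked above):

import os
import gdata.client
import gdata.docs.client
import gdata.docs.data

client = gdata.docs.client.DocsClient(source='yourCo-yourAppName-v1')
# ... authorize the client before uploading ...

f = open('report.pdf')
file_size = os.path.getsize(f.name)

# Upload in 256KB chunks, so only 256KB is held in memory at a time.
uploader = gdata.client.ResumableUploader(
    client, f, 'application/pdf', file_size,
    chunk_size=262144,
    desired_class=gdata.docs.data.Resource)

new_resource = uploader.UploadFile(
    '/feeds/upload/create-session/default/private/full',
    entry=gdata.docs.data.Resource(title='My Report'))
f.close()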
Editor’s note: This is a guest post by Cameron Henneke. Cameron is the founder and principal engineer of GQueues, a task management app on the Google Apps Marketplace. Cameron tells the story of his application and provides some tips for developers considering integrating with Google Apps and launching on the Marketplace -- Ryan Boyd
Google recently announced that over 4 million businesses now run on Google Apps, continuing its growth as enterprise software that focuses on collaboration. This of course is great news for Google Apps developers, since it means there are 4 million potential customers on the Google Apps Marketplace looking for complementary tools to enhance their productivity. As you know, listing an app requires just a few quick steps, and the Marketplace targets a growing audience of customers ready to purchase additional solutions.
So what kind of success might you see on the Marketplace, and how can you maximize revenue? As the founder of GQueues, an online task manager, I have listed the app on the Marketplace since its launch in March 2010. Over the past year and a half, I have found the Marketplace to be my most successful channel, and have discovered a few tips along the way that proved key to this success.
Though this seems obvious, this first point is critical: make sure your app solves a real problem. This means you've identified actual people and businesses that have this problem and are actively looking for a solution. Perhaps they have already tried other tools or cobbled something together on their own. For example, I've verified Google Apps users are looking for an intuitive way to organize their work and manage tasks with others. GQueues fills this need as a full-featured task manager that allows users to assign tasks, share lists, set reminders, create tasks from email and tag work for easy filtering. Google Apps users come to the Marketplace with a variety of needs; make sure your app addresses at least one of them.
As you solve a customer’s problem, make sure you integrate with their existing tools. For Marketplace customers, this means adding as many integration points with Google products as possible. This is important for several reasons.
First, it’s great for the user and facilitates adoption. If your service works seamlessly with other products they are already familiar with, they don’t have to spend time learning something new. For instance, GQueues has two-way syncing with Google Calendar. Since users already know how to drag events to new dates in Calendar, dragging GQueues tasks around the calendar is quite intuitive.
Secondly, more integration directly helps your app’s listing in the Marketplace. Each listing has a set of icons representing product integrations. GQueues integrates with Calendar, Mail, Contacts and Google Talk, which indicates to a customer that using this tool will allow their users to work more efficiently. Plus, customers can search based on integration points, so the more you have, the broader your presence in the Marketplace.
Lastly, integrating with existing products speeds development. Utilizing Google APIs allows you to innovate faster and respond to your customers' growing needs. GQueues uses the XMPP service of Google App Engine, which eliminated the need to build a separate chat backend and makes it easy for users to add tasks by chatting a message from anywhere.
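For a sense of how little code this kind of integration takes, here is a minimal sketch of an App Engine (Python) handler for inbound XMPP chat messages; add_task is a hypothetical application helper, not part of the API:

import webapp2
from google.appengine.api import xmpp

class ChatHandler(webapp2.RequestHandler):
    def post(self):
        # App Engine delivers inbound chat messages to this URL.
        message = xmpp.Message(self.request.POST)
        # Treat the chat text as a new task for the sender.
        add_task(owner=message.sender, title=message.body)  # hypothetical helper
        message.reply('Task added!')

app = webapp2.WSGIApplication([('/_ah/xmpp/message/chat/', ChatHandler)])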
Once you've listed your deeply integrated app that solves a real problem on the Marketplace, be sure to engage with your customers. The Marketplace allows users to rate your app and leave verified reviews, which not only affect the app's listing position, but greatly influence potential customers' willingness to test it out. I manage the GQueues Marketplace listing with this engagement in mind, staying on top of ratings and reviews as they come in. These actions are quite simple, but they immensely affect your app's presence in the Marketplace.
Though each app is unique, I've found that following the tips mentioned above has helped the Google Apps Marketplace become GQueues' top revenue channel.
GQueues is based on a freemium model, and the average conversion rate for a typical freemium product is 3-5%. Looking at all the regular channels, GQueues has a 6% average conversion rate from free users to paid subscribers - slightly higher than expected. However, the GQueues users from the Marketplace convert at an astonishing rate of 30%.
The Marketplace claims to target an audience ready to buy, and the data really backs this up.
Not only does the Marketplace have a substantially higher conversion rate, but it also drives a considerable amount of traffic. Looking at the data over the same period, 27% of all new GQueues users were acquired via the Marketplace.
Combining the acquisition rate with the conversion rate shows that the Marketplace is actually responsible for 63% of all paid GQueues users: 27% of sign-ups converting at 30% produces roughly twice as many paid subscribers as the other 73% converting at 6%.
As Google Apps continues to grow worldwide, the need for deeply integrated, complementary business tools will also expand. Based on my experience with GQueues, I strongly recommend the Google Apps Marketplace as a rewarding channel for apps that integrate with Google Apps.
We are announcing the deprecation of SWF export functionality for presentations from the Google Documents List API. We are taking this action due to the limited demand for this feature, and in order to focus engineering efforts on other aspects of the API.
Clients currently making the following request to the API are affected by this change.
https://docs.google.com/feeds/download/presentations/Export?docID=1234&exportFormat=swf
We recommend clients currently using SWF exports switch to PDF exports, using the appropriate exportFormat value.
https://docs.google.com/feeds/download/presentations/Export?docID=1234&exportFormat=pdf
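With the Python client library, the equivalent download might look like the following sketch (assuming an authorized DocsClient, a Resource object for the presentation, and the library's DownloadResource helper with extra_params support):

import gdata.docs.client

client = gdata.docs.client.DocsClient(source='yourCo-yourAppName-v1')
# ... authorize the client and fetch the presentation's Resource entry ...

# Export the presentation as PDF instead of SWF.
client.DownloadResource(presentation, 'presentation.pdf',
                        extra_params={'exportFormat': 'pdf'})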
We are disabling SWF exports in the coming weeks. Clients attempting to export presentations as SWF after the exports are disabled will receive an HTTP 400 response.
For more information on exporting presentations, see the Google Documents List API documentation. If you have any questions, feel free to reach out in the forums.