Right now, Apps Script developers have three competing ways to create user interfaces: Ui Service, a visual tool for Ui Service called GUI Builder, and Html Service, which we launched at Google I/O in 2012. We designed Html Service specifically to help developers build complex applications by letting them work with familiar libraries like jQuery and jQuery UI.
Today, we are deprecating GUI Builder and five UiApp widgets — but not Ui Service itself. This will help us further focus our development efforts on Html Service.
The GUI Builder will continue to be available until September 9, 2013. After that point, you will not be able to create or manage GUI Builder components, although existing components will still function. The five deprecated UiApp widgets are Hyperlink, InlineHyperlink, LayoutPanel, RichTextArea, and SuggestBox. These widgets will also remain available until September 9, 2013, at which point they will cease to function.
To plan for the future, we recommend that you migrate your user interfaces to Html Service, which will offer the best combination of features and support in the long term.
Meanwhile, we have a few awesome new features planned for 2013. Although we’re not quite ready to announce those features, I dropped a few hints when Arun Nagarajan interviewed me for a State of the Script episode on Google Developers Live last month. Give it a watch, and I’m sure you’ll be as excited about the future of Apps Script as we are.
Lots of photographers, both professionals and amateurs, have started using Google Drive to store their photos online. We recently launched new features such as a way to quickly preview files and today I wanted to share more details about the image media metadata capabilities of the Drive SDK.
All digital cameras add some Exif information to the photos they take, and we exposed an initial set of Exif fields via the Google Drive API at the end of 2012. That set of metadata has now been expanded to include 9 new fields, such as the sensor type or the metering mode.
For instance, take a look at this recently taken photo:
Photo credit: Claudio Cherubino
What follows is the image media metadata as returned by the Drive API, including the new fields:
"imageMediaMetadata": { "width": 2048, "height": 1536, "rotation": 0, "date": "2013:02:18 12:51:51", "cameraMake": "Panasonic", "cameraModel": "DMC-GF2", "exposureTime": 0.0025, "aperture": 7.1, "flashUsed": false, "focalLength": 14.0, "isoSpeed": 100, "meteringMode": "Pattern", "sensor": "One-chip color area", "exposureMode": "Auto", "colorSpace": "sRGB", "whiteBalance": "Auto", "exposureBias": 0.0, "maxApertureValue": 3.6289062 }
You might have noticed that a number of fields have been added to the response, while others (“subjectDistance” and “lens”) were not returned. This is expected: a camera doesn’t have to populate every Exif field, and any field it leaves empty is simply not included in the API response.
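For illustration, a minimal Apps Script sketch along these lines, assuming the Advanced Drive service (Drive API v2) is enabled for the script, might read the metadata like this:

function logImageMetadata(fileId) {
  // Assumes the Advanced Drive service (Drive API v2) is enabled for this script.
  var file = Drive.Files.get(fileId);
  var meta = file.imageMediaMetadata;
  if (meta) {
    Logger.log('Camera: %s %s, ISO %s, metering: %s',
        meta.cameraMake, meta.cameraModel, meta.isoSpeed, meta.meteringMode);
  } else {
    Logger.log('No image media metadata available for this file.');
  }
}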
For more information, and for a description of all the metadata fields returned by the API, check the Files resource Reference Guide. If you have technical questions, please post them on Stack Overflow; my team monitors the google-drive-sdk tag and is happy to help.
Hey Silicon Valley developers,
We are organizing a Google Drive hackathon next week. If you’d like to learn more about the Google Drive SDK, meet with Google Drive engineers and have fun developing your first Google Drive application or integrating your existing application, join us.
The event will take place at the Googleplex in Mountain View, CA. We’ll start with an introduction to the Google Drive SDK at 4:00 p.m. on Wednesday, March 13, 2013, and the hackathon will run through 3:00 p.m. the next day. Check out the detailed agenda for the event, don’t forget to RSVP, and tell your friends about it.
Also there will be some exciting Google prizes for the best apps.
See you there!
When a file of a common type is uploaded to Google Drive, it is automatically indexed so users can easily search for it in their Drive files. Google Drive also tries to recognize objects and landmarks in images uploaded to Drive.
For instance, if a user uploaded a list of customers as an HTML, XML, PDF or text file, they could easily find it later by searching for one of the customer names written inside the file. Users could also upload a picture of their favorite green robot, then search for “Android”, and Google Drive would find it in their Drive:
Metadata such as the file’s title and description are always indexed, so users can always find a file by name. However, Google Drive does not automatically index the content of less common or custom file types. For example, if your application uploads or creates files using the custom MIME type custom/mime.type, Drive will not try to read and index the content of these files, and your users will not be able to find them by searching for something that’s inside them.
To have Google Drive index the content of such files you have to use one of the following two options available when uploading files through the Google Drive API.
We recently added a way for you to indicate that the file you are uploading uses a readable text format. If your file’s data format is text based, for instance XML or JSON, you can simply set the useContentAsIndexableText URL parameter to true when uploading the file’s content to Drive. When this flag is set, Google Drive will try to read the content of the file as text and index it.
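As a minimal sketch of this flag in use, assuming the Advanced Drive service (Drive API v2) is enabled and that its optional-arguments object maps to URL parameters, an upload might look like this (the payload and MIME type are illustrative):

function uploadIndexableCustomFile() {
  var blob = Utilities.newBlob(
      JSON.stringify({customer: 'Jane Doe'}),  // illustrative text-based payload
      'custom/mime.type',
      'customers.json');
  var file = Drive.Files.insert(
      {title: 'customers.json'},
      blob,
      {useContentAsIndexableText: true});  // ask Drive to index the text content
  Logger.log('Uploaded file id: ' + file.id);
}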
A more flexible approach is to set the indexableText attribute on the file’s metadata. This attribute is hidden and write-only, and its value will be indexed for search. It is very useful if you are using a shortcut file, in which case there is no content uploaded to Google Drive, or if you are using a non-text or binary file format which Google Drive won’t be able to read.
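A minimal sketch of setting this attribute on an existing file, again assuming the Advanced Drive service (Drive API v2), might look like this; the search terms are of course illustrative:

function setIndexableText(fileId) {
  Drive.Files.patch(
      {indexableText: {text: 'customer Jane Doe invoice 2013'}},  // illustrative terms
      fileId);
}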
Have a look at our Google Drive API references or watch our latest Google Developer Live video about the topic to learn more.
What does the new Google+ Sign-In mean for your Drive app, and why should you use it?
All APIs can be authorized using the “Sign in with Google” button, including the Drive API. To authorize additional scopes, just pass them in the markup for the “Sign in with Google” button like we’ve done in this example.
<span class="g-signin" data-scope="https://www.googleapis.com/auth/drive.file">
The “Sign in with Google” button can cater to whatever kind of application you create: web, client, or mobile. Now you can choose the authorization flow you like and get a token using the OAuth 2.0 client-side flow or server flow. There are loads of features, and the button is highly customizable.
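For example, assuming the button markup also declares a data-callback="onSignIn" handler, a client-side callback might use the resulting token with the JavaScript client library along these lines:

function onSignIn(authResult) {
  if (authResult && authResult['access_token']) {
    // The client library now holds the token, so Drive API calls can be made directly.
    gapi.client.load('drive', 'v2', function() {
      gapi.client.drive.files.list().execute(function(resp) {
        console.log('Found ' + resp.items.length + ' files');
      });
    });
  } else if (authResult && authResult['error']) {
    console.log('Sign-in error: ' + authResult['error']);
  }
}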
I’ve saved my favorite feature until the end: when the user authorizes an application on the web, the mobile version of the app can be installed over the air onto their mobile device. Just add your Android package name when you create the button like in this second example, and your app will be automagically installed.
<span class="g-signin" data-apppackagename="org.aliafshar.android.driveapp">
I know many of your Drive apps have mobile and web components, so this should be really useful for you. This helps you provide your users with a beautiful and seamless experience on all of their devices.
All in all, we think you’ll find these features useful, and we recommend Google+ Sign-In as the preferred way to authorize a user with the Google Drive API from inside a user interface. Check out how to get started with Google+ Sign-In in the language of your choice.
Editor’s Note: Guest author John Gale is a Solutions Developer at Appogee, a Google Cloud Service Partner. — Arun Nagarajan
Ever since we launched Appogee Leave — the first tool in the Google Apps Marketplace for tracking employees’ absences and time off — customers have been asking, “Can you support my native language?”
Our partners offered to help crowdsource the translation, but it was a challenge to know where to begin. We started by identifying a few needs:
With just a couple days’ effort in Google Apps Script, we created a complete application for crowd-sourced localization that handles each of those requirements. You can get a glimpse of the system in the screenshot below.
Source: Appogee
Let’s take a look at a few specific Apps Script tricks we used to make the whole thing work.
Like many Apps Script users, we store almost all of the data for our translation system in Google Sheets, including both the list of English terms we want to translate and users’ translations.
During testing, we found that if two users submitted translations at the same time, the spreadsheet wrote both sets of changes to the same place, causing us to lose one user’s updates. To solve this, we use Apps Script’s semaphore-based Lock Service. In the code below, a public lock ensures that a user has temporary exclusive use of the spreadsheet so that their correction is added even if another user also submits a correction.
function submit(e) {
  /* get the fields from the UI callback */
  var incorrect = e.parameter.foreignWordIncorrectTxt;
  var correct = e.parameter.foreignWordCorrectTxt;
  var reason = e.parameter.reasonTxt;
  var lang = e.parameter.hiddenLang;

  /* validate the input; return the user a message if invalid */

  /* open the spreadsheet */
  var active_user_email = UserProperties.getProperty('user_email') || "";
  var master_spreadsheet = SpreadsheetApp.openById(MASTER_SPREADSHEET_KEY);
  var correction_sheet = master_spreadsheet.getSheetByName('Corrections');

  /* get a lock and update the spreadsheet */
  var lock = LockService.getPublicLock();
  lock.waitLock(30000);
  correction_sheet.appendRow([lang, incorrect, correct, reason, active_user_email]);
  SpreadsheetApp.flush();
  lock.releaseLock();

  /* reset the UI */
  return reset();
}
You’ll note that this code opens the spreadsheet before obtaining a lock. At this point, we are only reading, not writing, and thus do not yet require a lock. We then tell Apps Script we are prepared to wait up to 30 seconds for our turn to lock the worksheet. On the rare occasion that a lock is not available within 30 seconds (usually because somebody else has an exclusive lock), the code throws an exception and stops execution.
Once we have acquired the lock, we quickly write the correction to the spreadsheet — including a call to SpreadsheetApp.flush() to ensure the data is written immediately — and release the lock.
Because the translations are stored in a spreadsheet along with information about who provided them, it’s easy to recognize our top contributors through a leaderboard. The leaderboard data is a good candidate for caching because it’s shown to a large number of people, but only changes when we receive new updates from top-ranking users.
Like the Lock Service described earlier, the Cache Service provides both public and private variants. The public cache is useful for storing data that should be available to all users, such as the leaderboard. The private cache is more appropriate for storing information about a user, such as the translations they have submitted so far.
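For example, a small sketch of the private cache (with an illustrative key) might remember the language a user last worked on, so we don't have to hit the spreadsheet again:

function rememberLanguage(lang) {
  // Keep the value for an hour; each user gets their own private copy.
  CacheService.getPrivateCache().put('last_language', lang, 3600);
}

function recallLanguage() {
  // Returns null if the value expired or was never stored.
  return CacheService.getPrivateCache().get('last_language');
}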
Since the Apps Script cache can only store strings, complex objects must first be converted. Lucky for us, Apps Script provides JSON utilities that make this conversion easy, as shown in this example:
function getBoardData() {
  var cache = CacheService.getPublicCache();
  var leaderboard_data = cache.get('leaderboard_data');
  if (leaderboard_data == null) {
    leaderboard_data = getTopTen();
    cache.put('leaderboard_data', Utilities.jsonStringify(leaderboard_data), 3600);
  } else {
    leaderboard_data = Utilities.jsonParse(leaderboard_data);
  }
  return leaderboard_data;
}
Our hope is that the leaderboard will encourage users to provide more translations by introducing some friendly competition.
Thanks to Google Apps Script and the techniques shown above, we built a powerful crowdsourcing translation system without unnecessary complexity or development effort. If you’d like to help translate Appogee Leave, we’d love to have your contribution.
Sometimes you just want to sit uninterrupted at your keyboard, bashing out a clever Apps Script to automate your life and your work … but sometimes you want to see how Google experts approach the tough problems. Sometimes you want to draw on other Apps Scripters for inspiration or help.
That’s why the Apps Script team — and many other developer-focused teams at Google — record Google Developers Live episodes in which we highlight a specific topic and drill down to discuss it in detail.
We also hold regular livestreamed office hours via Google+ Hangouts, which we post on YouTube afterwards. In these office hours, we discuss recent releases and give in-depth tutorials on topics interesting to Apps Script users.
Now that 2013’s GDLs and office hours are underway, let’s recap six topics we discussed in GDL segments over the last few months.
Triggers are an incredibly powerful part of Apps Script that allow developers to run code non-interactively. In this video, I talk about ways to schedule code via the GUI as well as programmatically, and briefly touch on intermediate topics such as common patterns and pitfalls when working with triggers.
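As a quick taste of the programmatic side, a time-driven trigger can be created in a few lines; the function name below is just a placeholder:

function installHourlyTrigger() {
  // Runs the (placeholder) function myFunction roughly once an hour.
  ScriptApp.newTrigger('myFunction')
      .timeBased()
      .everyHours(1)
      .create();
}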
Charts are a great way to visualize data. In this next video, Kalyan Reddy starts with a few slides about Apps Script’s Charts Service, then works his way into code samples for an application that pulls data from the StackOverflow API in order to create an online dashboard that displays contributions from top developers. If you want to follow along, Kalyan’s code samples are available on Github.
BigQuery is a Google service that allows developers to analyze massive datasets very quickly in the cloud. In this video, Michael Manoochehri from the BigQuery team joins us to talk about how to use Apps Script to automatically export aggregate BigQuery data into Google Sheets to make it easier to share. This show dovetails nicely with Kalyan’s video about charts (above), in which you’ll learn how to quickly wire up visualizations for the exported data.
And what developer doesn’t love Google Analytics? Although Analytics has built-in mechanisms to export data from the UI, it also provides an API for automated data retrieval. Nick Mihailovski from the Google Analytics team joins us to talk about the reasons why people might want to do this, and to demonstrate a toolkit that makes it easy to work with Google Analytics data within Google Sheets.
Many Google Apps users are also Salesforce users. In this show, Arun Nagarajan explains how to integrate Google Apps with Salesforce via Apps Script, and shows off a few code samples that demonstrate moving data between Salesforce and Google Apps in either direction. Make sure to grab a copy of Arun’s code samples on Github.
Need to build a robodialer or otherwise automate voice calls? Twilio provides an API for doing just that. Arun and Eric Koleda take us through some of the cool possibilities for integrating Twilio’s API with Google Apps. We had a lot of fun setting up the studio for this one, and it’s one of the most fun to watch. Here’s the code on Github.
Of course, if you want to hear our tricks and tips as soon as possible, you should watch Google Developers Live, well, live — so check out the calendar of upcoming episodes for Apps Script and Drive. If you have any ideas for further segments you’d like to see, leave a suggestion in the comments below! We’d love to hear your feedback.
Cheers!
We're looking for a small set of developers that are committed to building apps for Google Drive to join our Google Drive SDK early access program. Participants will get early access to upcoming features and the opportunity to shape the direction of the SDK. This is an ongoing program covering multiple aspects of the API, and there are two new features that we're ready to share with developers.
Some of you might have already heard about the upcoming Google Drive Realtime API at Google I/O 2012. The Google Drive Realtime API will allow you to use the technology that powers the realtime collaboration features of Google products such as Google Docs and Google Sheets in your own application. It will handle all aspects of data transmission, storage, and conflict resolution when multiple users are editing.
We are looking for trusted testers for what will be a short and intense pre-release phase of the Drive Realtime API. Good candidates will be:
We also told developers about an upcoming push notifications system at Google I/O 2012. Push notifications will allow you to get near-instant notifications when files are modified in Google Drive. In the past, you would typically have had to poll the Drive API frequently to check whether files had been modified; push notifications make this far more efficient.
Please fill out our signup form to tell us more about your use case and we’ll contact you shortly.
Way back in the dawn of time, before I joined Google — OK, fine, two months ago — I was a video-game designer. Occasionally, I had to come up with names for people or places. And I'm no good at naming things.
So once, instead of manually naming hundreds of towns in a (fictitious) foreign country, I weaseled my way out of creativity by writing a ridiculous set of custom spreadsheet functions. My spreadsheet analyzed a list of real placenames from a similar country, then spit out plausible fake names with the same lexical structure.
It worked — but I was pushing spreadsheet functions so far that the “code” (if you can call it that) became difficult to maintain. At the same time, the reason I used a spreadsheet in the first place was so I could lean on the analytical power of pivot tables.
That’s what made Google Apps Script perfect for revamping the project. With Apps Script, I can still use pivot tables, then do the heavy lifting in JavaScript and package everything up as a tidy web app. I call it Name Generator … because I’m terrible at naming software, too.
Now, before you say, “There’s no way I’d call my daughter Harliance,” remember that the goal wasn’t to produce real names. The goal was to produce names that were good enough for a video game. Perhaps Harliance is a cyborg woman of negotiable virtue in dystopian future-America? You should probably pick your daughter’s name the old-fashioned way.
So let’s look at a few of the techniques that NameGen uses.
1. We start out in Google Sheets, first dropping a list of real names into column A, then slicing it into overlapping three-letter segments using the formula =MID($A2,COLUMN(B2)-1,3) (that’s the version of the formula you’d use in cell B2; from there, just copy and paste the formula across the rest of the sheet and the cell references will update accordingly). Here’s a sample of one of the spreadsheets so you can see how the data is set up.
2. We then create a pivot table for each column in that first sheet, just summarizing the column by COUNTA (the number of times each segment occurs). For example, since Lakisha, Nakia, and Nakisha (from our list of real names) share “aki” as letters 2 through 4, the pivot table for Segment 2 shows “aki: 3.”
The plan is that NameGen will randomly pick one of the starting three-letter segments, then look at the last two letters of its selection so that it can find an overlapping segment from the next column. The script then uses the pivot-table statistics to weight its selections toward more common segments. It continues until a segment ends in a blank. This diagram shows how it might build the name Calina:
3. This is where Apps Script takes over. Just once per source list of names, we run the utility function below (shown slightly simplified) to convert the spreadsheet data to a more useful format and store it in ScriptDb. The script can then pull the data from ScriptDb in about 0.5s, versus about 5s to read directly from the spreadsheet. Note that we’re using Script Properties to store the spreadsheet ID rather than cluttering up the code with extra variables.
function updateDb() {
  var language = 'americanFemale';

  // Look up the spreadsheet ID in Script Properties, then grab its sheets.
  var ssId = ScriptProperties.getProperty(language);
  var sheets = SpreadsheetApp.openById(ssId).getSheets();
  var dictSize = sheets[0].getLastRow() - 1;
  var segment = {};

  // Transform each sheet into the segment object we want later.
  for (var i = 0; i < sheets.length; i++) {
    // Retrieve the list of real names (first loop) or a pivot table.
    if (i === 0) {
      segment.data = sheets[0].getRange('A:A').getValues();
    } else {
      segment.data = sheets[i].getDataRange().getValues();
    }

    // Store other properties so we can retrieve the right record later.
    segment.index = i;
    segment.language = language;
    segment.size = dictSize;

    // Save the object as a ScriptDb record, then start the loop again.
    ScriptDb.getMyDb().save(segment);
  }
}
4. Now, every time the app runs, it jumps straight to the generateNames() function shown below (again slightly simplified). After it queries the database, it’s straight JavaScript — but one Apps Script–specific trick you’ll notice is that we assemble the data into an array using segments[current.index] = current.data rather than segments.push(current.data). Because ScriptDb returns the records in an unpredictable order, we gave our objects an index property to store the correct order.
function generateNames(language, numNames) {
  // Query the database to find all results for this language.
  var results = ScriptDb.getMyDb().query({language: language});
  var current = {};
  var segments = [];

  // Assemble the DB records into an array so we can pass it around.
  while (results.hasNext()) {
    current = results.next();
    segments[current.index] = current.data;
  }

  var names = [];
  var segment = '';
  for (var i = 0; i < numNames; i++) {
    var name = '';

    // For each requested name, pick one segment, making
    // sure it overlaps with the previous two letters.
    for (var j = 1; j < segments.length; j++) {
      segment = randomSegment(segments[j], name);
      name = name.slice(0, name.length - 2);
      name += segment;

      // If the segment wasn't full length (end of a name), done!
      if (segment.length < 3) {
        break;
      }
    }
    names.push(name);
  }
  return names;
}
I haven’t explained the randomSegment() function, but you can probably guess at how it works based on the description above. Still, if you want to dig in further, you can view the full source code here.
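For the curious, here is a rough sketch of how a weighted pick like that could work; it is not the actual randomSegment() implementation, and it assumes each pivot-table sheet yields rows of [segment, count] after a header row:

function randomSegmentSketch(pivotData, name) {
  var prefix = name.slice(-2);  // the letters the next segment must start with
  var candidates = [];
  var totalWeight = 0;
  for (var i = 1; i < pivotData.length; i++) {  // row 0 is the pivot-table header
    var seg = String(pivotData[i][0]);
    var count = Number(pivotData[i][1]);
    if (!prefix || seg.indexOf(prefix) === 0) {
      candidates.push([seg, count]);
      totalWeight += count;
    }
  }
  // Weighted random choice: more common segments are picked more often.
  var pick = Math.random() * totalWeight;
  for (var j = 0; j < candidates.length; j++) {
    pick -= candidates[j][1];
    if (pick <= 0) {
      return candidates[j][0];
    }
  }
  return '';
}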
5. The only remaining step is to expose the results to the world through a web app. Apps Script provides several ways of doing this; I used Html Service, but didn’t require any of the advanced features. Here’s how little HTML we need to turn NameGen into a functional app:
<html>
  <body>
    <script>
      function sendRequest() {
        var language = document.getElementById('language').value;
        var numNames = document.getElementById('numNames').value;
        google.script.run.withSuccessHandler(updateField).
            generateNames(language, numNames);
      }

      function updateField(names) {
        var output = "";
        for (var i = 0; i < names.length; i++) {
          output += (names[i] + '<br/>');
        }
        document.getElementById('resultsBox').innerHTML = output;
      }
    </script>

    <select id="language">
      <option value="americanFemale">American Females</option>
      <option value="americanMale">American Males</option>
      <option value="american">American Towns</option>
      <option value="british">British Towns</option>
      <option value="french">French Towns</option>
      <option value="irish">Irish Towns</option>
      <option value="italian">Italian Towns</option>
      <option value="spanish">Spanish Towns</option>
    </select>

    <select id="numNames">
      <option value=1>1</option>
      <option value=10 selected>10</option>
      <option value=100>100</option>
      <option value=1000>1000</option>
    </select>

    <input id="generateButton" type="button" value="Generate" onclick="sendRequest()">

    <div id="resultsBox"></div>
  </body>
</html>
And presto chango, you have a web app that harnesses the tremendous power of Google’s infrastructure … to produce names that could only ever exist in a parallel universe. It’s like Adriano Celentano’s "Prisencolinensinainciusol" — a convincing rendition of an American pop song, unless you actually speak English, in which case it’s total gibberish.
Editor’s Note: Guest author Ronald Dahrs runs Forscale, an IT and project management company based in the Netherlands. -- Arun Nagarajan
Google Apps is well-suited for project management because it’s a cloud-based productivity suite that helps you and your team connect and get work done from anywhere on any device. Using Google Apps Script, we can push the capabilities even further to create advanced scheduling and management tools. A common tool in project management circles is the Gantt chart: a schedule of the tasks in the project and how they relate to each other over time.
The spreadsheet that generated that Gantt chart is available in the template gallery today. In this post, we’ll explore the basics of how the template works and explain a few of the Apps Script techniques that transform Google Sheets into such a powerful project management tool.
When you open the template, you’ll see stubs for each type of task, but the screenshot above shows an example of a slightly larger project plan — in fact, the same data used to generate the Gantt chart below.
The template’s sophisticated formulas rely on the structure of the table to enable schedule awareness and task dependencies. However, we still ensure that the user can rename, rearrange, or add columns by using a hidden header to identify each column. This diagram demonstrates the spreadsheet’s structure:
In Apps Script, we use the spreadsheet’s onEdit() event to monitor user interaction with the schedule portion of the spreadsheet and update the Gantt chart accordingly. The powerful JavaScript language does all the required summary calculations based on the provided dates and completion percentages.
We have also used Apps Script’s addMenu() method to build a custom menu that calls row-oriented functions like indenting tasks to get a so-called Work Breakdown Structure with summary tasks. If you just want to see an overview, the custom menu allows you to collapse tasks, which we accomplished through the hideRows() method.
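A bare-bones version of such a menu, with illustrative labels and handler names rather than the template's own, looks like this:

function onOpen() {
  // The handler functions named here would be defined elsewhere in the script.
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  ss.addMenu('Gantt', [
    {name: 'Indent task', functionName: 'indentTask'},
    {name: 'Collapse summary', functionName: 'collapseSummary'},
    {name: 'Refresh', functionName: 'refreshSchedule'}
  ]);
}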
For changes that do not trigger an onEdit() event (for example, clearing a row), the user can use the menu’s Refresh command to recalculate the schedule.
The template stores user preferences as Script Properties and offers an interactive user interface built in UiApp to change those settings:
Finally, to render the Gantt chart, we use cell background colors to visually group and highlight the appropriate cells. This creates the effect of a continuous calendar with clearly visible start and finish dates for each task.
var ganttColors = ganttRange.getBackgroundColors();
var ganttValues = ganttRange.getValues();

// update Gantt colors and values

ganttRange.setBackgroundColors(ganttColors).setValues(ganttValues);
In just a few hours at the recent Apps Script hackathon in Los Angeles, we saw attendees build everything from website monitoring to room booking to financial tracking apps. For those of you who couldn’t make it, attendees were given a brief introduction to Apps Script and a few hours to let their imaginations run wild. Apps Script’s ease of use enabled them to quickly create fully functioning, useful apps. Here are a few interesting things we saw from local developers:
These days, small businesses are quickly increasing their online presence; a website outage during a critical period can be devastating to a mom-and-pop shop. Eduardo realized that existing network-monitoring solutions require a significant investment in technology and infrastructure that is beyond the reach of many small-business users. Using Apps Script’s UrlFetch and Spreadsheet services, he was able to quickly create a website monitor packaged in an easy-to-use spreadsheet that, given a list of URLs, tries to fetch each one and records the latency and content length.
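A rough sketch of that idea (rather than Eduardo's actual code), assuming the URLs live in column A starting at row 2 and the results go into columns B to D, might look like this:

function checkWebsites() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheets()[0];
  var urls = sheet.getRange('A2:A').getValues();
  for (var i = 0; i < urls.length; i++) {
    var url = urls[i][0];
    if (!url) continue;
    var start = new Date().getTime();
    var response = UrlFetchApp.fetch(url, {muteHttpExceptions: true});
    var latency = new Date().getTime() - start;
    // Record the check time, latency (ms) and content length next to the URL.
    sheet.getRange(i + 2, 2, 1, 3).setValues(
        [[new Date(), latency, response.getContentText().length]]);
  }
}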
The code is available here.
Get A Room allows users to book meeting rooms by taking advantage of Apps Script’s tight integration with Google Calendar and events. The app, built entirely on a Chromebook utilizing Apps Script's cloud friendliness, displays building floorplans with buttons that users can click to book a room. In response to a booking request, the app fetches the room’s calendar and creates a new event. It also updates the UI by replacing the floor plan with a modified image to show the newly booked room. Here is a snippet of the booking code:
// Click handler for the interaction to book a room
function bookBoardroomHandler(e) {
  var app = UiApp.getActiveApplication();

  // Perform the calendar-booking operations
  bookBoardroom();

  // Swap the images as visual confirmation
  app.remove(app.getElementById('imageDefaultLayout'));
  app.add(app.getElementById('imageBoardroom'));
  app.close();
  return app;
}

function bookBoardroom(e) {
  var calendarBoardroom = CalendarApp.getCalendarsByName("Boardroom");
  calendarBoardroom[0].createEventFromDescription("Boardroom Meeting");
}
Matt decided to build a web service that provides information about publicly traded stocks. The app’s backend consists of a spreadsheet with stock symbols of interest. Using Apps Script’s FinanceService, Matt loops through the spreadsheet on a timed trigger and appends the latest stock information for each symbol. He then uses HtmlService to create a web app that outputs an XML page of the stock info based on a symbol parameter in the URL. Here’s a picture of his script in action:
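A rough sketch of the shape such a web app could take (not Matt's actual code), assuming the stock data sits in a spreadsheet with symbols in the first column and prices in the second, might be:

function doGet(e) {
  var symbol = e.parameter.symbol;
  // 'SPREADSHEET_ID' is a placeholder for the backing spreadsheet's ID.
  var data = SpreadsheetApp.openById('SPREADSHEET_ID')
      .getSheets()[0].getDataRange().getValues();
  var xml = '<stock symbol="' + symbol + '">';
  for (var i = 1; i < data.length; i++) {
    if (data[i][0] === symbol) {
      xml += '<price>' + data[i][1] + '</price>';
    }
  }
  xml += '</stock>';
  return HtmlService.createHtmlOutput(xml);
}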
These are just some examples of how quickly useful apps can be created with Apps Script. Thanks to all the attendees for coming out! If you couldn’t make it to the hackathon, check out these tutorials to see how you can get started making great apps.
Editor’s Note: Guest author David Fothergill works at QueryClick, a search-engine marketing company based in the UK. — Eric Koleda
Working in Paid Search account management, I've often found tremendous wins from making reports more useful and efficient. Refining your analytics allows you to streamline your workflow, leaving more time for strategic and proactive thinking — and that's what we're paid for, not endless number-crunching.
The integration between Google Analytics and Apps Script has opened up lots of opportunities for me to make life easier through automation. In a recent blog post on my agency's website, I outlined how an automated report can quickly “heatmap” conversion rate by time and day. The aim of the report is to provide actionable analysis to inform decisions on day-part bidding and budget strategies.
In that post, I introduce the concepts and provide the scripts, sheet, and instructions to allow anyone to generate the reports by hooking the scripts up to their own account. Once the initial sheet has been created, the script only requires the user to provide a Google Analytics profile number and a goal for which they want to generate heatmaps. In this post, we’ll break down the code a bit.
This is a slight amendment to the code that queries the Core Reporting API. Apart from customising the optArgs dimensions to use day and hour stats, I have modified it to use goal data from the active spreadsheet, because not all users will want to measure the same goals:
function getReportDataForProfile(ProfileId, goalNumber) {
  // take goal chosen on spreadsheet and select correct metric
  var tableId = 'ga:' + ProfileId;
  if (goalNumber === 'eCommerce Trans.') {
    var goalId = 'ga:Transactions';
  } else {
    var goalId = 'ga:goal' + goalNumber + 'Completions';
  }

  // Continue as per example in google documentation ...
}
Once we’ve brought the Google Analytics data into the spreadsheet in raw form, we use a pivot table to plot the hour of the day against the day of the week.
For this type of report, I'd like to use conditional formatting to heatmap the data — but conditional formatting in Google Sheets is based on fixed values, whereas we want the thresholds to change based on cell values. However, thanks to the flexibility of scripts, I was able to achieve dynamic conditional formatting.
The script needs to know the boundaries of our data, so I’ve set up several cells that display the maximums, minimums, and so forth. Once these were in place, the next step was to create a function that loops through the data and calculates the desired background color for each cell:
function formatting() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Heatmap');
  var range = sheet.getRange('B2:H25');
  range.setBackgroundColor('white');
  var values = range.getValues();

  // get boundaries values for conditional formatting
  var boundaries = sheet.getRange('B30:B35').getValues();

  // get range to 'heatmap'
  var backgroundColours = range.getBackgroundColors();

  for (var i = 0; i < values.length; i++) {
    for (var j = 0; j < values[i].length; j++) {
      // Over 90%
      if (values[i][j] > boundaries[1][0]) {
        backgroundColours[i][j] = '#f8696b';
      }
      // Between 80% and 90%
      if (values[i][j] < boundaries[1][0] && values[i][j] >= boundaries[2][0]) {
        backgroundColours[i][j] = '#fa9a9c';
      }
      // Between 60% and 80%
      if (values[i][j] < boundaries[2][0] && values[i][j] >= boundaries[3][0]) {
        backgroundColours[i][j] = '#fbbec1';
      }
      // Between 40% and 60%
      if (values[i][j] < boundaries[3][0] && values[i][j] >= boundaries[4][0]) {
        backgroundColours[i][j] = '#fcdde0';
      }
      // Between 20% and 40%
      if (values[i][j] < boundaries[4][0] && values[i][j] >= boundaries[5][0]) {
        backgroundColours[i][j] = '#ebf0f9';
      }
      // Less than 20%
      if (values[i][j] < boundaries[5][0]) {
        backgroundColours[i][j] = '#dce5f3';
      }
    }
  }

  // set background colors as arranged above
  range.setBackgroundColors(backgroundColours);
}
Calling the functions based on the profile ID and goal number specified in the main sheet gives us a quick, actionable report that can easily be adapted for use across multiple accounts.
function generateHeatmap() {
  try {
    var profileId = SpreadsheetApp.getActiveSpreadsheet()
        .getSheetByName('Heatmap').getRange(4, 10).getValue();
    var goalNumber = SpreadsheetApp.getActiveSpreadsheet()
        .getSheetByName('Heatmap').getRange(7, 10).getValue();
    if (profileId === '') {
      Browser.msgBox('Please enter a valid Profile ID');
    } else {
      var results = getReportDataForProfile(profileId, goalNumber);
      outputToSpreadsheet(results);
      formatting();
    }
  } catch (error) {
    Browser.msgBox(error.message);
  }
}
This was my first foray into the slick integration between the Core Reporting API and spreadsheets, but it has proven a valuable test case for how effective it will be to roll this method of reporting into our daily process of managing accounts.
We have now started the next steps, which involve building out “client dashboards” that will give account managers access to useful reports at the press of a button. This moves us toward the goal of minimizing the time spent gathering and collating data, freeing it up to add further value to client projects.
Editor's Note: If you're interested in further scripting your AdWords accounts, take a look at AdWords Scripts, a version of Apps Script that's embedded right into the AdWords interface.
When we launched version 1.0 of Google Play services to all Android 2.2+ devices worldwide in September, one of our main goals was to provide developers with better tools for working with OAuth 2.0 tokens in their Android apps.
Thanks to the new components, Android apps can get access to Google APIs with an easy-to-use authentication flow and can provide a consistent experience to both their users and developers. We recently decided to test that statement by writing a small camera app that automatically uploads photos you take to your Google Drive account.
We documented all the steps required to go from zero to hero in a quickstart guide. By following the step-by-step instructions in the guide, you’ll have a working Android app that uses Google Play services to perform authorization and the Google Drive API to upload files to Drive.
Do you want to learn how to build this app but prefer to watch a video tutorial instead of reading the documentation? We’ve got you covered! Check out the recording of the Google Developers Live session that covers the setup and running of the quickstart app.
If you’re building an Android app that integrates with Drive and have questions for us, please don’t hesitate to let us know on Stack Overflow.
Setting up a new domain name and configuring it to work with Google Apps email is about to get a lot easier. We’re working with the top domain registrars worldwide to reduce the number of manual steps in this portion of the signup process. We made improvements earlier this year to let users verify their domain more easily with GoDaddy and eNom. Now, with a new API available to any registrar, users can verify and transfer their email in 3 easy steps, down from 10, and they are no longer required to leave the Google Apps signup flow to complete domain registration.
Customers can experience the new, easier process today with TransIP and Hover domains, as these registrars have completed their integrations with the Google Apps signup flow API. More than 10 additional registrars, including some of the largest, are actively building against the API and are currently expected to offer the simpler setup over the next few months:
If you are a registrar interested in implementing this RESTful API to automate the DNS setup process, please apply here.
Editor’s Note: Guest author Ashraf Chohan works at the Government Digital Service (GDS), part of the UK Cabinet Office. -- Arun Nagarajan
Recently, when we were preparing the launch of GOV.UK, my team was tasked with creating a series of high-level metrics reports which could be quickly compiled and presented to managers without technical or analytical backgrounds. These reports would be sent daily to ministers and senior civil servants of several government departments, with the data customised for each department.
We decided to use Adobe InDesign to manage the visual appearance of the reports. InDesign’s data-merge functionality, which can automatically import external data into the layout, made it easy to create custom departmental reports. The challenge was to automate the data collection using the Google Analytics API, then organize the data in an appropriate format for InDesign’s importer.
In a previous post on this blog, Nick Mihailovski introduced a tool which allows automation of Google Analytics Reporting using Google Apps Script. This seemed an ideal solution because the team only had basic developer knowledge, much of the data we needed was not accessible from the Google Analytics UI, and some of the data required specific formatting prior to being exported.
We started by building the core reports in a Google spreadsheet that pulls in all of the required raw data. Because we wanted to create daily reports, the start and end dates for our queries referenced a cell which defaulted to yesterday’s date [=(TODAY())-1].
These queries were dynamically fed into the Google Analytics API through Apps Script:
// All variables read from each of the “query” cells
var optArgs = {
  'dimensions': dimensions,
  'sort': sort,
  'segment': segment,
  'filters': filters,
  'start-index': '1',
  'max-results': '250'
};

// Make a request to the API.
var results = Analytics.Data.Ga.get(
    tableId,    // Table id (format ga:xxxxxx).
    startDate,  // Start-date (format yyyy-MM-dd).
    endDate,    // End-date (format yyyy-MM-dd).
    metrics,    // Comma-separated list of metrics.
    optArgs);
Next, we created additional worksheets that referenced the raw data so that we could apply the first stage of formatting. This is where storing the data in a spreadsheet really helps, as data formatting is not really possible in the Google Analytics UI.
For example, the final report had a 47-character limit for page titles, so we restricted the cells in the spreadsheet to 44 characters and automatically truncated long URLs by appending “...”.
Once the initial formatting was complete, we used formulas to copy the data into a summary sheet specially laid out so it could be exported as a CSV file that merges seamlessly into InDesign.
Below is an example of how a report looks on publication. Nearly everything on the page was extracted from the API tool, including the department name and the day number. Because most of the data was automated, it required minimal effort on our part to assemble these reports each morning.
We discovered that an added bonus of pulling data into a Google spreadsheet was that it also allowed us to publish the data to a Google site. This helped us display data to stakeholders without adding lots of users to our Google Analytics account.
The tools let us present Google Analytics data in deeper, more creative ways. That’s really important as we share information with more and more non-technical people, whether they’re inside GDS or beyond.
Editor’s Note: Guest author Romain Vialard works at Revevol, an international service provider dedicated to Google Apps and other Cloud solutions. -- Arun Nagarajan
There are many tools available to help you manage a task list and Google Apps comes with its own simple Tasks app. But sometimes it is more convenient and collaborative to simply manage your task list in a shared spreadsheet. This spreadsheet can be a simple personal task list or a project-management interface that requires team-wide coordination.
Google Spreadsheets come with a set of notification rules that might come in handy. For example, you can be notified each time someone adds a new task to the list or each time the status of a task is updated. Furthermore, it is very easy to add basic reminders through Apps Script with just a few lines of code:
function remindMe() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheets()[0];
  var data = sheet.getDataRange().getValues();
  for (var i = 1; i < data.length; i++) {
    // Assumes the first column holds the task name, the second the assignee's
    // email address and the third the due date.
    if (data[i][2] > new Date()) {
      MailApp.sendEmail(data[i][1], 'Task reminder', 'Reminder for task: ' + data[i][0]);
    }
  }
}
The simple remindMe function performs a standard JavaScript-based date comparison on every row and sends an email for tasks that are due. You can then schedule the remindMe function via a programmable trigger based on the settings.
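For example, a trigger that runs remindMe every morning could be created with something like this (the hour is, of course, up to you):

function scheduleDailyReminder() {
  // Run remindMe once a day, around 8am in the script's time zone.
  ScriptApp.newTrigger('remindMe')
      .timeBased()
      .everyDays(1)
      .atHour(8)
      .create();
}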
This script is already available in the Script Gallery today. Try it out for yourself!
Once you have installed the script, you get a new menu option in the spreadsheet that opens a simple user interface to set the options you want. As a developer, you can extend this interface further to provide more options and capabilities.
Would you like to programmatically publish some web content? Or let your users do so in the context of your Drive app?
This is possible now with Google Drive. Your app can now insert static web assets in a publicly shared folder, and then serve those files directly to users via file names in a relative path. Google Drive site publishing supports JavaScript, so it's even possible to run a JavaScript Drive app directly from Drive.
Publishing from Drive is a simple, two-step dance: create a public folder and use a link to the root folder — the webViewLink — to publish its contents. You can refer to the Drive SDK documentation for full detail on how to work with public folders and content links, but basically the requirements are:
It’s important to emphasize the added simplicity provided by a webViewLink: using this link as a starting point, you can extend the path to any web asset in any subfolder without worrying about retrieving the exact file ID. What used to look like 0B2Gk2F2ImIBiaUkwY3JNX1JMaTg is now a recognizably cute path such as images/kittens.jpg. For the root path to the folder, we’ll display a list of files in the folder, or, if you have an index.html file in your folder, we’ll load that as expected.
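A minimal sketch of retrieving that link, assuming the Advanced Drive service (Drive API v2) is enabled and the folder is shared publicly, might look like this:

function logPublicLink(folderId) {
  var folder = Drive.Files.get(folderId);
  // Relative paths such as images/kittens.jpg can be appended to this link.
  Logger.log(folder.webViewLink);
}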
The ability to publish files this way opens lots of possibilities for Drive app developers. Writing a blogging tool, creating a process to publish updates to shared docs, outputting images in a folder in a gallery page — any Drive use case that involves presenting a file in a browser can benefit from site publishing. We look forward to seeing what you create, and we’re happy to answer your questions on Stack Overflow.