Google I/O Rundown

Here’s a quick rundown from a tech perspective of what Google announced last week at its I/O conference:

Google bases most of its service model on artificial intelligence (AI). At this year’s I/O, Google announced some really cool new machine-learning accelerators called Cloud Tensor Processing Units (TPUs): heavy-duty number-crunching boards that each deliver up to 180 teraflops, and that can be ganged into “TPU pod” clusters reaching 11.5 petaflops of AI power. These TPUs are coming to Google Compute Engine so that everyone can take advantage of their deep-learning power. Sundar Pichai, Google’s CEO, claimed that their image-recognition algorithms are now more accurate than humans.

The hardest part of machine learning isn’t writing the algorithms; it’s tuning them to fit your data set. Any particular configuration of a neural network might work well with one data set but terribly with another. Google’s TPUs have been designed to use neural nets to train other neural nets, or deep-learning “inception” in Pichai’s words. These systems are probably how Google has accomplished such state-of-the-art image recognition, and they will contribute to many more advances in artificial intelligence, deep learning, and machine learning in the near future.

Other rad additions to the lineup:

  • TensorFlow: Google’s TensorFlow ML libraries have snatched dominance in machine learning from Matlab, Octave, and R. Implementing an AI algorithm and training it against your data set is easier than ever before.
  • TensorFlow Lite: Now TensorFlow is available for Android devices! I can’t wait to implement some of the awesome new computer vision and voice recognition technologies in an upcoming project.
  • Google Lens: Now using deep learning and convolutional neural nets to analyze your photos. Google is (by far) best suited to provide this technology. You’ll be able to gather important contextual information about your surroundings simply by pointing your phone’s camera: restaurant reviews, automatic translations, and object recognition that brings you straight to Wikipedia. In a mature form, this technology might be as transformative as getting your first smartphone was.
  • Google Photos: Google is bringing heavier machine learning to your family photos. Suggested Sharing and Shared Libraries are the new Facebook killer. Now you can view your family and friends’ photos, without also having to read their endless polemical soapbox rants! Suggested Sharing identifies who is in your photos and asks if you want to share your photos with them, and Shared Libraries lets families automatically merge their photo collections. How many photos do you have? Between my wife and myself, we have over 120GB of photos and videos — far more than we’ll ever organize, clean up, or even view. I’ve thought for quite a while that artificial intelligence is really the only way for us (all of us) to manage our photo collections in today’s connected age. “Ok Google, reconstruct my memories.”

Home vs. Alexa Showdown

Things are really heating up in the battle between Google Home and Amazon Alexa. Chromecast screen integration is really cool: Ask Google Home to show you something, and the result pops up directly on your Chromecast. This includes scheduling, photos, YouTube videos, and more. Google is playing catchup with Amazon here, but, honestly, the YouTube integration is a killer feature. Google Home is also adding voice calling, another feature to catch up with Amazon.

It sounds like they’re ready to support voiceprinting on Google Home, so that custom results can be delivered depending on who is speaking. This is especially useful in the context of IoT, where we might like to grant different authentication levels depending on who the speaker is. This is essential voice-assistant technology. Amazon has already announced this feature but has not launched it yet, so perhaps Google has the drop on them this time.

Finally, Google announced Actions on Google, which is a way to add much more complicated business integrations to Google Assistant. These new voice-driven interactions enable efficient transactions between companies and their customers using voice alone. Please note: Google is not providing this cool functionality itself. Rather, it must be added by the development team of each company tying into Google Assistant.

Android O

Android O is the upcoming version of the Android operating system. A developer preview has been available for the bold and the cutting-edge since March 21.

Android O contains many new features. My favorites are:

  • VoIP API: Making it easier to integrate Voice-over-IP into apps is a pretty significant addition. We’ve been using IP-connected phones since 2007, but we’ve been stuck using the traditional voice networks all the same. Third-party VoIP calling apps came onto the scene quickly, but they are cumbersome. Most importantly, today these apps don’t integrate with the phone itself, so when your VoIP app rings, your phone doesn’t answer it properly. We don’t need any networks other than IP networks, do we? No!
  • Google Play Protect: Automatic app scanning, identification and removal of apps with unusual behavior, and Find My Device.
  • Stricter app lifecycle control for battery management
  • Android Go

Android Go is a stripped-down version of Android that will offer much better performance on systems with less than 1GB of RAM. This is a particularly momentous development for IoT developers. Can anyone say Android on a Raspberry Pi? Android Go might enable us to develop attractive and performant applications quickly on inexpensive IoT ICs, connect them to a screen, and ship them with lower costs and shorter lead times. We’ll be watching Android Go closely!


Kotlin

One of the most interesting announcements for developers is that Google has added a new programming language for developing Android applications: Kotlin. Kotlin runs on the JVM, is 100% interoperable with Java, and provides many improvements over Java. Below I list and demonstrate some of my favorite features, though much more is available. Anything other than Java or C++ is, of course, a marked improvement! I like clean, simple code, something that newer languages always do better.

Data Classes

data class User(var name: String, var address: String, var company: Company)

Really beats the heck out of the alternative!
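To see what that single line buys us, here is a quick sketch of the generated members; the `Company` type is a hypothetical stand-in, since the post doesn’t define it:

```kotlin
// Hypothetical Company type, just to make the example self-contained.
data class Company(val name: String)

data class User(var name: String, var address: String, var company: Company)

fun main() {
    val u = User("Ada", "1 Main St", Company("Initech"))

    // equals()/hashCode(), toString(), and copy() all come for free:
    println(u)  // User(name=Ada, address=1 Main St, company=Company(name=Initech))

    val moved = u.copy(address = "2 Elm St")  // copy with one field changed
    println(moved == u)                       // false: structural equality compares all fields
}
```

In Java, each of those members would be hand-written (or IDE-generated) boilerplate.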

Monkey Patching (Extension Functions)

I’m sure there are implementation-level differences, but Kotlin’s Extension Functions feature closely matches what every dynamic language calls monkey patching: adding functionality to a type outside of its definition. The most likely difference is that monkey patches typically happen at run time, whereas extension functions (like categories, in Objective-C parlance) are resolved at compile time.

fun Point.getDistanceFromOrigin(): Double {
    return Math.sqrt(this.x * this.x + this.y * this.y)
}
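Here is how the extension reads at a call site, in a runnable sketch; the `Point` type itself is a hypothetical stand-in, since the post doesn’t define one:

```kotlin
import kotlin.math.sqrt

// Hypothetical Point type; any class with x and y fields would do.
data class Point(val x: Double, val y: Double)

// The extension bolts getDistanceFromOrigin onto Point from outside its definition.
fun Point.getDistanceFromOrigin(): Double = sqrt(x * x + y * y)

fun main() {
    // Called exactly as if it were a member function:
    println(Point(3.0, 4.0).getDistanceFromOrigin())  // 5.0
}
```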


Lambda Functions

Lambda functions, inline anonymous functions often used for processing collections, were added to Java 8. Kotlin does not miss this bandwagon!

collection.filter { it > 0 }.map { it * it }

Kotlin provides the implicit single-parameter name it, saving us the need to write { value -> value * value }. More details here.
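The same filter-then-square pipeline, written out runnable (note that Kotlin has no ** operator, so squaring goes through map):

```kotlin
fun main() {
    val numbers = listOf(-2, -1, 0, 1, 2, 3)

    // filter keeps the positive values; map squares each survivor.
    val squares = numbers.filter { it > 0 }.map { it * it }

    println(squares)  // [1, 4, 9]
}
```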

Default and named arguments

These are nice.

class Message(val sender: String = "",
              val recipient: String = "",
              val message: String = "Hey buddy!")


val message = Message(message = user.lastMessage)  // sender and recipient fall back to their defaults
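A few legal call sites for that class, sketched out runnable (the message strings are made up for illustration):

```kotlin
class Message(val sender: String = "",
              val recipient: String = "",
              val message: String = "Hey buddy!")

fun main() {
    val a = Message()                                    // every parameter defaulted
    val b = Message(message = "See you at 5")            // skip straight to the third parameter
    val c = Message(recipient = "bob", sender = "alice") // named arguments in any order

    println(a.message)  // Hey buddy!
    println(b.message)  // See you at 5
    println(c.sender)   // alice
}
```

In Java, covering these call sites would take a pile of telescoping constructors or a builder.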

Better immutability

Kotlin uses var for mutable variable definitions and val for final (read-only) variable definitions. Kotlin also makes a distinction between mutable and immutable collections, as Objective-C does.
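A quick sketch of both distinctions:

```kotlin
fun main() {
    val greeting = "hello"   // val: read-only, cannot be reassigned
    var counter = 0          // var: reassignment is fine
    counter += 1
    // greeting = "hi"       // won't compile: "Val cannot be reassigned"

    val readOnly: List<Int> = listOf(1, 2, 3)              // List exposes no add()/remove()
    val mutable: MutableList<Int> = mutableListOf(1, 2, 3) // MutableList does
    mutable.add(4)

    println("$greeting $counter")  // hello 1
    println(mutable)               // [1, 2, 3, 4]
}
```

Note that a val holding a MutableList can’t be re-pointed at another list, but its contents can still change; immutability of the reference and of the collection are separate choices.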

If you’re interested, here is a more in-depth comparison of Kotlin and Java.