Google held its annual developer conference, Google I/O, in 2017. A whole host of products and services were announced, but perhaps the most striking thing (for me) was that, seen as a whole, it was evident that Google had sown the seeds for certain future computing platforms.
Go, Android Go?
Of course, the search giant also made some improvements to Android to make sure that its latest version is as accessible as possible. Android Go was an important and exciting announcement from Google. Fragmentation has long been an issue on Android, and previous efforts from Google to address the problem, such as Android One, have been unsuccessful. Android One's failure was hardly a surprise: Google's heavy-handed approach in dictating the hardware did not sit well with manufacturers, and making stock Android mandatory left them unwilling to adopt the project, since it would prevent them from monetizing devices by pre-installing apps.
Unlike Android One, Android Go takes a lighter approach to making the latest Android as accessible as possible. It neither requires a fixed set of hardware nor makes stock Android mandatory. Android Go is essentially a set of modifications to Android O that ensure it runs well on devices with limited RAM, applied at the manufacturing level itself. Apart from the changes to the operating system, Android Go also brings lightweight Google apps like YouTube Go, and the Play Store on Android Go devices highlights apps specifically developed for low-end devices.
Manufacturers have nothing to lose by adopting Android Go. They can use whatever hardware they want, put custom skins on top of Android, and even pre-install apps on devices to monetize them. However, how effective Android Go ends up being remains to be seen, and Android fragmentation will still be a challenge.
Not Go, Went, Gone…yet!
Two forces work against Android Go. Firstly, manufacturers, especially those at the bottom of the smartphone market, have a vested interest in not shipping new smartphones with the latest version of Android. For many manufacturers, providing the latest version of Android on a smartphone is in itself a reason to charge a premium. If they start loading all their smartphones, right down to the lowest price points, with the latest version of Android via Android Go, they can no longer charge higher prices by capitalizing on the difference between Android versions. Secondly, loading a smartphone with an older version of Android automatically shortens its potential lifespan by months or even years, leading to a faster upgrade cycle.
Android Go is probably the best Google could do to solve Android's fragmentation problem, but the vested interests of smartphone manufacturers surviving on razor-thin margins might not allow it to achieve its full potential. Apart from Android Go, Google also announced Project Treble. Project Treble tries to shorten Android update times by separating the low-level vendor implementation from the Android OS framework, so that OS updates no longer require silicon vendors to rework their code for every release. While Project Treble is encouraging, one must also remember that most of the delay in Android updates is caused by smartphone manufacturers and carriers. Even if OS updates can bypass silicon vendors from now on, manufacturers and carriers still have the final say on when an update reaches you. If it ever does.
Android Go and Project Treble might have only a minor impact on Android's software update problem, but they will still make the latest version of Android more accessible than it has been in the past. The fact of the matter is that the sheer number of parties involved in Android's update process means that, until there is a complete overhaul in which Google controls updates end to end just as Apple does, nothing meaningful can be done to reduce fragmentation.
We are VR… and AR too!
The other major thing to come out of Google I/O has been a strategy of sorts for AR. I am well aware that, at present, most AR and VR devices cater mostly to a niche of power users. However, there is no denying that, moving forward, there will be plenty of well-developed stand-alone AR and VR devices that do not necessarily cost a fortune.
Google has mostly focused on the VR market in recent times, as its Cardboard and Daydream efforts show. At Google I/O 2017, Google even introduced a Daydream headset that does not require a smartphone. However, despite Google's recent interest in VR, AR has long been important to the company. Take Google Glass, for example: the device was definitely indicative of how serious Google was about AR, but design and privacy issues meant that it flopped. Similarly, Google has been one of the largest investors in Magic Leap, which is supposedly working on an incredibly futuristic AR headset.
Hardware-wise, Google seems focused on VR for the most part, but I personally feel that at Google I/O 2017 the company sowed the seeds for success in the AR market even though it is not specifically working on any AR hardware. To understand why, we must first look at what would make an AR headset successful.
The AR success formula
An AR headset would need the following four features to be successful in my opinion:
- Ability to view and process real life objects
- Ability to listen and respond to commands
- A smart virtual assistant
- A back-end that’s able to process lots of data
Let us inspect each of these in some detail.
Ability to view and process real life objects
There is no doubt that if you wear an AR headset, you would want it to be smart enough to automatically focus on whatever you are looking at and provide relevant details. For example, if you are standing in a crowded place and want to search for a particular family member, you would want the AR headset to identify them for you automatically. Similarly, if you are standing in front of a hotel that you have never been to and are unaware of its cuisine, you would want the AR headset to be able to provide that information to you.
But for the AR headset to be smart enough for such things, it would need to tap into a reservoir of data – just as Google crawls the web and indexes everything to make it available to anyone who uses Google Search. I feel Google Photos and Google Lens are the perfect fodder for this.
Google Photos, with close to 500 million users (and growing), has one of the largest repositories of human faces to tap. Google also announced at I/O 2017 that it would start recommending people with whom you could share photos – a feat that is not possible unless Google can figure out which face belongs to whom. Google has said that it recognizes faces every time you email a picture to someone you know through Gmail or other Google properties. In short, Google is already creating a vast database of photos and identifying the faces in them. While this might sound like a privacy nightmare, Facebook has been doing the same for years now. The reason this is exciting is that once you put on an AR headset, you would want it to instantly recognize someone you see and tell you details about that person, like upcoming meetings or an approaching birthday.
Lens was also a great move by Google at I/O 2017. It will not be a standalone app; rather, it is a feature integrated across Google's properties to help recognize the things you or your camera see. With close to 2 billion Android smartphones in the world, there is a good chance that millions of people will end up using Lens every day even if just 1 percent of Android users access the feature. Every time you use Lens to identify a hotel or a monument, it not only draws from Google's database but also trains and refines its algorithms further through machine learning. Like Photos, Lens is enabling Google to build a database of images, but these images are mostly non-human, whereas Photos is focused on people. Google is using Lens to tag and improve the recognition of real-life buildings, landmarks, and animals for use in an AR-dominated world.
Ability to listen and respond to commands
Voice commands are going to be a part of AR. Whether you are driving or lazily lying in bed, there is nothing better than a voice command to get a task done. To that end, Google has significantly extended its ability to collect voice-related data and refine its models. Firstly, Actions have now been extended to Assistant, which means you can talk to Google Assistant to get tasks done. This will result in a tsunami of data for Google to tap and refine. While I agree that only a small percentage of people regularly use Assistant, Android's base makes even that small percentage big enough to train algorithms. Google's voice recognition is already one of the finest in the industry, and as time passes I am confident it will reach a stage where it can pick up any accent in any situation. It is already there to some extent, but a little more refinement is required.
A smart assistant
When you wear an AR headset, you would need a smart virtual assistant to wake up automatically and guide you through your day – remind you of upcoming meetings, set alarms, take care of your health, learn from your habits, and so on. To that end, Assistant saw a huge expansion at Google I/O. It is now finally available on iOS, though it will be severely limited there because of Apple's restrictions. There have been other additions as well: you can finally type to Assistant on your smartphone instead of only speaking to it.
Assistant is now available to any hardware manufacturer, so regardless of what kind of device they are building, they can integrate Assistant into it. Any app developer can now use Assistant in their apps as well. Assistant's availability both horizontally (iOS and third-party device manufacturers) and vertically (third-party apps) again opens the floodgates for data, enabling Assistant to keep getting better: the more people use it to carry out activities, the better it becomes. A smart AI assistant would be the cornerstone of a successful AR device, as the AI would essentially be the user interface itself for the most part.
Processing enormous amounts of data
With eight consumer-facing products, each with more than 1 billion users, Google already collects and processes vast amounts of data. The leading cost for any Internet company these days is the server infrastructure needed to store and process all the data it collects – which helps explain why AWS is the money-printing machine that it is, and why Intel derives a large share of its profits from its data center division.
Every single AR headset would generate gigabytes, and in some cases perhaps terabytes, of data that would have to be processed every single day. To that end, Google is already working to lower the cost of processing data. Google's TPU is a custom silicon chip that processes vast amounts of data at far lower cost than traditional GPUs, and at I/O 2017 Google announced the second generation of TPUs, which handle not just inference but also the training of models. Google now has enough scale to justify investments in custom silicon for data processing, just as Apple has enough scale in smartphones to justify custom silicon for iPhones.
Look out for AR
While I/O 2017 saw Google attempt to improve almost all aspects of its services, the main takeaway for me has been the quiet manner in which Google has sown the seeds for a lead in AR. Some might accuse me of stretching this a bit, but the combination of Photos, Lens, Assistant, and Google's second-generation TPUs simply makes it impossible not to see an AR play here.