Almost two decades ago, Google was just a search engine with a text box and a promising algorithm that curated the ever-growing internet into a list of blue links. As technology advanced, the Mountain View-based company invested in a host of rising platforms and ideas centered on the future of the World Wide Web. Its growth soon skyrocketed, displacing major tech leaders from the market. The industry's dependence on Google grew even further when the company took on the responsibility of powering around 80% of the smartphones on the globe.
However, in the past year or so, smartphone manufacturers have struggled to maintain the wow factor in their products. Sure, this year showcased some of the best phone launches we’ve witnessed in a while. But the next big technology phase isn’t far off now, and the climb toward it will involve artificial intelligence, modular designs far more ambitious than what we’re seeing right now on phones like the LG G5, machine learning, smarter virtual assistants and more.
Google’s leader, Sundar Pichai, calls this a “pivotal moment” for personal computing. While other OEMs are still struggling in a fiercely competitive marketplace, Pichai and his team have already mapped out the long journey their company is going to take over the next decade.
Google Assistant is a Technology rather than a Product
At its three-day 10th annual developer conference, Google demoed and talked about a series of projects it has been working on in support of that roadmap. The new, more interactive Assistant is being pitched as a technology rather than a product, one that will essentially power many other products and services. Whether it’s Google Home or the company’s latest messaging platform, Allo, both are extensions of the Assistant. Google wants users to achieve more, and it is cleverly doing so with the knowledge graph aggregated from a search engine that churns through millions of queries every day. It all amounts to a collective maneuver, built on Google’s undisputed capabilities, to push AI onto every device out there.
One such product it unveiled was “Google Home”, a voice-activated speaker that can answer your queries, integrates with IoT-enabled devices and extends itself to other “Cast-enabled” electronics to provide a streamlined experience. Another announcement was a smarter messaging application with predictive replying abilities and an Assistant bot embedded right into your chats. Google will, however, have to work on merging its four different messaging platforms into one. More importantly, though, these products center on Google’s ambitious attempt to revolutionize the future of personal computing through artificial intelligence and machine learning.
Google also isn’t limiting itself to reality; like everyone else, it is jumping on the VR bandwagon. But when the company behind Android does this, the impact reaches far beyond a handful of native headsets. Android is now getting VR support in the form of “Daydream”, which is actually an entire division within Google’s online empire. Building a common interface across a majority of the phones out there promises better outcomes and increased developer interest, which is the key ingredient in every technology advancement today.
Google ATAP’s projects are finally becoming a reality
Google’s ATAP (Advanced Technology and Projects) department, known for making dream inventions come true, has devised astonishing breakthrough technologies that could serve as a base for the next big thing in computing. Under Project Soli, the team has created a tiny radar-based chip that can control any gadget with a handful of hand flicks and in-air gestures. Google already has a ton of prototypes, including a smartwatch and a Bluetooth speaker, which makes us wonder what it is really building toward. Ivan Poupyrev, Technical Project Lead at Project Soli, put it this way: “If you can put something in a smartwatch, you can put it anywhere”, which in turn points to the team’s ambition to power devices that haven’t even been invented yet. Without touching a screen or pressing a button, these chips can read gestures made in thin air and translate them into relevant actions.
Another mind-boggling ATAP venture is Project Ara, which introduced the concept of modular phones to the world. Google is courageously opening up the smartphone innovation ecosystem so that every other manufacturer can contribute, and surprisingly, the first consumer phone is coming next year. Once Ara ships, you won’t be purchasing a single smartphone; if you want, you’ll be investing in a dozen components from a dozen different OEMs.
Then there’s the much-talked-about Project Tango, which uses an array of sensors and infrared hardware to give handhelds a sense of the space around them. It allows smartphones and tablets to map surrounding objects and create a digital model of the environment. Imagine shopping for furniture by virtually placing it inside your living room on a screen. The possibilities are endless once this project leaves the lab. Lenovo’s first Project Tango-supported smartphone is going official as soon as July this year.
Wearable Tech and Project Aura
Remember Google Glass, the insanely futuristic product that was shut down in January 2015? It is now a predecessor to the company’s Project Aura division, a team dedicated to working on wearables. Judging from current progress, the next set of devices being developed under “Aura” will integrate heavily with Google Assistant. A recent report pointed at three upcoming products, two of which will be screenless, leaving voice as the only input. Google Glass isn’t entirely dead either; a couple of rumors suggest an upgraded version is already in the works. We just hope Google doesn’t terminate it this time before it reaches store shelves.
Google is prepping itself for an AI-First world
While the company didn’t mention anything about its self-driving cars’ progress at this year’s I/O, Google is continuously improving and securing the project in order to bring it to the mainstream market. Its AI program has been officially recognized as the first non-human driver in the US. The methods the team has been refining allow the automated cars to recognize hand signals from police officers and react at speeds humans can’t touch, and judging by the handful of crashes, the technology learns and improves with every incident. Google isn’t trying to replace humans with these steps forward; that won’t be possible for a while at least. Rather, the company says it is preparing itself for an AI-first world. No one, Google included, knows what that world will look like or what changes it will bring.
The search engine leader is going all out right now and will be implementing the algorithms it has been designing for a considerable period across its general product lineup, even if neither Sundar Pichai nor any other Google representative spelled out the details of that AI-first future. Android itself is getting smarter and more context-aware with the new Awareness API, which will allow third-party applications to react to sensor-derived data. The new messaging platform, Allo, thrives on machine learning and Google Assistant to build a better environment for users.
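To make the context-awareness idea concrete, here is a minimal, hypothetical sketch of the pattern such an API enables: an app registers a condition over sensor-derived state (a “fence”, in the Awareness API’s terminology) and gets a callback when that condition becomes true. All names and the plain-Python structure here are illustrative assumptions, not the actual Android Awareness API.

```python
# Hypothetical, simplified model of a context "fence": a predicate over
# sensor-derived state that fires a callback when it becomes true.
# ContextFence, update(), and the state keys are illustrative names,
# not the real Android Awareness API surface.

class ContextFence:
    def __init__(self, predicate, on_trigger):
        self.predicate = predicate      # condition over the sensor snapshot
        self.on_trigger = on_trigger    # callback fired on False -> True
        self._was_true = False

    def update(self, state):
        """Feed the latest sensor snapshot; fire once per transition to True."""
        now_true = self.predicate(state)
        if now_true and not self._was_true:
            self.on_trigger(state)
        self._was_true = now_true

events = []
fence = ContextFence(
    predicate=lambda s: s["activity"] == "walking" and s["headphones"],
    on_trigger=lambda s: events.append("suggest a playlist"),
)

fence.update({"activity": "still", "headphones": True})    # condition not met
fence.update({"activity": "walking", "headphones": True})  # fires the callback
fence.update({"activity": "walking", "headphones": True})  # already fired
print(events)  # ['suggest a playlist']
```

The edge-triggered design (fire only on the transition to true) mirrors how context APIs avoid flooding an app with duplicate notifications while a condition stays satisfied.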
New TPU Chips promise better Machine Learning performance
To achieve maximum efficiency, Google has also designed a new application-specific integrated circuit (ASIC) for driving deep neural networks, the layered software models that learn from and scrutinize the vast amounts of data residing on the company’s servers. The new chips are called TPUs, short for Tensor Processing Units, because they are built to run TensorFlow, Google’s software library for machine learning services. Building its own chips will certainly help Google’s projects execute more efficiently; however, this is unwelcome news for Intel, which has been manufacturing and supplying these components for years.
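To see what the “tensor” in Tensor Processing Unit refers to, here is a minimal pure-Python sketch of the core operation such accelerators are built to run in bulk: a matrix multiply plus an activation function, the building block of one dense neural-network layer. The numbers and dimensions are made up for illustration; this is a conceptual sketch, not Google’s code.

```python
def dense_layer(inputs, weights, biases):
    """One dense layer: output[j] = relu(sum_i inputs[i] * weights[i][j] + biases[j]).

    This multiply-accumulate-then-activate pattern, repeated across huge
    matrices, is the tensor workload TPUs are designed to accelerate.
    """
    outputs = []
    for j in range(len(biases)):
        total = biases[j]
        for i, x in enumerate(inputs):
            total += x * weights[i][j]
        outputs.append(max(0.0, total))  # ReLU activation
    return outputs

# Tiny example: 3 inputs -> 2 outputs (all values are arbitrary)
inputs = [1.0, 2.0, 3.0]
weights = [[0.1, -0.2],
           [0.3,  0.4],
           [-0.5, 0.6]]
biases = [0.05, -0.1]
print(dense_layer(inputs, weights, biases))
```

A general-purpose CPU executes these multiply-accumulate steps one at a time; a TPU performs vast numbers of them in parallel in hardware, which is where the efficiency gain comes from.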