Tuesday 23 May 2017

Did you miss Google I/O 2017? Don't worry, here is a quick rundown of the announcements from the event that you need to know about.


(1) Google Announces Cloud TPUs That Will Let You Build and Train Machine Learning Apps

https://cloud.google.com/images/products/tpu/accelerated-machine.png
On the opening day of the Google I/O developer conference in Mountain View on Wednesday, Google announced its second-generation Tensor Processing Units (TPUs), the successor to the TPUs the search giant unveiled at the same conference last year. Optimized for AI computations, Google says the new TPUs deliver up to 180 teraflops of floating-point performance, and they will be available via the Google Compute Engine. “We’re bringing our new TPUs to Google Compute Engine as Cloud TPUs, where you can connect them to virtual machines of all shapes and sizes and mix and match them with other types of hardware, including Skylake CPUs and NVIDIA GPUs,” Jeff Dean, Google Senior Fellow, and Urs Hölzle, Senior Vice President, Google Cloud Infrastructure, said in a blog post.

Google says developers will be able to program the Cloud TPUs using TensorFlow, the open-source machine learning framework it announced back in 2015, as well as new high-level APIs, which will “make it easier to train machine learning models on CPUs, GPUs, or Cloud TPUs with only minimal code changes”.

Apart from the additional computing power, Google says the big difference is that the new TPUs can be used for both training and inference; the first-generation TPU handled only inference, so models had to be trained separately on other hardware.

“Training a machine learning model is even more difficult than running it, and days or weeks of computation on the best available CPUs and GPUs are commonly required to reach state-of-the-art levels of accuracy,” Google said in the blog post, adding that the new TPUs will make the process faster.

“One of our new large-scale translation models used to take a full day to train on 32 of the best commercially-available GPUs—now it trains to the same accuracy in an afternoon using just one eighth of a TPU pod,” the post added.
http://www.innvonix.com/wp-content/uploads/2017/05/tpu-v2-6.2e16d0ba.fill-1592x896.jpg
Each TPU includes a custom high-speed network that allows up to 64 of them to be combined into a “TPU pod” delivering up to 11.5 petaflops of computational power (64 × 180 teraflops works out to roughly 11.5 petaflops).

Google says the new TPUs will allow developers to integrate cutting-edge machine learning accelerators into their applications with ease.

Google says it will also make 1,000 Cloud TPUs available at “no cost” to ML researchers via the TensorFlow Research Cloud.

(2) Google’s AI is now detecting cancer

A pathologist’s report after reviewing a patient’s biological tissue samples is often the gold standard in the diagnosis of many diseases. For cancer in particular, a pathologist’s diagnosis has a profound impact on a patient’s therapy. Reviewing pathology slides is a very complex task, requiring years of training to gain the expertise and experience to do it well.

Even with this extensive training, there can be substantial variability in the diagnoses given by different pathologists for the same patient, which can lead to misdiagnosis. For example, agreement in diagnosis for some forms of breast cancer can be as low as 48 percent, and it is similarly low for prostate cancer. The lack of agreement is not surprising, given the massive amount of information that must be reviewed in order to make an accurate diagnosis. Pathologists are responsible for reviewing all the biological tissue visible on a slide, and there can be many slides per patient, each of which is 10+ gigapixels when digitized at 40X magnification. Imagine having to go through a thousand 10-megapixel photos and being responsible for every pixel. Needless to say, this is a lot of data to cover, and time is often limited. This is the problem Google is attacking with deep learning: its research team has trained image-recognition models to spot breast cancer metastases in these gigapixel pathology slides, and reports that the models can locate tumors with accuracy comparable to that of trained pathologists.
http://images.deccanchronicle.com/8d51cc3a3bb2a79cdee1bb74be0a686b53c540ce-tc-img-preview.jpg
(3) Google for Jobs Launched, an AI-Powered Job Search Tool
Google has announced its all-new Google for Jobs initiative, which will help job seekers find work right from Search. Google CEO Sundar Pichai kicked off the I/O 2017 conference on Wednesday with a keynote address stressing the company’s new focus: AI (artificial intelligence). Under this umbrella, Pichai announced the Google for Jobs initiative, developed with the goal of helping users find the right jobs using machine learning. As part of the initiative, the company will launch a new feature in Search in the coming weeks that will help job seekers look for work. Pichai explained why the company felt the need for Google for Jobs: almost half of US employers report trouble filling open positions, while job seekers are often not even aware that relevant openings exist.
https://fossbytes.com/wp-content/uploads/2017/05/Google-For-Jobs-640x360.jpg
“We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world. And as before, it is forcing us to reimagine our products for a world that allows a more natural, seamless way of interacting with technology,” wrote Pichai in a blog post. The new Google for Jobs search feature will be limited to the US market initially.

For the new initiative, Google will initially partner with job listing sites such as LinkedIn, Monster, Glassdoor, CareerBuilder, and Facebook, among others. Job seekers will be able to filter openings by location, category, date posted, full-time or part-time employment, and other options.

Notably, the company believes the new search tool will make it easier to find jobs that were traditionally harder to search for and classify, including service and retail jobs. Pichai also announced that companies such as FedEx and Johnson & Johnson have been piloting the new Google for Jobs search tool.

(4) A new Tango phone is coming from Asus
http://media2.intoday.in/indiatoday/images/stories//2017May/google-io-16_051817121846.jpg
VR time. Google talks about Daydream. LG will launch a new phone this year that will support Daydream, and the Galaxy S8 and Galaxy S8 Plus will get a software update soon for Daydream support. But the bigger news is that Google is working with third-party hardware partners to build standalone Daydream VR headsets. Qualcomm is working with Google to create a reference design for the headset, and actual products are coming later this year from HTC (which also makes the Vive VR headset) and Lenovo. The standalone headsets will come with WorldSense, Google’s positional tracking technology that gives users a sense of motion tracking without external sensors.

Also, a new Tango phone is coming. This one will be made by Asus and will be called the ZenFone AR; the AR stands for augmented reality.

(5) Android Go
https://9to5google.files.wordpress.com/2017/05/screen-shot-2017-05-17-at-3-27-33-pm1.png
There are more Android users in India than there are in the US. To support users on budget phones and bring more people online, Google has announced a special, lightweight version of Android called Android Go. This could be big for India: Android Go is designed to run smoothly even on phones with 512MB or 1GB of RAM.

(6) Kotlin is now an official language on Android!
https://cdn2.vox-cdn.com/thumbor/7wvS9yMZh28Lu0AQCMkCxFo4AFg=/0x1080/volume-assets.voxmedia.com/production/893b40eb148f48a5bd62bf86a751ba3d/VRG_VBO_490_Google_IO_Kotlin-THUMB.jpg
We have been watching Kotlin adoption on Android steadily rise over the years, with increasing excitement among developers. Kotlin is expressive, concise, extensible, powerful, and a joy to read and write. It has wonderful safety features in terms of nullability and immutability, which align with our investments to make Android apps healthy and performant by default. Best of all, it’s interoperable with our existing Android languages and runtime. So we’re thrilled to make Kotlin an official language on Android.
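To make that concrete, here is a tiny Kotlin sketch of the nullability and immutability features mentioned above (the class and function names are purely illustrative, not from Google's announcement):

// Illustrative example only: 'val' properties are immutable, and the type system
// distinguishes nullable (String?) from non-null (String) references.
data class Talk(val title: String, val speaker: String?)

fun describe(talk: Talk): String {
    // The compiler forces us to handle the nullable speaker before using it.
    val speaker = talk.speaker ?: return "${talk.title} (speaker TBA)"
    return "${talk.title} by ${speaker.trim()}"   // speaker is guaranteed non-null here
}

fun main() {
    println(describe(Talk("What's new in Android O", "Dave Burke")))
    println(describe(Talk("Keynote", null)))
}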

If you’re interested in using Kotlin, it’s easy to get started because it works side by side with Java and C++ on Android. So you can keep your existing code, continue to use the various Android libraries, and incrementally add Kotlin code to your project. Unlike almost any other language, Kotlin is a drop-in replacement you can use bi-directionally—you can call into the Java language from Kotlin, and you can call into Kotlin from the Java language.
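As a rough sketch of what that interoperability looks like in practice (the file and function names here are illustrative, not from Google's docs), a Kotlin file can call straight into existing Java APIs, and its top-level functions are visible to Java code as ordinary static methods:

// Interop.kt, assumed to sit alongside existing Java sources in the same module.
import java.util.Collections

// Kotlin calling a plain Java API (java.util.Collections) with no wrappers.
// Java code in the same module sees this function as the static method
// InteropKt.newestFirst(...), because top-level Kotlin functions compile to
// static methods on a class named after the file.
fun newestFirst(apiLevels: MutableList<Int>): List<Int> {
    Collections.sort(apiLevels)     // Java static method operating on a Kotlin list
    return apiLevels.reversed()     // Kotlin extension function over java.util.List
}

fun main() {
    println(newestFirst(mutableListOf(24, 26, 25)))   // prints [26, 25, 24]
}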

Of course, IDE support is also crucial, and we have it. Android Studio is built upon IntelliJ IDEA, an IDE built by JetBrains—the same company that created the Kotlin language. The JetBrains team has been working for years to make sure Kotlin works great with IntelliJ IDEA. So we’re inheriting all their hard work. Starting with Android Studio 3.0, tooling support for Kotlin is bundled directly into Android Studio. For more info: https://developer.android.com/kotlin/index.html

(7) Android O Developer Preview
https://9to5google.files.wordpress.com/2017/05/android_o_io17_1.png
Finally, Android time. Dave Burke is on the stage. Android O will come with some new features; the two big improvements are Fluid Experience and Vitals. More new features in Android O: notification dots, Autofill with Google, and an improved copy-and-paste feature.

In Android O, Vitals will help increase battery life and will also optimize performance and security. Google is launching a new security service called Google Play Protect, and at the same time it is improving app performance and making boot times faster. To help developers, it is also launching new tooling that automatically flags problems within apps so that developers can fix them. And finally, Google is adding Kotlin as an official programming language for Android. Huge cheer from the crowd; developers apparently love it.
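To give a flavor of what the notification changes mean for app code, here is a minimal, hypothetical Kotlin sketch against the Android O (API 26) preview APIs: notifications are posted to channels, and a channel can allow a badge (the notification dot) on the launcher icon. The channel ID, strings, and icon resource below are placeholders, not anything Google shipped.

// Assumes an Android O (API 26) project; "news", the text, and R.drawable.ic_news
// are placeholder values for illustration only.
import android.app.Notification
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context

fun postNewsNotification(context: Context) {
    val manager = context.getSystemService(NotificationManager::class.java)

    // In Android O every notification belongs to a channel; setShowBadge(true)
    // lets notifications on this channel show a dot on supported launchers.
    val channel = NotificationChannel("news", "News updates",
            NotificationManager.IMPORTANCE_DEFAULT).apply { setShowBadge(true) }
    manager.createNotificationChannel(channel)

    val notification = Notification.Builder(context, "news")   // channel-aware builder (API 26+)
            .setContentTitle("Google I/O 2017")
            .setContentText("The Android O developer preview is live")
            .setSmallIcon(R.drawable.ic_news)                   // placeholder drawable in the app's resources
            .build()
    manager.notify(1, notification)
}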

And lastly, the Android O beta is now rolling out. For more info: https://developer.android.com/preview/index.html

Also read: http://www.innvonix.com/blog/mobile-technologies/android-o-developer-preview-released-find-out-what-is-new/

(8) 360-degree and live 360 video now available in YouTube on big screens like TVs
http://www.mercurynews.com/wp-content/uploads/2017/05/sjm-googleio-0518-161.jpg
There is a lot of AI and machine learning discussion at this I/O already, though much of it seems somewhat incremental; the “big” impact is not there yet. YouTube is next. Susan Wojcicki, who looks after YouTube, is on the stage. YouTube watch time on big TV screens is growing at 90 percent a year, so Google is focusing on living-room features for YouTube. To talk about them, Sara Ali, a Google executive, is on the stage.

360-degree video and live 360 are now available in the YouTube app on big screens like TVs. Live is a big focus for YouTube now: in the last year, live streaming has grown 4x. Google also demoed Super Chat and how it can help YouTube content creators make more money.

(9) Google also launches Photo Books
http://images.anandtech.com/doci/11409/screen_shot_2017-05-17_at_10.58.27_am_575px.png
Google also launches Photo Books. “Beautiful and easy to make,” says Anil Sabharwal, who heads Google Photos. This is a sort of curated photo collection powered by machine learning, but it is also tied to a real-world service that will print the photo book for a nominal charge and ship it to you. It is available in the US starting now; more countries will get it later.

(10) Google Home

https://img.vidible.tv/prod/2017-05/18/591cf1ae1de5a12beaa4c5f3/591cf20d6df679755c9ed4fa_o_U_v1.png
Now Google is talking about Google Home, the smart speaker with the Google Assistant built in. Rishi Chandra, a Google executive, explains its new features. It is getting four new features. It will give proactive, contextual notifications. Google Home will also now make hands-free calls, free to numbers in the US and Canada. Rishi calls his mom, who scolds him for calling days after Mother’s Day.

Another feature Google Home is getting is integration with Spotify and Deezer. It also now gets Bluetooth support. It is important to remember that all these Google Home features are for the US market; in India, the device is not yet available.

And finally, Google Home is getting support for visual responses. Ask Google Home to find the best route to work and it will send the directions to your phone, which will show the map.

(11) Google Lens
https://tr3.cbsistatic.com/hub/i/r/2017/05/17/9aa44f20-8943-407a-aa2d-8e1ff5468a20/thumbnail/768x432/6751e8b0d46e90cb39225288780c7271/google-io-2.jpg
Google is remaking itself as an AI company, a virtual assistant company, a classroom-tools company, a VR company, and a gadget maker, but it’s still primarily a search company. And today at Google I/O, its annual gathering of developers, CEO Sundar Pichai announced a new product called Google Lens that amounts to an entirely new way of searching the internet: through your camera.

Lens is essentially image search in reverse: you take a picture, Google figures out what’s in it. This AI-powered computer vision has been around for some time, but Lens takes it much further. If you take a photo of a restaurant, Lens can do more than just say “it’s a restaurant,” which you know, or “it’s called Golden Corral,” which you also know. It can automatically find you the hours, or call up the menu, or see if there’s a table open tonight. If you take a picture of a flower, rather than getting unneeded confirmation of its flower-ness, you’ll learn that it’s an Elatior Begonia, and that it really needs indirect, bright light to survive. It’s a full-fledged search engine, starting with your camera instead of a text box.

Originally posted: http://bit.ly/2q8hps9