I attended my first I/O this year and enjoyed everything except the food. I focused mainly on talks about topics I was interested in, like AR and navigation. I really should have spent a bit more time doing codelabs and digging into the sandbox demos, but perhaps next time.
Cool things in AR
I don't have much experience with AR, having only made a quick Sceneform app in the past, so I was interested in seeing what new things were available. I watched two talks that were quite good.
Walking Directions in Google Maps
A very good talk on how they approached the AR walking-directions feature in Google Maps. Machine learning on Street View imagery helps localize the person; once the location is determined, UX comes into play to show the walking directions in a way that doesn't endanger the walker. Very cool.
I was quite inspired by the talk discussing the Augmented Faces API, which can produce a reasonable-resolution approximation of a face from still or video images and then attach 3D assets to it. I've compiled the demo app and want to spend some time seeing what I can do with this tech.
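For anyone curious what driving that API looks like, here is a rough Kotlin sketch of the per-frame loop, assuming an ARCore Session already configured with AugmentedFaceMode.MESH3D on the front camera. The attachment step is left as a comment, since that part depends on your rendering stack.

```kotlin
import com.google.ar.core.AugmentedFace
import com.google.ar.core.Pose
import com.google.ar.core.Session
import com.google.ar.core.TrackingState
import java.nio.FloatBuffer

// Hypothetical frame-update handler; assumes the Session was configured
// with Config.AugmentedFaceMode.MESH3D on a front-facing camera session.
fun onFrameUpdate(session: Session) {
    for (face in session.getAllTrackables(AugmentedFace::class.java)) {
        if (face.trackingState != TrackingState.TRACKING) continue

        // A dense mesh approximation of the face, refreshed every frame.
        val vertices: FloatBuffer = face.meshVertices

        // Named region poses make it easy to pin 3D assets to landmarks.
        val noseTip: Pose = face.getRegionPose(AugmentedFace.RegionType.NOSE_TIP)

        // e.g. position a renderable node at noseTip here (Sceneform or
        // your own renderer; omitted because it's engine-specific).
    }
}
```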
There was a nice, helpful navigation tool built into the conference app that would display a 3D overlay of your surroundings when you scanned the waypoint maps located on poles throughout the conference. I suspect each of these maps was double-sided and unique to its pole. The image would be scanned and recognised as an anchor for the 3D model, and the AR view would launch showing all of the nearby venues. Definitely a nice use of the Augmented Images API.
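From the outside it looked like a fairly standard Augmented Images setup. A hedged Kotlin sketch of what I imagine the flow to be, where the image name and bitmap are my own placeholders rather than anything from the conference app:

```kotlin
import android.graphics.Bitmap
import com.google.ar.core.AugmentedImage
import com.google.ar.core.AugmentedImageDatabase
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Register the waypoint poster image ahead of time so ARCore can
// recognise it; "waypoint" is an arbitrary placeholder name.
fun configureWaypointImages(session: Session, waypointBitmap: Bitmap) {
    val db = AugmentedImageDatabase(session)
    db.addImage("waypoint", waypointBitmap)
    val config = Config(session)
    config.augmentedImageDatabase = db
    session.configure(config)
}

// Each frame, check whether the camera has found a registered image
// and anchor the venue overlay to it.
fun onFrame(frame: Frame) {
    for (img in frame.getUpdatedTrackables(AugmentedImage::class.java)) {
        if (img.trackingState == TrackingState.TRACKING) {
            val anchor = img.createAnchor(img.centerPose)
            // Attach the 3D venue overlay to this anchor.
        }
    }
}
```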
Jetpack and Kotlin First
So many talks on different Jetpack libraries and technologies. I missed the talk on Compose, the declarative UI framework for Android, but I've since watched it back and it looks quite cool. I tried it out on my laptop, but it filled my drive with files and I had to remove it; I'll have to put it on my desktop machine so I can get more familiar with it. I've recently been using Epoxy and Data Binding together, which feels a bit closer to declarative, so I can't wait until I can get rid of the XML entirely.
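For context, the appeal is that the UI becomes plain Kotlin functions instead of XML. A tiny sketch in the spirit of the early previews; the API was pre-alpha at I/O and still changing, so treat the exact names loosely:

```kotlin
import androidx.compose.Composable
// Note: package and widget names are from the early previews and are
// assumptions; they have been shifting between Compose releases.

@Composable
fun TalkRow(title: String, watched: Boolean) {
    Column {
        Text(title)
        // Conditional UI is just ordinary Kotlin control flow,
        // no XML visibility attributes or view lookups needed.
        if (watched) Text("Watched")
    }
}
```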
Google's commitment to Kotlin First is a really good step in my opinion. I have been writing Kotlin code for about a year and a half now, and I can never go back to the old ways. So it's good to hear that Google is also thinking more about Kotlin.
Deep Learning
I got to attend a talk by Geoffrey Hinton on deep learning and the directions the research is taking. I was particularly interested in how the human dream state is inspiring research into letting deep learning models forget unimportant weights, allowing smaller models with better generalization. I'm still quite surprised that all of this deep learning is built on top of backpropagation. Sometimes I wonder if Bill Armstrong's adaptive logic networks (ALNs) would be faster and safer in a deep learning structure.
Music on Android
I watched a coding demo on how to integrate MIDI input into a simple synthesiser on Android and Chrome. The latency is a lot lower than it used to be, but I wondered whether any commercial companies would port their software to Android. The session was capped by a quick Chrome demo from Magnus Berger, the CTO of Propellerhead, the maker of the Reason software. Not sure if this means Android is a possibility for that company; they have only made iOS software so far. I can't blame them: the Android sound stack has been terrible for latency, and only recently has it gotten somewhat better. But is it good enough yet?
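To give a sense of what the Android side of such a demo involves, here is a hedged Kotlin sketch of receiving note-on messages with the platform android.media.midi API. The synth callback is a placeholder for whatever audio engine you drive, and port 0 is an assumption; real devices may expose several output ports.

```kotlin
import android.media.midi.MidiDevice
import android.media.midi.MidiReceiver

// Forwards MIDI note-on events to a synth callback. Assumes a
// MidiDevice has already been opened via MidiManager.openDevice().
class NoteReceiver(
    private val synth: (note: Int, velocity: Int) -> Unit
) : MidiReceiver() {
    override fun onSend(msg: ByteArray, offset: Int, count: Int, timestamp: Long) {
        val status = msg[offset].toInt() and 0xF0
        if (status == 0x90 && count >= 3) {          // note-on message
            val note = msg[offset + 1].toInt()
            val velocity = msg[offset + 2].toInt()
            // Velocity 0 is note-off by convention, so skip it.
            if (velocity > 0) synth(note, velocity)
        }
    }
}

fun listenToDevice(device: MidiDevice, synth: (Int, Int) -> Unit) {
    // A device's *output* port is what sends data to the app.
    device.openOutputPort(0)?.connect(NoteReceiver(synth))
}
```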
Everything Else
Google hinted that we would be able to share apps on Google Play by uploading APKs and app bundles and sharing the links. I hope that feature goes live soon; I tried it when I got home and the server wasn't responding. I think it'll replace our need for services like App Center or Fabric for distributing test builds. Remains to be seen.
Google announced the Pixel 3a and the Nest Hub. I don't have much interest in either, so I won't talk about them.
I can't help feeling that Google is running in all sorts of directions, and none of those directions are of particular interest to me. They have a lot going on, but what I really want is better sound, faster hardware, and fewer obstacles to making software for that hardware. I do like the direction I've seen in many of the Jetpack libraries, but some really don't go far enough yet, like the androidx.biometric library. Why make developers do difficult things like encrypting keys with biometrics themselves? It's so easy to get wrong and so hard to support all of the third-party variants of biometric hardware and encryption primitives.
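To illustrate how much ceremony is still left to the developer, here is a rough sketch of pairing a keystore-backed cipher with BiometricPrompt. loadOrCreateKey() is a hypothetical helper for an AndroidKeyStore key created with setUserAuthenticationRequired(true), and the prompt strings are placeholders; the error paths alone (key invalidated after a new fingerprint is enrolled, hardware variants, etc.) are where this usually goes wrong.

```kotlin
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import androidx.fragment.app.FragmentActivity
import javax.crypto.Cipher
import javax.crypto.SecretKey

// Hypothetical helper: returns an AndroidKeyStore AES key generated
// with setUserAuthenticationRequired(true). Omitted for brevity.
fun loadOrCreateKey(): SecretKey = TODO("keystore key management")

fun promptAndEncrypt(activity: FragmentActivity, onReady: (Cipher) -> Unit) {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding").apply {
        // Throws KeyPermanentlyInvalidatedException if biometrics changed:
        // one of the cases the developer has to handle by hand.
        init(Cipher.ENCRYPT_MODE, loadOrCreateKey())
    }

    val prompt = BiometricPrompt(
        activity,
        ContextCompat.getMainExecutor(activity),
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(
                result: BiometricPrompt.AuthenticationResult
            ) {
                // Only the unlocked cipher from the result may be used.
                result.cryptoObject?.cipher?.let(onReady)
            }
        })

    val info = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Unlock")              // placeholder strings
        .setNegativeButtonText("Cancel")
        .build()

    prompt.authenticate(info, BiometricPrompt.CryptoObject(cipher))
}
```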
There was nothing on VR: no Daydream, no updates to the VR controllers. Android Things was discontinued, which makes me sad. Google did give us all a nice water bottle, however.
Will I Go Again?
I think I might, though it's hard to justify if all I see are talks. If I did go again, I really would take the time to visit the codelabs and get advice from some of the developers; I can always watch the talks online later.