Joel and Jeremy, two of our Android developers, were on the ground at Google I/O. Here are some of their notes from Day 2:
We began the day with a stream of presentations on new input methods that address the problems that arise as devices get smaller and smaller (e.g. watches). You’ve probably read articles on Project Jacquard – making textiles touch-sensitive by weaving capacitive wire through the material. Pretty cool, but the one I liked even better was Project Soli.
This involves shrinking a radar sensor down to a size that can fit in a watch and using it to detect hand gestures made slightly above the device. The fidelity is pretty amazing – it can understand the rolling gesture you make with your finger and thumb as if you were adjusting the crown on a watch. Worth checking out some videos if you can find them.
At lunch we enjoyed a demo of a gigapixel project aimed at capturing pieces of art at super-high resolutions – resolutions at which you can zoom in to see the cracks in the paint or the texture of the underlying canvas.
We tried out the Cardboard Jump booth, which showed off example stereoscopic 3D videos captured with the 16-GoPro, 360-degree camera rigs. This was amazing – combined with 3D sound, you felt like you were there. I can’t wait until content is available on YouTube. I also attended a session on “Designing for VR”, which was essentially “How to not make Cardboard users sick”. It emphasised that as VR devices increase their capabilities, the difficulty of creating virtual experiences also increases. It was primarily aimed at game developers, but I know this tech is going to become standard for real estate apps in the not-too-distant future. Pretty sure Domain is already incorporating photospheres into some of its listings.
Project Ara did make an appearance, but unfortunately not in the form of a device demonstration. It merely had a stand taking suggestions on types of modules people would be interested in.
The following is very Android-specific but I thought I’d include it here anyway:
Sessions on testing and architecture were generally very popular, so you had to arrive a session or two early just to get a seat.
The unit testing session was validation of architecture styles we are already applying, like Clean and MVP/MVC. The first slide posed the question “How do I unit test Android code?”; the second said “Don’t”. Move as much code as possible into classes that do not depend on Android framework classes, and when you do have to touch the framework, use tools like Mockito and, only if you can’t avoid it, PowerMock.
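To make the “Don’t” advice concrete, here’s a minimal sketch of what moving logic out of the framework looks like. All the names (`LoginPresenter`, `LoginView`, and so on) are hypothetical – the point is just that the presenter is a plain Java class, so it runs as a fast JVM unit test with a hand-rolled fake view, no Android (and in this simple case not even Mockito) required.

```java
// Hypothetical example: logic pulled out of an Activity into a plain Java
// class so it can be unit tested on the JVM with no Android framework.

// The view is abstracted behind an interface the Activity would implement.
interface LoginView {
    void showError(String message);
    void showWelcome(String username);
}

// The presenter holds the logic that would otherwise be buried in the Activity.
class LoginPresenter {
    private final LoginView view;

    LoginPresenter(LoginView view) {
        this.view = view;
    }

    void onLoginClicked(String username) {
        if (username == null || username.trim().isEmpty()) {
            view.showError("Username required");
        } else {
            view.showWelcome(username.trim());
        }
    }
}

// A hand-rolled fake stands in for the Activity under test.
class FakeLoginView implements LoginView {
    String lastError;
    String lastWelcome;
    public void showError(String message) { lastError = message; }
    public void showWelcome(String username) { lastWelcome = username; }
}

public class LoginPresenterTest {
    public static void main(String[] args) {
        FakeLoginView view = new FakeLoginView();
        LoginPresenter presenter = new LoginPresenter(view);

        presenter.onLoginClicked("  alice  ");
        if (!"alice".equals(view.lastWelcome)) throw new AssertionError();

        presenter.onLoginClicked("");
        if (!"Username required".equals(view.lastError)) throw new AssertionError();
    }
}
```

Mockito (or PowerMock) only enters the picture when a dependency is awkward to fake by hand; the structure above is what makes that a rare case.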
Architecture sessions made it very clear that they weren’t going to promote any libraries, and that patterns come and go in popularity, but there are some fundamental elements that make for a good design. Google did a really good job of describing good architecture without relating it to specific, currently popular libraries or patterns. The focus was on ensuring your presentation layer is really responsive. Events are king for updating views when data changes, whether that be through callbacks, event buses, or reactive frameworks. Also important: ensuring the user knows that the data they are seeing may not be fresh if the request to update it is still in progress (e.g. a messaging app that displays a message in a different colour while it is being sent than after the send has completed).
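The messaging example can be sketched in a few lines. This is my own hypothetical illustration, not code from the session: the message carries an explicit send state, and the view re-renders on every change event (a plain callback here, though an event bus or reactive stream would work the same way).

```java
// Hypothetical sketch of the "data may not be fresh" pattern: the message
// knows whether it is still in flight, and listeners are told on every change.
import java.util.ArrayList;
import java.util.List;

enum SendState { SENDING, SENT }

class Message {
    final String text;
    SendState state = SendState.SENDING;
    Message(String text) { this.text = text; }
}

interface MessageListener {
    void onMessageChanged(Message message);
}

class MessageStore {
    private final List<MessageListener> listeners = new ArrayList<>();

    void addListener(MessageListener listener) { listeners.add(listener); }

    Message send(String text) {
        Message m = new Message(text);
        notifyChanged(m);          // view shows it immediately, greyed out
        return m;
    }

    void onSendConfirmed(Message m) {
        m.state = SendState.SENT;  // e.g. the network callback fired
        notifyChanged(m);          // view re-renders in the normal colour
    }

    private void notifyChanged(Message m) {
        for (MessageListener l : listeners) l.onMessageChanged(m);
    }
}
```

The key design choice is that the view never polls: it renders whatever state the last event carried, so the “greyed-out while sending” behaviour falls out for free.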