Tina and Jeames at WWDC

With five Outwarians on the ground at WWDC this year, we’re really excited to share updates from Day 1. Most notably: two of our developers, Tina and Jeames, made it onto the WWDC website!

We also had a small camp-out in the Richmond office, with Jet, Adam and Rick calling out the most exciting updates as:

  • Swift being made open source in late 2015
  • Multi-tasking on iPad using a split screen, making the iPad a better productivity tool
  • Native watchOS apps
  • Enhanced intelligence, e.g. smarter Siri & Spotlight, able to guess who’s calling you from an unknown number

The following update is from our Senior iOS Developer, Mahmudul Alam, who is attending WWDC.

Native Watch App

As expected, Apple announced support for native watch apps and introduced watchOS 2. The WatchKit extension moves to the watch itself, and watch apps can work without the phone being present. watchOS 2 comes with heaps of new features:
  • Watch Connectivity framework
  • The watch extension can talk to web services directly
  • Animation support
  • Audio and video playback support on the watch
  • API access to accelerometer and HealthKit sensor data
  • API access to the Taptic Engine
  • In addition to glances and notifications, watchOS 2 introduces complications: glanceable custom information such as upcoming flights or sports scores
  • API access to the Digital Crown
  • High-priority push notifications to deliver immediate updates to watch apps
 
Xcode 7 and iOS 9
 
On-demand resources API:
  • Assets need not ship in the app bundle; they can be downloaded and used on demand
  • Customisable download order and priority
  • Resources will be hosted by the App Store
Storyboard refactoring:
  • Subsections of a storyboard can be extracted into another storyboard and replaced by a storyboard reference in the original one
App Transport Security:
  • Built into NSURLSession
  • Automatically enforces current state-of-the-art security standards (TLS 1.2, forward secrecy)
Test support:
  • API and network performance test support
  • Native UI test support
  • Native support for code coverage (unit tests and UI tests)
  • Recording UI tests to generate tests with nearly zero coding effort
  • Supports all these features in both Objective-C and Swift
Instruments:
  • Core location profiling
  • App transitions and network calls profiling
  • Address sanitiser – helps diagnose and fix memory issues
Crash analysis:
  • Get crash logs from both TestFlight and App Store builds
  • Open crash logs directly at the line of code causing the issue
… and a lot more to come! Watch this space for more updates from Day 2!

 

Joel and Jeremy, two of our Android developers, were on the ground at Google I/O. Here are some of the notes from Day 2:

We began the day with a stream of presentations on new input devices that address the problems arising as devices get smaller and smaller (i.e. watches). You’ve probably read articles on Project Jacquard – making textiles into touch-sensitive devices by weaving capacitive wire through the material. Pretty cool, but the one I liked even better was Project Soli.

This involves shrinking a radar sensor down to a size that can fit in a watch and using it to detect hand gestures made slightly above the device. The fidelity it can achieve is pretty amazing – it can understand the rolling gesture you make with your finger and thumb as if you were adjusting the crown on a watch. Worth checking out some videos if you can find them.

At lunch we enjoyed a demo of a gigapixel project aimed at capturing pieces of art at super-high resolutions. A resolution at which you can zoom in to see the cracks in the paint or the texture of the underlying canvas.

We tried out the Cardboard Jump booth. It showed off example stereoscopic 3D videos captured with the 16-GoPro, 360-degree camera rigs. This was amazing. Combined with 3D sound, you felt like you were there. I can’t wait until content is available on YouTube. I attended a session on “Designing for VR” which was essentially “How not to make Cardboard users sick”. It emphasised that as VR devices increase their capabilities, the difficulty of creating virtual experiences also increases. It was primarily aimed at games developers, but I know this tech is going to become a standard for real estate apps in the not-too-distant future. Pretty sure Domain is already incorporating photospheres into some of its listings.

Project Ara did make an appearance, but unfortunately not in the form of a device demonstration. It merely had a stand taking suggestions on types of modules people would be interested in.

The following is very Android-specific but I thought I’d include it here anyway:

Sessions on testing and architecture were generally very popular, meaning you had to arrive a session or two early just to get a seat.

The unit testing session was validation for architecture styles we are already applying, like Clean and MVP/MVC. The first slide posed the question “How do I unit test Android code?” The second said “Don’t”. Move as much code as possible into classes that do not depend on Android framework classes; where you do have dependencies, use tools like Mockito and, only if you can’t avoid it, PowerMock.
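To make that concrete, here’s a minimal sketch of the style (our own example, not one from the session): the presenter is plain Java with its collaborators behind interfaces, so it can be tested on the JVM with JUnit and Mockito – no emulator required.

    // GreetingPresenter.java -- no Android framework classes anywhere.
    public class GreetingPresenter {
        public interface View { void showGreeting(String text); }
        public interface Repository { String loadUserName(); }

        private final View view;
        private final Repository repository;

        public GreetingPresenter(View view, Repository repository) {
            this.view = view;
            this.repository = repository;
        }

        public void onResume() {
            view.showGreeting("Hello, " + repository.loadUserName());
        }
    }

    // GreetingPresenterTest.java -- runs as an ordinary JUnit test on the JVM.
    import org.junit.Test;
    import org.mockito.Mockito;

    public class GreetingPresenterTest {
        @Test
        public void showsGreetingOnResume() {
            GreetingPresenter.View view = Mockito.mock(GreetingPresenter.View.class);
            GreetingPresenter.Repository repo = Mockito.mock(GreetingPresenter.Repository.class);
            Mockito.when(repo.loadUserName()).thenReturn("Ada");

            new GreetingPresenter(view, repo).onResume();

            Mockito.verify(view).showGreeting("Hello, Ada");
        }
    }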

Architecture sessions made it very clear that they weren’t going to promote any libraries, and that patterns come and go in popularity, but there are some fundamental elements that make for a good design. Google did a really good job of describing good architecture without relating it to specific, currently popular libraries or patterns. The focus was on ensuring your presentation layer is really responsive. Events are king for updating views when data changes, whether through callbacks, event buses, or reactive frameworks. Also important is ensuring the user knows that the data they are seeing may not be fresh if the request to update it is still in progress (e.g. a messaging app that displays a message in a different colour while it is being sent than when the send is complete).
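A tiny sketch of that last idea (entirely our own example, no particular library): the view listens for state changes on the model and renders a pending state until the send completes.

    // A message that notifies a listener whenever its state changes.
    public class Message {
        public enum State { SENDING, SENT }
        public interface Listener { void onStateChanged(Message message); }

        private State state = State.SENDING;
        private Listener listener;

        public void setListener(Listener listener) { this.listener = listener; }
        public State getState() { return state; }

        public void markSent() {
            state = State.SENT;
            if (listener != null) listener.onStateChanged(this);
        }
    }

    // In the view layer: dim the bubble while the message is still in flight.
    message.setListener(new Message.Listener() {
        @Override
        public void onStateChanged(Message m) {
            bubbleView.setAlpha(m.getState() == Message.State.SENDING ? 0.5f : 1.0f);
        }
    });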

Two of our Android devs, Joel and Jeremy, trekked to San Francisco for Google I/O. Here are some observations from developers (rather than journos) and pics that you might not have seen in the blogs yet:

Google Now on Tap is pretty impressive and promises to add a lot of exciting functionality to your app. Google Now takes care of the machine learning logic for you so that your app can provide a richer user experience.

Material Design is one year old, and the anniversary was celebrated with an in-depth set of design guidelines and further expansion. The new Android Design Library brings Material components to devices running all the way back to Android 2.1. A lot of useful design guidance will also be available, such as device screen guides and iOS-to-Android design comparisons.
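For a taste of what the Design Library gives you (a sketch; the view references are ours):

    // Show a Material snackbar via the new support Design Library.
    Snackbar.make(rootView, "Profile saved", Snackbar.LENGTH_SHORT).show();

    // Material widgets like the floating action button work like any other view.
    FloatingActionButton fab = (FloatingActionButton) findViewById(R.id.fab);
    fab.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            Snackbar.make(v, "FAB tapped", Snackbar.LENGTH_SHORT).show();
        }
    });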

Battery enhancement is also a big topic this year, with Google introducing a new mode called Doze. Exciting improvements from Project Volta now help reduce battery usage by up to two times when the device’s screen is off. They also provide backward compatibility for these enhancements with the new GcmNetworkManager.
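Scheduling a deferrable job through GcmNetworkManager looks roughly like this (a sketch; the service class and tag are hypothetical):

    // Schedule a battery-friendly periodic sync; the system batches execution
    // with other work instead of waking the device on an exact timer.
    PeriodicTask task = new PeriodicTask.Builder()
            .setService(MySyncService.class)   // our hypothetical GcmTaskService subclass
            .setTag("periodic-sync")
            .setPeriod(30 * 60)                // roughly every 30 minutes, in seconds
            .setRequiredNetwork(Task.NETWORK_STATE_CONNECTED)
            .build();
    GcmNetworkManager.getInstance(context).schedule(task);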

New GCM 3.0, with new-but-not-so-new features such as topic subscription (copy Amazon :p) and message priorities.
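Topic subscription is essentially a one-liner (sketch; the topic name and token variable are ours, and since the call blocks and throws IOException it has to run off the main thread):

    // Subscribe this device's registration token to a topic feed.
    GcmPubSub.getInstance(context).subscribe(registrationToken, "/topics/news", null);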

New app permissions model, essentially requesting permissions when they’re required. It seems to be a move towards the iOS way of doing things, where nothing is granted on install; instead you’re prompted once the app actually needs a permission.
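On the M preview the flow looks roughly like this inside an Activity (a sketch; the request code and openCamera() helper are our own):

    private static final int REQUEST_CAMERA = 1;

    // Ask for the camera permission only when the user taps the camera feature.
    void onCameraButtonClicked() {
        if (checkSelfPermission(Manifest.permission.CAMERA)
                != PackageManager.PERMISSION_GRANTED) {
            requestPermissions(new String[] {Manifest.permission.CAMERA}, REQUEST_CAMERA);
        } else {
            openCamera();
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, String[] permissions,
            int[] grantResults) {
        if (requestCode == REQUEST_CAMERA && grantResults.length > 0
                && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            openCamera();
        }
    }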

iOS support for all the things (well, haven’t heard anything about Wear yet!)

Google Photos update is impressive and available now (unlimited storage for all! yay!)

Play Store developer pages seem like they might be a good way for Outware to promote its portfolio and get some credit for the work we do for others.

Project Tango – we got some hands-on time with Project Tango, and when asked whether we wanted one, we asked when they’d be available in Australia – this was met with blank “nfi” stares. But Jeremy had fun visualising what a Camaro would look like in his garage. I’ve got some videos of this, but they’re too big to upload now.

Data binding demo – This stuff was really cool, and the session went into it in a lot more depth than the mention in the “What’s new in Android” talk. The documentation in the developer section has the best description of the functionality it provides: compile-time generated data mapping, observable view models, null-safety, etc. It’s all good.

In particular, you don’t have to worry about whether a particular view is present in the layout the device is using when you map data to it. E.g. your tablet layout probably contains more views than your phone layout, but when you map to them in code you don’t need an ‘if tablet’ check. Just map data to the view using the binding classes, and if it doesn’t exist (i.e. you’re on a phone), no dramas.
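The Java side is roughly this (a sketch; the layout name and User class are ours, and the binding class is generated at compile time from the layout):

    // activity_main.xml declares <variable name="user" type="com.example.User"/>
    // in a <data> block; ActivityMainBinding is generated from it.
    ActivityMainBinding binding =
            DataBindingUtil.setContentView(this, R.layout.activity_main);
    binding.setUser(user);  // views bound to @{user.name} etc. update safely,
                            // even ones that only exist in the tablet layout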

Fingerprint auth API – Fingerprint auth was pretty much what you would expect. It’s something to start building into apps now so that when hardware comes along that actually includes fingerprint scanners we’ll be ready to go. For the demo they were using a modified Nexus 5 with a fingerprint scanner strapped on. The API was really simple and easy to use – just startActivityForResult and wait for SUCCESS.
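For reference, the framework-level API in the M preview looks roughly like this (a sketch; we’ve omitted the CryptoObject setup and the error/failure callbacks):

    // Prompt for a fingerprint and react to the result.
    FingerprintManager fm = (FingerprintManager) getSystemService(FINGERPRINT_SERVICE);
    fm.authenticate(null /* crypto */, new CancellationSignal(), 0 /* flags */,
            new FingerprintManager.AuthenticationCallback() {
                @Override
                public void onAuthenticationSucceeded(
                        FingerprintManager.AuthenticationResult result) {
                    unlockFeature();  // hypothetical success handler
                }
            }, null /* handler */);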

Memory performance – Most of the stuff mentioned in this Colt McAnlis talk I’d assume we already do at Outware, e.g. don’t allocate anything in onDraw of custom views, and re-use bitmaps to avoid heap fragmentation. Something new, though, was ArrayMap – a data structure designed for Android to be more memory-efficient than HashMap, the trade-off being that it’s slightly less efficient for lookups, insertions and deletions. The recommendation is to use ArrayMap instead of HashMap when the number of items is less than 1000 (which is most of the time); in these scenarios the extra lookup time is negligible compared to the reduced memory footprint.
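It’s a near drop-in swap for small maps (sketch; android.util.ArrayMap is API 19+, with a support-v4 version for older devices):

    // ArrayMap keeps keys and values in packed arrays instead of hash buckets,
    // trading a binary-search lookup for a much smaller memory footprint.
    ArrayMap<String, Integer> scores = new ArrayMap<String, Integer>();
    scores.put("level1", 120);
    scores.put("level2", 95);
    int total = scores.get("level1") + scores.get("level2");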
