Diversity and Inclusion at Bilue

At Bilue we firmly believe that diversity is core to our success. As a mobile and emerging technology company, our aim is to mobilise millions, and we know the best way we can help our clients do so is by having a diverse set of people who contribute their unique ideas, skills and perspectives to all our work. As a company, we consider every form of diversity important, including age, gender, ethnicity, sexual identity, sexual orientation, disability, socio-economic background and more.

Not only do we strongly believe this is the case, it’s also backed up by research conducted by professors from highly respected institutions such as the Kellogg School of Management, as well as a report from the Anita Borg Institute which draws insights from sources such as McKinsey & Company, Catalyst, Columbia University, and London Business School. For example, a study noted in the report, conducted at Carnegie Mellon, found that workgroups containing “at least one female outperformed all-male groups in collective intelligence tests”, and that “group intelligence is more strongly correlated with diversity than with the IQs of individual members”.

Throughout the life of the company, Bilue has always strived to be a great place to work for people of all backgrounds, needs, and situations. We’ve always worked flexibly, with options for altered working hours or working from home depending on the needs of each individual, and we’ve even hosted kids (and dogs!) in the office when people needed that option. Despite this, we haven’t always been able to find the diverse set of applicants we’re hoping for whenever we look to fill new positions.


One of the areas we recently identified for improvement is our recruitment strategy. We’ve reworded and reworked the language in our job descriptions, job ads and throughout the interview process, to ensure anyone and everyone feels they would be welcome and could do their best work at Bilue. We found https://www.hiremorewomenintech.com very useful (although it’s targeted specifically at increasing gender diversity) for tips on how to remove bias from the interview process and our job descriptions so they appeal to a more diverse set of applicants. Another action we’ve taken in the last 6 months is a more proactive approach to finding candidates. Whereas previously it was common for us to post a job ad on LinkedIn, SEEK and other popular job sites and wait for the applications to come in, we now actively look to the community for the best candidates we can possibly find. We use our LinkedIn networks, our social media accounts and our connections in the community to find candidates who fulfil our goal of having the most diverse and skilled team possible. There is definitely still work to do in this space, and we hope to push further as we reach into the relevant communities to find an even broader set of people with the potential to add value to Bilue.


At this stage, it’s important to reflect on where we are today to set a benchmark for where we’d like to be a year from now. Today, across our Melbourne and Sydney offices, we are a team of 30 in total. Looking at Bilue overall, we represent 10 nationalities and the gender balance is 17% women and 83% men. We will revisit this in 6 and 12 months’ time.


So what’s next for Bilue?


We will be focusing our efforts in a couple of key areas. Firstly, the existing culture at Bilue. National Inclusion Week earlier this month was a timely reminder that creating an inclusive environment cannot be forgotten on the quest for diversity; the two go hand in hand and need individual strategies. We will be utilising Culture Amp, an employee feedback platform, to regularly pulse-check the business and our progress towards our initiative goals. Culture Amp allows us to be data driven and gives equal weight to every team member’s perspective. The first survey through the platform will focus on measuring the level of inclusion, to ensure our culture is at a place where it can support the initiatives being rolled out. The results should be finalised by Christmas, after which we will move our focus towards understanding the level of employee engagement, which we believe is closely tied to inclusion.


Secondly, as we grow our team the focus will be on hiring for ‘culture add’, not ‘culture fit’. Culture is always changing, and we want new employees to bring their own personality and character to build on what we already have. Shifting from ‘culture fit’, which has been an industry practice until recently, to looking for ‘culture add’ is essential for our diversity efforts to thrive.

Bilue – An SAP Partner At Last!

In 1999, I fell into the SAP world by way of a job offer from Deloitte Consulting. I always thought working with SAP would be a temporary state of affairs, but here I am almost 20 years later and still going, which proves that my crystal ball was somewhat faulty.


I reached a bit of a turning point a few years ago when my daughter, who was about three years old at the time, came to me complaining that the TV was broken. Turns out the TV was fine; it was just her expectation that she could swipe across the screen to change channels that had to be managed…


At that time I was working on an idea for a startup and had become obsessed with user experience. My daughter and the TV was just another example (albeit louder and more persistent) of the way younger generations develop expectations about how to interact with technology. I could see a pretty big problem heading towards companies using SAP. I mean, what reaction are you likely to get to a SAPGui screen from a Gen Y who arranges something as complex as their love life by swiping left or right on their smartphone?


SAP & User Experience?

After almost 20 years working with SAP ERP I can assure you the number of times I had heard the terms 'SAP' and 'user experience' used together were few and far between. And when it did happen it wasn't complimentary! Sure, Fiori was released to much fanfare as the first notable UI update since R/3 was released in 1992, more than 20 years prior, but the problem was that Fiori mobile and desktop apps were being delivered by people very experienced with SAP but with little to no idea about how to design with the user front of mind. By 2016 I started to see Fiori apps being built and rolled out with very low uptake amongst the user communities. Not a great result for anyone.


Who Is Bilue?

Around this time I met Cameron Barrie, the founder of a company called Bilue. It turned out Bilue was a user experience design, mobile and emerging technologies company that had worked with some of Australia's largest brands on their consumer-facing iOS and Android apps: companies like Woolworths, NineMSN, Domain and, more recently, Ticketek, Stan and Cricket Australia.


Unfortunately for me, ideas for revolutionary startups don't necessarily pay the bills, so when Cameron called one day on the back of SAP and Apple announcing their new iOS SDK as an output of their Enterprise Partnership, we started to talk more seriously.


We agreed that given Bilue's rich heritage in iOS and strong relationship with Apple it would be interesting to look at how to leverage Bilue's experience in the consumer space in the enterprise world especially given the evolving relationship between SAP and Apple.


User-centric design is a great example of how to take an approach from the consumer world into the enterprise world: when you roll out a new mobile app for consumers for a well-known brand, there's a good chance you're going to have a few million users on day one, and no chance of a Change Management program to support it! The Bilue approach means that every app built is inspiring, intuitive and easy to use from the start; otherwise it's a lost cause.


SAP Cloud Platform & Bilue - A Perfect Fit

When Cameron and I started taking a closer look at SAP Cloud Platform (SCP) beyond the SDK for iOS we found that the capabilities of SAP Cloud Platform around emerging technologies like Blockchain, Machine Learning and IoT mapped closely to the expertise that Bilue had in-house, in addition to design and mobile.


And so in September 2017 I came on board to focus on bringing Bilue's expertise in user experience design, mobile and emerging technologies to the enterprise market. All underpinned by SAP.


In that time we've been quietly working away…

  • We've built one of the first native iOS apps in the region on SAP using the SAP Cloud Platform SDK for iOS
  • We've built native Android apps on SCP as well (and have the scars to prove it!)
  • We've built out a pilot app with Voice UI on Realwear hooked into SAP ERP via SCP
  • We've worked on a number of design engagements, redesigning 1st generation Fiori apps to be relevant and intuitive on desktop and device
  • We've fostered relationships between SAP customers and Apple as we help SAP customers build out their mobility roadmap
  • We're now working with several organisations on their digital journey which encompasses not just mobility but the bigger picture from capture of data in the field using IoT sensors and drones, to aggregation of that data and subsequent automation of related business processes by leveraging insights that machine learning models can bring to dissemination of the resulting information to help people make informed decisions in the field and on the go.

And though it's taken a while, this week we're finally able to announce that we are now officially an SAP Partner, with our complete focus on the capabilities that SAP Cloud Platform provides.


All in all, it's been a full-on 8 months! But as I said to someone the other day - I'm pretty lucky that I get to turn up to work with a group of clever guys and girls who think about things a bit differently. And together we get to work out how we apply the latest technology to help solve the problems of our customers.


So yes, I’m still working with SAP and probably will be for a little longer yet!

Seizing the emerging tech initiative

I’ve got a few thoughts on the emerging tech space that I’d like to share. It seems to me that there’s a great deal of opportunity that isn’t being exploited to its fullest, and I’d like to propose a way to unblock this.

First some context. Never before have so many emerging technologies matured all at once, and at such pace. In the past five years alone, a vast array of technologies have burst onto the scene, led by the tech giants, and found their way into the hands of hundreds of millions of users – Touch ID and Face ID, AR and VR, machine learning / AI, voice products, and much more have not just become available, but are cheap, widespread and very high quality. And the cheap, enterprise-grade services that support these technologies are no less extensive – just check out this list of Amazon AWS products by way of example. So how should we go about selecting which technologies are right for us?

This presents us with both a challenge and an opportunity. The challenge is sorting through this breadth of technology, as well as even more nascent tech, to uncover the genuine value. The opportunity is to seize the advantage ahead of the competition. And yet there’s a lot of hesitation to seize this opportunity.

There are several well-established design techniques for bringing emerging technology to life, where the choice of technology is made up front and where this choice is the driving force for the project (“technology led”). But my argument would be that too many projects start with a limited remit in terms of technology choice (“user led” or “business led”) – the technology constraints are determined too early, greatly limiting the potential for unusual solutions that could return much better results. We need a way to meet the needs of the end users, meet business goals AND consider how emerging tech could potentially serve both.

I believe that the role of engineers within these projects needs to evolve. Up until now, their typical involvement would be to validate a suggested approach e.g. “is this possible?”, and “how long might it take to build?”. But the problem nowadays is that designers – and even individual engineers – can’t be expected to have exposure to the full breadth of technology that could solve a given problem. Engineers with deep expertise need to become involved earlier in the process, and be asked far more open questions e.g. “how might we solve this problem?” before the solution has properly taken shape.

For example, we’ve been building iOS and Android apps on the SAP Cloud Platform, and it was vital that the engineers were involved from the very beginning, not just guiding designers on what was possible, but also highlighting platform features that could help us take a better approach. Another example might be an engineer who is aware of services like AWS Sumerian for AR/VR, or of machine learning capabilities, and knows how building models and importing them into CoreML on iOS could provide users with unique value. Engineers often think in a different way to designers, and combining these perspectives is increasingly not just preferable, but vital – especially if it can happen before projects are too strictly defined or constrained.

In John Maeda’s excellent talk at SXSW, he talked about the need to find Computational Designers. This mythical beast not only has a deep understanding of classical design and Design Thinking, but also knows technology inside out – “has facility with representational codes and maybe programming codes. Knows what is easy and possible, hard and possible, difficult and impossible for now”. I applaud the sentiment, but believe this understanding can more practically sit across multiple people each with deep expertise to get the best results.

In addition though, there’s an obligation on the part of designers to actively invest time understanding the world of emerging tech, to bridge the gap with engineers in pursuit of the best outcomes, and to help the team become greater than the sum of its parts. We’re lucky at Bilue to have great engineers a shoulder-tap away; for those who aren’t so fortunate, there’s an ever-increasing gap to try and bridge. But there are plenty of resources out there for people who are willing to learn. Carpe diem!

ChicagoRoboto 2018

ChicagoRoboto is an annual Android conference held in Chicago, IL. It’s currently in its second year, and in this post I’ll cover some of the highlights from 2018.

The speaker lineup was once again top notch; ChicagoRoboto does a great job of enticing the best speakers from around the globe to attend and present. This year’s speakers included Googlers, Google Developer Experts and other Android community members. Not all of the talks were tech focused; some highlighted how we can grow the Android community and give back to make things better for everyone.

Speakers 2018

All the sessions were great; here are a few highlights, with links to the slides where available.


No More □  —  Mastering Emoji on Android

“Tofus (□) are representations used when a specific character (like an Emoji) cannot be displayed. You have seen them, and so have your users.

Thanks to EmojiCompat, now Android developers have a way to provide Emoji compatibility for older devices, but does it solve all the issues developers have with Emoji?

Have you wondered why Twitter counts characters differently depending on the Emoji? Or how gender and skin tone Emoji modifiers work? How can I have a similar functionality as Slack on my app with custom Emoji? Do all your users see the same Emoji?”



Espresso Patronum: The Magic Of The Robot Pattern

“Are you one of the numerous developers who wants to implement Espresso testing but hasn’t? Perhaps it’s for one of the common reasons – not enough expertise or time, it feels like a daunting task, or it feels downright tedious. I have personally felt each of those things. All of that changed once I learned about the robot pattern.”



The Road to Kotlintown III: Delegate 95 to Coroutine 66

“Even if you’re brand new to Kotlin, you might know that you can right-click any Java file and convert it automatically. Score! But wait, what are all these “!!” and why is the code littered with “?”. Sure, the code compiles, but how do you make the code not just compile but follow best practices? How do you get closer to making your code idiomatic?
In the third part of our series on learning the cool and idiomatic parts of Kotlin, we’re going to look at some intermediate Kotlin topics.”


ConstraintLayout 2.0

“ConstraintLayout 2.0 will be introduced early 2018, with many new features. This talk will present those new features and concepts.”



Rinsing the Brush: Picasso 3.0

“Picasso is a powerful image downloading and caching library for Android but since its launch in 2013, other libraries have improved or entered the scene and new requirements have come up.

In this talk, we’ll:

  • Dig into the internals of how Picasso works
  • Compare and contrast to other image libraries
  • Discuss latest improvements as we push to 3.0”



In-depth path morphing w/ Shape Shifter

“Writing high-quality path morphing animations for Android is a near impossible task. In order to morph one shape into another, the SVG paths describing the two must be compatible with each other—that is, they need to have the same number and type of drawing commands. Unfortunately popular design tools, such as Sketch and Illustrator, do not take this into account, and as a result engineers will often have to spend time tweaking the raw SVGs given to them by designers before they can be morphed. To address this issue, I built a web app called Shape Shifter (https://shapeshifter.design), a tool that helps developers and designers to more easily create path morphing animations for their Android apps.”



Videos of the sessions will be available in a few weeks, so keep an eye out for those if you want to learn more about the sessions listed above, or any of the other sessions presented this year.

If you missed the conference in 2017, you can watch and listen to the sessions on YouTube.

Key learnings and take-aways

The conference presented lots of useful information. It’s great to see that Picasso is getting an update, and the features it’s going to include look promising, especially since they’ll make use of the new ImageDecoder APIs in Android P.

ConstraintLayout 2.0 takes what is already a great library and elevates it even further. The new Helpers and Decorators should allow for some interesting new layout features; they just need to work out how people can package these up as libraries for use by others.

The talk by Alex Lockwood on Shape Shifter and SVG path morphing was very interesting; the dig into how SVG files are structured, and the problems that arise when trying to morph between two shapes, was really well presented.

The conference had about 160 people in attendance, and the general feeling in the room on both days was very positive; there was lots of discussion outside of the conference as well, which was great. Chicago is a long way to travel from Sydney, but this conference is definitely worth checking out if you don’t mind the 20 hours of travel time to get there.

Random photos from 2018

Here are some photos I took outside and around the conference.


Ryan, John and Jerrell – the Organisers


After Party @ The Nerdery

VR Beer Pong – After Party @ The Nerdery

Lego – After Party @ The Nerdery

One final story from the conference: as we were leaving the after party, several of us got stuck in the lift/elevator and were rescued an hour later by the fire brigade. Nervous humour kept our spirits up, and thankfully we didn’t have to resort to eating anyone to stay alive. This took place on the evening of the first day, and several of the occupants of the lift were speaking the next day, so the whole fate of the conference rested in the hands of the firemen/women who came to our rescue. Thanks!

Trapped….but with Wifi 🙂


Peeking Under The Flutter Covers

There are many great articles that demonstrate some of the more prominent features that Flutter offers, such as hot-reloading and cross-platform development.

In this article, however, I’m going to have a look at some of the lesser-discussed aspects of Flutter.

This article is aimed at developers, tech leads and decision makers that are familiar and comfortable with at least one of the two main native development platforms – iOS and Android.

Getting up to Speed

Getting up to speed with a new development framework can generally be broken into three separate areas: language, libraries and tooling.


Language

If you know Java (or Swift or Kotlin), moving to Dart is a relatively easy exercise. Most of the flow control statements, data types and data structures will be very familiar.

One area where Flutter diverges is that all UI layout is done in code. If you’ve only ever used Interface Builder this may be a little confronting; however, for Android developers who are used to defining layout in XML (and iOS developers who programmatically construct UI, or React/React Native developers) it will be an easy transition.
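As a rough sketch of what layout-as-code looks like (widget names from the Material library of the time; the screen and labels here are purely illustrative), the entire widget tree is declared with nested constructor calls rather than XML or a storyboard:

```dart
import 'package:flutter/material.dart';

// A minimal screen laid out entirely in code: a column containing
// a text label and a button, wrapped in the standard scaffold.
class GreetingScreen extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Greeting')),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Text('Hello from Flutter'),
            RaisedButton(
              onPressed: () => print('Button tapped'),
              child: Text('Tap me'),
            ),
          ],
        ),
      ),
    );
  }
}
```

Because the layout is ordinary Dart, it can be composed, extracted into functions and driven by plain control flow, which is a large part of Flutter's appeal.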

I’ll talk a little more about the specifics later on, but by and large, using the Dart language should not be a major transition hurdle.


Libraries

This is the hardest part of getting up to speed.

Built-in Library

Developers will need to forget all about native platform functions and features for dealing with the various types. While the language itself isn’t particularly hard to grok, every small action developers have taken on their native platform will now have to be looked up and remembered.

It was surprising how time-consuming this can be. For example, developers need to learn which specific Dart properties and functions are needed to:

  • Find the length of a string
  • Determine the index of a string in another string
  • Handle UTF-8 and unicode correctly
  • Add/remove items from a list
  • Manipulate dates (in a safe and sensible way)
  • Deal with exceptions (throwing and catching)
  • Regular expression handling
  • URL manipulation
  • Send/receive data over the network
  • Parse/create JSON
  • Many others…

While not rocket science, I found that I spent a lot of time looking up property/function names to figure out the Dart way of doing things. The built-in Dart library is large and rich – it will definitely take time to re-learn the Dart-specific idioms and features.
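To give a flavour of the re-learning involved, here is a quick sketch of a few of the everyday operations from the list above, using only the core library (the string and dates are illustrative):

```dart
import 'dart:convert';

void main() {
  var s = 'Hello, Flutter';
  print(s.length);             // length of a string
  print(s.indexOf('Flutter')); // index of a substring within a string

  var items = <String>['a', 'b'];
  items.add('c');    // add an item to a list
  items.remove('a'); // remove an item from a list
  print(items);

  // Date manipulation: parse an ISO date and add a week.
  var when = DateTime.parse('2018-05-01').add(Duration(days: 7));
  print(when.toIso8601String());

  // Parse a JSON string into a map.
  var decoded = json.decode('{"name": "Ada"}');
  print(decoded['name']);
}
```

None of these are hard individually; the cost is in the sheer number of small idioms that have to be looked up the first time.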


Traditional mobile development has a number of package management tools (CocoaPods, Carthage, Gradle, Maven), and Dart introduces another one. The mechanics are quite familiar in that you edit a control file (pubspec.yaml), fetch the dependencies, and a file is created that contains the specific versions that were fetched (pubspec.lock).

Nothing too onerous here. However, it takes time to learn which packages are typically included in a Flutter app. For example, Dagger and Retrofit are ubiquitous in Android projects – what are the equivalents for a Flutter app? Similarly, libraries such as Alamofire and Result are extremely common in iOS apps.
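By way of a sketch (the app name and version constraint are illustrative), declaring a third-party package in pubspec.yaml looks like this:

```yaml
name: my_flutter_app
description: A sample Flutter project.

dependencies:
  flutter:
    sdk: flutter
  # A third-party package from pub; the caret pins a compatible version range.
  http: ^0.11.3

dev_dependencies:
  flutter_test:
    sdk: flutter
```

Running `flutter packages get` then fetches the dependencies and records the exact resolved versions in pubspec.lock.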


Tooling

Notionally, Flutter lets you work in either Xcode or Android Studio. However, with Flutter coming from Google, there is obviously far tighter integration into Android Studio. For example, Android Studio has support for code completion, breakpoints, image swatches, the Flutter inspector, template projects, etc.

Personally, I found that developing in Android Studio and deploying to the iOS Simulator and Android Emulator was an extremely efficient workflow.

If you’re familiar with Android Studio, this will be a very easy transition. Xcode users will naturally want to continue using Xcode – which I’d say is possible but not advisable, so there’ll be some cross-skilling required for iOS devs to learn Android Studio.

Articles and Self Learning

To be honest, I think the documentation for Flutter is pretty good. They’ve made a conscious effort to provide documents that are guide-based for getting started, and reference-based for when you want to know a bit more about the specific parameters.

Flutter provides quite a few example projects that demo a range of techniques. However, there is definitely a shortage of articles and example projects that discuss techniques for achieving specific goals or explore various architectural approaches for building an app.

To be sure, there are some good articles out there, but (as expected) nowhere near the volume for native Android and iOS development. I also found that many articles or github repos are essentially just extensions of the sample projects that Flutter provides in its documentation. For example, I found it very difficult to find decent examples on animation (other than the ones that Flutter provides).


Flutter is in a somewhat awkward situation at the moment, as the officially supported language is Dart 1. However, it is in the process of transitioning to Dart 2 (it may have already done so) and provides some instructions on how to use the Dart 2 beta.


I must admit I had trouble finding a concise description of the specific changes between Dart 1 and 2 (most references just point to a 10,000-line TeX file). From what I can glean, though, the major change (which I welcome with open arms) is that types are now mandatory. Dart 1 is a dynamically typed language that supports type annotations; it also offers an additional mode called strong mode which enforces type safety. In Dart 2, types are mandatory but the compiler allows them to be inferred. If you were already using strong mode in Dart 1, moving to Dart 2 will be much easier.

Sadly, though, Dart’s treatment of nullability and optionals is poor – at least compared to Kotlin and Swift (and even Java/Objective-C with their nullability annotations). There are some operators that give a little syntactic sugar when dealing with objects that could be null; however, the compiler won’t enforce nullability checks. There have been a number of proposals, discussions and prototype implementations dating as far back as 2011, but as best I can tell, non-nullable types won’t be introduced until after 2.0 is formally launched. This makes me sad.
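The sugar in question is the family of null-aware operators. A quick sketch, written for the pre-null-safety Dart of the time (note the compiler does nothing to stop you dereferencing `name` before it is assigned, which is exactly the complaint):

```dart
void main() {
  String name;                // uninitialised, so implicitly null
  print(name?.toUpperCase()); // ?. short-circuits to null instead of crashing
  print(name ?? 'anonymous'); // ?? supplies a default when the value is null
  name ??= 'Ada';             // ??= assigns only if the variable is null
  print(name);
}
```

Helpful, but all opt-in: nothing forces a caller to use `?.` where a value might be null, which is the gap Kotlin and Swift close at compile time.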

As a language, Dart is very similar to Java. Many types and their behaviours look exactly the same as their Java counterparts. For example, Dart uses the same distinction between Exception and Error.

While both languages support generics (eg. List<String>), one key difference is that Dart doesn’t implement type erasure.

One thing that is missing from the Flutter implementation of Dart is dart.mirrors, which means code that relies on reflection/mirrors is not possible.

In terms of support for asynchronous activity, the language itself has first-class support for async and await statements which is really nice. It doesn’t use threads, but rather uses isolates which are similar to Erlang actors. An isolate is like a parallel worker, but doesn’t share memory or variables. You can pass data between isolates using messages and “ports”. Another nice touch is first class support for Futures and Streams.
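A minimal sketch of both mechanisms (the function and worker names are illustrative): a Future consumed with async/await, and an isolate that reports back over a port rather than through shared memory.

```dart
import 'dart:async';
import 'dart:isolate';

// An async function returning a Future; callers can simply await it.
Future<int> slowDouble(int x) async {
  await Future.delayed(Duration(milliseconds: 10));
  return x * 2;
}

// Isolate entry point: no shared state, so results go back via a SendPort.
void workerMain(SendPort replyTo) {
  replyTo.send('hello from the isolate');
}

Future<void> main() async {
  print(await slowDouble(21)); // prints 42

  var receivePort = ReceivePort();
  await Isolate.spawn(workerMain, receivePort.sendPort);
  print(await receivePort.first); // first message from the worker
  receivePort.close();
}
```

The message-passing model takes some adjustment coming from threads, but it removes a whole class of data races by construction.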

My personal opinion is that Dart is a big step forward from Java, but a step backward from Kotlin and Swift (e.g. the absence of optionals and of associated values in enums). All in all, though, I found it to be a relatively modern, expressive language. I’m sure there are language purists who may contradict me, but as a developer who just needs to leverage the platform to get work done, it is pleasant enough to develop with.

Virtual Machine

The Dart Virtual Machine (VM) is what is used to execute Dart code. The Dart VM is a bit tricky to describe because there are a couple of modes in which it operates:

  • Just in time (JIT) – This is the mode that is used at development time and is what underpins the hot-reloading functionality. In this mode, the Dart VM both compiles and executes Dart source code (in addition to the provision of the runtime libraries). Obviously, this is a bit slower than a traditionally compiled app, but offers a more dynamic execution.
  • Ahead of time (AOT) – Used when packaging your application for release. In this mode, the Dart source gets compiled to native machine code for the hosting platform. In this case, the VM is really only responsible for providing a set of runtime libraries… execution of the machine code is the responsibility of the hosting operating system.
  • Dart byte code (DBC) – This is where your Dart source gets compiled to an intermediate byte code (DBC) that is then interpreted by the Dart VM. This mode is more analogous to the traditional Java/Dalvik interpreter, where the VM interprets a stream of byte codes. I don’t believe this is used in Flutter.

Network Layer


At its most basic, the code to perform a GET request from a remote server looks like this (note that await can only be used inside an async function):

import 'dart:convert';
import 'dart:io';

Future<String> fetchUsers() async {
  var http = new HttpClient();
  var uri = new Uri.https('api.website.com', '/users', {});
  var request = await http.getUrl(uri);
  var response = await request.close();

  // now let's naively convert the raw bytes from List<int> to a String
  return await response.transform(utf8.decoder).join();
}

This is a good start, but getting back a String is not particularly consumable by our app just yet.

Converting JSON

Building on the above code snippet, Dart offers some more built-in converters (via import 'dart:convert') to easily transform the String into a List<dynamic> or Map<String, dynamic>.

List<dynamic> listOfUsers = json.decode(responseString);

(Dart 1 exposed this as JSON.decode; Dart 2 renames it to the lowercase json.decode.)

Unmarshalling into Strongly Typed Objects

Although the previous two steps were easily achieved, unfortunately, Flutter has extremely poor support for converting the output of json.decode() into strongly typed objects.

Dart does support runtime reflection using Mirrors, however that feature is explicitly disabled in Flutter.

This leaves us with two choices:

  1. Manually write code that maps JSON to/from our objects ourselves. This is error prone, but relatively straightforward.
  2. Use tooling to generate code that will do it automatically. There are a couple of options here: dson, json_serializable, or jaguar_serializer. While having to generate code is a pain, Dart/Flutter provides some tooling to make things a bit easier.
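A minimal sketch of option 1, the hand-written mapping (the User type and its fields are illustrative):

```dart
import 'dart:convert';

class User {
  final String name;
  final int age;

  User(this.name, this.age);

  // Hand-written mapping from a decoded JSON map to a typed object.
  factory User.fromJson(Map<String, dynamic> map) =>
      User(map['name'], map['age']);

  // json.encode calls toJson() automatically for non-primitive objects.
  Map<String, dynamic> toJson() => {'name': name, 'age': age};
}

void main() {
  var user = User.fromJson(json.decode('{"name": "Ada", "age": 36}'));
  print(user.name);        // Ada
  print(json.encode(user));
}
```

Simple enough for one class, but every field appears in three places, so across a large API surface the boilerplate and the opportunities for typos add up quickly.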

All in all, converting JSON to/from strongly typed objects is a big step backwards from what most mobile developers are used to, with tooling like Android’s GSON/Moshi/Retrofit and iOS’s Encodable/Decodable.

Android vs iOS

One of the great benefits of using Flutter is that you can actually write code once and have it run on multiple platforms. However, it comes at the expense of honouring the native platform interface conventions.

With Flutter coming from Google, it should be no surprise that the design language and widget implementation are heavily slanted towards Material Design. By default, a new project uses MaterialApp, which is a bit confronting for iOS developers.

Both Apple and Google have spent many years defining both a design language and implementation for how components, screens and transitions are used within an application. Flutter steps away from these patterns in two key ways:

Ground-up Implementation

Because Flutter’s implementation controls the entire rendering pipeline, everything you see on your screen is drawn by Flutter (including animations). As hard as Flutter tries, it cannot guarantee that the widgets and behaviours are the same as their native counterparts. When the operating system changes the way native components are rendered, Flutter apps do not naturally change in the same way native apps do.

If your app is a totally custom design that does not conform to Google’s Material Design or Apple’s HIG, this will not be a problem. However, if your application adopts native platform conventions, this will be a major sticking point.

Platform-specific Widgets

Flutter offers many standard widgets that are platform agnostic. They also offer a set of Android- and iOS-specific widgets.

This sounds really appealing, however, I have two concerns about this approach:

Parity between iOS and Android

At the time of writing, there are only 13 iOS-specific widgets (there are three times as many Android-specific widgets). Given that Google is driving Flutter, I totally understand that it is likely to be Android-first; however, as someone responsible for shipping apps with high fidelity across both platforms, it remains a concern that iOS doesn’t appear to receive the same level of attention as Android.

A perfect example of this is that the Cupertino widgets do not contain any support for themes, whereas the Material Design widgets contain full support.


Explicit Construction

In order to use iOS- or Android-specific widgets, they must be constructed explicitly, i.e. you don’t just ask Flutter for a “button” and have Flutter decide whether you get a RaisedButton or a CupertinoButton. Instead, you need to explicitly instantiate the platform-specific widgets. Another example is AlertDialog vs CupertinoAlertDialog.

The major consequence of this is that, if you intend to use widgets that conform to each platform’s natural design language, you will need to build two user interfaces for your app. In theory, you can build a single screen and have your app choose which widget to build, however your code will either end up littered with if conditions or you’ll need to move to some sort of factory pattern that abstracts some of that away.

In addition to having to duplicate your user interface, it is also important to note that RaisedButton and CupertinoButton share no common type ancestry other than StatelessWidget, so there’s no notion of simply passing “the button” to a function that performs common behaviour.
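As a sketch of the factory approach mentioned above, a single hypothetical helper could centralise the platform check (adaptiveButton is my own name, not a Flutter API, and it assumes the app only runs on iOS and Android):

```dart
import 'dart:io' show Platform;

import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';

// Hypothetical factory: hides the platform if-condition behind one function.
Widget adaptiveButton(String label, VoidCallback onPressed) {
  if (Platform.isIOS) {
    return new CupertinoButton(
        child: new Text(label), onPressed: onPressed);
  }
  return new RaisedButton(child: new Text(label), onPressed: onPressed);
}
```

The return type has to be the lowest common denominator, Widget, which illustrates the point: beyond that there is no shared “button” abstraction to program against.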

User Interface

Image handling

As expected, Flutter has support for supplying images based on different pixel densities. They do this using what they’ve dubbed “variants”, which is essentially a fancy name for the existing Android pattern of having sub-directories that contain images with the same name as in the root folder. For example, the following file hierarchy shows how Flutter deals with providing an image for various pixel densities:

assets/share.png
assets/2.0x/share.png
assets/3.0x/share.png

The above example represents a single image asset called share.png that has different files for pixel density ratios of 2.0 (xhdpi) and 3.0 (xxhdpi). The image can be referenced from code like the following:

// return image based on pixel density of current device
var image = Image.asset("assets/share.png");

Note that they’ve moved towards using an explicit ratio. In the case where there is no exact match, Flutter will choose the ratio that is the closest.

Adding images involves explicitly declaring them in pubspec.yaml and making sure the images are located in the appropriate directory and named correctly. This is quite similar to Android, but a bit of a step backward from using Xcode’s xcassets.

At the moment, Android Studio doesn’t show images in the gutter (like they do for an Icon), which is a shame.

Also, you run the risk of typos in your image names (unlike Xcode’s #imageLiteral). However, I expect that the IDE will eventually get around to supporting these nice features.

I’ve saved the worst for last, though… unfortunately, there is no support for SVGs. There are a couple of open source attempts, but not having first class support for at least one of the vector image formats is a sad story.

Layout management

Flutter’s layout system uses an (almost) declarative, custom, constraint-based framework that is somewhat similar to iOS’s Autolayout and Android’s ConstraintLayout.

An example (taken from the Flutter documentation) of the code to render a small section of a screen might look like:

Widget titleSection = new Container(
  padding: const EdgeInsets.all(32.0),
  child: new Row(
    children: [
      new Expanded(
        child: new Column(
          crossAxisAlignment: CrossAxisAlignment.start,
          children: [
            new Container(
              padding: const EdgeInsets.only(bottom: 8.0),
              child: new Text(
                'Oeschinen Lake Campground',
                style: new TextStyle(fontWeight: FontWeight.bold),
              ),
            ),
            new Text(
              'Kandersteg, Switzerland',
              style: new TextStyle(color: Colors.grey[500]),
            ),
          ],
        ),
      ),
      new Icon(Icons.star, color: Colors.red[500]),
      new Text('41'),
    ],
  ),
);

You’ll notice that the layout objects (Row, Column, Expanded, Container, etc) and rules are interspersed within the components themselves (Text and Icon). This is because Flutter’s layout system is built on top of the Widget concept. Layout components are just another part of the widget tree.

Flutter comes with a bunch of built-in layout widgets that let you control things like:

  • Padding
  • Alignment
  • Size
  • Aspect ratio
  • Transformations
  • Flowing
  • Grids
  • Tables
  • Offscreen buffering

One thing that I haven’t been able to find is a way to create relative constraints between two widgets. For example, imagine I had two Text widgets and I wanted the first one to be, say, 50% of the width of the second one. If the widgets were peers in the same node of the widget tree, I can see how I might be able to construct my own custom container that defines this layout relationship; however, if the widgets were scattered in different locations within the widget hierarchy, I feel like it would be much harder. I guess I’d probably start with MultiChildRenderObjectWidget, but it certainly doesn’t look easy.
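For the simple sibling case, flex factors can get close without a custom container. A sketch of a 1:2 split (so the first Text occupies half the width of the second) might look like:

```dart
import 'package:flutter/material.dart';

// Sketch: two sibling widgets where the first gets half the width of the
// second (a 1:2 split of the row), using flex factors rather than an
// explicit cross-widget constraint.
Widget ratioRow() {
  return new Row(
    children: [
      new Expanded(flex: 1, child: new Text('small')),
      new Expanded(flex: 2, child: new Text('twice as wide')),
    ],
  );
}
```

This only works because the widgets share a parent; the harder, scattered-widget case has no equivalent shortcut that I can find.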


Animations

Flutter has support for several types of animations out of the box. It broadly breaks animations down into two general categories: tweening and physics-based.


Tweening

Tweening is essentially animation between two values. For example, between a height of 50 and 100, a location of (5, 5) to (10, 10), green to red, etc.

Like most animation libraries there are quite a few ways to achieve the same objective. Basically, though, animating in Flutter means externally managing the timing (using something like an AnimationController) and periodically calling setState() in order to force a rebuild.

At first glance, this feels a little awkward to me, in that the sequencing of the animation is defined in a separate place to the widgets it animates.
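A minimal sketch of the controller-plus-setState pattern described above — a fade-in, with all names other than the Flutter APIs being my own:

```dart
import 'package:flutter/material.dart';

// Sketch: an external AnimationController drives the value;
// setState() forces a rebuild on every tick.
class FadeIn extends StatefulWidget {
  @override
  _FadeInState createState() => new _FadeInState();
}

class _FadeInState extends State<FadeIn>
    with SingleTickerProviderStateMixin {
  AnimationController _controller;

  @override
  void initState() {
    super.initState();
    _controller = new AnimationController(
        vsync: this, duration: const Duration(seconds: 1))
      ..addListener(() => setState(() {})) // rebuild on every tick
      ..forward();
  }

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return new Opacity(
        opacity: _controller.value, child: new Text('Hello'));
  }
}
```

Note the amount of ceremony (a mixin, lifecycle management, a listener) for a one-second fade — this is the boilerplate the transition widgets below help hide.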

Two good articles on Medium give some concrete examples on how to apply some custom animations to your widgets: part 1 part 2

Fortunately, Flutter provides some native transitions that hide some of the boilerplate code, such as SizeTransition, RotationTransition, FadeTransition, and so on.

Even with all that though, I feel like there’s still a little bit of state management (or superclassing) required to perform animations. I’m yet to find a simple, boilerplate-free Flutter version of something like:

UIView.animate(withDuration: 1.0, delay: 0.5, options: .curveEaseOut) {
  // animate a property
}
Physics Based Simulations

There are a number of simulations that can be used to generate animations with springing, friction and gravity. I must admit, though, I haven’t spent much time investigating these in detail – however, I do know that these simulations are used to provide the iOS-specific scrolling feel.

Writing Custom Components

Flutter heavily promotes component creation by composition. To this end, it is very easy to create new widgets, override the build() function, and away you go.
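For example, a composed widget is just a class with a build() method. This sketch combines two existing widgets into a new one (LabelledIcon is a made-up name for illustration):

```dart
import 'package:flutter/material.dart';

// Sketch: a new widget created purely by composing existing ones.
class LabelledIcon extends StatelessWidget {
  final IconData icon;
  final String label;

  const LabelledIcon(this.icon, this.label);

  @override
  Widget build(BuildContext context) {
    return new Column(children: [
      new Icon(icon),
      new Text(label),
    ]);
  }
}
```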

You can also create “paintable” components that allow you to totally customise the drawing using a canvas that is passed in. A very simple example that paints a red rectangle looks something like:

class RedRectangle extends CustomPainter {
  @override
  void paint(Canvas canvas, Size size) {
    final rect = Offset.zero & size;

    final paint = new Paint()
      ..color = Colors.red[400]
      ..style = PaintingStyle.fill;

    canvas.drawRect(rect, paint);
  }

  @override
  bool shouldRepaint(RedRectangle oldDelegate) => false;
}
Interestingly, instances of this type cannot be embedded directly in the render hierarchy because they do not subclass Widget. You need to wrap them in a CustomPaint widget:

Widget build(BuildContext context) {
  return new Scaffold(
    appBar: new AppBar(
      title: const Text("Demo App"),
    ),
    body: new Container(
      child: new CustomPaint(
        painter: new RedRectangle(),
        size: new Size(200.0, 100.0),
      ),
    ),
  );
}
Initially, I thought this was a bit cumbersome, but after a bit more digging I found that you can also add additional children to the CustomPaint widget. Being able to provide your own painter to the widget tree opens up a huge range of possibilities.


Theming

As mentioned earlier in this article, Flutter “supports” the notion of themes – as long as you only use Material widgets. If you use iOS widgets, or write custom components, you have to do the theming yourself.

A Theme is, like most UI-related elements, just a widget. By default, the MaterialApp wraps the app within a Theme which will be used by the Material widgets from that point on. At any point, you can have a nested theme from that point forward by wrapping components within your own (perhaps modified) theme.

An example of how you can override the colour of specific widgets is shown below:

body: new Center(
  child: new Container(
    color: Theme.of(context).accentColor,
    child: new Text(
      'Text with a background color',
      style: Theme.of(context).textTheme.title,
    ),
  ),
),
Even though the Theme classes are part of flutter/material.dart, in theory, I’m pretty sure it would be possible to extend them so that you could still wrap your non-Material widgets in a Theme widget. However, as I mentioned before, applying the various theme settings would need to be done on a widget-by-widget basis.

I’m quite disappointed in the decision to only provide theming to Material widgets. Make no mistake, theming across multiple platforms is difficult, however, it would have been nice to at least have the theming infrastructure implemented in a common way even if they only provided a Material implementation.

Tablet support

Unfortunately, I don’t believe there is strong support for iPads and tablets. Searching the Flutter codebase, I found less than a handful of lines of code that specifically mention iPad or tablet.

Adaptive Layout

One part of offering tablet/iPad support is adaptive layout and one strength of Flutter is its support for flexible layout. I have a lot of confidence that an individual screen can be coded that expands to fill the available space.

Tablet-Specific Layout

However, these types of devices have different navigational patterns and layouts should be changed in order to make the user’s experience as seamless as possible.

Android offers this capability by using custom layout folders based on qualifiers such as width/height/etc (eg. res/layout-w600dp) which will automatically take into account system decorations and whether the user is in multi-window mode. Similarly, iOS has strong support for this using Size Classes, which native components such as UISplitViewController will automatically recognise.

Writing widgets that adapt to screen size in Flutter is definitely possible, but as far as I can tell, there are no built-in widgets that offer this functionality out of the box.
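As a sketch, one way to branch on the available width is via MediaQuery; the 600-pixel breakpoint and the placeholder widgets below are arbitrary assumptions, not a Flutter convention:

```dart
import 'package:flutter/material.dart';

// Sketch: choose a layout based on the logical width of the screen.
Widget adaptiveBody(BuildContext context) {
  final width = MediaQuery.of(context).size.width;
  if (width >= 600.0) {
    // Tablet-ish: master and detail side by side.
    return new Row(children: [
      new Expanded(flex: 1, child: new Text('master list')),
      new Expanded(flex: 2, child: new Text('detail pane')),
    ]);
  }
  // Phone: single column, push to a detail screen instead.
  return new Text('master list');
}
```

This is the kind of logic that Android's resource qualifiers and iOS's Size Classes give you declaratively; in Flutter you write it by hand.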

Dependency Management

Dependency management in Flutter is managed using the Pub Package Repository. You can add dependencies by adding them to the pubspec.yaml file in the root of your Flutter project. This file, among other things, controls the list of dependencies that will be installed when you run flutter packages get.

An example of the dependencies section for a very basic app might look like:

dependencies:
  flutter:
    sdk: flutter
  collection: 1.14.6
  device_info: 0.2.0
  intl: 0.15.5
  connectivity: 0.3.0
  string_scanner: 1.0.2
  url_launcher: 3.0.0
  cupertino_icons: 0.1.2
  video_player: 0.4.1

dev_dependencies:
  flutter_test:
    sdk: flutter
  flutter_driver:
    sdk: flutter

The dependencies section describes the specific packages (and version numbers) that your Flutter app depends on at runtime. Like most packaging systems, you can specify a range of version constraints depending on how strictly you want to control the versions of your dependencies.

You can also define dev_dependencies which are for dependencies that are only needed during tests (or perhaps for build-time processes / code generation).

Once packages have been fetched from the remote location, they are generally stored in $HOME/.pub-cache/hosted/pub.dartlang.org.

In terms of what files should be committed, the Dart folk have a pretty good page that describes what the various pubspec files are and which should be checked in.

If your development practice is to commit dependency graph artifacts into your app’s repo, it is a bit tricky to support. I haven’t found a specific technique that works yet, but as best I can tell, you’ll have to fiddle with using a path directive in your pubspec.yaml file and manually move files from $HOME/.pub-cache folder.

Bridging to Native

Calling Native Code

You can call back to native code quite easily using a platform channel. Essentially, this is just an RPC-style call where you can invoke a “remote” method passing some optional parameters.

At its simplest, this looks something like:

const nativeChannel = const MethodChannel("my.app.com/mychannel");

try {
  final int result = await nativeChannel.invokeMethod("someNativeFunction");
} on PlatformException catch (e) {
  // handle the failure
}
and then on the native side, you’d have code that looks like one of the following:


Swift:

@objc class AppDelegate: FlutterAppDelegate {

  override func application(
    _ application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    // ...

    let controller = window?.rootViewController as! FlutterViewController
    let channel = FlutterMethodChannel(
      name: "my.app.com/mychannel",
      binaryMessenger: controller)

    channel.setMethodCallHandler { (call, result) in
      result(123)  // return 123
    }

    // ...
    return super.application(application, didFinishLaunchingWithOptions: launchOptions)
  }
}

Kotlin:

class MainActivity() : FlutterActivity() {
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // ...

    val channel = MethodChannel(flutterView, "my.app.com/mychannel")
    channel.setMethodCallHandler { call, result ->
      result.success(123)  // return 123
    }
  }
}
You can use a similar technique to invoke methods from your native code back into Flutter code. Asynchronicity is supported in both directions.
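On the Dart side, receiving calls made from native code uses the same channel; the "refresh" method name below is hypothetical:

```dart
import 'package:flutter/services.dart';

const channel = const MethodChannel("my.app.com/mychannel");

// Sketch: handle calls made *from* native code *into* Dart.
void listenToNative() {
  channel.setMethodCallHandler((MethodCall call) async {
    if (call.method == "refresh") {
      // ... react to the native event ...
      return true; // value returned back to the native caller
    }
    throw new MissingPluginException();
  });
}
```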

There’s very little information that I can find on the relative performance of calls through these channels. Without writing exhaustive performance tests, my impression is that the performance overhead is imperceptible for the occasional call, however, I wouldn’t be wanting to make thousands of calls in a tight loop.

It is worth noting too, that you can create a Flutter plugin that lets you write a Flutter interface that bridges across into native code.

Launching Native Transitions

You can launch native transitions (eg. a Kotlin Activity or Swift UIViewController) by using a platform channel (per above) and manually starting an activity (using an Intent) or pushing to a new view controller.

While technically functional, I find this to be a really awkward handoff. Essentially, Flutter is just deep-linking into your native app. If you just need to display a single screen, this will work fine however if you have a mixture of Flutter/native, I can see this creating all sorts of navigational/experience challenges.

Embedding Native Widgets

As far as I can gather, there is no capability to embed a native component (ie. a subclass of UIView, Fragment or View) within a Flutter widget. On one hand, I can see how this would be extremely difficult given that most native components expect to be running within the context of a native run loop. On the other, it means that the extensive open source libraries of native iOS/Android UI components that have been written (and battle-tested) over the past few years are not available for use within a Flutter app.

I can see there is a proliferation of open source libraries being released every day that target Flutter, however it will be quite some time until these components are as rich and well-tested as the current plethora of iOS/Android open source components.

One thing that I have noticed is that many open source Flutter widgets only target a single platform, depending on the author’s familiarity with one or the other. This is totally understandable, as many devs are more comfortable/knowledgeable in a specific platform; however, it is a question that consumers of Flutter open source libraries will constantly need to be asking.


Limitations

At the moment, there are a number of coarse-grained limitations that I’ve discovered along the way. I’m sure that some of these will be solved at some stage, but for now, a couple of the bigger ones I’ve found are:

  • No Flutter code can run in the background. This means there’s no Alarm Manager, Geofencing, or UIBackgroundMode support. You can always add this code to your respective native codebases, but for now, there’s no Flutter equivalent.
  • 3D OpenGL is not supported.
  • You’re on your own with ARKit and ARCore.
  • Native integration with platform services (eg. HealthKit, Google Fit, Apple Pay, Android Pay, et al) is either not implemented or unlikely ever to be.
  • Maps are not supported.
  • As mentioned earlier, vector graphics are not supported.

Secure Storage

Out of the box, Flutter doesn’t provide any libraries that can access iOS’s Keychain or Android’s Keystore. There is one particular 3rd-party plugin that does provide access, but the API is basically a very simple key/value store.


Testing

One of the great things about Flutter is that having a single source for the majority of your code means that you only need to write your test cases once. I’ve lost count of the number of times I’ve discovered two slightly different sets of tests for the same business logic in comparable iOS and Android apps.

Flutter lends itself to super fast unit testing because you should only need the Dart VM to run the majority of your business logic and domain model tests.
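For example, a plain Dart test over business logic runs on the VM with no emulator or simulator involved; the Cart class here is a hypothetical model, not part of any framework:

```dart
import 'package:test/test.dart';

// Hypothetical business-logic class under test.
class Cart {
  final List<double> prices = [];
  void add(double price) => prices.add(price);
  double get total => prices.fold(0.0, (sum, p) => sum + p);
}

void main() {
  test('cart totals its line items', () {
    final cart = new Cart()
      ..add(10.0)
      ..add(2.5);
    expect(cart.total, equals(12.5));
  });
}
```

Because nothing here touches a widget or a device API, the whole suite runs in milliseconds on the Dart VM.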

If you want to perform UI automation testing, Flutter has a Selenium-like driver that you can use to write scripts that look something like:

test("tap on the button, verify result", () async {
  final SerializableFinder label = find.byValueKey("My label");
  expect(label, isNotNull);

  final SerializableFinder button = find.text("Press Me");
  await driver.waitFor(button);
  await driver.tap(button);

  String value;
  while (value == null || value.isEmpty) {
    value = await driver.getText(label);
  }
  expect(value, isNotEmpty);
});
Tests can easily be executed from the command line (and thus from a CI environment) by using the following command:

flutter test test/my_tests.dart

When running the tests from within Android Studio, the results are presented using the standard test runner UI which is quite nice.


Accessibility

Flutter has some initial support for accessibility; however, the documentation is pretty thin.

Following on from the initial implementation, there are a few outstanding tasks before it could be used in anger.

At the moment, some widgets (eg. Icon) have built-in support for a property called semanticLabel which can be used to provide TalkBack and VoiceOver text. For those widgets that don’t have a built-in property, you can always wrap them in a Semantics widget. For example, below shows how to wrap a CircleAvatar inside a Semantics widget:

new Semantics(
  label: "View John's Profile",
  child: new CircleAvatar(
    backgroundColor: Colors.red,
    child: new Text('John'),
  ),
)
While it is great that you can wrap any widget to provide some accessibility assistance, having to add that extra layer for every widget is a discipline that I don’t think will extend to most apps.


Internationalisation and Localisation

Internationalisation (I18N) and localisation (L10N) mean many things to many people.

In terms of providing localised content for your app, Flutter has lots to offer as most of its I18N and L10N is provided by the Dart intl package.
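For example, locale-aware formatting from the intl package looks like the following (the en_US locale is an arbitrary choice):

```dart
import 'package:intl/intl.dart';

void main() {
  // Locale-aware date and number formatting from the intl package.
  final date = new DateTime(2018, 4, 1);
  print(new DateFormat.yMMMd('en_US').format(date)); // Apr 1, 2018

  final currency =
      new NumberFormat.currency(locale: 'en_US', symbol: '\$');
  print(currency.format(1234.5));
}
```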

For Android, the supported list of built-in locales is relatively modest, but I expect that to grow over time.

Sadly, iOS again takes a back seat to the Material Design widgets. Most of the Material Design widgets have some level of built-in localisation; however, I cannot see any localisation logic for the equivalent Cupertino widgets. This makes me sad.

Initially, I thought that the Dart intl package provides pretty solid support for ICU date handling… however, upon researching a bit more, it seems that they do not support timezones. Note that this issue has been around since 2012… wut?! Not only that, but the documentation for DateTime.parse() says:

If a time zone offset other than UTC is specified, the time is converted to the equivalent UTC time.

In today’s world, where apps are used 24×7 all over the world, having no timezone support is bad enough, but to knowingly return incorrect data is simply horrible.

Editors note: I would love to be wrong about this. Please let me know if this is the case.

Packaging and CI/CD

The build process for producing a deployable artefact is slightly different for each platform.


Android

Simply run the following command:

flutter build apk

Under the covers, this just runs gradlew assembleRelease. For a super simple app, this produces an APK that is about 8MB, of which about 2.5MB is in the icudtl.dat localisation file and it looks like the flutter engine libflutter.so takes up about 3.5MB.


iOS

The iOS build instructions are a bit unusual. They suggest that you need to first run flutter build ios and then go into Xcode and manually create an Archive. This is great for “Hello World” apps, but is not a workable flow when you’re using a CI server to produce automated builds. Until this gets resolved, it looks like the iOS build process consists of running flutter build ios, followed by a subsequent call to xcodebuild.

Interestingly, I can see that flutter build ios is actually running xcodebuild under the covers and it does create what looks like a sensible Runner.app package in the ./build/ios folder. I’m not sure why they don’t go the extra step and create the ipa file.

The world’s simplest Hello World app creates an un-thinned IPA that is about 60MB. About 30MB of that is the standard Swift library, 20MB is in Flutter.framework, and the remaining 10MB is in App.framework which looks like it contains the compiled Dart code.

Interestingly, the contents of the Runner.app/Frameworks/App.framework/App framework leaks a heap of internal information about the build machine. For example, if I run strings App.framework/App, I see stuff like:


The contents of the Flutter.framework/Flutter binary are a bit more benign and appear to contain the Flutter engine.

Aside: It bothers me more than it should that one of the flutter build commands uses the artifact extension apk and the other uses the platform name ios. Consistency matters. But I digress…

Multiple Flavors

In theory, Flutter supports different flavors by passing --flavor xxx to the flutter build command, however it doesn’t look like it works properly for iOS (works fine on Android – it just runs gradlew assembleXxxRelease).

For iOS, the recommended way to use a scheme is to pass the scheme name in the --flavor parameter (not sure why there isn’t a --scheme parameter). The problem is that it doesn’t use the configuration that is defined in the scheme, but instead looks for a configuration with the same name as the scheme with a -Release suffix. So, if you run flutter build ios --flavor xxx then you must also have a configuration called xxx-Release.

I’m sure this situation will improve eventually, but from what I can tell, there aren’t any concrete plans to address this. If you use schemes/configurations to manage your Xcode build config (which many apps do), this leaves things in a state of uncertainty.

Push Notifications

Generally speaking, there are two types of notifications: remote and local.

Native remote notifications are caught by the wrapping native application and will need to be passed into the Flutter components using a channel as described above. Alternatively, there is a Firebase plugin provided by Flutter that does a lot of the heavy lifting for you.

There is no native support for local notifications within the Flutter SDK, however, there are several open source plugins.


Documentation

The Flutter docs are pretty good. Not only do they provide loads of getting-started guides and tutorials, they also provide a jumping-off point to the reference documentation for the SDK itself – which, depending on where you look, is also well documented. There are a few places, though, where the details are pretty slim. All in all, though, I found the documentation to be good.

What is missing, though, is the plethora of blog posts that are available for iOS/Android detailing specific problems that have been encountered and solved. To be sure, there are some good blogs; however, given the relative age of the Flutter community, it will take time for these types of articles to appear.

When I’m searching for examples of how to use a particular component, I often just search github.com for projects where others have (hopefully) done something similar. What I found when researching many of the Flutter widgets, though, was that the vast majority of github projects were just clones of the example projects offered by the Flutter team.

The last thing that just isn’t there at the moment are articles describing alternate architectural patterns and how/where they are (or are not) appropriate. For example, there are many iOS posts contrasting MVC, MVVM, Viper, et al. I’ve seen very few of these so far.

I’m sure this will improve over time, however, at the moment there is a definite need for additional blog posts.

There are also a Gitter community and a Google group that offer some level of support.


Supportability

One of the things I’m very conscious of is how apps built with Flutter will be supported in the future. For example:

  • Will Google even support Flutter in a year’s time?
  • Flutter is evolving very quickly (which is great), however, what is their backward compatibility story? Will an app written today compile in a year’s time? Swift already has a similar problem at the moment, and it isn’t evolving as quickly as Flutter.
  • If I have someone from my team build an app using Flutter, and then someone else has to pick up new features afterward, how is that handoff going to work? How hard will it be for me to cross-train developers?
  • At the moment, iOS developers have a lot of experience in the native UX patterns of an iOS app, just like Android developers do for their platform. If I develop an app using Flutter, are my developers expected to have deep knowledge of the UX patterns for both iOS and Android, as well as knowledge of Flutter? How hard will it be for me to find developers that satisfy these skills, or are motivated to learn them?
  • What is the current skills market like? At the moment, both iOS and Android are beginning to become commodity skills and developers generally have many apps under their belts that contribute to their knowledge pool. How hard is it going to be to find similar developers for Flutter?
  • I work for a digital consultancy. Choosing Flutter for a client app over the native platforms is an important technical decision that could have long-term impacts on my client’s technical stack.

Code reuse

If I think about how an app would be built for two platforms, there are five main components:

  1. Object model, business logic, network unmarshalling, error handling, validation rules, et al. This is the “functional core” that Gary Bernhardt talks about in his Boundaries talk.
  2. Pieces of screens that are common across iOS and Android. For example, imagine you have a screen where the user can update their profile details. The contents of the screen itself (not the chrome outside or the transitioning to/from the screen) are very often exactly the same.
  3. iOS-specific transitions, animations, and patterns
  4. Android-specific transitions, animations, and patterns
  5. Unit tests

Out of these five items, Flutter has an excellent code reuse story for three of them (1, 2, and 5). The exact percentage that these three items represent within your app will vary, but it is almost certainly non-trivial. Having to only build those components once is definitely a time saver – leaving only the platform-specific components to be individually developed.

Note that if your app doesn’t conform to native platform conventions (eg. it is a game or a highly branded UI), then you could argue that even items 3 and 4 could benefit from code reuse.


Ten Point Summary

Things I Like

  • This isn’t just something that has been smashed together. It is an engineered solution.
  • I love the React-style declarative approach used for widget definition.
  • Hot loading is amazing!
  • Being able to reuse object models, unmarshalling, network handling, and all of the tests that go with it is excellent.
  • Dart 2 is a strongly typed language making refactoring much easier

Things I Don’t Like

  • Cross-platform SDKs mean a dip in fidelity to native platform conventions. UI components that are built from the ground up are just not the same as native components.
  • Poor iOS support. There are so many things where support for iOS is just tacked on as an after-thought.
  • Supportability is a big question mark that will rule out a bunch of scenarios
  • Network unmarshalling is a big step backwards from what we’ve been used to with Codable and Retrofit.
  • Lack of timezone support. While seemingly a single micro-issue, this really bothers me.

Would I Use It?

As always, “it depends”.

Apps I Wouldn’t Use It For (yet)

  • A flagship app that is going to be extended and maintained for years to come. I’m still very wary of the support burden.
  • Apps where following native UI/UX conventions is important.
  • Apps where my client is conservative with their tech stack approach.
  • Apps that have high native interaction (HealthKit, ARKit/ARCore, etc). I would, however, consider in-app purchasing and just bridging between the native code and Flutter.
  • An app that only needs to be built on one platform, although, having said that some of the productivity gains (React-style widgets + hot loading) may swing me in the other direction once I have a bit more experience.

Apps I Would Consider Using It For

  • Apps that need multiple platforms, but have a limited budget.
  • Brand-heavy apps that eschew native platform conventions in favour of a custom UI
  • Proof of concept apps
  • Short lifetime apps that need to be built quickly, but not necessarily maintained
  • My personal projects
  • Games

In Closing

Wow, this turned out to be much bigger than I thought! Hopefully I’ve covered a lot of topics that we should be thinking about when considering a new technology such as Flutter.

Of course, a lot of my summaries are subjective and prey to my personal biases and preferences. I do hope, though, that all of the information I’ve discussed is accurate. Please let me know if I’ve missed something or if you have thoughts you’d like to share.

I can’t wait to see what Google announces in a few weeks at Google I/O!!

Bilue expands leadership team with appointment of Experience Design Director


Sydney – 18 October 2017: Mobile agency Bilue has further strengthened its senior leadership team with the hire of Jason Massarotto as Experience Design Director.

Jason brings over 20 years’ experience building and leading award-winning teams, which has enabled him to deliver best-in-class experiences for a variety of brands across Tourism, Banking, FMCG and Telecommunications.

Jason commented: “This is a great time to be joining Bilue. For the past seven years they’ve always had mobile experiences at the heart of their business, and we’re now at a point where the mobility of people is central to all experience design. Bilue is now perfectly positioned to help customers take advantage of technology advances on all fronts, including mobile payments, artificial intelligence, augmented reality, voice apps, and of course the web. It’s an exciting era in the technology space and we’ll be helping our customers to thrive on these opportunities.”

Jason will report to Phil Whitehouse, General Manager of Bilue.

Phil commented: “We’re delighted to bring Jason on board – having worked with him previously I’m keenly aware of what he brings to the table. His arrival at Bilue represents a significant step change in our capabilities. Not only are we continuing to focus on the quality of the experiences we ship, but our continued investment in building an integrated, sustainable, high quality and valuable experience design practice attracts customers and talent alike. This puts us in a formidable position to build inspired experiences on intelligent platforms, in pursuit of our goal to deliver change to millions through mobility.”


About Bilue

Bilue is The Mobile and Emerging Technology Company, based in Sydney and Melbourne.

We are the only company focused exclusively on cutting-edge technologies that give our customers a competitive advantage. We offer a full stack of services, from strategy, product design and content development through to build, test and launch. Our approach to data ensures rapid and continuous business growth for our customers.

Don’t believe the hype (cycle) – part 3

Following on from part 1 (AI everywhere) and part 2 (Transparently Immersive Experiences) in this series, it’s time to take a look at the remaining group of technologies in the 2017 Gartner Hype Cycle – “Digital Platforms”.

This catch-all includes several game-changers, such as:

5G (maybe 10x faster than 4G, and much more reliable)

Digital Twin (digital replicas of physical assets, processes and systems, providing people with more powerful monitoring, analytical, and predictive capabilities)

Edge Computing (data processing at the edge of the network, such as on people’s laptops)

Blockchain (“federated ledgers” allowing decentralised management of transactions. Not just cryptocurrency!)

IoT Platforms (joins the dots between IoT devices)

Neuromorphic Hardware (an AI chip that can operate on your personal device)

Quantum Computing (next level computing speeds)

Serverless PaaS (bit of a misnomer – you still need servers, but less work is required to optimise them for demand)

Software-Defined Security (reducing the human intervention in security)

And here’s where they all sit on the cycle:


Picking through that list, it’s interesting how almost all of them represent incremental improvements over what we’ve already got.

Some appear on the surface to simply help us do what we can already do, but more quickly or more effectively.

Others are still very much in R&D (Quantum Computing), with the true value and implications of these technologies difficult to understand. For example, the security implications of Quantum Computing are likely to be quite astounding, and not in a positive way. What the value of Quantum Computing will be for everyday people is hard to fathom: it’s an exponential change in processing power, and it’s very hard for humans to think in terms of exponentials.

Admittedly many of these improvements will be exponential in nature, and there’s no denying that they’ll have an impact far greater than we can imagine.

Here at Bilue we focus on building “inspired experiences on intelligent platforms”, and so we take the time to consider these emerging technologies alongside some which we use already – think Google Actions, Apple HealthKit, Amazon Web Services and the like. What’s interesting for us is that the barriers to participation for these existing platforms are incredibly low. At the risk of blowing our own trumpet, our talented iOS and Android developers have no trouble turning their hand to Apple HealthKit, Google Actions, TensorFlow or AWS Lambda. So when we look at this list, we wonder both how easy they’ll be to put to use, and when they’re likely to be ready for production.

It’s also interesting to think about how some of these platforms might be combined to create new value as they become available. For example, we’re willing to bet that the current usage of IoT devices is trivial compared to where we’re going. Connected sensors sitting on an intelligent IoT platform would be revolutionised with Neuromorphic hardware, with distributed, contextually aware intelligence creating something more akin to Skynet than turning light bulbs on and off. It’s no surprise that we’re observing a doubling down on ethical AI, with Elon Musk already making great strides in this area and a lot more work to come.

So… on to Blockchain. A good place to finish, as this is arguably the buzziest technology on the whole hype cycle, and it’s remarkable that it hasn’t really captured the public imagination yet. The obvious reason is that it isn’t the easiest concept to grasp – the notion of a secure, decentralised, distributed ledger is sufficiently unique to resist simple comparisons. Oversimplified explanations might be doing more harm than good (we think this explanation from Business Insider is a decent starting point, and there’s enough opportunity that Gartner has developed its own readiness tool). Nevertheless, there’s enough activity in this space to claim that we’re past the tipping point, with an eye-popping $100 billion and climbing tied up in cryptocurrency, managed without banks involved. The genie isn’t going back in the bottle.
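The core idea behind that ledger is simpler than it sounds: each block commits to a cryptographic hash of the block before it, so history can’t be quietly rewritten. Here’s a minimal, illustrative sketch in Python (a toy of our own making – no consensus, mining or signatures, and the names are purely for illustration):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (which include the previous block's hash)
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions, "prev_hash": prev_hash}

# Build a tiny three-block chain
genesis = make_block(["genesis"], prev_hash="0" * 64)
block2 = make_block(["Alice pays Bob 5"], prev_hash=block_hash(genesis))
block3 = make_block(["Bob pays Carol 2"], prev_hash=block_hash(block2))

# Each block commits to its predecessor's hash, so the chain verifies...
assert block3["prev_hash"] == block_hash(block2)

# ...and altering any earlier block breaks every link after it.
genesis["transactions"].append("Mallory pays Mallory 1000")  # tamper
assert block2["prev_hash"] != block_hash(genesis)  # tampering is detectable
```

Replicate that chain across thousands of machines that must agree before appending a block, and you have the gist of why no single bank (or anyone else) needs to be trusted with the ledger.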

Given that most of the Blockchain buzz has focused on crypto-currencies such as Bitcoin and Ethereum, newcomers might be forgiven for thinking this is Blockchain’s sole purpose. But the utility of Blockchain goes much further than this. For example, this Huffington Post article highlights five other uses, including contract management, digital identity, cloud storage, digital voting and timestamped notaries. One of our clients is using it for agricultural exchanges, to reduce the time between providing grain and getting paid for it (a long and complex supply chain). A vast number of start-ups are beginning to leverage this technology, and the possibilities are endless. Well worth staying close to, and we would argue that Blockchain is already past the trough of disillusionment, well beyond where Gartner has placed it on the graph.

That concludes our assessment of the technologies on the 2017 hype cycle! We hope you’ve enjoyed reading it as much as we’ve enjoyed writing it. If you want to have a chat about any of the technologies on the hype cycle and how they could help you to grow your business, please feel free to get in touch. And if you disagree with our assessment, feel free to add your comments below – we love a good debate!

Don’t believe the hype (cycle) – part 2


Following on from the previous assessment of the Gartner hype cycle technologies falling under the AI everywhere banner, it’s time to look at the second batch of trends – “Transparently Immersive Experiences”. This category has a number of distinct technologies shoehorned under it – AR / VR, Connected homes and Nanotube electronics, not to mention Brain-Computer Interfaces, 4D printing and more.

Let’s take another quick glance at the hype cycle to see where these technologies sit:


It’s a solid eight years since the UX community coined the phrase “the best interface is no interface”, and it’s taking a while for reality to catch up with the sentiment. While technology continues to get cheaper and lighter, we’re still a fair distance from being able to wear something comfortably for any length of time. But therein lies an interesting thought… as with Machine Learning in last week’s post, we have to ask: what constitutes success? Widespread, mainstream appeal of VR / AR is a long way off (read this BBC audience research if you’re in any doubt), but there’s still high potential for the technology to be very successful in narrow use cases in the near term. For example, we’re excited about Google Glass Enterprise Edition – you wouldn’t want to be the glasshole wearing one in a bar, but the benefits are obvious for someone needing to do their job hands-free, such as a surgeon, a mechanic or a field services worker – especially as it’s much lighter than, say, an Oculus Rift VR headset (36 grams vs 470 grams). And on the software side, the release of ARKit in iOS 11 firmly marks Apple’s intention in the Augmented Reality market, and some of the work developers have produced to date is nothing short of amazing.

Brain-Computer Interfaces and Human Augmentation are other matters entirely. If you’ve read Tim Urban’s fascinating Neuralink blog post, you’ll know that the 10+ year timeline for BCI is probably fair. That hasn’t stopped our company founder & CEO Cameron Barrie giving it a good go, though.

Connected homes and Virtual Assistants are a different story. Gartner has both those technologies at the “Peak of Inflated Expectations”, with the plateau being 5-10 years out. I’d like to challenge that – surely Google Home, Amazon Alexa and Apple’s HomePod devices qualify, and will be firmly in the mainstream if not this Christmas then next year?

My family has 13 devices connected to our wi-fi, but the Google Home device is the only one in the home automation category. When a couple of lightbulbs set you back $139 and don’t solve a real problem (turning on a light switch is hard because…?), I would tend to agree it’s a way off. But with devices such as Wemo mini smart plugs lowering the cost of entry, it’s a great time for the early adopters.

What’s really interesting is that voice assistants will be handing over a lot more power and control to the user and the intermediaries. Let’s say for example that you operate an airline. Right now you would market your product through a sophisticated marketing mix of paid, earned and owned media. But how on earth do you ensure that Google Voice sends business your way when asked about flights to London? A verbal answer isn’t ‘scannable’ in the same way a list of results is – the expectation is that the response will be more tailored and human. We’re expecting answers rather than a list of places where answers might be found. What’s more, everyone in the advertising ecosystem – including Google – will need to come to terms with these new dynamics. I put up with banners because they can easily be ignored, but no-one’s going to listen to an advert while they wait for their search result(s) to be read out.

One set of technologies notable by their absence are those driven by gesture-based control. Products such as Leap Motion and Knocki were all the rage at one point, especially at a time when Microsoft Kinect and Nintendo Wii were becoming popular, and the arrival of wearables was expected to drive more of this behaviour. Fair to say this is no longer the case, and in the case of Leap Motion they’ve now firmly hitched their wagon to the VR horse.

As a mobility-focused company, our perspective is that it’s interesting to see “Transparently Immersive Experiences” being heralded as the next big thing. However, one of the main reasons smartphones have become so popular is that humans interface with them with virtually no movement or fuss. The interface is already very powerful, if not transparent. People can scan large quantities of data very quickly, without worrying (too much) about who else is being nosey – it’s much quicker than listening to speech, and much more discreet. It’s great for just about every location you can think of, even under water. Our view is that it’s best to think of these emerging technologies as augmenting and supporting the mobile device rather than replacing it any time soon, at least until Brain-Computer Interfaces come online in ~20 years! Time will tell…

Next up: the third and final part, the ultimate catch-all: “Digital platforms”. 

Don’t believe the hype (cycle) – part 1


The 2017 Gartner hype cycle is out and, while it should always be taken with a pinch of salt, it’s fun to decide for yourself whether or not you agree with their assessment. Before we get stuck in, it’s worth having a quick read of this interesting analysis of past hype cycles, where the author assessed how accurate these predictions have been previously (spoiler: not very). But still: I think we can all agree that technology is moving at an ever-increasing pace, which not only makes confident predictions harder, but also presents increasing opportunities to outflank competitors – or be outflanked.

So with that in mind, let’s take a look at this year’s hype cycle:


Interestingly, for the first time Gartner has decided to sub-categorise most of these technologies into three buckets; AI everywhere, Transparently Immersive Experiences, and Digital Platforms. We’re going to tackle each of these over a series of three blog posts, starting today with AI everywhere.

The hype – or, dare we say, hysteria – behind Artificial Intelligence reminds us of the Big Data excitement about a decade ago. Everyone’s talking about it, very few can show results, it’s rarely well defined, and it’s being heralded as the answer to whatever problem you care to mention. This enthusiasm contrasts with a sense of the shine coming off, such as AI poster child IBM Watson being uncovered as overhyped, and the continued lack of self-driving cars on the road several years after they were first announced with excitement. Such activity is highly indicative of the trough of disillusionment.

Maybe it’s because we’re early adopters by nature, but our feeling is that quite a few of the AI technologies are even further along the hype cycle than the trough of disillusionment. When we look at virtual assistants, the connected home and machine learning, we can see a positive pattern of widespread experimentation and learning, with the occasional commercial success and the broader pragmatism you’d expect to see on the slope of enlightenment. In any event, take note of the key used in the diagram: “Plateau will be reached in:”. What represents a plateau when it comes to, say, machine learning? We expect it will fragment significantly for many years to come. Some basic aspects will become mainstream soon, while others will continue to build on these breakthroughs over the next few decades. All told, we feel we’re entitled to a more optimistic outlook than the graph indicates.

The aforementioned analysis of past hype cycles refers to the Gartner hype cycle as “mostly a reflection of industry consensus”, which is fair, but which also masks wildly varying opinion. The typical Creative Director might be quite comfortable inflating expectations around AI as part of a creative idea, but they’re not always around when such an idea needs to be built. In contrast, practitioners such as those at Bilue are already on the slope of enlightenment due to their first-hand experience and their ability to better interpret the various success stories, such as AlphaGo. We mix proven technology (e.g. TensorFlow and Core ML) with strategic intent to find utility with a decent prospect of success, rather than encouraging non-technical creatives or business leaders to go wild with ideas that may not be viable.

Our recommendation would be for companies to invest in multiple, small experiments to figure out which of these technologies can generate a return. Typically these might be in a narrow context, such as reducing call centre costs by a modest margin, or helping position your brand as being more innovative, but putting a strategic lens over the opportunity space can improve your strike rate and return a stronger ROI.

Phil wrote an introduction to Artificial Intelligence a little over a year ago. The closing paragraph still stands up:

“In the short term…we can start looking for opportunities to exploit AI technologies as they mature and generate new forms of value. It’s important to get an early, solid understanding of how the opportunity can be exploited and, as with the technology waves that came before, a few well-chosen bets may pay off handsomely. She who dares wins, but let the buyer beware.”

Next up: Part 2 – Transparently Immersive Experiences

CXO Leaders Summit


We attended CXO Leaders Summit in Sydney last week, here are some top line takeaways for those who couldn’t be there.

The event kicked off with Steven Marks, founder of Guzman Y Gomez, talking about how he spun up 80 restaurants in Australia and 94 worldwide. He’s a charming and ambitious Brooklynite, a really scrappy hustler, who arrived in Sydney via Wharton Business School and Wall Street. He got fed up betting on other people’s companies and decided to launch his own. Big focus on brand to start with – notice that their typeface looks like it’s made with black tape, like the cheap and cheerful taquerias in Mexico.


Also a big focus on culture during the early days – “I look after them, and they’ll do anything for me”. Then, once they’d established the building blocks of great product and culture, they focused on optimisation – remove all the bottlenecks.

Key quotes:

“When did fast food become bad food? We’re gonna take on McDonalds”

“The app took 20% of their orders when it launched, but has been growing quickly ever since.”

“I didn’t even know what CPA stands for – all our marketing is word of mouth, and personal. We want to nurture and cultivate raging fans”

Our take: Culture is king but often fades from sight in large organisations. It takes guts and perseverance to drive impact at that scale, and it’s virtually impossible without leaders setting the right examples.


Then a panel with LJ Hooker, Catch Group and ANZ, moderated by Prudential – How to build a customer centric brand.

Interestingly, the person to my right spent the session complaining about ANZ, and on my other side was someone complaining bitterly about LJ Hooker. So it begs the question: what’s the point in doing CX work if it doesn’t affect public sentiment?

ANZ and LJ Hooker talked proudly about their activities – customer journey mapping, experience walls, customer for life programs and the such. In contrast Catch Group were talking about getting closer to actual customers, and working hard to get the voice of the customer fed back into their actual products (not just design assets).

ANZ and LJ Hooker complained about the challenges of changing the culture – “Waffle”, said the lady to my left. The previous speaker had just illustrated that cultural leadership has to start at the top. In contrast, Catch Group are more nimble, and talked about empowering customer-facing staff and hiring entrepreneurial people as a strategic priority.

Our take: Focus on business outcomes, rather than design outputs. Get something in market quickly with feedback loops in place. Design Thinking has its place, but Lean UX will more often get you where you need to go.


Then a game of buzzword bingo with Genesys. He spoke quickly and said little. #cloud #experience #customers #empowerment #AI #BigData #Uberisation #trends #IOT. Then some very badly designed screens with tiny illegible writing, and a weird voice driven interaction that forced people to juggle between text and web. So much for Digital CX!

Our take: sponsor presentations shouldn’t focus on the sponsor. Pick an interesting topic and let your knowledge speak for itself. Standard presentation rules apply.

Then we were into the 121 sessions – speed-dating meetings the organisers had set up with attendees. On a personal note, they were great! Pretty much everyone who came to these had a current need relevant to our new Mobile & Emerging Technology positioning. Big thanks to Blake and Nick for the organisation. It did mean that we missed some of the workshops, but we got what we needed out of the event.


In the afternoon, Karen Ganschow of NAB gave a talk on Customer Life Cycle vs Product Life Cycle – which should trump? She observed that what you measure is what you manage. All banks focus on the same metrics, e.g. number of products (how’s this for an acronym: CW4P – customers with 4 products!). As a result, propensity models are receiving a lot of focus, but even so, only a tiny number of customers are likely to respond to upsell and cross-sell overtures. Rule of thumb: let customer behaviour trigger the focus. Death to propensity models.

Our take: Karen’s an engaging speaker, and it was good to hear some challenging thinking about propensity models. Data and the use of it is simply a means to an end – you still need a great product to drive behaviour. Perhaps her talk focused a little too much on customer journeys and life stages, and not enough about actually shipping software and increasing speed to market / customer feedback loops.


For day two, the opening keynote was delivered by Stephanie Myers of Prudential – Customer Experience and Engagement: How to Build a Customer-Centric Brand. It was refreshing to see a focus on mobile first as a principle and attitude, not just an approach to design. Our customers are constantly connected to the internet and to each other, and this should drive the way we approach the design of services. However, the danger is that native apps exist in a very competitive space – the top eight apps are all owned by Google and Facebook, and about half of all apps are deleted within one month of installation. Unless your app offers significant utility and/or entertainment, it won’t get used. She recommended focusing on, and designing for, mobile-only journeys. Think of desktop as optional – it drives the best mindset.

Our take: We couldn’t agree more! We’ve had clients who want us to build e.g. marketing apps for the top of the conversion funnel and we’ve generally pushed back on this. Think through what value the customer needs via mobile devices for their whole lifecycle, and let this influence the role of web vs native. Rule of thumb: go native for repeat users, and earn their trust and respect to drive the loyalty and advocacy that you’re looking for.


Next up was a panel discussing CX: The New Battleground for Marketing Organisations. All present agreed that a key current trend is a focus on the value of advocacy and advocates, and how this ties back into repeat business. It begs the question: who owns the customer? It’s trite to say ‘everyone’ – better to break it down. Marketing can be the catalyst for improvement, but initiatives and teams driven by the C-Suite can also have a big impact.

The panel discussed ‘voice of the customer’ initiatives, and suggested focusing on positive feedback as much as the negative. For example, “we’re hand-writing a note to thank a member of staff after you left positive feedback”. Customers and staff both get a boost.

Our take: customer feedback in a native app context is a balancing act. Yes, you want people to rate your app highly in the app store, but ideally you also want to drive advocacy and potentially up-sell / cross-sell / refer-a-friend activity as well. Do too much of this and your customers will become fatigued, so plan this activity with taste and due consideration for your customers’ time. And then track engagement carefully.

One more thing. ‘Who owns the customer?’ is a trite question. If you have to ask it, then your challenge isn’t organisational, it’s cultural. Your whole business needs customers more than they need you, so a better question to ask is ‘How can we all help the customer to achieve their goals?’.


Into the final straight now, with a panel on Winning Over the Skipping Generation. Much has been written about millennials and their habits, but it’s rarely constructive. One solid piece of advice came from Chris Dodson, Head of Marketing for YouTube, who said that authenticity comes from putting creative power into the hands of the creative people in front of the camera. For example, Casey Neistat is an influencer appointed by YouTube as part of his “Do what you can’t” program. He was given complete creative control over the content, rather than being told what to do by a traditional Creative Director at an ad agency. Interesting.

Our view: Millennials aren’t as unfathomable or fickle as people like to make out. They’re just better at filtering out bullshit. And as this forces companies to deliver actual value on the terms of the customer, I’d say we all benefit from this.


Finally, Facebook gave a talk called Eventually, everything connects. Promising title, but it was just Facebook talking about bots. They showed a few case studies where call volumes dropped, but we know the score – while the situation is gradually improving, none of these bots would pass a Turing test. But their handover protocol looked interesting, breaking down the problem and then combining automation tech and conversation tools to make it happen.

Our view: there’s definitely value to be had with bot implementations, but also a danger of over-reaching. By properly understanding the limitations of the technology you can find the right balance between bot and human interaction, the sweet spot where customers will be happy to interact with a human back-up at their discretion. Then you can carefully push the boundaries.


That’s all! The main takeaway for us at Bilue is that speed is of the essence. Get a tasteful and respectful product in customers’ hands quickly, and make sure solid feedback loops are in place. Plan product development around this feedback. Your customers will thank you!


Contact Info

Level 1 6 Bridge Street, Sydney, NSW, 2000

Level 1 520 Bourke Street, Melbourne, VIC, 3000

Copyright 2018 Bilue Pty Ltd ©  All Rights Reserved