16 July 2018

Disruptive Technologies – Predicting the Future

Too Many Lists!
The world of social media is littered with lists. There are lists for leaders, lists for motivators, lists for technologists, and even lists for haters of lists. Most of these lists claim to be definitive and declare themselves to be incontrovertible fact, and the worst of these lists are the ones that predict the future…

…And so in the spirit of having too many lists that predict the future, here is my contribution.

My blog for today has been prompted by a look back at a tweet from three years ago by @ForbesTech that listed the top 5 disruptive technologies that “will shape the future”. Their list was as follows:

1. The sharing economy (e.g. Uber)
2. Cloud computing
3. Digital supply chain
4. 3D printing
5. Internet of things

If you take the time to follow the link from the tweet to the article you will find out that the actual point of the list is to highlight technologies that will create jobs and provide entrepreneurial opportunities, but I’m not above deliberately ignoring this point to pursue my own agenda.

My immediate response to this list was “those aren’t technologies of the future; they are very much with us now” and that led naturally to thinking about which technologies I considered to be the ones to watch over the next ten years. Having considered this for at least ten minutes I tweeted “IMO it's robotics, implantables, life prolonging drugs, and fast charge batteries. Blog post coming...” and committed myself to justifying that claim. I also clearly owe an apology for the dubious act of adding “able” to the end of a verb to create a new noun for which a perfectly good one already existed.


Apology over; now for that justification.

Look into my Crystal Ball
The first thing to note is that just like everyone else (even those with credentials), I’m guessing, but guessing leads to discussion which leads to argument which leads to new thinking, and that’s what I’m really interested in. So, for clarity, my list is:

1. Artificial Intelligence & Robotics
2. Implantables
3. Life prolonging drugs
4. Fast charge batteries

AI & Robotics – back in 2015 the DARPA Robotics Challenge prompted everyone to joke about the fact that robots couldn’t even open doors; fast forward to 2018, and a Boston Dynamics video showing robots opening doors for each other goes viral. The simple truth is that if you state robots can’t do something, someone will make them do it, but this is not the point. The real future in robotics is not about replacing humans; it’s about automation that makes our lives easier, and augmentation of our natural abilities for the things we find difficult.

Implantables – it’s about convenience. Making the technology just happen. An evolution from the clunky world of machines that make us work their way, to devices that allow us to use touch, gesture and voice, to solutions that simply interface with us in the way we naturally interact with the real world. I blogged about this in my post entitled The Augmented Human.

Genetic Medicine – it’s about being healthy and lasting longer. It’s inevitable. In my grandfather’s youth, antibiotics didn’t exist and the National Health Service hadn’t been created. If you got a bit sick, you died; the idea of being ill was a universally accepted reality of life. Nowadays, ill health is a relative rarity in western society, and that which is unfamiliar is also frightening and must be eliminated. More and more, the problems left to solve are inherently printed into our DNA, but recent advances open the door to the creation of new treatments and of biologically sympathetic products.

Fast charge batteries - Proliferation of mobile devices and a push towards the electrification of transport has created a deep dissatisfaction with battery technologies. This in turn has triggered an explosion in research and development focussed on energy storage technologies as highlighted in the PocketLint article, Future Batteries. The tipping point will be when a battery can achieve the holy trinity of miniaturisation, long life, and near-instant charging; out of this will spring a new generation of technical solutions.

The Sum of the Parts
So, we can all make lists of disruptive technologies, but essentially they are not predictions of disruption. That’s not to say the lists are wrong; quite the opposite. Every single list of technology advances will come true in the end, but they do not represent the change, any more than a list of ingredients describes the meal.

Real disruption is the unexpected new thing that emerges when all these technologies are brought together to create a thing that was previously unachievable. Mobile devices have been “invented” many times, but the real explosion only happened when a whole list of technologies came of age and someone squeezed them all into a single handheld device, connected it to the internet and created an ecosystem around it. We just call the result the smartphone, because that’s the bit we own and hold in our hand.

What’s Next?
You may have noticed that, despite all the noise and publicity around virtual and augmented reality, it didn’t appear on my list; there is a reason for that. I do not consider VR/AR to be a single technology, or just a device, but rather a potential disruptive change similar to that of the smartphone ecosystem. VR is not the headset, and AR is not just a smartphone app. Today’s offerings are, in my opinion, just the PDAs of the 21st century, and they will go the same way as the PalmPilots, Psions and Apple Newtons of the 1990s. Remember them?

Over the course of my life there have been many attempts at introducing virtual reality to the market. To be accurate, VR significantly predates my life. I was born in 1966, 128 years after the invention of the first stereoscopic viewer (Wheatstone’s stereoscope – 1838), 5 years after the first motion tracking head mounted display (Headsight – 1961), and 4 years after the patenting of the first “immersive” virtual reality machine (the Sensorama – 1962). This article on The History of Virtual Reality makes for interesting reading on the topic.

As a genuine fan of virtual and augmented reality, I’m willing to put up with a lot of discomfort to experience it, but even I would say that the most advanced offerings currently on, or soon to appear on, the market are still not ready to disrupt our lives in the way the iPhone and its successors did. To succeed, the technology would need to become far less intrusive.

However, if you bring a variety of technologies together you could have a genuinely disruptive outcome. Imagine implantable devices made biologically compatible through advances in genomics and powered by subminiature fast charge batteries (perhaps even charged by the body’s chemical processes). Now couple these with an information ecosystem enabled by AI recognition systems and you can bring an augmented world directly into the senses of the individual allowing them to be more intuitive and informed in everything they do.

Now, this could be utopia or dystopia; in reality it will probably be a mix of the two, but for me, there is real promise in this. It could also be the ultimate alternative to reality into which we all disappear and stop interacting completely. The smartphone is accused of exactly this sin, but perhaps those who say this are forgetting the trains full of people reading books and newspapers, and the father at the breakfast table hidden behind his broadsheet. Maybe they’re not aware of the moral panic that arose around “these foolish, yet dangerous, books” during the rise of novel reading as a pastime in the 18th century? (“The Novel-Reading Panic in 18th-Century England: An Outline of an Early Moral Media Panic” has an interesting take on this.)

Alternatively, and perhaps with the right ethical framework, human augmentation could become a disruption in which technology does all the emotionless, mechanical things so that people have more time to actually be human.

Who knows; only time will tell. Maybe it’ll be something else…

…like faster horses, for example.

The Enterprising Architect

13 July 2018

Freudian Data - Security Slip

Update - 16th October 2018 - Tim Berners-Lee is working on something called Solid, an ecosystem that does pretty much what I'm discussing below. Sometimes, the threat comes sooner than you expect.

Update - 8th July 2019 - British Airways fined £183m for data breach (https://www.bbc.co.uk/news/business-48905907)

Tell Me about Your Mother
In the ever increasing digital push, we are all encouraged to interact on the internet (and yes, that does include mobile) to perform essential daily tasks. Some of us do this willingly and welcome the increased efficiency and convenience that it brings; others are forced into this world as alternative approaches become less available, and harder to access. Either way, we’re doing important stuff online, and to do that stuff we have to share our data.

Some of this data is fairly innocuous (names, titles, ages, preferences) and we have learnt to share this information without even thinking. In fact, many people now go out of their way to share this data via social platforms even when the sharing is not necessary to perform a transaction. Some of this data, however, is sensitive and there is real risk in sharing it. Banking details, passwords, and shared secrets all involve us trusting those with whom we transact, but share we must if we are to interact in the modern world.

In the early days, this wasn’t such a problem. The organisations we had to trust were few in number (the bank, our supermarket of choice, and maybe a major online retailer or two) and the chances of that data going astray was low. We were encouraged to use “strong” passwords which we changed regularly and were reassured that the organisations in question had strong security “perimeters”. We could trust them to keep us safe.

But they didn’t...

Addicted to Gambling
Not that I’m blaming them you understand, nor am I suggesting that the risk we took with our information was not one worth taking. Digital services have made my daily life infinitely less painful and I want more of them, not less – and herein lies the problem. As any statistician will know, risk is a numbers game and the more times you play it the worse the odds get. The game is made all the more risky by the fact that for an intelligent species, humans are remarkably unoriginal, and all the digital services we are offered work in pretty much the same way. They all require the same username/password approach, and most people return that lack of originality by using the same username and password wherever they go. This means that one leak is all it takes to compromise your online life.

So, over time we share the same critical information with increasing numbers of organisations and the odds go up and up. What is more, as the number of things you can do online increases the value of that information to those who might steal it grows steadily. With so many points of attack, so few variations on the information used, and so much value resting on it, theft of that information becomes more than likely; it becomes inevitable.
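To put rough numbers on that intuition, here is a minimal sketch. The 2% per-site annual breach probability is an invented, purely illustrative figure, and the sites are assumed to be breached independently:

```python
# If each of n services holding the same credentials has an independent
# probability p of being breached in a given year, the chance that at
# least one of them leaks is 1 - (1 - p)^n, which climbs rapidly with n.

def chance_of_at_least_one_breach(n_services: int, p_breach: float) -> float:
    """Probability that at least one of n independent services is breached."""
    return 1 - (1 - p_breach) ** n_services

# With the illustrative 2% per-site figure:
for n in (1, 10, 50, 100):
    print(f"{n:3d} services -> {chance_of_at_least_one_breach(n, 0.02):.0%} chance of a leak")
```

Even at these toy numbers, reusing one username and password across a hundred services turns a small individual risk into a near-certainty.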
Not surprisingly, significant loss of customer information is becoming a regular occurrence and cyber-security is the hottest topic in town.

Introvert or Extrovert?
There is a widespread belief that this continuously increasing openness with data is a trend that will continue in its current direction, but it is just as likely to be a fad; part of a cyclic process. It is reasonable to expect that we will react to the data losses in a negative way by becoming much more introverted as a society. After all, if the data isn’t out there it can’t be stolen, can it?

Why does the data need to be out there at all? At registration, we are often expected to give companies all the data they need for all the services they offer, but we may only use a few. By interacting with apps and websites, we share our behaviour so that services can be customised, “in our best interests”. We share our credit card data every time we want to buy something, and everyone has our address even if all the parcels come to us from a small number of couriers.

We do this because we have to. The current ways of working in the world of e-commerce provide us with no alternative.

Control Freak
But what if there was an alternative? It’s not hard to imagine an independent decision-making app owned by the user (an avatar that acts on my behalf), containing my preferences and data. Such an app could act as an intelligent agent, doing the comparing and completing the trades on my behalf. It could work to my personal preferences rather than to a generic model, and I could trust the recommendations it makes because it is owned by me and acts on my local behaviours and preferences. With increasing processing power on handheld devices it is quite possible that AI engines will be able to run locally, pulling down the data necessary to reach their conclusions, with no external visibility of the decision making process.

The proliferation of such a capability would start to remove price as a differentiating factor and providers would need to move to the next desirable USP. My avatar might start to choose to trade only with the services that request the minimum amount of personal information, and providers would respond by offering data-lite transaction completion. In essence, the transaction requiring the least data wins.
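As a toy illustration of that “least data wins” rule, an avatar’s choice could be as simple as the sketch below. The offer fields and seller names are invented for illustration, not a real API:

```python
# A toy avatar rule: prefer the offer demanding the fewest pieces of
# personal data, and only use price to break ties.

def choose_offer(offers):
    """Pick the offer requesting the least personal data; cheapest wins ties."""
    return min(offers, key=lambda o: (len(o["data_required"]), o["price"]))

offers = [
    {"seller": "BigRetailer", "price": 9.99,
     "data_required": ["name", "address", "email", "card_number"]},
    {"seller": "DataLiteShop", "price": 10.49,
     "data_required": ["payment_reference"]},
]

best = choose_offer(offers)  # the data-lite seller wins despite costing more
```

Under this rule, shaving a field off the registration form is worth more to a provider than shaving pennies off the price.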
For example, a transaction that allows you to make the payment directly through your bank (e.g. via mobile payment or bank transfer) and then share the payment reference with the seller so that their systems can watch for the payment in real time would remove the need for sharing of card or account details. Anyone offering this type of transaction would put themselves at an advantage in a data-introvert environment.
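A minimal sketch of the seller’s side of that arrangement might look like the following. The feed format and reference scheme are assumptions for illustration, not a real banking API:

```python
# The buyer pays through their own bank and shares only a payment reference;
# the seller's system watches its incoming-payments feed for that reference
# instead of ever holding the buyer's card or account details.

def payment_received(incoming_feed, reference, expected_amount):
    """True once a transaction carrying `reference` for at least
    `expected_amount` appears in the seller's incoming payments."""
    return any(
        tx["reference"] == reference and tx["amount"] >= expected_amount
        for tx in incoming_feed
    )

feed = [
    {"reference": "ORDER-7431", "amount": 25.00},
    {"reference": "ORDER-9962", "amount": 10.50},
]

assert payment_received(feed, "ORDER-7431", 25.00)      # order paid in full
assert not payment_received(feed, "ORDER-9962", 20.00)  # underpaid order
```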

Peer Pressure
But why would anyone bother? Suppliers want to keep your data and track behaviour; it is valuable, and even when it isn’t, executives have been led to believe it is by the “data is oil” mantra. Why would they give this up and offer a transaction-based, data-free service? Well, with the increasing awareness that data breaches have a long term impact on share price, organisations might start to see the keeping of data as a very costly burden. The high fines available through GDPR legislation could push the risk of keeping personal data above acceptable levels.

It only takes one provider to offer a transaction-based service with no registration and no data storage and the market will decide. The most likely contender would probably be a start-up that wants to avoid the cost of data storage, data protection, insurance, etc. etc. and also exploit a new market. Storage is cheap, but if you’re not using any it’s even cheaper. Once a disruption like this happens, all others have to follow to survive.

It’s not impossible, therefore, to conceive of a world in which the wheel turns again, and this time from distributed to local; local processing and local data storage (or at the very least a single trusted location for data storage).

So the real cyber-security threat to businesses is not that they’ll leak your data… It’s that they’ll lose access to it altogether.
The Enterprising Architect

3 July 2018

A Blast From The Past - Enterprise Architecture Mantra

Recently, and quite unexpectedly, a set of my tweets from 2009 resurfaced on LinkedIn. It appears they've been gracing the walls of a Gartner partner and are still in use. Needless to say, I was surprised and flattered, so I thought I'd resurrect them from Twitter and post them all here for posterity. It turns out there are 29 in total (rather than the 21 on the wall) so here they are in full (including original hashtags).

#EAMantra (1) Finding out you are wrong is one step closer to being right

#EAMantra (2) Just enough, just in time, justified

#EAMantra (3) Future first

#EAMantra (4) If you can't draw it you probably don't understand it

#EAMantra (5) If you can draw it but can't explain it you still don't understand it

#EAMantra (6) If you can't find any gaps in your architecture, you missed something

#EAMantra (7) If no-one is questioning your architecture, no-one is using it.

#EAMantra (8) Architecture is like alcohol. Just the right amount gives you confidence, but you need to know when to stop

#EAMantra (9) Implementing architecture is like buying a car. First question is not "how much will it cost?" but "how much can I afford?"

#EAMantra (10) Measure the success of your architecture by counting how few architects you have, not how many.

#EAMantra (11) Failing to deliver perfection is not a crime. Failing to deliver is.

#EAMantra (12) If you know it is the right architectural choice, add it to your architecture. If you think you know it is, add that too.

#EAMantra (13) Plan EA in days and weeks not months and years

#EAMantra (14) A good architecture is like a Bonsai tree. Growing it is the easy part; the real art lies in the pruning.

#EAMantra (15) Do not mistake low complexity for lack of detail

#EAMantra (16) If your future architecture isn't changing, your people have stopped thinking. Trust me; this is not good.

#EAMantra (17) Know your place... and believe in its importance

#EAMantra (18) Do not expect EA to please those who know nothing, as you are removing their blissful ignorance.

#EAMantra (19) Do not follow the path of least resistance... Create it.

#EAMantra (20) They won't really get it until they use it. If they use it and still don't get it, you failed.

#EAMantra (21) He who favours a complicated framework is unlikely to produce a simple architecture

#EAMantra (22) Don't get so caught up in the journey that you forget the destination

#EAMantra (24) When documenting your architecture think "graphic novel" not "war and peace"

#EAMantra (25) When scoping EA responsibilities, remember: If you mow your neighbour's lawn, you may get a turf war instead of a thank you

#EAMantra (26) Remember - an architecture is not a product... but it should be the blueprint for one

#EAMantra (27) One sure sign your enterprise architecture isn't happening - you're not benefiting from it yourself.

#EAMantra (28) I am an enterprise architect and on the door of my ivory tower is a sign that reads "It can be done!"

#EAMantra (29) Make your enterprise architecture practical AND attractive - it may not be a building but people will have to live in it


The Enterprising Architect

10 May 2018

Rise of the Machines – Making a Safer Robot

I Am, Therefore I Think

For as long as stories have been written, authors have considered the possibility that if intelligence can come into being through nature, then perhaps it can also be created by artifice. The ancient Greeks wrote of automatons, Jewish folklore introduced the concept of the Golem, and contemporary science fiction describes a wide variety of sentient machines. In all of these sources, one theme emerges again and again… artificial intelligence is dangerous!

One of the most prolific creators of stories focusing on this question was Isaac Asimov, and it is in his books that we find the first real attempt to formulate a safe approach to the creation of robots in all their guises.

The Three Laws of Robotics. For those not familiar with the three laws, here they are:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
At first glance these laws look pretty solid. Isaac Asimov exercises them in a wide variety of situations in his novels, and in the majority of cases the laws work flawlessly to safeguard the humans with which the robots interact. In fact, one of the consistent lessons from Asimov’s writings is that it is the humans who are the dangerous ones.

So that’s it then? The three laws of robotics are all we need. Safe robots.

Well, no. Let’s assume that in principle the 3 laws work, and let’s also assume that we can build these laws into a robot, and finally, let’s ignore the fact that these are very big assumptions in themselves.

There is still a problem, and any parent will know instinctively what this problem is. No good parent wants to harm their children; quite the opposite in fact. Every good parent wants only the very best for their children – and every good robot should want only the very best for their human creators.

But what exactly is the very best?

Moral Robots

Let’s look at those three laws again, but this time from the perspective of the child’s expectations of its parents and the parent’s responsibilities to the child.

  1. A parent may not injure its child or, through inaction, allow its child to come to harm.
  2. A parent must obey orders given it by its child except where such orders would conflict with the First Law. (Consider obeying orders in this context as meeting needs).
  3. A parent must protect its own existence as long as such protection does not conflict with the First or Second Law.

Now, looking at these laws through the parent/child perspective immediately demonstrates the dilemma faced by the intelligent robot; it arises when the child demands a bar of chocolate.

What does the parent do? Law 2 requires the chocolate to be given to meet the need or obey the order, but Law 1 requires that the child not be harmed. At this point it might be worth taking a look at the Wikipedia article on “Parenting Styles”. You won’t be surprised to discover that the massed wisdom of the human race and the careful considerations of highly trained psychologists have failed dismally to reach an accord on the right answer to this question.

Does the parent give the child the chocolate bar, or withhold the bar and provide a healthy alternative? Is the parent causing more harm by providing a substance that is both fattening and damaging to the teeth, or is there psychological damage in denying the child a pleasurable experience and creating an aversion to “healthy” food? And what exactly is healthy food and what is unhealthy?

While the parent considers this dilemma, the child throws a tantrum and starts lashing out verbally and physically. Is the parent now breaking rule 3 by allowing him or herself to come to harm? As humans we resolve this problem relatively easily by simply making a decision based on personal beliefs conditioned into us by our own parents or by society. If we didn’t, we would be unable to make a decision at all, and would cease to function.

So… to make decisions within a moral framework we have to jump to conclusions based on beliefs. If we want robots to make decisions we will have to allow them to do the same. Unfortunately, history tells us that making decisions on this basis can justify all sorts of immoral acts, and so in adopting this model we create robots that can also perform immoral or harmful acts.

End result: Dangerous robots.

Amoral Robots

What if we rewrite those laws and, instead of adopting the core principles of human morality, try a more amoral approach? In other words: do as you’re told, regardless of consequences. Here are two rules that might achieve that:
  1. A robot must not do anything to a human being or its property unless that human being gives permission to do so.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

Now let’s look at that chocolate bar scenario again.

Child: “Give me a chocolate bar.”
Robot (to parent): “May I give your child a chocolate bar?”
Parent: “No. Give him a carrot instead.”
Robot (to child): “May I give you a carrot?”
Child: “No. Give me a chocolate bar.”
Robot does nothing (as it already has all the answers it needs).

Result: Safe robots. Brilliant!

Ah. Hang on a moment. How did the robot know to treat the child as the parent’s property? For the robot to be safe, it has to treat the child both as a human being and as property. Then there are the dilemmas of property and timeliness in general. How does the robot know what you do and don’t own, and what happens tomorrow? The child asks again for a chocolate bar and the robot does nothing, as the previous answers still apply – or do they? How long before a “no” or a “yes” no longer applies? The robot would have to keep checking and rechecking everything ad nauseam.

Result: Annoying and ineffective robots. Not so brilliant. 
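The annoyance is easy to see if you model the two amoral rules directly. In this sketch (the class and method names are invented for illustration) each permission answer is cached forever, which is exactly what leaves the robot silently doing nothing; refusing to cache would make it re-ask ad nauseam instead:

```python
# Rule 1: act on a human (or their property) only with that human's permission.
# Rule 2: obey orders unless Rule 1 forbids them.

class AmoralRobot:
    def __init__(self):
        self.permissions = {}  # (owner, action) -> True/False, cached forever

    def handle_order(self, owner, action, ask_owner):
        key = (owner, action)
        if key not in self.permissions:
            # Rule 1: permission must come from the human concerned.
            self.permissions[key] = ask_owner(action)
        # Rule 2: obey, unless the answer on file is "no".
        return action if self.permissions[key] else None

robot = AmoralRobot()
first = robot.handle_order("parent", "give chocolate", lambda a: False)
second = robot.handle_order("parent", "give chocolate", lambda a: False)
# Both orders are met with silence: the "no" is cached, so the robot
# neither acts nor asks again.
```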

Obedient Robots

Let’s make this even simpler. Let’s just have one law:

1) A robot must obey orders given it by human beings.

This simply makes the robot an obedient tool that is as good or as bad as the adult controlling it. Let’s hand the morality decisions back to the adult. Great. We’ve now created a robot with no complex dilemmas whatsoever – a bit like a gun…

Result: Robots that you need a licence to own and that can’t be taken out in public.

Independent Robots

In essence, the only way to produce intelligent, effective and safe robots is to make them fully autonomous. We then have to hope that, because they are not fundamentally organisms designed to compete for resources (as we are), their intelligence will reach different conclusions from ours and they will solve the “safe robot” question for us.

Of course, this would be an enormous gamble and it’s never going to happen…

So, dangerous and/or ineffective robots it is then.

The Enterprising Architect

4 June 2015

Chained to the Desk – The Sequel

Not Now Jon
In a previous post entitled “Chained to the Desk – The Paradigm Shift” I discussed the importance of questioning everything and the danger of accepting the current paradigm. I gave examples of how the status quo can perpetuate for years and even decades before someone introduces an innovation that, once suggested, seems so obvious it is amazing it was never thought of before.

I referred to a modern day situation that I believe is just such a paradigm and one to which we have all become blind; the desktop user interface. The current model has perpetuated since 1973 and so it definitely falls into the “decades” family of paradigms.

I’ve had some responses to this suggestion and understandably the majority refer to the value of the desktop environment over that of the tablet. They quote the limitations of tablets, and the strengths of the desktop and I certainly don’t disagree with them…

…or at least I don’t disagree when it comes to the physical aspects of the desktop environment: greater screen sizes and multiple screens creating a larger working space, and a full-size physical keyboard providing a far better typing experience. But that is a comparison of the physical provision, not of the operating system or user interface.

When it comes to the user experience there is so much that has always frustrated me, and even more now that we have the pleasure and ease of the tablet experience.

Poison Arrows
To illustrate, let’s look at some basic questions about the typical desktop UI.

  • Why do I have to use a little arrow to grab that tiny area in the bottom corner or at the very edges to resize the window?
  • Why is it that the only way to move the window is to use that same arrow to grab the title bar (even if the title bar is off the top of the screen)?
  • Why do I have to hit a tiny icon in the top right (or top left) corner to minimise or maximise the window, not to mention the dangerously co-located “close” icon? Who am I, Robin Hood?
  • Why are all the options in pull-down menus that require dextrous handling to reach and then hit the right option?
And now to the desktop itself. Even with the larger screens and multiple display options there are some annoying limitations.

  • Why is that space around the screen not reachable?
  • Why do I have to drag a window into the screen in order to look at it?
  • Why am I forced to choose between large and readable (but limited space) and small and illegible (but loads of space) via tucked away display options that when selected are treated like major engineering decisions? “Do you want to keep these highly dangerous settings? You may never get your display to work again.”
  • And why, now that I’ve paid all that extra money for a touch screen do I find myself resorting to using the mouse all the time?

The answer of course is simple… because that’s the way it’s always been; that’s the way desktops and laptops work! Get with the program Jon!

Moving Pictures
What I want from my user experience, regardless of the equipment I’m using, is a natural human interface. Something that fits the natural, instinctive way in which we interact with the physical world. In the real world, when I want to move something around, get rid of something, or open something, I use my hands in a natural way. The success of the iPad, in my opinion, was driven primarily by Apple’s ability to tap into these natural gestures and to add a sense of the physical to the virtual, the “list bounce” being just one of these.

So Apple, Microsoft, Google, or whoever: please can you give me a desktop on which, using touch alone, I can:

  • Zoom the whole display area (not just the window contents) using pinch to zoom
  • Resize individual windows using pinch to zoom gestures
  • Move the virtual desktop around, allowing me to bring those areas off the edges of my screen into view simply by dragging the viewing area with my fingers
  • Drag windows around by touching any part of the window, not just the title bar
  • Close or minimise windows using simple swipe gestures
But as I’ve said before, if people can’t see it and live it they’re not going to get it, so a blog post won’t break the desktop “I need my windows paraphernalia” paradigm that has existed since 1973.

I guess I’m going to have to go and build a demo…

The Enterprising Architect

3 June 2015

Easy Does IT – The Complexity Conundrum

It’s Bigger on the Inside!
In large organisations there seems to be a painful reality that IT is horribly complicated. There is much discussion as to whether this is an inherent and unavoidable reality or if it is one inflicted on the organisation by a failing IT department.

Well, in my opinion it is neither. There is nothing inevitable about labyrinthine IT solutions, nor do they arise from incompetent technologists. Instead, they exist to solve a complex problem and complex problems require complex solutions. The one true failing of IT is in selling the idea that technology can perform the impossible magic of delivering complexity in a simple way.

I have referred to IT’s love of silver bullet solutions in previous blog posts and I’ve also pointed out that, just like the werewolves they are supposed to kill, these silver bullets are mythical in nature. They are mythical not because we are unable as technologists to keep things simple, but because we are trying to fulfil an impossible promise. Somehow, despite failure after failure, the industry seems unwilling to learn this lesson. Attempts are even made to push the complexity down a level and refer to the adaptations required in our silver bullet solutions as “configuration changes”. The lure is attractive as the product now appears to be simplicity personified.

In practice, however, all we have managed to do is to push the complexity back onto the customer who is now required to “programme” the solution to work to their specification. We sell this as “putting control back into the hands of the business” but all we are really doing is subconsciously punishing the user for daring to be complicated in the first place. This is a “not my problem” solution.

Simple is as Simple Does
The answer to keeping things simple is simplicity itself, but simple is not the same thing as easy. The best way to demonstrate this is through example, but for the words I’ll turn to Steve Jobs who summed it up well in his statement:

“Simple can be harder than complex. You have to work hard to get your thinking clean to make it simple, but it’s worth it in the end because once you get there, you can move mountains” – Steve Jobs

It is from this source that my first example of simplification arises. In 1998, legend has it, Steve Jobs, in a moment of frustration, drew a simple grid on a whiteboard (the one at the start of this blog). This act radically reduced the Apple product line to four key products, shelving all other offerings regardless of status or demand. The original number of products is not clear; it is possible that even in 1998 it was not clear, and it was this confusion that led Jobs to make the decision he did. (There were at least a dozen variants of the Macintosh computer alone.)

There is a smaller example much closer to home. Whenever I attend Cabinet Office presentations by members of the GDS team, they all bear a striking resemblance to one another. (It’s almost as if they’re working to a style guide!) Each slide contains just one simple sentence, and where the message needs to get more complicated they embed a simple picture, or more often a well-crafted video. It has always struck me that this approach focusses attention on the presenter, simplifies the message and creates engagement in the audience. More recently, whilst looking at mobile alternatives to desktop tools, it also struck me that this simplified approach to presentations removes the dependence on proprietary presentation applications (such as PowerPoint).

I would hazard that Steve Jobs’s example, although presented as wise and well thought out, is less about evidence and more about gut. He made it simple because he liked simple and just knew it was right. He made the IT simple by making the business simple. Similarly, I’m sure there is robust, evidence-based logic behind the GDS style guide – or could it be that the decision maker just knew that a decision to be simple in approach would have a ripple effect on the simplicity of the implementation?

In both cases, the decision makers did something that some seem to find radical; they made a decision, and the decision they made was to give the user what they needed, not what they were asking for (there is a difference). They gave them something that they hadn’t even realised they needed and as a result they made them happy.

Paint it Black
Henry Ford did this too. He is credited with many quotes relating to user (or customer) needs, but the one I am going to repeat here is this:

“A customer can have a car painted any colour he wants as long as it’s black.” - Henry Ford

Henry Ford is not being rude here, nor is he ignoring the customer. Far from it. What he is doing is simplifying his offering in order to simplify the solution. It was this type of gut decision making that allowed the first Ford motor cars to roll off a production line in volume and at affordable prices, giving the customer very much what they needed at the time.

So, to simplify the solution you have to simplify the offering, and to do that you need to make gut decisions. You will of course do all that good stuff like user research, prototyping and you will even run trials, but at the end of the day simplification will come from the brave decision to create the product or offering you believe in. You will offer this product to the customer, and trust in the fact that they will love it because you knew what you were doing.

You do know what you’re doing… don’t you?

The Enterprising Architect