On walkabout in life and technology

Your Idea Sucks, Now Go Do It Anyway

A lovely article by Jason Cohen called Your idea sucks, now go do it anyway, well worth a read, makes a great point: the original idea for something usually evolves into something completely different, and success comes from embracing that evolution.

“My idea isn’t good enough yet” explained a friend who is thinking of starting his own company. He’s waiting for the idea to be completely fleshed out before taking the leap.

Newsflash: Your idea probably sucks, and it doesn’t matter because your business will probably turn out to be something completely different.

I often find myself talking to people about their software ideas, and my usual response is “go for it”, even if I do not wish to participate. Some of the examples in the article show that the original idea may have sucked, but by going for it, by adapting as they learned more about the idea and its use cases, by evolving the idea, the founders were successful. You’ll know their names.

Pursue your ideas. Know that they will evolve into something unexpected. Pursue the evolution. At worst, you’ll gain some experience and go nowhere. At best, you’ll be the next big thing.

Cheap iPhones, Wealthy Workers, Pick One?

Randy Murray presents a different view in An Observer From Shenzhen—Thoughts on Apple’s Recent Bad Press.

Do workers in Foxconn factories who build products for Apple, Dell, HP, and others, work long hours at tedious tasks? Yes they do. Do they work for a fraction of what a worker in the US, Japan, or even Korea would? Yes they do.

Are they being enslaved? Clearly, they are not.

I suspect that a lot of the heat from the recent press comes from a lack of understanding about Chinese culture, their rapid change from manual agrarian life to high-tech manufacturing, and the significant differences and scales of our economies.

I don’t know what the truth is, but I am glad to see a reasonable dissenting discussion on this issue.

News Logic

Paddy Harrington, writing in Wanna Figure Out If Your Product Is Any Good? Think Like A News Editor:

At its heart, news logic is about value creation and the primacy of the design output instead of a story that traditionally gets applied after the fact. Where in the old days (i.e., five years ago), you designed something and then told a story about it in the hopes that it was a story worth telling, more and more of tomorrow’s design projects will have a story worth telling built into their hearts right from the get-go.

I try to apply this thinking here, only posting links to articles I think are great, or creating articles I think are newsworthy. If there are any topics you’d like me to expound upon, comment here or tweet me.

The Paradox of Choice

People desire more choices, yet are unable to choose when the selection is greater. That is the paradox of choice.

Sheena Iyengar, a professor of business at Columbia University, conducted an interesting study in 1995. She set up a display of 24 samples of jam for customers to taste and, every few hours, switched it to a 6-sample set. The results were astonishing: 60% of customers stopped to try the jams when the selection was large versus 40% when it was small; but 30% of those who stopped at the small selection purchased a jam, versus 3% at the large one.

In short, more people were attracted to the greater selection, but 10 times more people actually made a choice when the selection was smaller.
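The arithmetic is worth spelling out: the 10x figure compares conversion among shoppers who stopped, while the overall purchase counts differ by roughly 7x. A quick sketch, assuming a hypothetical 1000 shoppers passing each display:

```ruby
# Back-of-the-envelope numbers from the jam study, assuming a
# hypothetical 1000 shoppers passing each display.
shoppers = 1000.0

large_stop = shoppers * 0.60    # 60% stop at the 24-jam display
large_buy  = large_stop * 0.03  # 3% of those buy

small_stop = shoppers * 0.40    # 40% stop at the 6-jam display
small_buy  = small_stop * 0.30  # 30% of those buy

puts "24 jams: #{large_buy.round} purchases"  # 18
puts "6 jams:  #{small_buy.round} purchases"  # 120
puts "Conversion ratio among stoppers: #{(0.30 / 0.03).round}x"
puts "Overall purchase ratio: #{(small_buy / large_buy).round(1)}x"
```

So even though the big display attracts more stoppers, the small display still produces several times more actual purchases.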

That study “raised the hypothesis that the presence of choice might be appealing as a theory,” Professor Iyengar said last year, “but in reality, people might find more and more choice to actually be debilitating.”

In a similar study, the surgeon Atul Gawande found that 65% of people surveyed said that if they were to get cancer, they’d want to choose their own treatment. Among people surveyed who actually had cancer, only 12% wanted to choose their own treatment.

Too many choices in software

In software, we have the same disconnect between the number of choices and users’ ability to make those choices. Our users declare that they want extensive choices, but they rarely, if ever, make or change those choices. For example:

  • The Mac has hundreds of choices in the System Preferences application, yet very few users ever venture in there and change them.
  • Apple recently released a simplified version of its Wireless Router software that removed the advanced menu of choices from the application, and almost no-one noticed.
  • Lots of software comes with customizable report writers. Very few customers ever use them, yet every RFP requires one. As an advanced tech geek user, I have used one of these only once in the last 12 years (and that was in Billings, to customize the layout of my invoices).

Letting the user decide

One of the key burdens of a Software Designer is to make decisions on behalf of users. What features and functionality to implement, how to structure it, what the flows should be. When software designers get stuck, a common pattern is to let the user decide: create choices for the user to make and implement the necessary features for each choice.

Each and every setting or option in your application preferences is a decision by the designer to let the user decide. SAP software used to have 22,000 of these. And users love this: they call your software customizable and seem more comfortable buying it.

But there are some problems with this. First, you have to design and implement the functionality for each choice, which means more programmer time, testing time, integration time, debugging time and of course, cost.

Secondly, if the designer cannot decide what the right or best way is, how is the user supposed to make that decision? Your expert user, the 3% of your user base who understands the problem domain and knows what they want, will make a choice and be done with it. But your regular user will be just as stuck as the designer as to which choice to make, which of the 24 jams to choose. We designers solve this by choosing an arbitrary default setting. Our users solve it by assuming we did that on purpose (see Almost No-one Changes Their Settings), so they leave it alone.

Letting the user decide is the software equivalent of creating all 24 jam flavors, putting them on the shelf and assuming that users will be capable of choosing the right one. Evidently, only 3% of them can. It’s a cop-out.
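One way out is to ship a deliberate default for every choice, so the 97% who never open preferences still get a considered answer while the expert 3% can override. A minimal sketch; the setting names and values here are hypothetical:

```ruby
# Hypothetical application preferences: every key gets a deliberate
# default chosen by the designer; user overrides are layered on top
# only when the (rare) expert user actually sets one.
DEFAULTS = {
  theme:     :light,
  autosave:  true,
  font_size: 14
}.freeze

def effective_settings(user_overrides = {})
  # Hash#merge prefers the override value when a key is present.
  DEFAULTS.merge(user_overrides)
end

puts effective_settings                  # everyone gets the designer's choices
puts effective_settings(font_size: 18)   # the expert overrides one setting
```

The point is that the default is a designed decision, not a random one, and the override path exists without being forced on every user.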

More Choices, Simple Software, Pick One

Users also want simple, intuitive software. If there is no choice on how to do something in your software, then that’s all users can do. Try printing from your current application, or opening a file. You always get the same operating system dialog (unless you use Adobe products, of course). Users do not get to choose which print dialog to use, or which file browser; they always get the same one (Adobe users excepted). Imagine if each time you wanted to save a file, you first had to choose which file browser to use. It would make the software cumbersome to use and complex to implement, since the developer would have to support all the different file browsers available. By using the operating-system-supplied print and save dialogs, developers have removed choices and made the software simpler.

Let’s take another example. I’m writing this post in Byword, but I could just as easily be writing it in Microsoft Word. What’s the difference? Well, Byword has no toolbar, limited formatting and makes it easy to write. Microsoft Word has a ton of menus, ribbons, colored squiggly lines, page breaks, formatting options, change detection, stylesheets and the like, oh, and you can also use it to write. I launch Byword, and write, that’s it (Byword has already chosen the font, view and styles for me). But because of all the choices in Microsoft Word, my usual process is to launch Word, choose a template, change the font and screen views to hide page breaks, check my stylesheet, and write, and format and write some more and format some more. Byword is simple, Microsoft Word is complex, both enable me to write, but Microsoft Word forces me to make more choices as I go along. Hence, I prefer Byword for writing.

It goes even further when I realize I use 80% of Byword’s functionality, yet less than 15% of Microsoft Word’s.

Note that I am not saying software should have no choices; that may be too few, and may make the software too simple to even do its primary function. For example, I also have a copy of iAWriter, a writing application that has even fewer choices than Byword. Why not use that? Because iAWriter made too many choices for me, making it harder for me to use (but not for a lot of other people).

Is the 24-jam case not just information overload?

I don’t think so. In some cases, too many choices overload the chooser because the choices come with too much supporting information.

For example, as a new arrival in the USA I went to the local supermarket to purchase milk. Where I come from, milk comes in two varieties, whole and skim (the soy stuff is not called milk). In my local supermarket, I can choose from whole, skim, 1%, 2%, organic vs non organic, local vs interstate, fresh vs reconstituted, soy vs cow, pasteurized vs non, and from several brands. It’s a multidimensional matrix of choices with many axes, and all I want is milk.

On the other hand, too many choices may come with insufficient information, the overload being the sheer number of choices. I recently had to choose a new healthcare plan, and all the plans I looked at had hundreds of options, yet most of those options were not explained. I had to call up the company and ask a whole bunch of questions just to understand a few of the options available. I picked a plan because the few options that were explained to me sounded right, but I have no idea if it’s the right plan for me, or what I really purchased.

So what about how the choices are presented?

If fewer choices are better, as the jam study shows, why do people still struggle to choose from a smaller selection? It boils down to how the information about each choice is presented, just as in the overload case.

If you want an iPhone in the USA, you have three choices, AT&T, Verizon, and Sprint. Seems a simple choice, pick a carrier, get the iPhone. Yet it is almost impossible to choose a carrier, because the carriers present their services in complex, incompatible and vague terms, make no promises, hide their restrictions and limitations and therefore leave the chooser in the dark as to what they are really getting. The chooser has insufficient information to make a choice, no way to get that information and has, therefore, no way to make the right or best choice for them. Cable companies, airlines, electronics vendors and the like do the same. It gives the seller the power to persuade choosers versus giving them actual choices.

A whole industry has arisen to try to make these choices easier. Kayak, Hipmunk and Expedia, for example, all exist to help people choose flights from the morass of airline data and deals, but even they cannot simplify it enough.

Making choices easier

As software designers, our first goal is to make the best choices for our users. If there is more than one way to present information, calculate a number or execute a process, we should choose one and implement only that one. Almost all of our users will be happy with our choices, and we’ll only hear from the few who wish we had made the other choice.

We also sometimes do need to offer the user a plethora of choices. It’s how we present those choices and the supporting information we provide that helps users choose. Take reports, for example. Most systems have a reports section. Most systems have a lot of reports. And most systems rely on the report name to tell the user what the report does. But most report names are incomprehensible to users. What is the difference between an aging and a collections report, when they both show overdue balances? If the designer added a 1-2 sentence blurb about what each report contains and when to use it, users would be able to choose the right report quickly and easily. Otherwise, they have to run heaps of reports to find the one they want, or give up.
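The blurb idea can be sketched as a tiny bit of metadata carried alongside each report; the report names and blurbs below are hypothetical:

```ruby
# Hypothetical report catalog: each report carries a 1-2 sentence blurb
# saying what it contains and when to use it, so users can choose
# without running heaps of reports first.
REPORTS = [
  { name:  "Aging",
    blurb: "Overdue balances grouped by how long they have been " \
           "outstanding. Use it to see how old your receivables are." },
  { name:  "Collections",
    blurb: "Overdue balances grouped by customer, with contact details. " \
           "Use it when calling customers to chase payment." }
]

# Render the menu users pick from: name plus blurb, not name alone.
def report_menu(reports)
  reports.map { |r| "#{r[:name]}: #{r[:blurb]}" }
end

puts report_menu(REPORTS)
```

Two sentences of metadata per report is cheap to write, and it turns an opaque list of names into a choosable menu.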

The paradox of choice

The paradox of choice means our users want more choices, yet cannot choose between them. We need to find the balance between the choices we leave for users to make and the choices we make for them. If we get that balance right, we create happy users and brilliant software experiences.


Childlike Wonder

Ever watch a child with an iPad? They seem to get it immediately, they prod and tap and swipe and rotate and in no time at all seem comfortable with it.

Ever watch an adult with an iPad? They hold it, and stare at it, and, well, stare some more, and maybe wave a finger near it, but hesitate to touch. And after all that staring and hesitating, they remain uncomfortable with it.

What is going on?

What is happening is that the adult is attempting to map the item to their own pre-existing mind models. If the item fits (for example, the new toaster works kind of like the old toaster), they are immediately comfortable with it (albeit they will not use any of the new toaster features). If the item does not fit a known mind model, most adults get stuck in a loop. It’s kinda like a computer, but kinda like a phone, but kinda like a book, but kinda like a computer, which model to use … and the brain enters an infinite loop. After being shown a few things, the adult takes this training, merges it with an existing mind model, and uses only the features they were shown. They never even look at the new toaster features; they never even look at all the other iPad features.

Children, on the other hand, start with no preconceived mind models. Each thing they encounter is a new thing which requires a new mind model. The new toaster has knobs that need to be figured out, oh, this one makes the pop higher, this one makes the toast darker, this one burns in a Hello Kitty face. The child is building a mind model for this device. The iPad contains a plethora of things to build a mind model on, and children gleefully spend hours prodding each button, swiping each screen, and quickly building a new mind model for the device.

At some stage in our lives, we stop having this sense of childlike wonder and start trying to map the world to the mind models we already have. Some call it maturity, I call it sad.

So how does this apply to computer software design and my areas of expertise?

Our users are not children. One cannot deliver unto them a product and expect them to use their childlike wonder to explore the application and build the right mind model for it, no matter how intuitive we think it is. One cannot even give them a manual to explain the product, because they will not understand the text, the terminology or have time to read it.

Instead, software designers have two choices here. The first is to make the product fit a common mind model, the second is to provide demos and training.

In the first case, your application needs to look like, work like, and use the same terms as the user’s mind model. Which means knowing what that model is, and conforming to it. This applies design and functionality constraints on the product, and it makes it harder to innovate. Ever wondered why all graphics manipulation apps look like MacPaint (yes, even Photoshop)? Because they target the same mind model. Or why all spreadsheets still look and work like VisiCalc? Same reason. Email clients and Eudora. You get the picture.

In the second case, where you create something new that requires a new mind model, you know that your users are going to stare at the product, not use it and complain that the old way was better. Get off my lawn and all that. You need to train them. The very best training is hands on, give them tasks to complete, walk them through it the first few times, and wait for the time when they proudly announce that they know what to do. It is at this stage that you know they have built a mind model, albeit a very weird one. Create tools to then help them learn new features. Screencasts are great for this, and easy to create. And before you know it, you’re back on the lawn, the new way is the better way. It just takes a lot of time and patience.

To help figure out the best way to train adults, next time you are ready to release a feature or a product, put it in front of a few children and watch where they go first. Ask them what they see, why they prodded that first, and what they have learned by playing with it. Then do the same with adults, but this time, lead them where the children went. It may not bring back a sense of childlike wonder to the adult, but it will help them learn, build a mind model and get comfortable with the product sooner.

But it is in yourself that you can make the change. Next time you are faced with an unfamiliar thing, don’t try to map it to an existing model. Instead, try to find your sense of childlike wonder: play with the thing, poke it, prod it, rotate it, open it, don’t be intimidated or afraid of it, try to figure it out and see what happens. You will have fun, and you will build a whole new mind model.

Fragility of Free

A great article by Ben Brooks called Fragility of Free, well worth a read:

The fragility of free is a catchy term that describes what happens when the free money runs out. Or — perhaps more accurately — when the investors/founders/venture capitalists run out of cash, or patience, or both. Because at some point Twitter and all other companies have to make the move from ‘charity’ to ‘business’ — or, put another way, they have to make the move from spending tons of money to making slightly more money than they spend.

I also like to, and do prefer to, pay for things.

See also: Developers, a Love Story

Test Driven Development Really Works

In 2008, Nachiappan Nagappan, E. Michael Maximilien, Thirumalesh Bhat, and Laurie Williams wrote a paper called “Realizing quality improvement through test driven development: results and experiences of four industrial teams” (PDF link). The abstract:

Test-driven development (TDD) is a software development practice that has been used sporadically for decades. With this practice, a software engineer cycles minute-by-minute between writing failing unit tests and writing implementation code to pass those tests. Test-driven development has recently re-emerged as a critical enabling practice of agile software development methodologies. However, little empirical evidence supports or refutes the utility of this practice in an industrial context. Case studies were conducted with three development teams at Microsoft and one at IBM that have adopted TDD. The results of the case studies indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD.

In 2012, Ruby on Rails development practices assume TDD. I personally rely on tools like rspec for writing tests and mocks, factory_girl for creating objects, capybara for browser automation, simplecov for code coverage and guard for automating these tests.
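The red-green rhythm those tools support can be sketched in plain Ruby without any of them, which keeps the example self-contained; rspec would express the same checks as describe/it blocks with matchers. The Invoice class here is hypothetical:

```ruby
require "date"

# Hypothetical Invoice class developed test-first: the checks at the
# bottom were written first (and failed), then overdue? was written
# to make them pass.
class Invoice
  def initialize(due_on:, paid: false)
    @due_on = due_on
    @paid = paid
  end

  # An invoice is overdue when it is unpaid and past its due date.
  def overdue?(today)
    !@paid && today > @due_on
  end
end

# The "tests": written before the implementation, kept green ever after.
inv = Invoice.new(due_on: Date.new(2012, 1, 31))
raise "should be overdue"     unless inv.overdue?(Date.new(2012, 2, 15))
raise "not yet due"           if inv.overdue?(Date.new(2012, 1, 15))
paid = Invoice.new(due_on: Date.new(2012, 1, 31), paid: true)
raise "paid is never overdue" if paid.overdue?(Date.new(2012, 2, 15))
puts "all green"
```

With guard watching the files, this red-green loop runs automatically on every save, which is what makes the cycle minute-by-minute rather than end-of-day.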

As a result of using this methodology and these tools, I tend to agree subjectively with Nagappan et al:

  • It does take more time to write tests. But a 15-35% increase in time seems excessive. Most tests are short, quick and simple to write. Using mocks and factories helps a lot.
  • It takes no time to run them if you use a tool like guard that continuously monitors and runs them as they change. Add growl notifications on the Mac and you only need to pay attention when you see red.
  • Tests help me write better code, because thinking up and executing tests often shows up code defects early, and this is a very good thing. I often find myself writing a function that I think is great, then throwing a few additional tests at it and finding that my logic or execution was flawed.
  • My defect rates are much lower, just like Nagappan et al state. I think this is simply due to the use of TDD to identify and catch defects earlier in the process. And I get far fewer bug reports from Beta testers.
  • I am no longer afraid to refactor mercilessly. The tests will tell me if the refactor breaks anything. This is probably the most important point. I know that it’s safe to change code, I know if a change affects other components and I can see where and how it does so, so I can fix it.
  • I can trust the code base. No need to tiptoe around APIs or functions. I can trust that calling an API or module will work, and that the tests will tell me if they break. I can trust the work of others because their tests work too.
  • I seem to have found, empirically, that writing tests after the fact works just as well as writing them before. The important thing is to have the tests, and to have good tests: tests that go after both expected and unexpected cases, outlier cases, failure modes and, in some cases, the ridiculous.
  • It also helps when taking on legacy or other people’s code to spend time writing tests to help you learn that code. Or at least trust that code by testing the components you use.
  • Testing for 100% coverage does not seem to be all that worth it. Using my toolkit in Rails, I can test the workflow, UI and all models and business logic. In reality, getting 100% coverage on models and libraries pays off, but the UI and workflow stuff changes so much during development that the tests for that stuff get in the way.
  • Adding tests for bugs found is another great way to document not only the bug but your bug checking and fixing process. You know something is wrong when a bug is found; tests can help you corral it, and then make sure that it never happens again. Tests also ensure that your fix does not break anything else. I often find new bugs by fixing one bug and watching other tests fail.
  • I don’t release unless all tests are passing, obviously.
  • I also don’t skip testing to meet deadlines. I estimate with tests in mind, I develop with tests in mind. And if I have a deadline, the few minutes more it takes to create and run tests make no significant difference.
  • I make no secret that I use this methodology. My clients understand that the small increase in development time pays off handsomely in later iterations, and reduces beta testing and maintenance issues. Based on my experience alone, TDD actually reduces the net time to develop and ship quality software because you don’t spend time later in the project trying to figure out what went wrong.
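The bullet on adding tests for bugs can be sketched as follows: capture the reported failure as a test first, then fix the code so the test passes and stays in the suite. The line_total function and the bug are hypothetical:

```ruby
# Hypothetical regression test: a beta tester reported that a negative
# quantity silently produced a negative invoice total. The failing case
# was captured as a test first, then the guard clause below fixed it.
def line_total(unit_price_cents, quantity)
  raise ArgumentError, "quantity cannot be negative" if quantity < 0
  unit_price_cents * quantity
end

# The regression test stays in the suite forever, so this bug can
# never silently reappear.
begin
  line_total(500, -1)
  raise "expected ArgumentError for a negative quantity"
rescue ArgumentError
  # correct behavior: the bad input is rejected
end

raise unless line_total(500, 3) == 1500
puts "regression suite green"
```

The test doubles as documentation: anyone reading the suite later sees exactly what went wrong and what behavior the fix guarantees.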

For many developers, introducing Test Driven Development is a challenge. You have to explain it to your team, to managers and clients, then find the time to write tests, then find the time to learn how to write good tests, then build a process and infrastructure to automate testing, and the payoff is difficult to see until far later in the project timeline. I hope some of the benefits I have found above help you get there. With TDD, you will make better software, you will be a happier developer, and your manager and client will see the benefit in the end.

Where the Light Is Better

A woman comes across a man crawling under a street lamp. "I've lost my car keys," he explains.

The woman tries to help the man find his keys. After a few minutes of searching, she asks "Where exactly did you drop them?"

"Down the street, next to my car."

Puzzled, she asks "Then why aren't you looking over there?"

"The light is better here."

People often look where it seems easiest or most convenient to look, rather than in a more difficult but more correct place. Where people choose to look often comes down to “Where The Light Is Better”.

  • Support teams base their knowledge of a software system upon specifications and documentation, rather than observing the actual system in action or examining its source code. Documentation is usually incomplete and out of date.
  • Analysts make decisions based upon easy-to-collect metrics, rather than on detailed study of the complexities of a situation. Not enough time to do it right, so they find a shortcut.
  • DBAs prefer that database query logic be written as stored procedures in the database, where they can get at it; whilst developers prefer that it be in the application code, where they can get at it. More convenient location, not necessarily the right location.
  • When a bug is found in someone else’s code, developers will generate complex workarounds in their own code, rather than trying to get the bug fixed. Either it takes too long to get it fixed, it’s too hard to fix it yourself, or something else may depend on it. Work around it instead of finding out.
  • Almost everyone uses a tool they know well to try to solve a problem, even if that tool is poorly suited to that problem, rather than learning an unfamiliar tool that is far better suited for the job. Use your “hammer” instead of the right tool. We’ve all seen abuses of Excel and Powerpoint, no?
  • Architects try to gather existing knowledge informally, through conversations, online forums, and wikis, rather than reading papers and books. More convenient to ask than to learn, see the success of StackExchange.
  • Consultants try to gather information on industry practices through reading academic papers, rather than examining real-world work and case-studies. Hoping someone else saw and solved the problem first.

Having alternate places to find and gain knowledge is a good thing. But you need to make sure that the alternate location is referring to the same knowledge. And that you truly understand the problem and solution space. Searching where the light is better does not necessarily mean you are searching where the keys are, where the solution exists.

Expedient is not necessarily efficient.

Rephrased without requesting permission and hoping for forgiveness, from http://c2.com/cgi/wiki?WhereTheLightIsBetter (Last updated November 2004).

Less Wasting Time

The sub-theme this week is on trying to find ways to improve developer happiness and to reduce unpleasant time wasters.

Rafe Colburn in Don’t order your team to work more hours talks about bosses who ask staff to work more hours:

Every time you’re tempted to do so, sit down with members of the team and ask them how many hours a week they spend dealing with stuff not related to shipping and work to make those things go away instead.

David Heinemeier Hansson gets specific in Refusing administrative minutiae, picking on expense reports.

Optimizing your business for happiness is about a lot of things, but taking out all the needless administrative minutiae seems like one of the easiest. Why aren’t you?

Pass these on to your boss.

Trade Trade Secrets

A brilliant essay by Danielle Fong called Trade Trade Secrets, very worth a read, covers everything from Intellectual Property, to explaining the differences between theft, transcription, transformation and inspiration, to how the law stifles all four of them.

The great danger of laws that ignore these is not that they will prevent theft, but that they will so heavyhandedly prevent transformation and inspiration: the engines of our entire civilization.

Just read it.