Hiltmon

On walkabout in life and technology

Text Expansion Using Keyboard Maestro (First Cut)

This post presents how I have set up Keyboard Maestro to replace basic text expansion from TextExpander … so far. It covers (more to follow at some point):

  • Basic text expansion
  • When to use copy vs typing
  • Limiting to applications
  • Basic variables

Basic Text Expansion

The basic text expansion macro looks like the macro on the right.

  • It is triggered when a string is typed, in this case ;gma
  • It has a single action, Insert text by Typing, containing the text to be typed, in this case git pull; make clean; make -j 8.

That's all there is to it. Nice and simple.

Type ;gma anywhere and Keyboard Maestro makes the replacement.

Insert text by typing vs pasting

Almost all the time, Insert text by typing is the right way to go. It's fast enough and does not affect the system clipboard. However, for long strings, typing may be too slow.

In these rare cases, Insert text by pasting is much faster, but you need to add another step to the macro. Add a Set Clipboard to Past Clipboard step after the paste to reset the clipboard back by one in Keyboard Maestro's history. (Thanks to @TheBaronHimSelf for this tip.)

Limit To Application

Many of my snippets apply only to specific applications. To limit snippets to an application (or set of them), I create a new Group and make it available in a selected list of applications.

The snippets in this group only expand in Xcode.

Basic Variables

Keyboard Maestro has many of the same variables and abilities as TextExpander (and a whole bunch more, of course), including

  • Position Cursor after typing %|%
  • The current clipboard %CurrentClipboard%

So, for example, to create a Markdown Link using a URL on the clipboard and place the caret (the text insertion cursor) in the description area, I can use my ;mml macro.

[**CARET**](http://localhost:4000/blog/2016/04/08/text-expansion-using-keyboard-maestro-first-cut/)
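The action text behind such a macro combines the two tokens listed above in a single line. A sketch of what it might contain (the exact snippet is my assumption, not taken from the post):

```
[%|%](%CurrentClipboard%)
```

When triggered, Keyboard Maestro types the brackets and parentheses, fills in the clipboard contents as the URL, and leaves the caret between the square brackets, ready for the link description.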

Or to create a date heading in an Assembly Note, I can use my ;mmd macro.

This types:

---

**2016-04-08**

You can format the date any way you like, of course.
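Under the hood, the date comes from Keyboard Maestro's ICU date token. A sketch of the action text for such a macro, assuming the standard token syntax (the exact pattern here is my guess, not the author's):

```
---

**%ICUDateTime%yyyy-MM-dd%**
```

Changing the ICU pattern (for example, `EEEE, MMMM d, yyyy`) changes the heading format accordingly.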

To see what variables are available, click the Insert Token dropdown on the right of the action panel. As you can see, there is a huge number available.

I have managed to replace the majority of my TextExpander snippets using the basic text expansion macro described here, and it’s working great.

Next up: doing these with sounds and with more advanced abilities.

Hope this helps.

Follow the author as @hiltmon on Twitter.

Apple at 40

Apple turned 40 this week, and it got me thinking about the past 40 years of our individual computing experiences.

In many ways, my own journey to now parallels that of Apple.

And I'm willing to bet your journey is similar.

The 1980s - Youthful Experimentation

In the early 1980s, Apple was young, surrounded by a wide range of competitors, and the Apple II was it. Everybody who could had one. They used them to work, to play, to learn programming and to experiment.

When the Apple Lisa project was announced and plastered all over Byte magazine, we all devoured each word written about it. We argued whether the Apple III or the Lisa was better (I was a Lisa), but both disappointed.

In 1984, Apple released the Macintosh. And changed the world.

In 1987, Apple released the Macintosh II. If there was ever a computer I wanted in the 1980s, that was it. That plus the LaserWriter recreated the entire publishing industry.

I view the 1980s Apple as a time of youthful experimentation. They experimented with several new platforms, took major risks, created unique products (some great, some horrible) and set out to change the world. The world fell in love with the GUI and the mouse.

In parallel, I was doing the same. I had Sinclair computers back then (talk about a unique platform) which were the only ones we could afford. When I went to university, I built my first PC clone, ran MS-DOS to learn programming using Turbo Pascal, and Xenix (later Minix) for everything else. I fell in love with computing and UNIX.

The 1990s - Suit Wearing Corporate Life

The 1990s were Apple trying to be corporate and becoming quite miserable about it. Time after time, Apple produced the same boring beige boxes, boring updates to the operating system, and struggled to compete against IBM-compatible systems and the Microsoft juggernaut.

Apple was trying, as all young folks do in their first jobs, to fit in to a society they did not understand and felt powerless to change. They simply did what they thought the world expected of them. They tried to act like grown-ups and play the corporate game against older, powerful, entrenched interests, and had their spirits crushed.

It's not that Apple did not create great things in the 1990s, it's just that they were few and far between. The PowerBooks of 1994, the Newton and System 7 (IMHO) stand out in my mind.

In parallel, I started programming, managing projects and consulting — and wore a business suit every day. Since the corporate world was on PC compatible systems, that’s what I used. MS-DOS at work, Minix at home, Windows 95 at work, System 7 at home. I did this because I thought that was what was expected of me. To act like a grown-up, settle down, suit up and play by the rules of others. It crushed my spirit, and I was miserable.

By the late 1990s, Apple was doomed. Something needed to change.

By the late 1990s, I was miserable. Something had to change.

The 2000s - Finding the Bliss

The return of Steve Jobs via the reverse acquisition of NeXT was the trigger for Apple to Think Different again. Its moment of change had come. The new iMac design language took hold: from the Bondi blue model in the late 1990s, through the beautiful iMac G4 lampshade model, to the current slab design on the desktop; the powerful Power Mac G4 Quicksilvers with their unique handles leading to the amazing all-metal G5 models; and the new PowerBook G4s and later MacBooks.

And more. OS X was introduced and blossomed. The Intel transition happened. And the iPod became the most iconic, must-have product for our generation.

Apple’s products became Apple’s again. They had found their bliss. And the market found it with them. Apple changed to doing what it wanted to do, what it loved and that showed. It found its market wanted the same and shared their love of great design, music, experience and reliability.

In parallel, so did I. I replaced the suit and meetings and Windows PC with jeans, an IDE and a Titanium PowerBook G4. I changed countries (twice) and worked on the products that I wanted to work on and make great.

I had found my bliss. I was doing what I loved and was free to also live my life surrounded by people I loved doing fun things at work and especially at play.

By the late 2000s, Apple was a successful and confident organization. It had proven itself to itself and the world and was surrounded by friends. It was ready to expand its reach. And it did so in the most incredible way, by launching the amazing iPhone. No other firm could have done it; it required the unique kind of creativity and operational chops that only a happy, confident Apple could deliver. The iPhone became the one icon to rule them all.

As was I. Successful, I mean. It's because of this bliss that I was able to move to New York, do the work I wanted to do, create some of my best products and run my own consulting business here.

All using Apple products.

The 2010s - Living the Life

By the start of the 2010s, Apple was confidently living the life. The passing of Steve Jobs and the handover to Tim Cook did not change who or what Apple was. Apple had gotten better at things it traditionally was terrible at, like services, and even better at things it was good at, like design, manufacturing and innovation. Yet it was still finding more bliss. The iPad, Apple TV and Apple Watch may not be seen as super-successful products compared to the iPhone, but each on its own would be a Fortune 500 company!

Apple has gotten confidently comfortable with who they are, what they do and how they go about it. They continue to innovate in other areas, continue to press forward, continue to enjoy what they love. They have not stagnated or settled down. They continue to youthfully experiment yet deliver like a mature firm.

In parallel, so have I. I live where I want to, do what I love to do, use the products I love to use. I am continually working on becoming an even better software designer, programmer and person. And finding more bliss. Like writing and traveling again.

I am confidently comfortable with who I have become, what I do and how I go about it. But I am not ready to settle down. I continually try new tools, languages and approaches. I continue to youthfully experiment yet deliver like a pro.

Onwards and Upwards

Apple at 40 (and in parallel, myself a few years older) is a master of many things, it has put in its 10,000 hours. But being a master of one, two or even ten things is not good enough for either of us. We continue to experiment, to try, to put 10,000 more hours into new ideas, experiences and technologies.

I cannot see Apple slowing its pace of innovation, change and expansion. It’s who Apple is now and who Apple always wanted to be. The path to here was long and winding, and full of bumps. The path forward will be too. And because Apple Thinks Different, it will always be different and misunderstood and underestimated. Apple at 40 does not care what others think, it has found its bliss and will continue to push forward, writing its own story.

I intend to do the same. Et vous?


Dependency Limited and Conflict Free C++

TL;DR: Beware of libraries you need to compile yourself and of copy-pasted code; the performance, maintenance and other hellscapes you create are not worth it in the medium or long run:

  1. Do not use dependencies that have dependencies that you have to compile.
  2. Do not use libraries depended on by dependencies anywhere else.
  3. Solve your own problems and understand the solutions. Do not copy-paste from the web.
  4. Always write your own code where performance and maintenance is critical.

This post specifically targets C++, but is really a general set of rules and advice. See NPM & left-pad: Have We Forgotten How To Program? for a NodeJS scenario that this would have prevented.

I write a boatload of C++ code these days, developing on the Mac and deploying to Linux servers. All of it is dependency limited and conflict free. What does this mean? It means that I never have to deal with multiple versions of dependency libraries, or with dependencies that have their own conflicting dependencies. Instead, I code in straight-up C++11, use the STL and rely on vendor precompiled libraries that are themselves dependency limited. The result is fast code that I and my team can read and maintain, deploy and run on all platforms with ease.

Dependency and Conflict Hell

When I started writing these products, I went mad and used a ton of third-party libraries. The theory was that these libraries were out there, already written and tested, everyone used them, and I could leverage them to short-cut my development time and effort.

And it worked.

For a very short while.

Within weeks of starting, I found one of the libraries I was using stopped compiling. It started throwing really odd errors, yet nothing had changed. It turns out that this library relied on code in an old version of another library that had been deprecated in later versions. I had added another dependency that needed a newer version of the same dependent library, and all hell broke loose.

The UNIX family of operating systems mitigates dependency hell by allowing you to link against specific versions of libraries, so I could link to, for example, Boost v1.30.1 for the first dependency and Boost v1.52.0 for the other dependency, as long as they were compiled into separate libraries! That means maintaining two installs of Boost and two environments just to get my dependencies to compile. And if I add another dependency that, say, requires a third version of Boost, the complexity multiplies.
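As a sketch of what that separation looks like in a build, each dependency gets wrapped in its own shared library that compiles and links against its own Boost install, and only the wrappers link into the main application. All paths, file names and versions below are illustrative, not from an actual project:

```make
# Hypothetical: two wrapper libraries, each pinned to a different Boost.
BOOST_OLD := /opt/boost-1.30.1
BOOST_NEW := /opt/boost-1.52.0

libdep_a.so: dep_a.cpp
	$(CXX) -shared -fPIC -I$(BOOST_OLD)/include \
	    -L$(BOOST_OLD)/lib -Wl,-rpath,$(BOOST_OLD)/lib -o $@ $<

libdep_b.so: dep_b.cpp
	$(CXX) -shared -fPIC -I$(BOOST_NEW)/include \
	    -L$(BOOST_NEW)/lib -Wl,-rpath,$(BOOST_NEW)/lib -o $@ $<

# The application links only against the wrappers, never Boost directly.
app: main.cpp libdep_a.so libdep_b.so
	$(CXX) -o $@ main.cpp -L. -ldep_a -ldep_b -Wl,-rpath,'$$ORIGIN'
```

This works, but it is exactly the extra machinery that becomes a burden to document, maintain and deploy.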

There are many problems with this:

  • When it comes to design and architecture, I need to split my dependencies into separate libraries that compile with their own dependencies and then link to them in the main application, or use static linking, which is not my preferred option.

  • When it comes to maintenance, I need to document where each dependency is and where it's used, and somehow describe the Gordian Knot to myself and my team for use in six months' time, without context.

  • When it comes to setting up a development environment, I need to somehow save old versions of dependencies and make complex Makefile trees to generate the correct versioned libraries.

  • When it comes to compiling, I have to compile a lot more components or use static linking to ensure that the right library is linked and the right functions called, increasing executable size, memory use and complexity.

  • And when it comes to deployment, I have to build this hellish mess for each platform.

Aside: Debugging and Maintenance Hell

Solving for the above takes time. It's not hard, and once it's been done, you could argue for smooth sailing. I expect this is how most teams do it.

Until something goes wrong and you start to debug. I don’t know about your tools, but mine always seem to find the “Jump to Definition” code in the wrong version of the wrong dependency every time. Which means that trying to find where something fails becomes that much harder. Is the error in the dependency, the dependency version or in my code? Ouch.

Or until time passes, say six months, and a new error is thrown in Production. Six-months-later me does not remember what current me knows now, leading to maintenance hell. Not only do we have a production problem and unhappy users, but I will have forgotten all the little tricks and hacks needed to recover my dependency-hell knowledge.

And most importantly, I have lost the chance and ability to know and understand the application.

Dependency Limited and Conflict Free

So how to do this? I follow the following rules:

  1. I do not use dependencies that have dependencies that I have to compile. That means using vendor and open-source precompiled libraries that require no additional software installs to use them.
  2. I do not use libraries used by dependencies that may conflict. If a vendor library uses another library, I avoid using that other library in any of my code anywhere.
  3. Where necessary, I solve my own problems, not rely on third-party, unmaintained code I find on Stack Overflow or Github.
  4. I always write my own clean code when performance or maintenance is critical.

I am not saying I do not use dependencies or never look to Stack Overflow or Github for ideas. I’m not that smart. I simply limit my exposure and maximize my ability to read and understand my code and environment, now and in the future, with these limiting rules.

Looking at Rule 1

For example, let's talk about one of my database library dependencies. It is written using Boost, which means that the client library I need to compile against has a dependency on Boost. Following the first rule, I use their precompiled libraries, not their source code, and since Boost is a dependency, I do not use Boost anywhere else (Rule 2). It's up to the lovely folks at the database company to deal with their dependencies and create good, stable libraries and installers for all platforms; all I do is use their stable binary versions. A nice clean separation of my code from theirs, easy to maintain and easy to deploy.

Looking at Rule 2

Since we are on Boost, let me stay on Boost. These days, it's almost as if you cannot do anything in C++ without Boost. Every answer to every "I'm stuck" C++ question on Stack Overflow seems to be the phrase "Use Boost".

I’m not saying Boost is a bad library, it’s not. It’s so awesome that the C++11 standards team nicked all the good stuff from Boost to make the STL for C++11 and C++14.

But every darn library out there, every example, every potential dependency seems to use different versions of Boost in different ways. And eventually they conflict because some code somewhere uses a deprecated call or side-effect that conflicts with the same call elsewhere. Following rule 2, I do not use Boost because everyone else seems to. Half of my problems came from other people’s code that used Boost badly.

To reiterate, my problem is not with Boost, it’s awesome, my problem is with how badly it’s used and abused, and rule 2 protects me.

Looking at Rule 3

We all know there are piles of code that are just too tedious to write and have been done over and over again: loggers, data structures, parsers, network interfaces, great ideas, and simple Stack Overflow solutions. It's so tempting to just copy and paste that code, get it to compile and move on. I mean, seriously, why rewrite a logger class!1

Just use one that’s out there and move on, no?

My experience with these has been a case of short-term gains with long-term pains. Oh sure, I can get it going with less work on my end. But what about when things go wrong, as they always do? Or when the application starts to perform so slowly that nothing seems to fix it? Or when it's six months later and the pasted code starts to act funny in production?

Rule 3 ensures I avoid these situations.

Keep in mind, example code and Github projects were written with no context, or to solve the writer's specific problem, in scenarios that almost certainly do not apply in my environment or yours. And when things do go wrong, we have no understanding of the code nor how to fix it. Understanding code is more important than saving a few hours or days of developer time.

Looking at Rule 4

I am developing real-time applications for finance, hence the C++, and performance and memory management are critical. The products process vast amounts of data, which means tight control over RAM, CPU caches and even cache lines is important, and any wasted compute cycles add up to performance degradation. All my vendor dependencies have been tested and put through this wringer, so I can trust them to be as fast and memory-safe as possible, or I would have selected a different vendor.

But not much else is, mostly because it was not written or tested to be that way. Under rule 4, the only way I know how to get the fastest application is to write it myself. That way I can see, and most importantly understand, where the bottlenecks are, where the memory is going crazy, where threads are running wild and fix it. Copy-pasted code or Github code rarely cuts it.

My Situation seems Unique. It’s Not.

I do understand that my situation seems to be reasonably unique. My applications are large and complex and need to interact with many systems and technologies which means dependency management is critical. The large code base and tiny team environment means that a simple development setup is best. Maintainable and understandable code is more important than getting it written quickly. Production issues will cost us a fortune, which means readable, simple and understandable code is critical to being able to detect and correct issues quickly. And the application needs to be fast and correct and reliable.

For most of you, many of these attributes may not apply. Crazy deadlines mean that dependencies and copy-paste code are perceived as the only way to get there. Maintenance is probably not your problem. Apps and requirements are straightforward, hardware is ubiquitous and cheap, and if it takes a few seconds longer to process, who cares. Good for you if that's your environment.

But mission critical systems need rock solid foundations, architectures and maintainable code. And any additional dependencies, any additional complexities, anything that slows down deployments or maintenance need to be eliminated mercilessly. Sure, it will take longer to write and test. But the cost and time to build dependency limited and conflict free systems pays off handsomely in reliability, maintenance speed and application performance.

No matter your situation, if you cannot clearly understand your development environment and all the application code, you’ll never figure it out when the production system gets slow or goes down. Especially after time has passed and you have been working on other projects.

Not so unique after all.

In Summary

If you are writing anything mission critical, where future maintenance, performance and teamwork is critical, brutally limit dependencies to simplify the development environment, deployment process and maximize your ability to debug and maintain the product. Ensure that all code added to the project needs to be there and is fully understood, and that it does not conflict with any other code in the system.

It means that you will have to write a few more modules yourself, but that investment pays off incredibly later on.


  1. A Logger Class: I referred earlier to a logger class as an example of code that has been done hundreds of times over, so who the heck is silly enough to write another one? Me, that's who. Why? Because I needed a real-time logger that would not in any way slow down the processing on the primary threads of the application. Almost all logging classes are synchronous: you have to pause the thread while the logger spits its results out to screen and file system. That makes sense when you have the time to wait, and it ensures that log entries are saved before moving on and potentially crashing. Async loggers often collect log entries, then pause everything to batch-send (even if they are on alternate threads). But in a real-time system, the milliseconds (1/1,000 of a sec) needed to append to a file or send a log message kill performance that is measured in microseconds (1/1,000,000 of a sec). I needed to know exactly how the logger impacts thread performance, and needed to optimize its performance too, and that's why I wrote my own logger.

Hiltmonism - Talk to Drivers, Not Mechanics

How many people really know how their motor vehicle works, or even care to? Very few.

But they all drive.

And when their car breaks down, or makes a noise, or that ridiculous engine light comes on, they need mechanics. Nobody, except other mechanics, understands the explanation of what's wrong with the car. And therein lies the problem.

Mechanics need to learn to talk to drivers, not mechanics.

Techs are the Mechanics

Technology people are perceived to be painfully shy. I guess it's a movie meme. They are not. Just observe a bunch of technology folks getting into it on a topic they understand. You'll never get them to stop talking, arguing, jousting and challenging. Mechanics are speaking to mechanics.

Technology people are also perceived as disconnected, strange, different, hard to speak to and harder to understand.

Unfortunately, this is not a meme. It’s true.

But not because techs are disconnected, strange, unintelligible folks. Or painfully shy for that matter.

It's because techs simply communicate differently from the way their audience does. And since techs do not speak to people the way those people prefer and understand, this perception is supported by the evidence. Which makes it real.

Mechanics are speaking to drivers as if they were mechanics.

In business, successful technology teams understand this disconnect and learn to speak to their audience, to talk and relate the way their audience in the business does.

Successful mechanics speak driver-to-driver as fellow drivers.

Mechanics vs Drivers

Let's take a closer look at tech talk (the language of the mechanic) vs audience talk (the language of the driver) to see why this disconnect exists:

1. Techs talk in details; the audience talks in generalities. As a result, techs talk too much about the detail of what they are explaining and either confuse or bore the audience. Nobody cares that brake linings soften, wear out and burr, but we all know when they squeak. Techs need to adopt the necessary generalizations to address their clients properly and to have a shot at understanding them.

2. Techs also get lost in being accurate and pedantic, something the audience never does; they have better things to do. Whether there are N items or N+M items makes a difference to techs, yet makes no difference to the audience. For a mechanic, the engine timing, tuning, air flow and seals are critical; for the driver, having a working car is all that matters. Techs need to loosen up and focus on how it will be used, not how it works. The tech and the client can always come back and drill down into the details later.

3. Techs use specific language to communicate; the audience uses common language and relies on context or experience to share what they are talking about, understanding that the right terminology does not matter as long as the core points of the conversation get across. To a driver, the doohickey is rattling; to a mechanic, that could be anything, and the rattle a symptom, a result or something else! We techs get confused when the language is not our own, missing the gist of the conversation, which is what the audience wants us to understand. Techs need to learn their audience's language patterns and focus on the gist of what is being said, on what the client is trying to say, rather than imagining what the client may mean or what may be happening, and missing what the client said while doing all that imagining.

4. Techs explicitly express assumptions; the audience barely registers that it is making assumptions in conversation. Mechanics feel the need to explain the purpose of tappets and push-rods, and how they react to different octane fuels with different explosive properties, and that is why the car pings and feels sluggish. The driver just wants the car to go well. This one is hard for techs learning to speak to their audiences, because they need to know what assumptions the audience usually makes. Working with your audience, listening to them interact, and asking them questions is a good start.

5. Finally, techs seek rigid, exacting perfection; it's necessary to make correct digital programs. The audience thinks and lives differently, in an analog world where things change, move, shift, adjust and make, or fail to make, sense in unusual ways. Techs need to understand their audience's analog nature, senses, rate of change and direction, finding ways to communicate and adjust in analog while still operating in digital space.

Seeing the Signs

It's easy to spot the signs when this communication breaks down. If the audience starts to repeat itself, if the eyes glaze over or slow-blink, or they start pacing or making impatient motions, then the communication has failed. Just picture a frustrated driver trying to explain to a mechanic what is wrong with the car.

It works both ways. If the tech rambles on too long, keeps needing to say some things, stops listening, or says "I understand" just to get rid of the audience, that too is bad. Just picture a mechanic detailing all the possible moving parts that could be the rattling doohickey to a puzzled driver!

Keep in mind that, unlike mechanics, techs do deal with different audiences. Each audience is vague or detailed in its own way, uses its own terminology, has its own assumptions and its own measures of success or failure. Each different audience has its own norms. Yet none of these audiences has the time or patience to discuss or learn all the dark details. They may seem different, but they are all essentially drivers.

The tech team needs to understand this about their audience to become part of it. They need to know how to speak to each audience in the language they understand, using the terms and levels of accuracy the audience expects.

To talk like drivers to drivers.

Mechanics can be Drivers too

The tech team needs to know what to tell their audience and, most importantly, what not to tell them. Explaining how a program or technology works, what an error message means, why something cannot or does not work, or why this one case in 100 is possible and needs to be solved now, is interesting to techs and not at all interesting to the audience. Drivers want a working vehicle; they do not need an explanation of why it's not working.

The tech team needs to know when to shut up. To the audience, perception being reality means they build their own mind-model of how a thing works. Letting them live in their own model is hard for techs because we need to deeply understand our own models and assume, incorrectly, that others do too. The driver does not need a lesson on internal combustion engine thermodynamics when knowing it turns on and makes a “vroom” sound is good enough.

And finally, the tech team needs to know when to speak up. Especially when the audience draws the wrong conclusions. If the driver is operating the vehicle incorrectly or using the wrong fuel, the mechanic needs to find a way to reach them in a way the driver can understand. The tech team needs to know how to effectively communicate the issue without going into boring details or terms, and draw the audience back in, regain their trust and understanding.

Talk to Drivers, not Mechanics

Finding that balance, the balance between detail and vagueness, between the correct term and the common one, between enough information and too much information, between saying more and shutting up is hard for tech teams.

But a good team can find this balance as long as it knows what the communication issues are.

And how to deal with them.

To talk to Drivers as Drivers, not Mechanics.

I use this Hiltmonism, "Talk to Drivers, Not Mechanics", to remind me and my team how to listen and communicate with those we design software for, and how to build better product in the future. After all, great product is what they want and need too. We all need to be somewhere.

Oh, and to make the tech team seem a little less weird, strange and alien.

See other Hiltmonisms in the ongoing series.


Minimal Project Management

With my team starting to grow at work, it's time to add some Project Management to our process. However, I do not want this to add any additional time, meetings or burden on them (or myself), and so none of the popular formal processes fit my needs.

In this post, I will outline the Minimal Project Management process, its steps and how it works. I will also cover the issues of change and interruptions. This process only works, however, because of the quality of my team.

The Team

My team is a group of experienced, smart folks who know how to design, program and ship product. Most of all, they understand the business, its priorities (our one weekly meeting covers that) and how their deliverables will impact it. And they know how to manage their own time, interruptions and schedules.

They do not need to be micro- or even macro-level managed, they do not need systems and tools to tell them what to do when, they do not need someone else creating stories, setting tasks and getting on their cases when things slip. They know how to manage themselves, how to figure out what is needed, how to communicate effectively, and how to get it done.

That’s why I hired them.

I can assign work and know it will get done, done right, done on time with minimal overhead. And I know they will communicate progress and issues as needed. They need almost no project management from me.

Minimal Project Management

But as the team grows, so the number of projects and tasks that can be performed grows. When it was two of us, a quick conversation could cover all topics. When we grew to three, a weekly meeting and a single live-document sufficed.

Now we are five. Capacity is up. And the list of tasks and projects that can be tackled in parallel grows rapidly. I, as manager, need to stay on top of a growing mountain of this stuff.

This is when I level-up to Minimal Project Management as I have done on so many previous occasions.

Minimal Project Management consists of only a few steps (and one change process):

  • Assignment and prioritization of work.
  • A Statement of Work to define the task or project.
  • A single weekly review of progress.
  • A Management of Change process.

That’s all there is to it. The team gets to manage their own time. They know the priority and impact of the work, the dependencies, the business needs and their key deliverables as it's discussed in the meeting. They know the standards expected of their work, the tests to run to be sure their deliverables are correct and the pressure we are all under.

That is, again, why I hired them.

Let's look at each component.

Assignment and Prioritization of Work

As the manager of the group, it is my responsibility to determine what work is needed, what the priority of that work is and who will do it.

I do not do this alone.

The Management Team of the business meets weekly to discuss issues, plans, challenges and review strategy. It is at these meetings that I present what tech is bringing to the table. The Management Team agrees on business priorities, and I convert those into prioritized work for my team to perform.

My team meets weekly too, and one part of this meeting is a review of the current business and work to be done — so they see the big picture. They too advise on dependencies and priorities.

With all this advice and help, I can easily determine what work needs to be done first, what next, what can remain on-hold, and can assign work to the team.

It’s not hard to determine which work implies a large, complex project and which work is small and easy. And there is no need for complex estimations or deadlines; we all know what’s at stake.

Each team member gets at least one large project and several smaller ones. This is intentional. There is much downtime on a single project as developers wait for data, people or dependencies. By loading each developer with several projects, they can work on another while waiting on one. Not only that, they can schedule their time to switch between projects to keep their work fun and interesting. It is my experience that developers with several projects “on the go” tend to mix-and-match their time and yet deliver and ship great code on time. Developers with nothing to do become disruptive or lazy.

Once work is assigned, each assignee is responsible for getting that work done. And they start with a Statement of Work.

The Statement of Work

A Statement of Work (SOW) is a short document that defines the objective, tasks and deliverables of a task or project. It can optionally contain assumptions and dependencies, and it may have details added later (designs, appendices, sample data, notes). But in essence, it’s a document that defines what needs to be done and how we know when it has been done.

My Statement of Work template is a simple Markdown text document as follows:

Kind: Statement of Work
Title: <Task or Project Name>
Author: Hilton Lipschitz
Affiliation: <Company Name>
Date: 2016-03-04
Version: 0.1

# SOW: [%Title]

## Overview

<What the objective/goal of this work is>

## Tasks

<A list of the tasks to be performed — very high level, clearly written>

## Deliverables

<The deliverables of the project, so we know when it is complete>

Optional additional sections are:

## Dependencies

<SOWs that need to be done first for this to be viable>

## Assumptions

<Any assumptions made about why this work needs to be done>

That’s all there is to it.

A good Statement of Work is less than one (1) page long; complex ones get to a massive 2 pages, never more. It limits the scope of work and defines the tasks to be performed and deliverables to create. It does not detail each task, merely adds them as simple checklist items to be sure they get completed along the way. It does not say how, when or who will do the work. The how is up to the developer; the when and who are up to me, the Project Manager.

Most importantly, the person tasked to do the work writes the Statement of Work, not the Project Manager, not the user, not some Business Analyst.

There are many reasons for this:

  • The person doing the work needs to understand what needs to be done, what needs to be delivered, to whom and what is going on if they are to do it right. The best way for others to know if they have this knowledge and understanding is to get them to write the Statement of Work themselves and then review it.
  • The writer of the document also takes ownership of the project. It’s their scope, their tasks, their deliverables to make, their problems to solve.
  • The writer of the document usually starts with no assumptions. Asking an analyst or a user to write it leads to information missing because they assume the developer will know things. If the developer is writing it, they assume nothing.
  • And honestly, it frees me up to advise, coach, discuss, approve and work on other things, taking on my own Statements of Work. I’d rather be programming too.

Once the Statement of Work is completed (and reviewed, filed and approved by me), the person starts executing the tasks and we move into a work and review phase.

Weekly Review

Once a week, the team meets.

More than that and we’ll be interrupting their ability to get into the zone and work on projects, reducing their productivity.

The weekly meeting never goes beyond an hour. In it, we:

  • Update the rest of the team on the projects we are doing and their progress.
  • Discuss new potential work that has come up.
  • Discuss any issues, ideas and changes that apply.
  • Discuss any key interruptions and bugs found.
  • Close completed projects.
  • Hand out any new assignments.
  • And with the remaining time, talk general technology and ideas, the part we like best.

A weekly, semi-formal meeting does not mean we do not talk about work during the week. On the contrary, we are always discussing what tasks, documents and deliverables we are working on. If any issues arise, the team member goes to the people they need (especially me) to discuss, plan and resolve. While this is going on, the rest of the team is free to remain in the zone and continue to deliver on their work.

Dealing with Change

One thing you can always be sure of is that things change. The business changes, priorities change, even small project scopes change. Whether the change is external (Management, User) or internal (Tech Issues, Dependencies), the Minimal Project Management process needs to deal with it.

And I boil it down to two basic change types: Scopes change and Priorities change.

Scope Change

In scope change, the tasks and deliverables of a project as documented in a Statement of Work change. Users may ask for more features “while you are at it”, or, as the developer learns more about a topic, the nature of the tasks and deliverables change — what we thought we needed when writing the Statement of Work does not match what we actually need.

Scope change is managed by versioning the Statement of Work. A new version is generated by the developer and reviewed by me. If the scope change fits priorities, I will approve it, and the programmer will start working off the changed SOW. If not, or if the work should be a separate task or project, then a new task is generated and put aside for the next weekly prioritization and assignment.

If the scope change stops work on that project, no problem, because each programmer in the team always has several “on the go”, and can switch to working on another in the meantime.

Priorities Change

Like all programmers, I prefer to finish what I start. Given the limited scope of Statements of Work, this is usually and regularly achievable. But in the real world, priorities change.

This is where the manager needs to step in. Projects may need to be stopped mid-stream, with deliverables incomplete and undelivered, so that other projects can get some time and resource. Partial code branches need to be committed in case we return to this task. That’s all the team member needs to do, they then move on to other work.

I, on the other hand, need to track what was not done and why. I need to archive the SOW, with a note as to why it was stopped, by whom and when, and I need to be sure the partial work is safe. I usually add this information as a notes appendix to the document.

And, when the time comes, I can later assign this work back to the original programmer to continue and complete. Note that I rarely assign a SOW to anyone other than the writer of the SOW as they know best.

Interruptions

One thing that does ruin the simplicity of Minimal Project Management is interruptions. Users find bugs and interrupt, folks ask questions and interrupt, systems fail and interrupt, or some urgent, must be done now, yes bloody now, the place is on fire, task interrupts.

This is where the caliber of the team kicks in and where, as manager, I need to be patient and understanding (and occasionally a hard case). The people I hire and work with expect interruptions, they are realists and are experienced enough. In most cases, they deal with the interruption and move on, managing their own time and getting back to their work, without anyone being the wiser or anyone else being interrupted (until the weekly meeting where these are discussed).

If the interruption is a bigger thing, or happening too frequently, they make a judgment call and bring me in. Between the causer of the interruption, the programmer and myself, we’ll quickly discuss the issue. If it is something urgent, we’ll agree to perform the interruption, knowing work will be delayed, to get it resolved. But if the interruption is a bigger issue, and it is up to me to determine that, then it's also up to me to get the programmer back to work, the cause documented and scheduled for later resolution, and the interrupter satisfied, which means I need to be a hard case.

Mostly though, the team can and does deal with interruptions all day, and can still get their work done in spite of them.

Delivery

When a task is completed and the deliverables shipped, I archive the Statement of Work. Often, I will add a Notes section at the bottom to discuss how the task went, who did it and any issues that we faced.

This library of archived projects helps me to assign work in the future. I can see who is good at what, and who needs to learn what. We as a team can learn from the issues of past projects how better to think about, document and plan future ones.

Summary

So, as my team grows, I am moving from a semi-formal weekly meeting to a semi-formal weekly meeting plus Statements of Work: the Minimal Project Management model.

I will use the same task recording and prioritization tools as before. I will hold the same weekly meeting as before. But now each team member will be responsible for managing their formal Statements of Work and more formally communicating, managing and tracking change.

Yet we’ll remain the fast, nimble, and exceptionally productive tech team as always, doing what we do best, delivering systems that multiply our firm’s capabilities and competitive edges.

Follow the author as @hiltmon on Twitter.

Build Only What You Need

A note as a result of a discussion with a colleague.

I had quickly assembled a simple class that triggers periodic function calls from a timer on to a single worker thread. I need this class to ensure that periodic functions get called regularly. Since each call is quick to run (takes under a second), only needs to run every few minutes and can happily be queued behind another quick function, the simple single worker model is perfectly fine for this task.

I then reviewed the code with a colleague.

The colleague immediately declared that someone would abuse this class. They may create additional threads in the called function, he argued, they may trigger long-running tasks in the called function, they may create hundreds of competing calls, they may abuse this class for all sorts of non-canonical use cases. He felt the code was too basic, open to abuse and therefore a bad design.

I could build a fully featured, multi-threaded, load limited, time limited, repeating event trigger class architecture. It would take me days to do. And it could easily be coded to prevent the above potential abuse cases. One could successfully argue for this architecture.

But there is no point in developing that if the use-case is, as documented in the class, for short, periodic tasks. Adding worker threads adds complexity that is unnecessary (and the scope of work is OK with a slight delay on calls). Adding limiters increases complexity further, and they are only needed under abuse circumstances.

I believe you need to build what you need and no more.

If the use-case changes, you need to review whether to extend and expand an existing class or create a new architecture for the new case.

Coding based on possible, potential or abuse use-cases of code is silly and a waste of time.

Code what you need, design it well, and if a new use-case ever emerges, deal with it then.

Aside: He and I are the only two folks with access to this code, and so only he and I could abuse the class anyway.

“Keep It Simple Stupid”, all day long.

Follow the author as @hiltmon on Twitter.

Now in HTTPS

My host, Dreamhost, is offering free web site certificates through Let’s Encrypt, a new initiative to make encrypted connections the default standard across the internet. They started with free SSL certificates.

So I turned it on.

Most browsers will be warning against unencrypted web sites real soon now, so I thought it best to do this now.

The only change I needed to make was to switch the Google Fonts URLs to https as well.

Note: Since the Let’s Encrypt root certificates have not yet been deployed to all browsers (especially older ones), you may get a browser message that this site cannot be trusted or verified. Hang in there, and let me know via the comments (with screenshot and date).

Follow the author as @hiltmon on Twitter.

We're Better Than This

My thoughts on the toxic hell-stew that my Twitter feed is becoming. I follow (and occasionally interact with) a bunch of intelligent, opinionated, sensible tech folks whom I respect immensely and whose timelines and lives are being ruined by an impersonator, a gang of misogynists and their flock of followers.

We’re better than ganging up, taking sides and judging or expressing negative public opinions on people we do not know personally. Topical constructive disagreement is great, we thrive on that, personal attacks are not.

We’re better than letting one arsehole impersonating someone else disrupt our sense of community, discourse and expression. You know who I mean.

We’re better than sniping at each-other over made-up shit, clickbait, snark and snide remarks created intentionally to sow discord in our community.

We’re better than those who treat women, LGBT folks and minorities as second class citizens. Because we do not.

We’re better than those who dox, swat, spread hate and discord. Because we do not.

We’re better than to give attention where it is not owed or deserved. We have more important things to do with our time.

We’re better than to get angry over insignificant stupid things where war, refugees, child killings, racism, guns, insane politics, a slow slide into the dark ages, climate change and a hundred other real issues deserve our attention and intellect.

We’re better than letting a few bad people ruin our community, one we have built over years of communication, trust and honesty.

We can and should unfollow, muffle, mute or block. We can shut them down together, as only a community can. Then ignore them.

Let's get back to being who we are, to the real discussion, to sharing our interests, to discussing tech topics, and to making Twitter enjoyable again.

Let's tweet a namaste (🙏🏽) to each other and put this behind us.

Maybe, just maybe, if we set a better example, as we have done in the past, they will find us implacable, unruffled, united and not worth messing with.

Follow the author as @hiltmon on Twitter.

Dangerware

Dangerware is common in business and government. Dangerware is just ordinary software, but the way it comes into being creates the danger.

  • It starts with a basic prototype written in a hurry.
  • This is quickly put into production to run the business.
  • The prototype screws up repeatedly when faced with new scenarios.
  • Resources are tasked to add (not update or correct) the prototype to deal with the latest screwup.
  • This process repeats until the resource (or original business person) is tasked to a new project, or the cost of the screwup is less than the cost of the resources to mitigate it.

I call this software dangerware.

And sadly, it runs most businesses and agencies. Dangerware is software written without requirements, design, tests, validation, checks and balances or even an understanding of the business, the big picture or the nature of the problems being solved.

It's software without responsibility.

Shocked?

You should be.

But it's as common as desk chairs in the real world.

Think about it: the Excel models, VBA projects, Access databases, SQL queries, built by non-professional programmers, hobbyists, interns, outsourced programmers and juniors that control and manage your business are all dangerware. Where the need to ‘get something out’ completely outweighed the risks, both financial and professional. And where it was easier to blame someone else for the screwups (or for not recovering from them).

Dangerware is everywhere in business and government. Every single finance person has a horror story of a bad Excel formula that cost someone else their business. And yet they still trust in their own dangerware.

Can you imagine if your MRI machine or autonomous car’s software was created this way? You’d be dead.

The evolution of dangerware into bigger projects and the rush to start larger projects is a fair explanation as to why the vast majority of corporate and government software projects go so horrendously over budget and fail so badly.

Dangerware is easy to detect and prevent.

Detection is simple:

  • If the user is the programmer and not a professional full-time programmer, you will get dangerware.
  • If the programmer does not understand the business problem to be solved within the bigger picture, you will get dangerware.

Solving the first is easy. Get a professional to develop the application. Trust them, listen to them and allow them to do it right.

The second is a lot harder, but not as hard as you think. It boils down to process and communication. And it was taught to me when I was a cocky kid by a middle-aged man with thick glasses and a cane. Sadly, I do not remember his name.

He taught me a simple process to gain an understanding of the business. It was the first step in what used to be called Business Process Engineering and it is all about finding and following the workflows.

To understand a business or a business problem, you need to know that it exists and understand what it is. To do so, you need to learn the workflow, how it starts, how it does (or should) flow and where it ends up. And the first step is to walk through the first one you identify, and then each one it exposes. Follow the existing paperwork, see who gets involved, centrally and peripherally. See which flows depend on this flow and are triggered by it. Follow each variant of the flow, run scenarios on each, both success and failure, to understand the nuances.

And do this with real people. Not the managers and consultants, but with the actual people involved. Work with them to find out what you do not know. Assume nothing. Ask lots of questions, listen to them talk (and complain), ask about what happens before and after, ask why they do what they do to see if they even know. It's amazing what you will find and just how much you did not know to start with.

What will emerge is a picture, often confusing to start, of intertwined people and processes, of contradictory and seemingly irrelevant steps, and a huge pile of exceptions to the rules.

And a lot more questions.

Unravel this picture to understand the flow.

You are not trying to reproduce the flow, nor to blame or replace the folks running it. Pull out what needs to be done, why it needs to be done, where it works and where it fails. And this process always shows up what you would have missed had you not gone through it.

Then, and only then, design software to help.

That will protect you from dangerware. Because you understand the business problem and environment before solving for it and coding it up, you reduce the risks of failure, screwups and blame games.

The counter argument for this is that there is never enough time to execute this process. “We’ll get something out and then, if we have time, we’ll figure it out later” is the bat-signal of dangerware. Even a single walkthrough and a few conversations with the folks involved, taking less than a few hours, will show up just how much you do not know. And the time and cost spent learning is insignificant compared to the time to add more danger to dangerware and the cost of screwups.

You’ll never know everything, but at least the big nasty dangers will be identified early, exposed and can be solved for in design before releasing dangerware.

A professional programmer will check their code. A professional programmer who understands the business flow will generate product that is not dangerware.

And you, you can focus on building a better business instead of being distracted by the huge number of problems dangerware causes.

Follow the author as @hiltmon on Twitter.

A Yosemite Markdown Spotlight Importer

All my text and writing is in Markdown formatted files and I would like to search them using Spotlight. The editors I use do not have an importer (they have Quicklook only), so this is not available directly.

The trick of changing the RichText Spotlight importer worked in previous versions of OS X (see how in A Simple Markdown Spotlight Importer), but since System Integrity Protection arrived in OS X El Capitan, this no longer works.

Never fear, there is another way.

The great Brett Terpstra to the rescue, again! Read about it at Fixing Spotlight indexing of Markdown content on his amazing site.

What I did was the following:

  • Downloaded this Zip file from his site and uncompressed it
  • Moved the Markdown.mdimporter file to ~/Library/Spotlight. I had to create this folder under my user’s Library folder. To find the Library folder in Finder, hold the Option key when opening the Go menu.
  • Started a terminal shell

In the command prompt, I executed the following to activate the importer:

mdimport -r ~/Library/Spotlight/Markdown.mdimporter

And then, when nothing seemed to have happened, I recreated the entire Spotlight index on my computer.

There are two ways to do this.

The GUI way is to open System Preferences, select Spotlight and the Privacy tab. Drag and drop your Macintosh HD onto the big open space. Wait 20 seconds or so, then click the minus sign to delete it. OS X will start to recreate your Spotlight index.

Or use the following command:

sudo mdutil -E /

To see if this is working, run

mdimport -L

I get

2015-11-17 12:40:40.400 mdimport[53046:588670] Paths: id(501) (
    "/Users/Hiltmon/Library/Spotlight/Markdown.mdimporter",
    "/Library/Spotlight/iBooksAuthor.mdimporter",
    "/Library/Spotlight/iWork.mdimporter",
    ...

After a long while, all my Markdown files were once again searchable in Spotlight. Thanks Brett!

Follow the author as @hiltmon on Twitter.