On walkabout in life and technology

On the New MacBook Pros

Much has been written, tweeted and complained about the new MacBook Pros released by Apple last week. Complaints about the 16GB RAM limit, the all-in switch to Thunderbolt 3 (USB-C), the removal of the SD-card slot and MagSafe, the new keyboard, the aged CPUs, the slow GPU, dongles, that they are not “Pro” level machines, and more. More and more influential folks are writing that Apple has forgotten the Mac, that the Mac is doomed or dead.

They are probably right. The new MacBook Pros are not ideal, nor are they for them.

I believe the real issue for these folks is not what Apple did not do, but that there is no viable alternative product out there for them that has the one feature they need and all the benefits of the Mac.

Linux on the Desktop is getting better, even Windows is improving, but it’s not macOS. The applications professional-level users prefer run better in the Apple ecosystem. Several only exist in the Apple ecosystem. And even if the primary applications are cross-platform, the tools, utilities, and productivity applications we use are not available elsewhere.

If there were a better alternative to Apple’s ecosystem, professional users and developers like myself would have already switched.

In the meantime, Apple released new MacBook Pros that are, according to Apple’s base, horrendously compromised.

It’s all kremlinology whether this was intentional on Apple’s part.

Some believe Apple compromised now because the line was aging, they needed to do something and Intel was falling too far behind. But then Microsoft released Surface the day before and it was the same platform, nothing newer inside (except for a better GPU in a thicker, heavier body).

Some believe Apple intentionally made the shift to a new design and ports now, just as they did with USB, floppies and CDs before. Their first machines with the new ports were always compromised, but they got better.

And some believe Apple simply does not care about the Mac. That one does not compute with me. The new design, the new touch-bar, the effort that went into the look, weight and feel of the device prove otherwise.

I am a professional programmer, writing multithreaded, complex, big-data applications. I should be using a Mac Pro at work (and another at home and in the coffee shop) with maximum cores and RAM in order to compile and run my code and maximize my productivity. But I am also a graphics and print designer, a web developer, a writer, an amateur photographer and a productivity nut. The MacBook Pro, as imagined by the complainers, should be the perfect machine for me.

The reality is that the perfect machine does not exist for professional you or professional me, it never has and never will. I have always wanted a UNIX-based operating system with a GUI, more cores, more RAM, faster storage, better battery life, a better screen and a thin and light laptop because I walk and work everywhere. You may need a machine with certain ports, more RAM, longer battery life, bigger screen, whatever. Our needs are mostly the same, but differ in significant areas.

I have tried Dell Windows machines, MacBook Pros, Mac Pros, MacBook Airs, Lenovos running Linux and combinations thereof, and the one computer that has met most of my needs – but never all of them – has been the MacBook Pro. I am writing this on my maxed-out trusty 15-inch Mid-2014 MacBook Pro. The cores have been plenty fast, the RAM sufficient, the SSD good enough, the display great, the battery life the best ever, the ports just fine. But it never was my ideal computer. It was and remains the best I could get to meet most of my needs at the time.

I have ordered the new 15-inch MacBook Pro, with upgraded CPUs, faster GPU, new larger SSD and the same, but faster, RAM. I do not expect the new laptop to be perfect, none ever has, but I do expect a reasonable performance improvement in compile times and database reads, way better color on my display and a lighter load when walking. It may not sound like a lot, but these small improvements when added up over days and weeks turn into major productivity gains.

What I will not be doing is giving up on the stuff that already makes me so productive. The operating system that I love, the tools that I know and love, and the processes and workflows and muscle memories that help me fly. I see nothing else on the market right now that I can change to that can help me perform better.

I also think that Apple, Microsoft and Google are all being frustrated by Intel, which in turn is frustrated by issues with its latest process. Knowing Intel, we know they will solve this, sooner rather than later. And so I do expect all of Apple’s, Microsoft’s and Google’s PCs to move next year to the next generation of Intel chips that will meet more of professional users’ needs.

Until then, I intend to enjoy the beautiful new MacBook Pro and its productivity improvements when it arrives, and use a few adapters to replace the ports I need to keep going. But I also will look closely at the 2017 MacBook Pros when they come out. And keep an eye on the pro-level iOS/Pencil/Keyboard solution in case it becomes part of a better solution for my needs.

Follow the author as @hiltmon on Twitter.

The Gentleman James V

Last evening a package arrived from Amazon. A package that neither I nor my wife had ordered. A mysterious, enigmatic package. From the outside, there was no indication of its content or provenance.

We discussed where it could have come from. What could it be? Should we open it? Maybe Amazon had sent the package to the wrong person. Yet the delivery address was certainly mine.

Finally, I opened it.

It contained a bubble wrapped box, a bunch of packing bubbles and three slips of paper. The first slip was a packing slip describing the content of the bubble-wrapped box. The second was a return slip. Where this package came from remained a mystery.

It was the third slip of paper upon which we hoped to gain the key clue, the source of this package, our mysterious benefactor.

It did not help.

It contained a personal note.

An unsigned personal note.

A note clearly written by someone who knows me and how I live my life.

Someone who spotted an emptiness in my existence that I was unaware of.

The bubble-wrapped box contained a gift, a perfect gift. One borne of great kindness and understanding of my lifestyle and of unknown unstated needs.

From the note and content, it was clear that the sender knew me well. It was also clear that the sender was considerate, kind, wise and understanding. They had taken the time to observe that necessary items, those in the bubble-wrapped box, were missing from my life. That the quality of my life and that of many others would be improved immensely by this gift. They had taken the time to research and select the perfect gift to fill this unknown unstated void. And they had executed, purchased and shipped it.

With a personal, yet unsigned note.

Who could this wonderful, kind, generous, person be? Why had they not signed the note? How does one accept such a magnificent gift from an anonymous source without the opportunity to express heartfelt gratitude and the soul-filling joy such a gift brings.

A mystery was present, the game was afoot.

This angel of awesomeness was to be unmasked and gratitude expressed.

Whatever it took.

However long it would take.

All leads would be followed.

As far as they would lead.

This mysterious messenger, this masked angel, would be found.

And unmasked for all to see their true generosity.

In the end, an email was sent. A sleuth engaged. A night passed. And an email received.

I knew who the culprit was.

I had unmasked the angel of awesomeness.

And had a good night’s sleep.

The message on the third slip of paper in full:

Hilton… something for your office. I couldn’t bear the thought of you drinking from shitty plastic cups. Enjoy!

The bubble-wrapped box contained four stunning glass tumblers. They presented in the style of crystal Manhattan glasses. The perfect complement to the office whiskey collection. The perfect implement to hold and enjoy the Scottish Nectar at the end of a hard day’s work.

A simple call and thank you is not enough.

A note on Facebook neither.

This kind, considerate person needs to be immortalized.

A plaque perhaps.

Maybe have something named after them.

A bridge, a ship, a building, a space shuttle.

I have none of those things.

But I do have a bar.

One I attend regularly.

It is stocked with quality whiskey and bourbons.

All comers are welcome.

It is a place of relaxation, conversation and comfort.

It brings joy to many regulars and guests.

I hereby declare The Gentleman James V bar open.

All glasses will be raised in his honor.

His name will be whispered with reverence.

His contribution to quality of life and joy known and remembered.

And his presence at The Gentleman James V is much desired.

For those of you who got this far and do not know who I am talking about, allow me to introduce James V Waldo. He is a man with the ferocious visage of a viking biker, an arse that emits a toxic hellstew of gasses that are not present on the periodic table, the soul of a poet, the intellect of a debater, the wit of a writer, and a heart the size of the moon. A husband. A dad. And a very good and special friend.

TL;DR: Thanks for the Whiskey Tumblers, Jay.

Follow the author as @hiltmon on Twitter.

The Annual Dependency Library Upgrade Process

At work, we write a lot of code. In order to remain productive, we reuse the same proven dependent libraries and tools over and over again. Which is fine. Until we start seeing end-of-life notices, vulnerabilities, deprecations, performance improvements and bug-fixes passing us by. At some point we need to update our dependencies for performance and security.

But it’s not that easy.

Take some of the libraries we use:

  • Google’s Protocol Buffers are amazing. We’ve been on 2.6.1 for ages, but 3.1.0 is out and it supports Swift and Go, two languages we surely would like to use. But the proto2 format we use everywhere is not available in the new languages. We need to migrate.
  • ZeroMQ moved to a secure libsodium base in 4.1, making it much safer to use. But the C++ bindings from 4.0.5 are incompatible. We need to migrate.
  • g++ on CentOS 6 is ancient, version 4.4.7 from 2010. We’ve been using the devtoolset-2 edition 4.8.2 from 2013 to get C++11 compatibility, with a few library hacks. But that version of g++ produces horribly slow and insecure C++11 code. We skipped devtoolset-3 even though g++ 4.9 was better. devtoolset-4 is out, using g++ 5.2.4 from 2015, still not the latest, but it is much better at C++11 (without our hacks), more secure and faster. Yet it is ABI-incompatible. We need to migrate.
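To make the protobuf gap concrete, here is a hedged sketch of the kind of change every message definition faces. The `Quote` message is hypothetical, for illustration only, and the two halves below would live in separate `.proto` files; the point is that proto2 idioms like `required` fields and custom defaults simply do not exist in proto3.

```proto
// quote_v2.proto — proto2, the format we use everywhere today
syntax = "proto2";
message Quote {
  required int64  id    = 1;                        // "required" no longer exists in proto3
  optional string route = 2 [default = "UNKNOWN"];  // nor do custom field defaults
}

// quote_v3.proto — proto3, needed for Swift and Go support
syntax = "proto3";
message Quote {
  int64  id    = 1;  // every field is optional; zero means "unset"
  string route = 2;  // the default is always the empty string
}
```

Multiply that by well over 100 message definitions, plus every piece of code that checks `has_` accessors or relies on defaults, and the size of the migration becomes clear.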

The amount of work seems staggering given we have well over 100 protobufs used across our code base, ZeroMQ everywhere and everything is compiled for production using devtoolset-2. The old libraries and tools are a known, proven platform. The current code is stable, reliable and fast enough. It ain’t broke.

The benefits are also hard to measure. Given all the effort to upgrade, do we really get that much faster code, that much more secure code? And what about the code changes needed to support new interfaces, formats and ABIs? What does that get us?

For most IT shops, the discussion stops there. “It ain’t broke, don’t fix it!”, or “All pain for no gain, not gonna play.” They stay on the known tools and platforms forever.

For my IT shop, things are different. We want to use new tools, new languages, new platforms yet remain compatible with our existing services. We need to be secure. And we really do need to eke out each additional microsecond in computing. No, if it ain’t broke, break it!

So, once in a while, generally once a year, we update the platform. Update the libraries. Update the tools. Update the databases.

And we do it right.

Firstly we try the new libraries on our development machines. Homebrew installs make that easy for the dependencies. Rake tasks make it easy to upgrade our Ruby dependencies and Rails versions. We build and test our code in a migration branch and make sure it all works, changing to new interfaces and formats where necessary.

We then spin up a virtual machine on our production operating system (CentOS 7 now), install the new compiler and dependencies, and rebuild all there. Given that issues are mostly resolved in development, we only find g++ quirks in this test.

And then one weekend, we run the scripts to update our production servers to the new tools and dependencies and deploy the new versions.

And since we do this every year, it runs like a well-oiled machine.

It helps that we have tools to recompile, run and test our entire code base. It helps that we have tools to stage and deploy all our code automatically. And it helps that we have done this before, and will do it again.

Long term, the benefits are amazing. We can try new platforms with ease. Our code gets better, faster and more secure all the time. The need for workarounds and hacks and platform specific coding becomes less and less. The ability to nimbly move our code base grows each time.

Many of the projects we want to take on become possible after the annual upgrade. That’s why we do it.

If it ain’t broke, break it.

And it really is not that much work!

Follow the author as @hiltmon on Twitter.

Minimal Project Management - 6 Months Later

Just short of six months ago, I wrote about how I was transitioning to Minimal Project Management as my team was growing at work. So, how did it go? Did it work? Any Problems?

In short, after a few false-starts getting our heads around the intent of the Statement of Work document, it went — and continues to go — very well. Projects we used to start and never finish are now completing and shipping. Communication within the team and with our users is better. And our throughput is up.

In fact, now that the progress and task lists are more visible to management and users alike, assignment and prioritization is also better. The Management Team is more aware of the complexities in projects — from the Statements of Work — and why they take so long — sometimes — to get done. We are also less likely to kill an ongoing, assigned piece of work when the progress and involvement is clear. We also think longer and harder about what we really need to get done next instead of just assigning everything to everyone on a whim.

The Statement of Work has evolved into a thinking tool first, and a communication tool second. My team now uses the time writing the Statement of Work to think through the options, the details, the knowns and unknowns, the questions needed to be asked and answered. They are spending more and more time working the document up front instead of diving into coding and “bumping” into issues. Just the other day, one of the developers commented that the programming for a particular project would be easy now that the way forward was so clear.

I do also see our productivity going up. We may take more time up-front to write a document, but we are taking way less time when coding, testing and shipping as we’re not futzing around trying to figure things out. The total time to ship is dropping steadily as we become more habitual in thinking things through and writing them down clearly.

Our users also look these over. This leads to discussion, clarification, and the setting of expectations as to what will actually be shipped. It also leads to more work, but we track these add-ons as separate change requests or future projects. When we ship, our users are far more aware of what the changes are and how they impact them.

The weekly review is also easier because, since the whole team reads all Statements of Work, we all know very well what each other team member is working on. For fun, I sometimes get team members to present each-other’s deliverables for the week, a task made easier by the process we follow.

Some things have not changed much. We still get a large number of interruptions, but my team is far more likely to triage the request and decide whether to accept the interruption and fix the issue, delay it until they get out of the zone, or push it off as a future project to deal with later. We still get a large number of scope changes, and these too get triaged better. And we do get fewer priority changes, mostly because those that change the priorities see the work already done and are loathe to interrupt.

Of the issues faced, most have been resolved through practice.

Programmers would rather code than write documents. So the first few Statements of Work were treated as a speed bump on the way to coding up a solution, a necessary step to please the “old man”. After running through a few iterations, the benefits of doing the thinking, checking and discussions up front became quite clear. Writing is still a drag, but the benefits are now clear and there is more enthusiasm for writing and reviewing these Statements of Work within the team.

The other issue, the level of detail to be written, is also being resolved through practice. Initially they wrote very high-level Statements of Work, hoping to resolve assumptions and misunderstandings during coding — the old way. But as the early reviews by me and by users showed them, their readers were confused, identified missing components, pointed out areas not documented and therefore not thought about (or thought through), and some were just plain wrong. The next iterations were more detailed, and the next more detailed in areas where details were needed. We’re still evolving where and when to dive deeper in a Statement of Work and where not to, but the documents are certainly getting better and the coding process way faster.

The result of the change to Minimal Project Management is easy to see. More projects getting shipped correctly and quicker, with better discussion and problem solving up front and faster coding to the finish line. And our communications and prioritization processes run smoother.

Follow the author as @hiltmon on Twitter.

Attractive Multi-Line SQL Statements in C++

I often need to embed SQL statements in C++ code. Unfortunately, many of mine are long and complex. Even the simple ones are wide. This leads to the following ugly code:

std::string s8("SELECT id FROM lst_quotes WHERE route_id = $1 AND lst_request_id = $2 AND quote_id = $3;");

… which means I need a massively wide screen to view it (and it violates my 80-column rule), it’s not formatted legibly and even with wrap, it’s hard to understand — and this is a simple example. It’s a maintenance nightmare.

Going multi-line, which is how SQL is usually written, makes things worse:

std::string s8(
  "SELECT id "
  "FROM lst_quotes "
  "WHERE route_id = $1 "
  "AND lst_request_id = $2 "
  "AND quote_id = $3;");

C++ compilers helpfully merge the strings, but I need to put in the quotes around each line (and have a space at the end of each line before the closing quote).

Or this monstrosity:

std::string s8("\
  SELECT id \
  FROM lst_quotes \
  WHERE route_id = $1 \
  AND lst_request_id = $2 \
  AND quote_id = $3;");

… where the end-of-line backslashes still need to be added or the compiler gets upset (and the leading whitespace on each line becomes part of the string).

Ugly. Hard to maintain. Hard to read. Impossible to copy and paste. Unmaintainable.

I want to be able to paste in SQL. Just SQL. As Is. From my database query tool.

The Solution

The solution is a simple C++ variadic macro placed at the top of the file:

#define SQL(...) #__VA_ARGS__

When used, this macro concatenates all lines between the parentheses, gets rid of newlines and, as an additional bonus, converts multiple white spaces into single ones. So this code (note the sexy formatting and excellent use of white space):

std::string s8( SQL(
    SELECT id
    FROM lst_quotes
    WHERE route_id = $1
    AND lst_request_id = $2
    AND quote_id = $3;
));

Looks and works great. I can format, make legible and paste SQL in as necessary — the way SQL was meant to be.

When compiled, the resulting string is:

SELECT id FROM lst_quotes WHERE route_id = $1 AND lst_request_id = $2 AND quote_id = $3;

… which is the desired compact version to pass on to the server.

Legible. Easy to maintain. Easy to read. Simple to copy and paste. Very maintainable.

Follow the author as @hiltmon on Twitter.

Apple Watch - After 1 Year

The Apple Watch turns 1 year old next week. If you follow the popular press, you’d think the device was rubbish and a complete failure.

I vehemently disagree.

It may be a limited version one, but it is a flawlessly engineered timepiece that is conducive to small, yet significant, life hacks.

I wear, and continue to wear, my Apple Watch every day. I have done so since the day it arrived, the result of a late night wake-up alarm, a few taps on the Apple Store app and a return to sleep on opening night.

The Rings

If there is one reason — the top one by far — that I wear the Watch as much as possible, it’s those perky exercise rings. I have set the calorie and exercise-time goals just above my average for a day with gym exercise. After a year, I am still gamed into walking the long way home and feeling bad on those lazy Sundays when the rings gain almost no color.

The Apple Watch has quietly encouraged me to move and exercise in ways that I have never been able to do myself. I have watched my average heart rate for a 30 minute walk drop to normal levels. And the stand-up reminders, which I use to get up and refill my water glass, seemed to keep me healthier this past year.

The Notifications

I could easily replace the Apple Watch with a dedicated fitness device to track the rings that rule them all, but nothing saves me more time than having notifications on my wrist.

As described in last year’s How the Apple Watch Has Changed My Behavior for the Better, the process to view notifications is much faster (and less error-prone) and the need I feel to react to them is much smaller when viewing them on the watch.

But over the year I have done something few people have. I have added new notification sources to my world, without noticeably increasing my notification volume. Most of the new notification sources are internal to work, notifying me when systems fail or have issues. I feel these vibrations on my wrist and know — just know — whether to interrupt what I am doing and respond. The result: fewer business issues and faster responses when they do occur.

The Time

I was a watch wearer before, and will be for life. The device on my wrist needs to be an excellent timepiece, in design, feel, engineering and in allowing me to glance and “know” the time. Prior to the Apple Watch, I wore a Titanium Special Pilot Edition Citizen Eco-Drive watch, a solar-powered engineering marvel. Its face was as familiar to me as my own.

For much of the year, I used the Utility face on the Apple Watch to transition with an analog display. It took no time to get used to. And the wrist flick needed to activate the screen works every time for me. It’s the same movement I guess I used with the old watch. I still switch to Utility for dress-up.

These days I run the Modular face to see more data in the same glance. And reading that, too, has become habit.

The Next Appointment

It’s not unusual for people to pop appointments in my calendar at work. My fault, I gave them access to my calendar for just that purpose. When deciding what task to work on next, I need to know when my next appointment is. If it’s far away, I will select a programming challenge and enter the zone. If it’s nearby, I will work on something smaller requiring less focus.

Before the Apple Watch I would go to my computer and launch the Fantastical menu bar applet to see what’s next. But even that requires the eye to scan down the list. Fantastical fades past appointments to make this process easier.

On the Watch, I just flip my wrist up and the next appointment is in the middle of my Modular view. Way quicker.

The Weather

I live in a high-rise in Manhattan. The best weather report comes from looking out the window. But I used to have no idea the temperature range outside as the building is heavily heated. A bright sunny day may look warm from a heated room, but be blisteringly cold.

Having the current temperature (in Celsius - I am not an animal) complication on the watch face has saved me many times from going out without an appropriate coat on freezing sunny days.

The Apple Pay

It took a while to figure out the double-tap needed to trigger Apple Pay on the watch. But once figured out, I use it more than Apple Pay on the iPhone. Even late at night after a few drinks, I can Apple Pay for a Yellow Cab with ease.

Aside: I just wish more retailers in the USA supported contactless payments. Some, like my local supermarkets, do. Many, like my local big-chain chemist (who has it on the scanner but has stupidly disabled it), restaurants and take-out food places do not. Would someone please drag these neanderthal companies into the twenty-first century to join the rest of us?

The Band

I purchased the Black Apple Watch Sport with the black sport band on day one — the nerd version. While that was awesome, I missed having a metallic band like my old watch. I seriously considered purchasing the black link bracelet but felt it just was too expensive for my tastes. I loved the look and feel of the Milanese Loop, but the silver looked terrible with the black Watch.

Recently, Apple released the black Milanese Loop. I tried my luck and the Grand Central Apple Store had one the next day which I purchased. And it’s amazingly great, from a quality watch band feel and engineering perspective. My plan was to wear the Milanese Loop for work, and switch to the Elastomer bands for gym. I did that once. The Black Milanese is now the permanent band, er, for now.

The Apps

As expected, I rarely use Watch Apps unless I want to drill more on a notification. The slow launch times have improved with watchOS 2, but are still too slow.

I do use the Exercise app most days and love that each exercise is now saved and shown on the iPhone.

I also, rarely, answer the phone on the Watch. It works great, but holding my wrist up awkwardly while talking feels weird.

And that’s about it.

A Grand Start

For a device that requires charging every night, has the slowest setup and app launch times, and is a tad bulky, it is still a grand start for a product. Its capabilities and utility far outweigh its first-version flaws. The press has it wrong about it being rubbish.

It also sold more than all Swiss watches in Q4 2015 and would probably make it into the Fortune 500 as a stand-alone one-product business. And it’s only been one year. The iPhone product business was in a worse state at the same stage of its evolution. The press has it wrong about it being a failure.

I am very happy with my Apple Watch, as much with the device as with how it has immeasurably improved my quality of life and behavior. No other device, including the iPhone, has hacked my ways as quickly, efficiently and unobtrusively as the Apple Watch. And this is just at version one in year one. Failure, my arse!

Follow the author as @hiltmon on Twitter.

Spotlight Only - Nine Months Later

I think one should review one’s productivity tool load-out every once in a while. Operating system updates, other productivity tool updates and your own work practices change over time. Your tool load-out should too. Changing the muscle-memory, it turns out, is surprisingly simple, quick and easy. And your productivity usually increases.

I am a huge fan of keyboard launcher/productivity applications like LaunchBar, Alfred, and, back in the day, Quicksilver. They were amongst the first applications installed on any new system, and I believed I could not work productively without them.

Nine months ago I rebuilt my 15-inch MacBook Pro for some forgotten reason and decided to see if I could operate productively with Apple’s built-in Spotlight only for the core features that LaunchBar and Alfred provided.

To make it clear though, my use-cases for these products were basic, mostly using them as shortcut launchers. I never used the advanced scripting features, rarely added plugins, forgot about the additional actions on results and never touched the clipboard histories provided. Mostly because I had Keyboard Maestro juiced up to take care of those functions and more.

It’s been nine months, and I am just as happy and productive as ever. Apple did a great job with the Yosemite Spotlight power-up, and the El Capitan update made it just that much better.

So here’s a core set of Spotlight features — it’s a short list — and how it compares with Alfred or LaunchBar:

Application Launcher

Spotlight launches applications just as well as the others, including with abbreviations. For example, to launch Navicat Premium Essentials, a Spotlight of npe puts it at the top as expected.

Result: Just as good and quick.

Text/File Finder

Type a few words and it finds matching files and their contents very quickly. Unlike the commercial applications, Spotlight returns far fewer results in the HUD screen, but you rarely need more than the top four to find the file you want. Also, since El Capitan, it now searches on partial strings. Note that I also needed to add a Markdown plugin to make it work perfectly for me.

Result: Mostly the same, a longer and customizable result list would be nicer. I know you can resize the Spotlight screen, but I want more results per category, not a larger screen showing more categories.

Contacts

Type the first few letters of a person’s name and Spotlight shows their contact card. Move the mouse over an email or phone to get a click-through icon to send a message, etc. The commercial applications are much better here, allowing you to keep your hands on the keyboard and select an action from the card.

Result: Not as good, but enough for me to see the phone number I need to punch in.

Web Search

Spotlight does have the ability to search the web via Bing (shudder). I do not use this. If Spotlight could use Google or DuckDuckGo it would be a different story. Instead I have a keyboard shortcut in Keyboard Maestro that launches Safari and allows me to search DuckDuckGo in one keypress. So I turned this off on day one; Bing search is rubbish.

Result: The third party applications do this way better.

Actions on Results

One thing Spotlight does not do is provide more actions once a result is found. You cannot do anything more with a found result except open it; you cannot even select the application to use or run a macro on it. Since I never used that feature, I don’t miss it.

Result: If this is your primary way of using Alfred or LaunchBar, and I suspect that’s how most of you use them, this missing feature is a showstopper.

Other Searches

I rarely use Spotlight to search for a stock price, weather, sports score or local movie time, these things are all far more conveniently available on my iPhone (and I have notifications set up for the important stuff).

Result: Same, same.

I am sure there is a lot of functionality that I could be missing out on, but since I am pretty much all-in on Keyboard Maestro, Apple’s built-in Spotlight works just fine for my launching and searching needs. Anything more complex gets a keystroke macro in Keyboard Maestro.

Follow the author as @hiltmon on Twitter.

Text Expansion Using Keyboard Maestro (First Cut)

This post presents how I have set up Keyboard Maestro to replace basic text expansion from TextExpander … so far. This post covers (more to follow at some point):

  • Basic text expansion
  • When to use copy vs typing
  • Limiting to applications
  • Basic variables

Basic Text Expansion

The basic text expansion macro looks like the macro on the right.

  • It is triggered when a string is typed, in this case ;gma
  • It has a single action, Insert text by Typing, containing the text to be typed, in this case git pull; make clean; make -j 8.

That’s all there is to it. Nice and simple.

Type ;gma anywhere and Keyboard Maestro makes the replacement.

Insert text by typing vs pasting

Almost all the time, Insert text by typing is the right way to go. It’s fast enough and does not affect the system clipboard. However, for long strings, typing may be too slow.

In these rare cases, Insert text by pasting is way faster. But you need to add another step to the macro. Add a Set Clipboard to Past Clipboard step after the paste to reset the clipboard back by 1 in Keyboard Maestro’s history. (Thanks to @TheBaronHimSelf for this tip.)

Limit To Application

Many of my snippets apply only to specific applications. To limit snippets to an application (or set of them), I create a new Group and make it available in a selected list of applications.

The snippets in this group only expand in Xcode.

Basic Variables

Keyboard Maestro has many of the same variables and abilities as TextExpander (and a whole bunch more, of course), including:

  • Position Cursor after typing %|%
  • The current clipboard %CurrentClipboard%

So, for example, to create a Markdown Link using a URL on the clipboard and place the caret (the text insertion cursor) in the description area, I can use my ;mml macro.
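As an illustration (the macro itself appears only as an image in the original post), the Insert Text action for such a Markdown-link snippet can combine the two tokens listed above:

```
[%|%](%CurrentClipboard%)
```

Keyboard Maestro types the brackets, expands %CurrentClipboard% to the URL inside the parentheses, and leaves the caret between the square brackets, ready for the description.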


Or to create a date heading in an Assembly Note, I can use my ;mmd macro.

This types:



You can format the date any way you like, of course.
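As a hedged sketch (the exact format string here is my assumption, not the post’s), such a date-heading snippet could use Keyboard Maestro’s ICUDateTime token:

```
## %ICUDateTime%EEEE, MMMM d, yyyy%
```

which types a heading like `## Monday, April 4, 2016`.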

To see what variables are available, click the Insert Token dropdown on the right of the action panel. As you can see, there is a huge number available.

I have managed to replace the majority of my TextExpander snippets using the basic text expansion macro described here, and it’s working great.

Next up: doing these with sounds and with more advanced abilities.

Hope this helps.


Apple at 40

Apple turned 40 this week, and it got me thinking about the past 40 years of our individual computing experiences.

In many ways, my own journey to now parallels that of Apple.

And I’m willing to bet your journey is similar.

The 1980s - Youthful Experimentation

In the early 1980s, Apple was young, surrounded by a wide range of competitors and the Apple II was it. Everybody who could, had one. They used them to work, to play, to learn programming and to experiment.

When the Apple Lisa project was announced and plastered all over Byte magazine, we all devoured each word written about it. We argued whether the Apple III or the Lisa was better (I was a Lisa), but both disappointed.

In 1984, Apple released the Macintosh. And changed the world.

In 1987, Apple released the Macintosh II. If there was ever a computer I wanted in the 1980s, that was it. That plus the LaserWriter recreated the entire publishing industry.

I view the 1980s Apple as a time of youthful experimentation. They experimented with several new platforms, took major risks, created unique products (some great, some horrible) and set out to change the world. The world fell in love with the GUI and the mouse.

In parallel, I was doing the same. I had Sinclair computers back then (talk about a unique platform) which were the only ones we could afford. When I went to university, I built my first PC clone, ran MS-DOS to learn programming using Turbo Pascal, and Xenix (later Minix) for everything else. I fell in love with computing and UNIX.

The 1990s - Suit Wearing Corporate Life

The 1990s were Apple trying to be corporate and becoming quite miserable about it. Time after time, Apple produced the same boring beige boxes, boring updates to the operating system, and struggled to compete against IBM-compatible systems and the Microsoft juggernaut.

Apple was trying, as all young folks do in their first jobs, to fit in to a society they did not understand and felt powerless to change. They simply did what they thought the world expected of them. They tried to act like grown-ups and play the corporate game against older, powerful, entrenched interests, and had their spirits crushed.

It’s not that Apple did not create great things in the 1990s, it’s just that they were few and far between. The PowerBooks of 1994, the Newton and System 7 (IMHO) stand out in my mind.

In parallel, I started programming, managing projects and consulting — and wore a business suit every day. Since the corporate world was on PC compatible systems, that’s what I used. MS-DOS at work, Minix at home, Windows 95 at work, System 7 at home. I did this because I thought that was what was expected of me. To act like a grown-up, settle down, suit up and play by the rules of others. It crushed my spirit, and I was miserable.

By the late 1990s, Apple was doomed. Something needed to change.

By the late 1990s, I was miserable. Something had to change.

The 2000s - Finding the Bliss

The return of Steve Jobs via the reverse acquisition of NeXT was the trigger for Apple to Think Different again. Its moment of change had come. The new iMac design language took hold, from the Bondi blue model in the late 1990s, through the beautiful iMac G4 lampshade model, to the current slab design on the desktop. The powerful PowerMac G4 Quicksilvers with their unique handles led to the amazing all-metal G5 models, the new PowerBook G4s and, later, the MacBooks.

And more. OS X was introduced and blossomed. The Intel transition happened. And the iPod became the most iconic, must-have product for our generation.

Apple’s products became Apple’s again. They had found their bliss. And the market found it with them. Apple changed to doing what it wanted to do, what it loved and that showed. It found its market wanted the same and shared their love of great design, music, experience and reliability.

In parallel, so did I. I replaced the suit and meetings and Windows PC with jeans, an IDE and a Titanium PowerBook G4. I changed countries (twice) and worked on the products that I wanted to work on and make great.

I had found my bliss. I was doing what I loved and was free to also live my life surrounded by people I loved doing fun things at work and especially at play.

By the late 2000s, Apple was a successful and confident organization. It had proven itself to itself and the world and was surrounded by friends. It was ready to expand its reach. And it did so in the most incredible way, by launching the amazing iPhone. No other firm could have done it; it required the unique kind of creativity and operational chops that only a happy, confident Apple could deliver. The iPhone became the one icon to rule them all.

As was I. Well, successful, I mean. It’s because of this bliss that I was able to move to New York, do the work I wanted to do, create some of my best products and run my own consulting business here.

All using Apple products.

The 2010s - Living the Life

By the start of the 2010s, Apple was confidently living the life. The passing of Steve Jobs and the handover to Tim Cook did not change who or what Apple was. Apple had gotten better at things it traditionally was terrible at, like services, and even better at things it was good at, like design, manufacturing and innovation. Yet it was still finding more bliss. The iPad, Apple TV and Apple Watch may not be seen as super-successful products compared to the iPhone, but each on their own would be a Fortune 500 company!

Apple has gotten confidently comfortable with who they are, what they do and how they go about it. They continue to innovate in other areas, continue to press forward, continue to enjoy what they love. They have not stagnated or settled down. They continue to youthfully experiment yet deliver like a mature firm.

In parallel, so have I. I live where I want to, do what I love to do, use the products I love to use. I am continually working on becoming an even better software designer, programmer and person. And finding more bliss. Like writing and traveling again.

I am confidently comfortable with who I have become, what I do and how I go about it. But I am not ready to settle down. I continually try new tools, languages and approaches. I continue to youthfully experiment yet deliver like a pro.

Onwards and Upwards

Apple at 40 (and in parallel, myself a few years older) is a master of many things, it has put in its 10,000 hours. But being a master of one, two or even ten things is not good enough for either of us. We continue to experiment, to try, to put 10,000 more hours into new ideas, experiences and technologies.

I cannot see Apple slowing its pace of innovation, change and expansion. It’s who Apple is now and who Apple always wanted to be. The path to here was long and winding, and full of bumps. The path forward will be too. And because Apple Thinks Different, it will always be different and misunderstood and underestimated. Apple at 40 does not care what others think, it has found its bliss and will continue to push forward, writing its own story.

I intend to do the same. Et vous?


Dependency Limited and Conflict Free C++

TL;DR: Beware of libraries you need to compile yourself and of copy-pasted code; the performance, maintenance and other hellscapes you create are not worth it in the medium and long run:

  1. Do not use dependencies that have dependencies that you have to compile.
  2. Do not use libraries depended on by dependencies anywhere else.
  3. Solve your own problems and understand the solutions. Do not copy-paste from the web.
  4. Always write your own code where performance and maintenance is critical.

This post specifically targets C++, but is really a general set of rules and advice. See NPM & left-pad: Have We Forgotten How To Program? for a NodeJS scenario that this would have prevented.

I write a boatload of C++ code these days, developing on the Mac and deploying to Linux servers. All of it is dependency limited and conflict free. What does this mean? It means that I never have to deal with multiple versions of dependency libraries or on dependencies that have their own conflicting dependencies. At best, I code in straight-up C++11, use the STL and rely on vendor precompiled libraries that are dependency limited. The result is fast code that I and my team can read and maintain, deploy and run on all platforms with ease.

Dependency and Conflict Hell

When I started writing these products, I went mad and used a ton of third-party libraries. The theory was that these libraries were out there, already written and tested, everyone used them, and I could leverage them to short-cut my development time and effort.

And it worked.

For a very short while.

Within weeks of starting, I found one of the libraries I was using stopped compiling. It started throwing really odd errors, yet nothing had changed. It turns out that this library relied on code in an old version of another library that had been deprecated in later versions. I had added another dependency that needed a newer version of the same dependent library, and all hell broke loose.

The UNIX operating system solves dependency hell by allowing you to link your application against specific versions of libraries, so I could link to, for example, Boost v1.30.1 for the first dependency and Boost v1.52.0 for the other dependency, as long as each was compiled into a separate library! Which means maintaining two installs of Boost and two environments just to get my dependencies to compile. And if I add another dependency that, say, requires a third version of Boost, the complexity increases again.

There are many problems with this:

  • When it comes to design and architecture, I need to split my dependencies into separate libraries that compile with their own dependencies and then link to them in the main application, or use static linking, which is not the preferred option.

  • When it comes to maintenance, I need to document where each dependency is and where it’s used, and somehow describe the Gordian Knot to myself and my team for use in six months’ time, without context.

  • When it comes to setting up a development environment, I need to somehow save old versions of dependencies and make complex Makefile trees to generate the correct versioned libraries.

  • When it comes to compiling, I have to compile a lot more components or use static linking to ensure that the right library is linked and the right functions called, increasing executable size, memory use and complexity.

  • And when it comes to deployment, I have to build this hellish mess for each platform.

Aside: Debugging and Maintenance Hell

Solving the above takes time. It’s not hard, and once it’s been done, you could argue for smooth sailing. I expect this is how most teams do it.

Until something goes wrong and you start to debug. I don’t know about your tools, but mine always seem to find the “Jump to Definition” code in the wrong version of the wrong dependency every time. Which means that trying to find where something fails becomes that much harder. Is the error in the dependency, the dependency version or in my code? Ouch.

Or until time passes, say six months, when a new error is thrown in Production. Six-month-later-me does not remember what current-me knows now, leading to maintenance hell. Not only do we have a production problem and unhappy users, but I will have forgotten all the little tricks and hacks needed to rebuild my dependency-hell knowledge.

And most importantly, I have lost the chance and ability to know and understand the application.

Dependency Limited and Conflict Free

So how do I do this? I follow these rules:

  1. I do not use dependencies that have dependencies that I have to compile. That means using vendor and open-source precompiled libraries that require no additional software installs to use them.
  2. I do not use libraries used by dependencies that may conflict. If a vendor library uses another library, I avoid using that other library in any of my code anywhere.
  3. Where necessary, I solve my own problems rather than relying on third-party, unmaintained code I find on Stack Overflow or GitHub.
  4. I always write my own clean code when performance or maintenance is critical.

I am not saying I do not use dependencies or never look to Stack Overflow or Github for ideas. I’m not that smart. I simply limit my exposure and maximize my ability to read and understand my code and environment, now and in the future, with these limiting rules.

Looking at Rule 1

For example, let’s talk about one of my database library dependencies. It is written using Boost. Which means that the client library that I need to compile against has a dependency on Boost. Following the first rule, I use their precompiled libraries, not their source code, and since Boost is a dependency, I do not use Boost anywhere else (Rule 2). It’s up to the lovely folks at the database company to deal with their dependencies and create good stable libraries and installers for all platforms, and all I do is use their stable binary versions. A nice clean separation of my code from theirs, easy to maintain and easy to deploy.

Looking at Rule 2

Since we are on Boost, let me stay on Boost. These days, it’s almost as if you cannot do anything in C++ without Boost. Every “I’m stuck” question about C++ on Stack Overflow seems to be answered with the phrase “Use Boost”.

I’m not saying Boost is a bad library, it’s not. It’s so awesome that the C++11 standards team nicked all the good stuff from Boost to make the STL for C++11 and C++14.
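To make that concrete, here is a small illustrative sample (my examples, not from the post) of facilities that once meant pulling in Boost but are plain std:: in C++11: shared_ptr, function and thread all started life as Boost libraries.

```cpp
#include <functional>
#include <memory>
#include <thread>

// boost::shared_ptr -> std::shared_ptr (C++11)
std::shared_ptr<int> make_counter() {
    return std::make_shared<int>(0);
}

// boost::function / boost::bind -> std::function and lambdas
int apply(const std::function<int(int)>& f, int x) {
    return f(x);
}

// boost::thread -> std::thread
int double_in_thread(int value) {
    int result = 0;
    std::thread worker([&result, value] { result = value * 2; });
    worker.join();  // wait for the worker before reading result
    return result;
}
```

Sticking to the std:: versions keeps code conflict free: there is no Boost version to clash with whatever the vendor libraries were built against.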

But every darn library out there, every example, every potential dependency seems to use different versions of Boost in different ways. And eventually they conflict because some code somewhere uses a deprecated call or side-effect that conflicts with the same call elsewhere. Following rule 2, I do not use Boost because everyone else seems to. Half of my problems came from other people’s code that used Boost badly.

To reiterate, my problem is not with Boost, it’s awesome, my problem is with how badly it’s used and abused, and rule 2 protects me.

Looking at Rule 3

We all know there are piles of code that are just too tedious to write and have been done over and over again. Loggers, data structures, parsers, network interfaces, great ideas, and simple Stack Overflow solutions. It’s so tempting to just copy and paste that code, get it to compile and move on. I mean seriously, why rewrite a logger class?[1]

Just use one that’s out there and move on, no?

My experience with these has been a case of short-term gains with long-term pains. Oh sure, I can get it going with less work on my end. But what about when things go wrong, as they always do? Or when the application starts to perform so slowly that nothing seems to fix it? Or it’s six months later and the pasted code starts to act funny in production?

Rule 3 ensures I avoid these situations.

Keep in mind, example code and GitHub projects were written with no context, or to solve the writer’s specific problem, scenarios that almost certainly do not apply in my environment or yours. And when things do go wrong, we have no understanding of the code nor how to fix it. Understanding code is more important than saving a few hours or days of developer time.

Looking at Rule 4

I am developing real-time applications for finance, hence the C++, so performance and memory management are critical. The products process vast amounts of data, which means tight control over RAM, CPU caches and even cache lines is important, and any wasted compute cycles add up to performance degradation. All my vendor dependencies have been tested and put through this wringer, so I can trust them to be as fast and memory safe as possible, or I would have selected a different vendor.

But not much else is, mostly because it was not written or tested to be that way. Under rule 4, the only way I know how to get the fastest application is to write it myself. That way I can see, and most importantly understand, where the bottlenecks are, where the memory is going crazy, where threads are running wild and fix it. Copy-pasted code or Github code rarely cuts it.

My Situation seems Unique. It’s Not.

I do understand that my situation seems to be reasonably unique. My applications are large and complex and need to interact with many systems and technologies which means dependency management is critical. The large code base and tiny team environment means that a simple development setup is best. Maintainable and understandable code is more important than getting it written quickly. Production issues will cost us a fortune, which means readable, simple and understandable code is critical to being able to detect and correct issues quickly. And the application needs to be fast and correct and reliable.

For most of you, many of these attributes seem not to apply. Crazy deadlines mean that dependencies and copy-paste code are perceived as the only way to get there. Maintenance is probably not your problem. Apps and requirements are straightforward, hardware is ubiquitous and cheap, and if it takes a few seconds longer to process, who cares. Good for you if that’s your environment.

But mission critical systems need rock solid foundations, architectures and maintainable code. And any additional dependencies, any additional complexities, anything that slows down deployments or maintenance need to be eliminated mercilessly. Sure, it will take longer to write and test. But the cost and time to build dependency limited and conflict free systems pays off handsomely in reliability, maintenance speed and application performance.

No matter your situation, if you cannot clearly understand your development environment and all the application code, you’ll never figure it out when the production system gets slow or goes down. Especially after time has passed and you have been working on other projects.

Not so unique after all.

In Summary

If you are writing anything mission critical, where future maintenance, performance and teamwork is critical, brutally limit dependencies to simplify the development environment, deployment process and maximize your ability to debug and maintain the product. Ensure that all code added to the project needs to be there and is fully understood, and that it does not conflict with any other code in the system.

It means that you will have to write a few more modules yourself, but that investment pays off incredibly later on.

  1. A Logger Class: I referred earlier to a logger class as an example of code that has been done hundreds of times over, and who the heck is silly enough to write another one. Me, that’s who. Why? Because I needed a real-time logger that would not in any way slow down the processing on the primary threads of the application. Almost all logging classes are synchronous: you have to pause the thread while the logger spits its results out to screen and file system. That makes sense when you have the time to wait, and it ensures that log entries are saved before moving on and potentially crashing. Async loggers often collect log entries, then pause everything to batch-send (even if they are on alternate threads). But in a real-time system, the milliseconds (1/1,000 of a second) needed to append to a file or send a log message kill performance that is measured in microseconds (1/1,000,000 of a second). I needed to know exactly how the logger impacts thread performance, and needed to optimize its performance too, and that’s why I wrote my own logger.
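The shape of such a logger can be sketched in a few lines: the real-time thread only enqueues under a briefly held lock, and a background thread does the slow writes. This is a minimal illustration, not the post’s actual logger (which would avoid even this locking); the class and member names are mine, and a real version would write to a file rather than an in-memory sink.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Minimal asynchronous logger sketch: callers never touch the file system.
class AsyncLogger {
public:
    // The sink stands in for a log file so the sketch is self-contained.
    explicit AsyncLogger(std::vector<std::string>& sink)
        : sink_(sink), done_(false), worker_(&AsyncLogger::drain, this) {}

    // Destructor flushes whatever is still queued, then stops the worker.
    ~AsyncLogger() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        ready_.notify_one();
        worker_.join();
    }

    // Called from the hot threads: O(1) enqueue, no I/O on this path.
    void log(std::string line) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(line));
        }
        ready_.notify_one();
    }

private:
    // Background thread: drains the queue and does the slow writes.
    void drain() {
        std::unique_lock<std::mutex> lock(mutex_);
        for (;;) {
            ready_.wait(lock, [this] { return done_ || !queue_.empty(); });
            while (!queue_.empty()) {
                sink_.push_back(std::move(queue_.front()));  // real code: file I/O
                queue_.pop();
            }
            if (done_) return;
        }
    }

    std::vector<std::string>& sink_;
    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<std::string> queue_;
    bool done_;
    std::thread worker_;
};
```

Destroying the logger joins the worker thread, so every queued line is written before shutdown.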