Hiltmon

On walkabout in life and technology

Notification City

It is best when your technology stack tells you what went wrong as soon as it goes wrong, so that the problem gets the right level of attention in the right timespan.

I run a massive technology stack at work, filled up with multiple servers, plenty of web applications, loads of C++ programs and massive numbers of scheduled and recurring tasks, and I do it with an insanely tiny team and no DevOps folks. Almost all of the time, our software works as designed and the entire network of systems runs just fine. Just like yours.

When systems fail, we get notified.

Depending on the nature of the failure, we get notified in different ways. This helps us quickly decide whether we need to stop what we are doing and react, or wait to see if more notifications follow.

And it works.

In this post, I will stay out of the technology and explain our thinking and implementation of notifications, how we send them, monitor them, use them and manage the volume so we are not, in any way, overloaded or subject to unnecessary noise.

Crashes and Notifications

As an intentional design decision, my team writes software that crashes when things are not right. We do not catch and recover from exceptions, we crash. We wrap database changes in transactions so the crash is safe. We do not, under any circumstances, run systems that continuously and expectedly fail and quietly self-restart.
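
To make the crash-safe idea concrete, here is a minimal sketch, assuming a PostgreSQL database and the libpqxx library. The table, column and function names are invented for illustration and our real wrappers differ, but the principle is the same: the work either commits in full, or a crash before the commit leaves nothing behind.

#include <pqxx/pqxx>
#include <stdexcept>

// Hypothetical example: apply a batch of changes atomically. If anything
// throws, or the process crashes before commit(), none of the changes are
// visible to anyone else, so a restart can safely run the same work again.
void apply_batch(pqxx::connection &conn)
{
  pqxx::work txn(conn);

  txn.exec("UPDATE quotes SET processed = true WHERE processed = false");

  // On anything unexpected, throw and let the process die loudly. Nothing
  // above is permanent until commit() below.
  // if (!sanity_check(txn)) throw std::runtime_error("batch failed check");

  txn.commit();  // the only point at which the changes take effect
}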

We rely on notifications to quickly tell us of the crash so we can see what went wrong and rectify the issue. We ensure, where possible, that our error messages are both clear and identify where the crash happened.

This design is justified as our systems are a network of interdependencies, and so a failure in one process over here can impact, or require reruns, over there. Since we are a small team, building a DevOps infrastructure to map and auto-recover on all of these paths, which are constantly changing, is not optimal. We’d spend all our time doing it.

And so we do it simply. Almost all our processes are launched from simple shell wrappers or rake tasks. When an execution fails, the shell wrapper captures the error, and fires off the appropriate notification to the appropriate channel, then logs it and pops it in the right chat room.

Aside: This works because we also design all our processes to carry on where they left off, so even the most critical real-time systems simply pick up again after a crash and restart. How we do that could fill a bunch of posts.

Errors and Failures

No matter how good your software quality, things will go wrong. Programmers introduce bugs, bad data causes failures, hardware fails, and external systems are not always there. Some of these issues are easily dealt with, the rest need human intervention.

For example, a large proportion of our software gets data from a remote location, munges it and bungs it into the database (or the other way around). More often than not, that data is remote and third-party. And reasonably frequently, their server is down, their data is late, or the data is bad.

Of course our code “attack dials” (retries over and over) when servers are unavailable or data is not present, so we do not get notified of these events — that would flood us with useless notifications. But, if the process has been dialing a while, or the data is not available in the dial window, then we get a notification. And if the data is bad, the program that munges it will crash, sending a notification.
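
For illustration, here is a minimal sketch of the dialing pattern. The notify() helper, the function name and the timings are hypothetical; the real code varies per feed.

#include <chrono>
#include <functional>
#include <string>
#include <thread>

void notify(const std::string &message);   // hypothetical chat room notifier

// Retry quietly inside the dial window; notify only once the window is used up.
bool fetch_with_dialing(const std::function<bool()> &fetch,
                        std::chrono::minutes dial_window,
                        std::chrono::seconds pause)
{
  const auto give_up_at = std::chrono::steady_clock::now() + dial_window;
  while (std::chrono::steady_clock::now() < give_up_at) {
    if (fetch()) {
      return true;                        // data arrived, stay silent
    }
    std::this_thread::sleep_for(pause);   // keep dialing
  }
  notify("data still missing after the dial window");
  return false;
}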

Processes that depend on this data do not notify that the data is missing.

Why not?

We already know that the data is missing from the first notification, no need to pile on more notifications saying the same thing. Their failures are also logged, but in a different location and chat room from the primary. This model helps recovery and reduces confusion in identification.

Aside: We also have a Wiki that tracks data dependencies, so we know which processes to rerun after we correct a failure. This wiki lists all the commands to paste, so it’s easy. Whenever we face a new situation, we update the wiki.

Success and Last Runs

Clearly we do not want a notification when an expected process runs successfully; that would create an insane flood of notifications. We still send them (with some third-party software we cannot stop them), just to a different destination. These notifications are saved so we can review them if we want, but they do not alert the team.

Note that failures are also saved, so we can go back and see what fails most often, and where and when.

Live Process Monitor

Real-time C++ programs are more difficult to manage. We write them to be as “bullet-proof” as possible, but they too can fail, and are expected by design to fail. Known bad-data situations are dealt with, but we want unusual situations to take them down.

When these fail, we, the humans, need to drop everything and act. For this we run a mix of open-source and our own home-grown process monitors. As soon as a monitored program fails, we get a notification on a bunch of channels:

  • Our Sonya, an electronic voice on an ancient Mac Pro, loudly tells us what failed. Having a “Bitching Betty” voice state the nature of the problem really gets our attention. Aside: Sonya as in “gets on ya nerves”, thanks British Rail.
  • We get an iMessage, on our phones and watches, for when we cannot hear Sonya.
  • The error Notification Center chat room gets a message as well, which pops up a UI alert.

The live process monitor also watches our “Coal-mine Canary” processes. There are a few threads we run that crash early and quickly when things go wrong, oftentimes quickly enough for us to be on it when the important stuff gets ready to fail. These also get the Sonya alerts.

For example, we have a process called “universe” that runs all day long and it depends on a large number of our systems and services, so it’s the first to die when things go wrong, a perfect “Coal-mine Canary” candidate. When Sonya squawks that “The universe has collapsed”, we know bad things have happened.

Ongoing Notifications

If we see the same notification and deal with the same interruption over and over again, then we know we have an ongoing problem. In this case, we stop all work and get it resolved. The cost in time and lost productivity of dealing with the same issue over and over again is not worth it. Especially in a small team of developers. Taking the time to get it fixed is always worth it.

To be clear, we do not muffle notifications and silently restart “known” failures. We fix them, over and above all other work. Silence means all is well, not “all is well except for the known failures”.

It also ensures that when we do get a notification, we cannot and do not ignore it. The notification signals a real issue. We have no reason to tune out notifications, and therefore no reason to start ignoring them.

Regular System and IT Activities

Of course, being a “normal” tech team, we also leverage the notification infrastructure for regular, non-failure mode notifications. We just send these to a system that logs them, a system we can glance at when we need to see what happened. These notifications are not sent to us humans in the regular chat rooms, so do not bother us. This includes:

  • Hardware monitors reporting successful checks
  • Runs that succeeded
  • Programmer commits
  • System deploys
  • Software updates

Notification Volume Management

Most notification systems projectile vomit notifications; they are as chatty as a flock of seagulls over a bag of chips or a lawyer in court. The problem is that no one can deal with the noise and still spot the real issue, so eventually they tune it all out.

So how do we manage the flood of notifications and keep them to a manageable trickle of things we need to respond to?

  • Rule number one, we do not notify humans for informational purposes or success. That is all noise and we do not send these out, only log them. If the notice is expected or does not require immediate human response, do not send it to people, just save it.
  • Use different channels for different importances. If immediate attention is needed, set off Sonya and the iMessage alerts. If not, send it to the monitored chat room to be dealt with later. And if no response is needed, log only.
  • Notify once and once only. Flooding the chat room with a bunch of notifications triggered by a single failure adds noise and makes it harder to find what caused the cascade. Trust the humans to know what needs to be done to recover from an event.
  • Get an intelligent repeating voice alert, like our Sonya, on the job for systems that must be up to transact business, and keep her repeating the issue every few seconds until the system is back up. It’s noisy and annoying, but others can hear when things are wrong and when they get back to normal. Oh, and do not send these notifications through the normal channels, so they do not fill up your chat rooms.
  • Use a chat room for failure notifications. Firstly, you can get alerts when new messages come in, but more importantly, the responder can identify which notifications have been dealt with by responding to the messages. So, if more than one person is looking, that chat room will tell them which ones have been dealt with, and by whom. That way, not everyone gets involved when an alert comes in. It also allows us to scroll back to see common failures and note what was done to rectify them.

Notification City

In our Notification City:

  • When Sonya starts talking and our iMessages start pinging, we jump. She tells us which real-time system has failed and we go and see why, fix it and restart.
  • When the “Coal-mine Canary” processes fail, Sonya and iMessage let us know as well. We look at the chat room to see what dependency triggered it.
  • When a regular thread fails, it gets posted to the chat room, and we get a UI notification that it happened. We can then see what went wrong, make the necessary calls, get it going again, run the additional processes to recover and respond that the issue was resolved.
  • When all goes well, we get no notifications at all, nothing in the chat room and no interruptions, and we can focus on our work. Later on, we can look at the logs and status screens to see all was well.

This allows us to focus on what we need to do, yet respond appropriately when notified. We’re not inundated with noise or unnecessary messages, so we do not need to tune them out.

When we hear or get a notification, we know that the situation is exceptional, and, depending on the channel, we know whether to jump now or have a few minutes to respond.

Our Sonya has been quiet today, all is well.

Follow the author as @hiltmon on Twitter.

Coding Style Standards in 2017

I’ve been writing software for well over 30 years. I’ve spent well over my 10,000 hours and gotten rather good at it. And I still write to a very rigorous coding style standard.

You’re kidding, right? It’s 2017, code style guides are so passé.

Nope. I’m deadly serious.

Get off my lawn

Some of us remember when coding styles were de rigueur. When you climbed off your commuter dinosaur and joined a coding team, the first document they gave you was the coding style guideline. It was a thick, three-ring binder that covered everything from naming, spacing, commenting, position of braces, white space rules, spaces or tabs, and the line length rule.

And, when you tried to submit your first code, the code you were so proud of, the team destroyed you in review for not following their stupid guidelines. There you sat, knowing your code worked, wondering why these people were being so anal about spaces and where your effin brackets were and why you could not use the letter m as a variable name. “The code works, dammit,” you thought to yourself, “what is wrong with these people!”

The reality was that these folks knew something we rookies did not. That it was easier for them to read, review and smell-check code written the way they expected than to try to decipher yet another programmer’s conventions. It saved them time, saved them effort and allowed the pattern matching engines in their brains to take over to enhance their understanding.

Back then, code quality depended on other people reading, reviewing, understanding and smell-testing your code. It was up to humans to see if things could go wrong, find the issues and get you to fix them before the system failed. This was how the Apollo code was done.

The coding style guideline made that job a whole bunch easier.

The Bazaar

The rise of open source, good semantic tools, formatters and linters, and the rise of the code ninja have led, in many cases, to the demise of the coding style standard.

Most open source projects are a hodgepodge of coding styles because there is no leader, no team-boss and no benevolent dictator. Some, like LLVM and Python, do have such a character, and therefore a style guide. Most do not.

Some languages, like Go, have an opinionated style and provide a formatter. And some teams use formatters to make their code look “better”.

And don’t get me started on projects that intermix various open-source code-bases that use conflicting styles. Aside: generated code has to be excluded as it gets regenerated on each compile. I’m looking at you, Google protobuf!

The big issue is that code these days is less readable by humans, and is less frequently reviewed by humans. Much code is written using a mishmash of open source code, pastes from StackOverflow and a bit of programmer code. Paying homage to some random management-mandated format using a tool does not improve the quality, readability and maintainability of the code.

The great debates

Those of us who do care believe that our coding styles are the best. Of course we do, we came up with them. We use them daily. We find code to our style easier to read. Writing to our style is now a habit.

Bring in another programmer and the war begins. Arguments erupt on line lengths, tabs or spaces to indent, indent sizes, brace positions, early exit, and function lengths. Of course, the other programmer is wrong, their habits are bad, they are idiots and their code looks terrible.

The truth is that these arguments are stupid since no side is “correct”. It’s a taste thing, a personal preferences thing, a habit thing and sometimes a power play.

At the end of the day, it really does not matter what you choose, as long as you all agree and all adhere to the agreement. Win some, lose some, it does not matter.

What matters is being able to read, review, and fix each other’s code with ease. And that requires a coding style standard.

So why still standardize in 2017

Because:

  • Code is meant to be read, reviewed, modified and refactored by humans first, and compiled second. Code written to an agreed style is way easier for humans to process.
  • When not sure what to do or how to write something, the standard steps in. When to switch from a long parameter list to a parameter object (see the sketch after this list), how far you can take a function before refactoring to smaller functions, and where in the code-base to place a certain file are all decided by the style standard.
  • Naming in code is a nightmare. Styles define how to name things, and what case to use, making it easier to choose the right name. Most importantly, the reader can jump to the right inference when reading a name in the code.
  • We don’t care who wrote the buggy line, blame is not what we do. But everyone in the team should be able to read, diagnose and fix it. If you want to find the fastest way to speed up maintenance times and bug detection times, write to an agreed style.
  • The debates are over, we can start to form habits, and we can all focus on writing great code.
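
For example, the parameter object guideline describes this kind of switch. An illustrative sketch with invented names, not a lift from our actual standard:

#include <string>

// Once an argument list grows past a handful of parameters, collapse it into
// a parameter object so call sites stay readable and easy to extend.
struct QuoteQuery {
  std::string route_id;
  std::string request_id;
  std::string quote_id;
  bool        include_expired = false;
};

// Before: find_quote(route_id, request_id, quote_id, false);
// After:  find_quote(QuoteQuery{route_id, request_id, quote_id});
std::string find_quote(const QuoteQuery &query);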

So I guess you use an old standard?

Nope, we update ours every year. Most of the time it changes little. The original space vs indent vs line length stuff remains mostly the same. Those debates are over and the habits formed.

But languages change, language practices change, and these lead to changes in the standard. We learn more about our own code over time. Misunderstandings in naming inferences change the naming conventions, identified bugs lead to file layout changes and better patterns identified by team members get added. And old, unnecessary guidelines are removed.

For example, our 2016 standard added the requirement that single-line block statements must be wrapped in braces in C++, so a “goto fail”-style issue (a stray statement slipping outside an unbraced block) would never affect us. It finally allowed the use of function templates now that our tools can handle them properly — and we programmers finally got a handle on them. It changed the file extension on all our C++ headers to “.hpp” because our compilers treated them differently. And it moved function parameter lists to their own lines in headers so we could improve header file commenting and document generation. Nothing earth-shaking, but still huge improvements to readability.
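
For example, the brace rule exists to prevent exactly this. A sketch of the idea, not a line from our actual code:

// Disallowed by the standard: an unbraced single-line block. The indentation
// suggests the second goto is conditional, but it runs whenever error is
// zero, so the function always fails. That is how "goto fail" slipped in.
int check_unbraced(int error)
{
  if (error)
    goto fail;
    goto fail;   // oops: a stray duplicate, outside any block

  return 0;
fail:
  return -1;
}

// Required by the standard: braces, even around a single statement, so a
// stray line cannot silently join the flow.
int check_braced(int error)
{
  if (error) {
    goto fail;
  }
  return 0;
fail:
  return -1;
}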

So all code is to standard?

Yes, and no. All new code is to standard. We do not stop all work and go back to correct old code; there is too much of it and we have better things to do.

But, each team member knows that if they ever need to touch, change or fix old code, they need to refactor to the new standard at the same time. We do not commit non-standard code. Over time, the old code changes to match the new standard.

Ok, summarize this for me

  • Code is meant to be read by humans first.
  • Code written in an agreed style is way easier for humans to find, read, understand, diagnose and maintain.
  • Moving to a new standard takes time to build the habit, but once it becomes a habit, writing to standard becomes just part of the flow.
  • The standard needs to change as languages change, as programmers get better and as new members join the team.
  • All new code is written to the latest standard, all code committed is to the new standard, all old code is refactored on touch.
  • Coding style guidelines are just as important in 2017 as they were when we rode dinosaurs to work.

Follow the author as @hiltmon on Twitter.

MathJax in Markdown

Adding mathematical formulae to HTML pages is easy these days using MathJax. But I create all my documents in Markdown format on my Mac. This post shows how to add mathematical formulae to your Markdown documents on the Mac and have them preview and export to PDF correctly.

MathJax in Markdown

Adding mathematical formulae to a markdown document simply requires you to use the MathJax delimiters to start and end each formula as follows:

  • For centered formulae, use \\[ and \\].
  • For inline formulae, use \\( and \\).

For example, the formula:

\\[ x = {-b \pm \sqrt{b^2-4ac} \over 2a} \\]

Renders like this from markdown:

$$ x = {-b \pm \sqrt{b^2-4ac} \over 2a} $$

Or we can go inline where the code \\( ax^2 + \sqrt{bx} + c = 0 \\) renders as \(ax^2 + \sqrt{bx} + c = 0 \).

Preview: iA Writer, Byword, Ulysses

Most Markdown Editors have a Preview function, but do not include MathJax by default. To add MathJax rendering in iA Writer, Byword, Ulysses and most others, you need to create a custom template to render the document (I assume you have done this already - see [Letterhead - Markdown Style](https://hiltmon.com/blog/2013/05/23/letterhead-markdown-style/) for an example).

For iA Writer, for example, go to Preferences, select the Templates tab and click the plus below Custom Templates, and choose Open Documentation to learn how to create your own template. Or copy an existing one and rename it.

In the main html file, called document.html in the iA template, add the MathJax javascript header line:

<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>

My template file is very simple:

<!doctype html>
<html>
<head>
    <meta charset="UTF-8">
    <link rel="stylesheet" media="all" href="normalize.css">
    <link rel="stylesheet" media="all" href="core.css">
    <link rel="stylesheet" media="all" href="style.css">
    <script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
</head>
<body data-document>&nbsp;</body>
</html>

Next time iA Writer, Byword or Ulysses loads its preview pane and renders the page, the javascript will run and render the MathJax as mathematical formulae. For example, in iA Writer:

Note: Occasionally the preview will fail to render the MathJax, either because the MathJax is invalid or the refresh fails to reload the Javascript. If you see something like the image on the right, just right-click on the preview-pane and click Reload. That forces the preview pane to reload both the rendering template and the page.

Preview: Marked 2

On the other hand, if you use the magnificent Marked 2 program to render your HTML, well, it has MathJax support built-in. Under Preferences, choose the Style tab and check Enable MathJax.

Note: Marked 2 does not have the intermittent problem of failing to render MathJax properly while you are editing the document.



So there it is, simply add the MathJax using delimiters to your Markdown file and update the previewer to render it.

Follow the author as @hiltmon on Twitter.

On the New MacBook Pros

Much has been written, tweeted and complained about the new MacBook Pros released by Apple last week. Complaints about the 16GB limit, all-in switch to Thunderbolt 3 (USB-C), the removal of the SD-card and MagSafe, the new keyboard, the aged CPUs, the slow GPU, dongles, that they are not “Pro” level machines, and more. More and more influential folks are writing that Apple has forgotten the Mac, that the Mac is doomed or dead.

They are probably right. The new MacBook Pros are not ideal, nor are they for them.

I believe the real issue for these folks is not what Apple did not do, but that there is no viable alternative product out there for them that has the one feature they need and all the benefits of the Mac.

Linux on the Desktop is getting better, even Windows is improving, but it’s not macOS. The applications professional-level users prefer run better in the Apple ecosystem. Several only exist in the Apple ecosystem. And even if the primary applications are cross-platform, the tools, utilities, and productivity applications we use are not available elsewhere.

If there were a better alternative to Apple’s ecosystem, professional users and developers like myself would have already switched.

In the meantime, Apple released new MacBook Pros that are, according to Apple’s base, horrendously compromised.

It’s all kremlinology whether this was intentional on Apple’s part.

Some believe Apple compromised now because the line was aging, they needed to do something and Intel was falling too far behind. But then Microsoft released Surface the day before and it was the same platform, nothing newer inside (except for a better GPU in a thicker, heavier body).

Some believe Apple intentionally made the shift to a new design and ports now, just as they did with USB, floppies and CDs before. Their first machines with the new ports were always compromised, but they got better.

And some believe Apple simply does not care about the Mac. That one does not compute with me. The new design, the new touch-bar, and the effort that went into the look, weight and feel of the device prove otherwise.

I am a professional programmer, writing multithreaded, complex, big-data applications. I should be using a Mac Pro at work (and another at home and in the coffee shop) with maximum cores and RAM in order to compile and run my code and maximize my productivity. But I am also a graphics and print designer, a web developer, a writer, an amateur-photographer and a productivity nut. The MacBook Pro, as imagined by the complainers, should be the perfect machine for me.

The reality is that the perfect machine does not exist for professional you or professional me, it never has and never will. I have always wanted a UNIX-based operating system with a GUI, more cores, more RAM, faster storage, better battery life, a better screen and a thin and light laptop because I walk and work everywhere. You may need a machine with certain ports, more RAM, longer battery life, bigger screen, whatever. Our needs are mostly the same, but differ in significant areas.

I have tried Dell Windows machines, MacBook Pros, Mac Pros, MacBook Airs, Lenovos running Linux and combinations thereof, and the one computer that has met the most of my needs – but never all of them – has been the MacBook Pro. I am writing this on my maxed-out trusty 15-inch Mid-2014 MacBook Pro. The cores have been plenty fast, the RAM sufficient, the SSD good enough, the display great, the battery life the best ever, the ports just fine. But it never was my ideal computer. It was and remains the best I could get to meet the most of my needs at the time.

I have ordered the new 15-inch MacBook Pro, with upgraded CPUs, faster GPU, new larger SSD and the same, but faster, RAM. I do not expect the new laptop to be perfect, none ever has, but I do expect a reasonable performance improvement in compile times and database reads, way better color on my display and a lighter load when walking. It may not sound like a lot, but these small improvements when added up over days and weeks turn into major productivity gains.

What I will not be doing is giving up on the stuff that already makes me so productive. The operating system that I love, the tools that I know and love, and the processes and workflows and muscle memories that help me fly. I see nothing else on the market right now that I can change to that can help me perform better.

I also think that Apple, Microsoft and Google are all being frustrated by Intel, who in turn is being frustrated by issues with their latest process. Knowing Intel, we know they will solve this. Sooner rather than later. And so I do expect next year for all of Apple’s, Microsoft’s and Google’s PCs to move to the next generation of Intel chip-ware that will meet more of professional users’ needs.

Until then, I intend to enjoy the beautiful new MacBook Pro and its productivity improvements when it arrives, and use a few adapters to replace the ports I need to keep going. But I also will look closely at the 2017 MacBook Pros when they come out. And keep an eye on the pro-level iOS/Pencil/Keyboard solution in case it becomes part of a better solution for my needs.

Follow the author as @hiltmon on Twitter.

The Gentleman James V

Last evening a package arrived from Amazon. A package that neither I nor my wife had ordered. A mysterious, enigmatic package. From the outside, there was no indication of its content or provenance.

We discussed where it could have come from. What could it be. Should we open it. Maybe Amazon sent the package to the wrong person. Yet the delivery address was certainly mine.

Finally, I opened it.

It contained a bubble wrapped box, a bunch of packing bubbles and three slips of paper. The first slip was a packing slip describing the content of the bubble-wrapped box. The second was a return slip. Where this package came from remained a mystery.

It was the third slip of paper upon which we hoped to gain the key clue, the source of this package, our mysterious benefactor.

It did not help.

It contained a personal note.

An unsigned personal note.

A note clearly written by someone who knows me and how I live my life.

Someone who spotted an emptiness in my existence that I was unaware of.



The bubble-wrapped box contained a gift, a perfect gift. One borne of great kindness and understanding of my lifestyle and of unknown unstated needs.

From the note and content, it was clear that the sender knew me well. It was also clear that the sender was considerate, kind, wise and understanding. They had taken the time to observe that necessary items, those in the bubble-wrapped box, were missing from my life. That the quality of my life and that of many others would be improved immensely by this gift. They had taken the time to research and select the perfect gift to fill this unknown unstated void. And they had executed, purchased and shipped it.

With a personal, yet unsigned note.

Who could this wonderful, kind, generous, person be? Why had they not signed the note? How does one accept such a magnificent gift from an anonymous source without the opportunity to express heartfelt gratitude and the soul-filling joy such a gift brings.



A mystery was present, the game was afoot.

This angel of awesomeness was to be unmasked and gratitude expressed.

Whatever it took.

However long it would take.

All leads would be followed.

As far as they would lead.

This mysterious messenger, this masked angel, would be found.

And unmasked for all to see their true generosity.



In the end, an email was sent. A sleuth engaged. A night passed. And an email received.

I knew who the culprit was.

I had unmasked the angel of awesomeness.

And had a good night’s sleep.



The message on the third slip of paper in full:

Hilton… something for your office. I couldn’t bear the thought of you drinking from shitty plastic cups. Enjoy!

The bubble-wrapped box contained four stunning glass tumblers. They presented in the style of crystal Manhattan glasses. The perfect complement to the office whiskey collection. The perfect implement to hold and enjoy the Scottish Nectar at the end of a hard day’s work.



A simple call and thank you is not enough.

A note on Facebook neither.

This kind, considerate person needs to be immortalized.

A plaque perhaps.

Maybe have something named after them.

A bridge, a ship, a building, a space shuttle.

I have none of those things.

But I do have a bar.

One I attend regularly.

It is stocked with quality whiskey and bourbons.

All comers are welcome.

It is a place of relaxation, conversation and comfort.

It brings joy to many regulars and guests.



I hereby declare The Gentleman James V bar open.

All glasses will be raised in his honor.

His name will be whispered with reverence.

His contribution to quality of life and joy known and remembered.

And his presence at The Gentleman James V is much desired.



For those of you who got this far and do not know who I am talking about, allow me to introduce James V Waldo. He is a man with the ferocious visage of a Viking biker, an arse that emits a toxic hellstew of gasses not present on the periodic table, the soul of a poet, the intellect of a debater, the wit of a writer, and a heart the size of the moon. A husband. A dad. And a very good and special friend.

TL;DR: Thanks for the Whiskey Tumblers, Jay.

Follow the author as @hiltmon on Twitter.

The Annual Dependency Library Upgrade Process

At work, we write a lot of code. In order to remain productive, we reuse the same proven dependent libraries and tools over and over again. Which is fine. Until we start seeing end-of-life notices, vulnerabilities, deprecations, performance improvements and bug-fixes passing us by. At some point we need to update our dependencies for performance and security.

But it’s not that easy.

Take some of the libraries we use:

  • Google’s Protocol Buffers are amazing. We’ve been on 2.6.1 for ages, but 3.1.0 is out and it supports Swift and Go, two languages we surely would like to use. But the proto2 format we use everywhere is not available in the new languages. We need to migrate.
  • ZeroMQ moved to a secure libsodium base in 4.1, making it much safer to use. But the C++ bindings from 4.0.5 are incompatible. We need to migrate.
  • g++ on CentOS 6 is ancient, version 4.4.7 from 2010. We’ve been using the devtoolset-2 edition, 4.8.2 from 2013, to get C++11 compatibility, with a few library hacks. But that version of g++ produces horribly slow and insecure C++11 code. We skipped devtoolset-3 even though g++ 4.9 was better. devtoolset-4 is out, using g++ 5.2.4 from 2015, still not the latest, but it is much better at C++11 (without our hacks), more secure and faster. Yet it is ABI incompatible. We need to migrate (a version-guard sketch follows this list).
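
During a transition like this we sometimes guard code at compile time so one source tree builds against both the old and the new dependencies. A hypothetical sketch: the version macros are the ones the libraries define, but the guarded bodies are placeholders.

#include <google/protobuf/stubs/common.h>  // defines GOOGLE_PROTOBUF_VERSION
#include <zmq.h>                           // defines ZMQ_VERSION_MAJOR / MINOR

#if GOOGLE_PROTOBUF_VERSION >= 3001000
  // protobuf 3.1.0 or later: proto3-era code path goes here
#else
  // protobuf 2.6.1: keep the proto2 code path alive until the migration lands
#endif

#if ZMQ_VERSION_MAJOR > 4 || (ZMQ_VERSION_MAJOR == 4 && ZMQ_VERSION_MINOR >= 1)
  // ZeroMQ 4.1 or later specific code path goes here
#endif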

The amount of work seems staggering given we have well over 100 protobufs used across our code base, ZeroMQ everywhere and everything is compiled for production using devtoolset-2. The old libraries and tools are a known, proven platform. The current code is stable, reliable and fast enough. It ain’t broke.

The benefits are also hard to measure. Given all the effort to upgrade, do we really get that much faster code, that much more secure code? And what about the code changes needed to support new interfaces, formats and ABIs? What does that get us?

For most IT shops, the discussion stops there. “It ain’t broke, don’t fix it!”, or “All pain for no gain, not gonna play.” They stay on the known tools and platforms forever.

For my IT shop, things are different. We want to use new tools, new languages, new platforms yet remain compatible with our existing services. We need to be secure. And we really do need to eke out each additional microsecond in computing. No, if it ain’t broke, break it!

So, once in a while, generally once a year, we update the platform. Update the libraries. Update the tools. Update the databases.

And we do it right.

Firstly we try the new libraries on our development machines. Homebrew installs make that easy for the dependencies. Rake tasks make it easy to upgrade our Ruby dependencies and Rails versions. We build and test our code in a migration branch and make sure it all works, changing to new interfaces and formats where necessary.

We then spin up a virtual machine on our production operating system (CentOS 7 now), install the new compiler and dependencies, and rebuild all there. Given that issues are mostly resolved in development, we only find g++ quirks in this test.

And then one weekend, we run the scripts to update our production servers to the new tools and dependencies and deploy the new versions.

And since we do this every year, it runs like a well-oiled machine.

It helps that we have tools to recompile, run and test our entire code base. It helps that we have tools to stage and deploy all our code automatically. And it helps that we have done this before, and will do it again.

Long term, the benefits are amazing. We can try new platforms with ease. Our code gets better, faster and more secure all the time. The need for workarounds and hacks and platform specific coding becomes less and less. The ability to nimbly move our code base grows each time.

Many of the projects we want to take on become possible after the annual upgrade. That’s why we do it.

If it ain’t broke, break it.

And it really is not that much work!

Follow the author as @hiltmon on Twitter.

Minimal Project Management - 6 Months Later

Just short of six months ago, I wrote about how I was transitioning to Minimal Project Management as my team was growing at work. So, how did it go? Did it work? Any Problems?

In short, after a few false-starts getting our heads around the intent of the Statement of Work document, it went — and continues to go — very well. Projects we used to start and never finish are now completing and shipping. Communication within the team and with our users is better. And our throughput is up.

In fact, now that the progress and task lists are more visible to management and users alike, assignment and prioritization is also better. The Management Team is more aware of the complexities in projects — from the Statements of Work - and why they take so long — sometimes — to get done. We are also less likely to kill an ongoing, assigned piece of work, when the progress and involvement is clear. We also think a bit more and harder about what we really need to get done next instead of just assigning everything to everyone on a whim.

The Statement of Work has evolved into a thinking tool first, and a communication tool second. My team now uses the time writing the Statement of Work to think through the options, the details, the knowns and unknowns, the questions needed to be asked and answered. They are spending more and more time working the document up front instead of diving into coding and “bumping” into issues. Just the other day, one of the developers commented that the programming for a particular project would be easy now that the way forward was so clear.

I do also see our productivity going up. We may take more time up-front to write a document, but we are taking way less time when coding, testing and shipping as we’re not futzing around trying to figure things out. The total time to ship is dropping steadily as we become more habitual in thinking things through and writing them down clearly.

Our users also look these over. This leads to discussion, clarification, and the setting of expectations as to what will actually be shipped. It also leads to more work, but we track these add-ons as separate change requests or future projects. When we ship, our users are far more aware of what the changes are and how it impacts them.

The weekly review is also easier because, since the whole team reads all Statements of Work, we all know very well what every other team member is working on. For fun, I sometimes get team members to present each other’s deliverables for the week, a task made easier by the process we follow.

Some things have not changed much. We still get a large number of interruptions, but my team is far more likely to triage the request and decide whether to accept the interruption and fix the issue, delay it until they get [out of the zone](https://hiltmon.com/blog/2011/12/03/the-four-hour-rule/), or push it off as a future project to deal with later. We still get a large number of scope changes, and these too get triaged better. And we do get fewer priority changes, mostly because those that change the priorities see the work already done and are loath to interrupt.

Of the issues faced, most have been resolved through practice.

Programmers would rather code than write documents. So the first few Statements of Work were treated as a speed bump on the way to coding up a solution, a necessary step to please the “old man”. After running through a few iterations, the benefits of doing the thinking, checking and discussions up front became quite clear. Writing is still a drag, but the benefits are now clear and there is more enthusiasm in writing and reviewing these Statements of Work within the team.

The other issue, the level of detail to be written, is also being resolved through practice. Initially they wrote very high-level Statements of Work, hoping to resolve assumptions and misunderstandings during coding — the old way. But as the early reviews by me and by users showed them, their readers were confused, identified missing components, pointed out areas not documented and therefore not thought about (or thought through), and some were just plain wrong. The next iterations were more detailed, and the next more detailed in areas where details were needed. We’re still evolving where and when to dive deeper in a Statement of Work and where not to, but the documents are certainly getting better and the coding process way faster.

The result of the change to [Minimal Project Management](https://hiltmon.com/blog/2016/03/05/minimal-project-management/) is easy to see. More projects getting shipped correctly and quicker, with better discussion and problem solving up front and faster coding to the finish line. And our communications and prioritization processes run smoother.

Follow the author as @hiltmon on Twitter.

Attractive Multi-Line SQL Statements in C++

I often need to embed SQL statements in C++ code. Unfortunately, many of mine are long and complex. Even the simple ones are wide. This leads to the following ugly code:

std::string s8("SELECT id FROM lst_quotes WHERE route_id = $1 AND lst_request_id = $2 AND quote_id = $3;");

… which means I need a massively wide screen to view it (and it violates my 80-column rule), it’s not formatted legibly and, even with wrap, it’s hard to understand — and this is a simple example. It’s a maintenance nightmare.

Going multi-line, which is how SQL is usually written, makes things worse:

std::string s8(
  "SELECT id "
  "FROM lst_quotes "
  "WHERE route_id = $1 "
  "AND lst_request_id = $2 "
  "AND quote_id = $3; "
);

C++ compilers helpfully merge the strings, but I need to put in the quotes around each line (and have a space at the end of each line before the closing quote).

Or this monstrosity:

std::string s8("\
  SELECT id \
  FROM lst_quotes \
  WHERE route_id = $1 \
  AND lst_request_id = $2 \
  AND quote_id = $3; \
");

… where the end of line slashes still need to be added or the compiler gets upset.

Ugly. Hard to maintain. Hard to read. Impossible to copy and paste. Unmaintainable.

I want to be able to paste in SQL. Just SQL. As Is. From my database query tool.

The Solution

The solution is a simple C++ variadic macro placed at the top of the file:

#define SQL(...) #__VA_ARGS__

When used, this macro concatenates all lines between the parentheses, gets rid of newlines and, as an additional bonus, converts multiple white spaces into single ones. So this code (note the sexy formatting and excellent use of white space):

std::string s8( SQL(
  
    SELECT id
    FROM lst_quotes
    WHERE route_id = $1
    AND lst_request_id = $2
    AND quote_id = $3;
    
  ));

Looks and works great. I can format, make legible and paste SQL in as necessary — the way SQL was meant to be.

When compiled, the resulting string is:

SELECT id FROM lst_quotes WHERE route_id = $1 AND lst_request_id = $2 AND quote_id = $3;

… which is the desired compact version to pass on to the server.
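
And because the statement keeps its $1-style placeholders, it drops straight into a parameterized call. A minimal usage sketch, assuming PostgreSQL’s libpq; the helper name and surrounding plumbing are invented, and our wrapper code differs:

#include <libpq-fe.h>
#include <string>

#define SQL(...) #__VA_ARGS__

// Hypothetical helper: run the quote lookup with libpq's parameterized API.
PGresult *find_quote_id(PGconn *conn, const std::string &route_id,
                        const std::string &request_id,
                        const std::string &quote_id)
{
  static const std::string query( SQL(

      SELECT id
      FROM lst_quotes
      WHERE route_id = $1
      AND lst_request_id = $2
      AND quote_id = $3;

    ));

  const char *params[3] = { route_id.c_str(), request_id.c_str(),
                            quote_id.c_str() };
  return PQexecParams(conn, query.c_str(),
                      3,                  // number of bound parameters
                      nullptr,            // let the server infer their types
                      params,
                      nullptr, nullptr,   // text-format parameters
                      0);                 // text-format result
}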

Legible. Easy to maintain. Easy to read. Simple to copy and paste. Very maintainable.

Follow the author as @hiltmon on Twitter.

Apple Watch - After 1 Year

The Apple Watch turns 1 year old next week. If you follow the popular press, you’d think the device was rubbish and a complete failure.

I vehemently disagree.

It may be a limited version one, but it is a flawlessly engineered timepiece that is conducive to small, yet significant, life hacks.

I wear, and continue to wear my Apple Watch every day. I have done so since the day it arrived, the result of a late night wake-up alarm, a few taps on the Apple Store app and a return to sleep on opening night.

The Rings

If there is a top — by far — reason I wear the Watch as much as possible, it’s those perky exercise rings. I have set the calorie and exercise time goals just above my average day with gym exercise. After a year, I am still gamed into walking the long way home and feeling bad on those lazy Sundays when the rings gain almost no color.

The Apple Watch has quietly encouraged me to move and exercise in ways that I have never been able to do myself. I have watched my average heart rate for a 30 minute walk drop to normal levels. And the stand-up reminders, which I use to get up and refill my water glass, seemed to keep me healthier this past year.

The Notifications

I could easily replace the Apple Watch with a dedicated fitness device to cover the rings that rule them all, but nothing saves me more time than having notifications on my wrist.

As described in last year’s How the Apple Watch Has Changed My Behavior for the Better, the process to view notifications is much faster (and less error-prone) and the need I feel to react to them is much smaller when viewing them on the watch.

But over the year I have done something few people have. I have added new notification sources to my world, without noticeably increasing my notification volume. Most of the new notification sources are internal to work, notifying me when systems fail or have issues. I feel these vibrations on my wrist and know — just know — whether to interrupt what I am doing and respond. The result, fewer business issues and faster response when they do occur.

The Time

I was a watch wearer before, and will be for life. The device on my wrist needs to be an excellent time piece, in design, feel, engineering and in allowing me to glance and “know” the time. Prior to the Apple Watch, I wore a Titanium Special Pilot Edition Citizen Eco-Drive watch, a solar-powered engineering marvel. Its face was as familiar to me as my own.

For much of the year, I used the Utility face on the Apple Watch to transition with an analog display. It took no time to get used to. And the wrist flick needed to activate the screen works every time for me. It’s the same movement I guess I used with the old watch. I still switch to Utility for dress-up.

These days I run modular to see more data in the same glance. And reading that, too, has become habit.

The Next Appointment

It’s not unusual for people to pop appointments in my calendar at work. My fault: I gave them access to my calendar for just that purpose. When deciding what task to work on next, I need to know when my next appointment is. If it’s far away, I will select a programming challenge and enter the zone. If it’s nearby, I will work on something smaller requiring less focus.

Before the Apple Watch I would go to my computer and launch the Fantastical menu bar applet to see what’s next. But even that requires the eye to scan down the list. Fantastical fades past appointments to make this process easier.

On the Watch, I just flip my wrist up and the next appointment is in the middle of my Modular view. Way quicker.

The Weather

I live in a high-rise in Manhattan. The best weather report comes from looking out the window. But I used to have no idea of the temperature range outside, as the building is heavily heated. A bright sunny day may look warm from a heated room, but be blisteringly cold.

Having the current temperature (in Celsius - I am not an animal) complication on the watch face has saved me many times from going out without an appropriate coat on freezing sunny days.

The Apple Pay

It took a while to figure out the double-tap needed to trigger Apple Pay on the watch. But once figured out, I use it more than Apple Pay on the iPhone. Even late at night after a few drinks, I can Apple Pay for a Yellow Cab with ease.

Aside: I just wish more retailers in the USA supported contactless payments. Some, like my local supermarkets, do. Many, like my local big-chain chemist (who has it on the scanner but stupidly disabled), restaurants and take-out food places do not. Would someone please drag these neanderthal companies into the twenty-first century to join the rest of us.

The Band

I purchased the Black Apple Watch Sport with the black sport band on day one — the nerd version. While that was awesome, I missed having a metallic band like my old watch. I seriously considered purchasing the black link bracelet but felt it just was too expensive for my tastes. I loved the look and feel of the Milanese Loop, but the silver looked terrible with the black Watch.

Recently, Apple released the black Milanese Loop. I tried my luck and the Grand Central Apple Store had one the next day which I purchased. And it’s amazingly great, from a quality watch band feel and engineering perspective. My plan was to wear the Milanese Loop for work, and switch to the Elastomer bands for gym. I did that once. The Black Milanese is now the permanent band, er, for now.

The Apps

As expected, I rarely use Watch Apps unless I want to drill more on a notification. The slow launch times have improved with watchOS 2, but are still too slow.

I do use the Exercise app most days and love that each exercise is now saved and shown on the iPhone.

I also, rarely, answer the phone on the Watch. It works great, but holding my wrist up awkwardly while talking feels weird.

And that’s about it.

A Grand Start

For a device that requires charging every night, has the slowest setup and app launch times, and is a tad bulky, it is still a grand start for a product. Its capabilities and utility far outweigh its first version flaws. The press has it wrong about it being rubbish.

It also sold more than all Swiss watches in Q4 2015 and would probably make it into the Fortune 500 as a stand-alone one-product business. And it’s only been one year. The iPhone product business was in a worse state at the same stage of its evolution. The press has it wrong about it being a failure.

I am very happy with my Apple Watch, as much with the device as with how it has immeasurably improved my quality of life and behavior. No other device, including the iPhone, has hacked my ways as quickly, efficiently and unobtrusively as the Apple Watch. And this is just at version one in year one. Failure, my arse!

Follow the author as @hiltmon on Twitter.

Spotlight Only - Nine Months Later

I think one should review one’s productivity tool load-out every once in a while. Operating system updates, other productivity tool updates and your own work practices change over time. Your tool load-out should too. Changing the muscle-memory, it turns out, is surprisingly simple, quick and easy. And your productivity usually increases.

I am a huge fan of keyboard launcher/productivity applications like LaunchBar, Alfred, and back-in-the-day QuickSilver. They were amongst the first applications installed on any new system, and I believed I could not work productively without them.

Nine months ago I rebuilt my 15" MacBook Pro for some forgotten reason and decided to see if I could operate productively with Apple’s built-in Spotlight only for the core features that LaunchBar and Alfred provided.

To make it clear though, my use-cases for these products were basic, mostly using them as shortcut launchers. I never used the advanced scripting features, rarely added plugins, forgot about the additional actions on results and never touched the clipboard histories provided. Mostly because I had Keyboard Maestro juiced up to take care of those functions and more.

It’s been nine months, and I am just as happy and productive as ever. Apple did a great job with the Yosemite Spotlight power-up, and the El-Capitan update made it just that much better.

So here’s a core set of Spotlight features — it’s a short list — and how it compares with Alfred or LaunchBar:

Application Launcher

Spotlight launches applications just as well as the others, including with abbreviations. For example, to launch Navicat Premium Essentials, a Spotlight of npe puts it at the top as expected.

Result: Just as good and quick.

Text/File Finder

Type a few words and it finds matching files and their contents very quickly. Unlike the commercial applications, Spotlight returns far fewer results in the HUD screen, but you rarely need more than the top four to find the file you want. Also, since El-Capitan, it now searches on partial strings. Note that I also needed to add a Markdown Plugin to make it work perfectly for me.

Result: Mostly the same, a longer and customizable result list would be nicer. I know you can resize the Spotlight screen, but I want more results per category, not a larger screen showing more categories.

Contacts

Type the first few letters of a person’s name and Spotlight shows their contact card. Move the mouse over an email or phone to get a click-through icon to send a message, etc. The commercial applications are much better here, allowing you to keep your hands on the keyboard and select an action from the card.

Result: Not as good, but enough for me to see the phone number I need to punch in.

Web Search

Spotlight does have the ability to search the web via Bing (shudder). I do not use this. If Spotlight could use Google or DuckDuckGo it would be a different story. Instead I have a keyboard shortcut in Keyboard Maestro that launches Safari and allows me to search DuckDuckGo in one keypress. So I turned this off on Day 1, Bing search is rubbish.

Result: The third party applications do this way better.

Actions on Results

One thing Spotlight does not do is provide more actions once a result is found. You cannot do anything more with a found result except open an application, you cannot even select the application to use or run a macro on it. Since I never used that feature, I don’t miss it.

Result: If this is your primary way of using Alfred or LaunchBar, and I suspect that’s how most of you use them, this missing feature is a showstopper.

Other

I rarely use Spotlight to search for a stock price, weather, sports score or local movie time, these things are all far more conveniently available on my iPhone (and I have notifications set up for the important stuff).

Result: Same, same.

I am sure there is a lot of functionality that I could be missing out on, but since I am pretty much all-in on Keyboard Maestro, Apple’s built-in Spotlight works just fine for my launching and searching needs. Anything more complex gets a keystroke macro in Keyboard Maestro.

Follow the author as @hiltmon on Twitter.