Hiltmon

On walkabout in life and technology

On Removing Comments

Today I removed the comments from hiltmon.com for one reason and one reason only: the comment service I used, Disqus, was tracking you across a multitude of sites and selling your data to strangers without your (or my) permission. I no longer want hiltmon.com to be one of those collection points.

I’m going to miss the comments though. Your comments were insightful and gracious, a wonderful way for me to connect with my readers, and for my readers to connect with each other. Unlike more popular sites, I never had to deal with comment spam, bad actors or any of the common nastiness on the internet; just lucky, I guess.

I have deleted my Disqus account and regenerated this site. I have no backup nor copy of any of your information, and now, neither do they. If you want to comment on any post, please do, I love them, just tweet me at @hiltmon.

I’m not 100% privacy clean, yet. Hiltmon.com still uses Google Analytics for site analytics, and now that Carbon has been sold to BuySellAds, I may, very soon, be removing the ad as well.

Follow the author as @hiltmon on Twitter.

Stop and Think

When I started out as a developer and designer, I knew I was clever. When folks asked me to design and develop a software product, I would ask a few questions to confirm that I understood what was asked of me, listen to their answers, then set about making the product. Request, build, ship. Easy!

My mentor, who was definitely smarter than me, used to yell at me to Stop and Think.

When this first happened, I assumed he meant that I had to stop and think about the software I was going to design and create. But that made no sense to me. Why? Because even back then I knew to run a software development process, not to jump straight into coding. And that process would take care of this undefined stopping and thinking. I would have to think to write down what was asked (as Requirements), what was expected (Deliverables) and what I was going to implement to make that happen (Design). All of those required thinking. What was he on about?

Over drinks, he explained it to me.

The stop and think was not about the product asked for; he agreed that the development process would take care of that. The stop and think was to step back and look at the bigger picture: the context, the motivation, the larger workflow or process that this requested product would become a component of, and who or what would be impacted by this project.

I had, naïvely, assumed that the client always knew and deeply understood these things, had done their homework, and had come up with the request, requirements and deliverables based on a perfect understanding of the context. The reality is that they had felt a need, guessed at a partial or bare-bones solution, and asked for it, with no bigger picture or further thought.

That bigger picture, that further thought, that was on me, the designer and developer.

Stop and think was meant to make me ask questions outside the regular development process, questions like:

  • How will the deliverables be used, who will use them, and what are their needs?
  • Why is this product needed at all?
  • Is there a reason that the product needs to be done the requested way?
  • What business problem, no, what real problem, are they trying to solve?
  • And how important, in the flow of things, is this project?

The answers to these non-product questions are what he needed me to stop and think on. You see, if the answers supported the requested product and its deliverables, then we were good. But more often than not, the answers to the bigger-picture questions did not match what was asked for in the product, because the person asking for it had not Stopped and Thought either. By asking these questions, the need, context, purpose, scope and nature of the product would clarify, solidify and change.

For example, a simple data load project sounds straightforward: get the data, write it somewhere. Most folks would spin up a script to load and dump the data into a table just as it came in. It’s fast, it works, and it’s how most folks do it. But if the developer of the loader knows who will read the result, how they want it and why, the developer can design data cleansing and transformation routines, use better naming conventions and drop extraneous columns, loading the data in a form that better suits the needs of the downstream team and saving both teams time and money, for the same development cost.
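The difference is small in code but large in value. Here is a minimal sketch of the idea in Ruby; the feed, column names and cleanups are hypothetical, just to show the shape of a loader that thinks about its downstream readers:

#!/usr/bin/env ruby

# Hypothetical loader: clean and reshape the incoming feed for the downstream
# team instead of dumping it into a table as-is.
require 'csv'
require 'date'

KEEP_COLUMNS = %w[trade_id traded_on quantity price] # drop the extraneous rest

def clean_row(row)
  {
    'trade_id'  => row['TradeID'].to_s.strip,        # clearer downstream name
    'traded_on' => Date.parse(row['TrdDt']).iso8601, # one agreed date format
    'quantity'  => row['Qty'].to_i,
    'price'     => row['Px'].to_f.round(4)
  }
end

CSV.open('clean_trades.csv', 'w', write_headers: true, headers: KEEP_COLUMNS) do |out|
  CSV.foreach('raw_feed.csv', headers: true) do |row|
    next if row['TradeID'].to_s.strip.empty?         # cleanse: skip junk rows
    out << clean_row(row).values_at(*KEEP_COLUMNS)
  end
end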

What really comes out of stop and think are better product definitions, better ideas on how to design and implement them, and a better resulting product.

My points:

  • Knowing the bigger picture helps design a product that fits that bigger picture instead of creating more tech or business debt for later on.
  • Knowing the bigger picture will often change what the product does, saving rework later when it does become clear, and makes everybody happier.
  • Knowing the bigger picture also helps to prioritize projects and project dependencies, and will help when planning and scheduling work.
  • Conversely, not knowing the bigger picture, or assuming the client does, leads to worse software, confusion, miscommunications, and the litany of problems all IT teams face daily.
  • The intellect and knowledge of the software designer is underrated. Maybe the software folks have a better way of getting to the same goal with different deliverables and a different product build.

Follow the author as @hiltmon on Twitter.

Migrate Octopress / Jekyll Posts to Ulysses

I wanted to move my published writing stashed in my Octopress/Jekyll site into my current writing workflow environment, Ulysses. Dragging and dropping the files from the _posts folder was not an option, because:

  • The file names were messy
  • The title is not in the body of the file; it’s in the YAML front matter
  • I wanted to keep the publication date on the imported files

So, I wrote a horrible script to do it.

The script takes a _posts Octopress or Jekyll folder and processes each file into an OUT_PATH as a set of clean Markdown files, making a few changes along the way. It’s a mix of Ruby to handle the YAML front matter and shell commands to do the actual work. And it runs as follows:

  • For each file in the IN_PATH
  • Parse the YAML front matter to get the title and publish date
  • Create a new file using the title and write a Markdown H1 to it
  • Append the existing file data to the new file
  • Depending on how the publish date is formatted (old was a string, new is a time), touch the new file to set the original publish date

After that, I just dragged and dropped the Markdown files into Ulysses which kept the formatting and dates.

The script itself is below. It’s terrible and you dare not use it. But maybe it has some ideas to help you run your own conversion.

#!/usr/bin/env ruby

require 'rubygems'
require 'yaml'
require 'time'

# IN_PATH = "/Users/hiltmon/Projects/Personal/HiltmonDotCom/source/_posts/"
# IN_PATH = "/Users/hiltmon/Projects/Personal/NoverseDotCom/code/noverse/source/_posts/"
IN_PATH = "/Users/hiltmon/Downloads/_posts/"
OUT_PATH = "/Users/hiltmon/Desktop/Blog/"

Dir.new(IN_PATH).each do |path|
  next if path =~ /^\./ # Skip ., .. and hidden files

  basename = File.basename(path)
  puts "Processing #{basename}..."

  # Parse the YAML front matter to get the title and publish date
  posthead = YAML.load_file(IN_PATH + path)
  title = posthead['title'].gsub("'", '') # Strip quotes so the shell commands below do not break

  # Create a new file named for the title and write a Markdown H1 to it
  create_cmd = "echo \"# #{posthead['title'].strip}\n\" > \"#{OUT_PATH + title}.md\""
  %x{#{create_cmd}}

  # Append the original post (front matter and all)
  append_cmd = "cat #{IN_PATH + path} >> \"#{OUT_PATH + title}.md\""
  %x{#{append_cmd}}

  # Set the file time to the original publish date
  # (older posts store the date as a string, newer ones as a Time)
  file_date = posthead['date'].is_a?(Time) ? posthead['date'] : Time.parse(posthead['date'])
  touch_cmd = "touch -t #{file_date.strftime("%Y%m%d%H%M")} \"#{OUT_PATH + title}.md\""
  %x{#{touch_cmd}}
end

The result: a folder of cleanly named Markdown files, each with its original publish date, ready to drag into Ulysses.

Follow the author as @hiltmon on Twitter.

Notification City

It is best for your technology stack to tell you what went wrong as soon as it goes wrong, so that the problem gets the right level of attention in the right timeframe.

I run a massive technology stack at work, filled up with multiple servers, plenty of web applications, loads of C++ programs and massive numbers of scheduled and recurring tasks, and I do it with an insanely tiny team and no DevOps folks. Almost all of the time, our software works as designed and the entire network of systems runs just fine. Just like yours.

When systems fail, we get notified.

Depending on the nature of the failure, we get notified in different ways. This helps us quickly decide whether we need to stop what we are doing and react, or wait to see if more notifications follow.

And it works.

In this post, I will stay out of the technology and explain our thinking and implementation of notifications, how we send them, monitor them, use them and manage the volume so we are not, in any way, overloaded or subject to unnecessary noise.

Crashes and Notifications

As a deliberate design decision, my team writes software that intentionally crashes when things are not right. We do not catch and recover from exceptions; we crash. We wrap database changes in transactions so the crash is safe. We do not, under any circumstances, run systems that are expected to fail continuously and quietly self-restart.
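The pattern is simple enough to sketch. Here is a minimal, illustrative version using the Sequel gem and an in-memory SQLite database as stand-ins for our real data layer; the feed format and table are made up:

require 'sequel'

DB = Sequel.sqlite # in-memory database, a stand-in for the real one
DB.create_table(:positions) do
  String  :symbol
  Integer :quantity
end

# Raises on any malformed line -- by design, bad data should crash the run.
def parse_feed(path)
  File.readlines(path).map do |line|
    symbol, quantity = line.strip.split(',')
    raise "Bad feed line: #{line.inspect}" if symbol.nil? || quantity.nil?
    { symbol: symbol.strip, quantity: Integer(quantity) }
  end
end

# If parsing or inserting raises, the transaction rolls back, the process
# crashes, and the database is left exactly as it was before the run.
DB.transaction do
  parse_feed('positions.csv').each { |row| DB[:positions].insert(row) }
end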

We rely on notifications to quickly tell us of the crash so we can see what went wrong and rectify the issue. We ensure, where possible, that our error messages are both clear and identify where the crash happened.

This design is justified because our systems form a network of interdependencies, so a failure in one process over here can impact, or require reruns of, processes over there. Since we are a small team, building a DevOps infrastructure to map and auto-recover all of these paths, which are constantly changing, is not optimal. We’d spend all our time doing it.

And so we do it simply. Almost all our processes are launched from simple shell wrappers or rake tasks. When an execution fails, the shell wrapper captures the error, and fires off the appropriate notification to the appropriate channel, then logs it and pops it in the right chat room.
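A stripped-down sketch of what one of those rake wrappers might look like (the task, the log path and the chat-room webhook are illustrative, not our actual tooling):

# Rakefile (sketch)
require 'json'
require 'net/http'
require 'uri'

# Post a failure message to the team chat room; the webhook URL is a placeholder.
def notify_chat_room(message)
  uri = URI('https://chat.example.com/hooks/notifications')
  Net::HTTP.post(uri, { text: message }.to_json, 'Content-Type' => 'application/json')
end

desc 'Run the nightly load, notifying the team if it fails'
task :nightly_load do
  begin
    sh 'bin/nightly_load' # the real work; sh raises if the exit status is non-zero
  rescue => e
    message = "nightly_load failed: #{e.message}"
    File.open('log/failures.log', 'a') { |f| f.puts "#{Time.now} #{message}" }
    notify_chat_room(message)
    raise # still fail the task so the scheduler sees the non-zero exit
  end
end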

Aside: The crash-and-notify approach works because we also design all our processes to pick up where they left off, so even the most critical real-time systems simply resume after a restart. How we do that could fill a bunch of posts.

Errors and Failures

No matter how good your software quality, things will go wrong. Programmers introduce bugs, bad data causes failures, hardware fails, and external systems are not always there. Some of these issues are easily dealt with, the rest need human intervention.

For example, a large proportion of our software gets data from a remote location, munges it and bungs it into the database (or the other way around). More often than not, that data is remote and third-party. And reasonably frequently, their server is down, their data is late, or the data is bad.

Of course our code “attack dials” when servers are unavailable or data is not present, so we do not get notified of these events; that would flood us with useless notifications. But if the process has been dialing for a while, or the data is not available within the dial window, then we get a notification. And if the data is bad, the program that munges it will crash, sending a notification.
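The “attack dialing” itself is nothing fancy. Something like this hypothetical fetch loop captures the idea (the feed URL, window and pause are made up):

require 'net/http'
require 'uri'

DIAL_WINDOW = 30 * 60 # keep trying for thirty minutes
DIAL_PAUSE  = 60      # pause a minute between attempts

# Returns the body if the feed is there, nil if the server is down or the file is absent.
def try_fetch(uri)
  response = Net::HTTP.get_response(uri)
  response.is_a?(Net::HTTPSuccess) ? response.body : nil
rescue SystemCallError, Timeout::Error
  nil
end

def fetch_with_dialing(uri, deadline: Time.now + DIAL_WINDOW)
  loop do
    data = try_fetch(uri)
    return data if data
    # The raise below is the one thing that triggers a notification, via the wrapper.
    raise "Feed not available within the dial window: #{uri}" if Time.now >= deadline
    sleep DIAL_PAUSE # quiet retry -- no noise while we are still inside the window
  end
end

data = fetch_with_dialing(URI('https://feeds.example.com/prices.csv'))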

Processes that depend on this data do not notify that the data is missing.

Why not?

We already know that the data is missing from the first notification; there is no need to pile on more notifications saying the same thing. Their failures are also logged, but in a different location and chat room from the primary failure. This model helps recovery and reduces confusion when identifying the root cause.

Aside: We also have a wiki that tracks data dependencies, so we know which processes to rerun after we correct a failure. The wiki lists all the commands to paste, so it’s easy. Whenever we face a new situation, we update the wiki.

Success and Last Runs

Clearly we do not want a notification when an expected process runs successfully; that would create an insane flood of notifications. We still send them (with some third-party software we cannot stop them), just to a different destination. These notifications are saved so we can review them if we want, but they do not alert the team.

Note that failures are also saved, so we can go back and see what fails most often, and where and when.

Live Process Monitor

Real-time C++ programs are more difficult to manage. We write them to be as “bullet-proof” as possible, but they too can fail, and by design are expected to. Known bad-data situations are dealt with, but we want unusual situations to take them down.

For these, we humans need to drop everything and act, so we run a mix of open-source and home-grown process monitors. As soon as a monitored program fails, we get a notification on a bunch of channels:

  • Our Sonya, an electronic voice on an ancient Mac Pro, loudly tells us what failed. Having a “Bitching Betty” voice state the nature of the problem really gets our attention. Aside: Sonya as in “gets on ya nerves”, thanks British Rail.
  • We get an iMessage, on our phones and watches, for when we cannot hear Sonya.
  • The error Notification Center chat room gets a message as well, which pops up a UI alert.

The live process monitor also watches our “Coal-mine Canary” processes. There are a few threads we run that crash early and quickly when things go wrong, oftentimes quickly enough for us to be on it before the important stuff fails. These also get the Sonya alerts.

For example, we have a process called “universe” that runs all day long and it depends on a large number of our systems and services, so it’s the first to die when things go wrong, a perfect “Coal-mine Canary” candidate. When Sonya squawks that “The universe has collapsed”, we know bad things have happened.
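A voice alert needs surprisingly little machinery. Here is a toy, Mac-only sketch of a Sonya-style watcher (the check, the voice and the phrasing are illustrative; the real monitor does far more):

# Watch a coal-mine canary process and nag, out loud, until it is back.
PROCESS = 'universe'
PHRASE  = 'The universe has collapsed'

def running?(name)
  # pgrep exits non-zero when no matching process exists
  system('pgrep', '-x', name, out: File::NULL)
end

loop do
  if running?(PROCESS)
    sleep 30 # all quiet, check again in a while
  else
    system('say', '-v', 'Samantha', PHRASE) # macOS text-to-speech
    sleep 5                                 # repeat every few seconds until it is back up
  end
end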

Ongoing Notifications

If we see the same notification and deal with the same interruption over and over again, then we know we have an ongoing problem. In this case, we stop all work and get it resolved. The cost in time and lost productivity of dealing with the same issue over and over again is not worth it. Especially in a small team of developers. Taking the time to get it fixed is always worth it.

To be clear, we do not muffle notifications and silently restart “known” failures. We fix them, over and above all other work. Silence means all is well, not “all is well except for the known failures”.

It also ensures that when we do get a notification, we cannot and do not ignore it. The notification signals a real issue. We have no reason to tune out notifications, and therefore no reason to start ignoring them.

Regular System and IT Activities

Of course, being a “normal” tech team, we also leverage the notification infrastructure for regular, non-failure-mode notifications. We just send these to a system that logs them, a system we can glance at when we need to see what happened. These notifications are not sent to us humans in the regular chat rooms, so they do not bother us. This includes:

  • Hardware monitors reporting successful checks
  • Runs that succeeded
  • Programmer commits
  • System deploys
  • Software updates

Notification Volume Management

Most notification systems projectile-vomit notifications; they are as chatty as a flock of seagulls over a bag of chips, or a lawyer in court. The problem is that no one can deal with the noise and still spot the real issue, and eventually they tune the noise out.

So how do we manage the flood of notifications and keep it to a manageable trickle of things we need to respond to?

  • Rule number one: we do not notify humans for informational purposes or success. That is all noise, and we do not send these out, only log them. If the notice is expected or does not require immediate human response, do not send it to people, just save it.
  • Use different channels for different levels of importance. If immediate attention is needed, set off Sonya and the iMessage alerts. If not, send it to the monitored chat room to be dealt with later. And if no response is needed, log only. (A sketch of this kind of dispatcher follows this list.)
  • Notify once and once only. Flooding the chat room with a bunch of notifications that were triggered by a single failure also adds noise and makes it harder to find what caused the cascade. Trust the humans to know what needs to be done to recover from an event.
  • Get an intelligent repeating voice alert, like our Sonya, on the job for systems that must be up to transact business, and keep her repeating the issue every few seconds until the system is back up. It’s noisy and annoying, but others can hear when things are wrong and when they get back to normal. Oh, and do not send these notifications by the normal channels, so they do not fill up your chat rooms.
  • Use a chat room for failure notifications. Firstly, you can get alerts when new messages come in, but more importantly, the responder can identify which notifications have been dealt with by responding to the messages. So, if more than one person is looking, the chat room will tell them which ones have been dealt with, and by whom. That way, not everyone gets involved when an alert comes in. It also allows us to scroll back to see common failures and note what was done to rectify them.
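Put together, the routing logic is tiny. Here is a sketch of the kind of dispatcher we mean, with stand-ins for the real channels (the helpers, severities and log are illustrative):

require 'logger'

LOG = Logger.new($stdout) # stand-in for the notification log

# Stand-ins for the real channels: the voice box, the iMessage bridge
# and the chat-room webhook.
def speak(message);             system('say', message);      end
def send_imessage(message);     puts "iMessage: #{message}"; end
def post_to_chat_room(message); puts "#errors: #{message}";  end

# Route a notification by importance: humans only hear about what needs humans.
def notify(severity, message)
  case severity
  when :critical # a must-be-up system is down: every channel, right now
    speak(message)
    send_imessage(message)
    post_to_chat_room(message)
  when :error    # needs a human, but can wait for the monitored chat room
    post_to_chat_room(message)
  else           # informational or success: log only, never interrupt
    LOG.info(message)
  end
end

notify(:critical, 'The universe has collapsed')
notify(:info, 'Nightly load completed in 42 seconds')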

Notification City

In our Notification City:

  • When Sonya starts talking and our iMessages start pinging, we jump. She tells us which real-time system has failed and we go and see why, fix it and restart.
  • When the “Coal-mine Canary” processes fail, Sonya and iMessage let us know as well. We look at the chat room to see what dependency triggered it.
  • When a regular thread fails, it gets posted to the chat room, and we get a UI notification that it happened. We can then see what went wrong, make the necessary calls, get it going again, run the additional processes to recover and respond that the issue was resolved.
  • When all goes well, we get no notifications at all, nothing in the chat room and no interruptions, and we can focus on our work. Later on, we can look at the logs and status screens to see all was well.

This allows us to focus on what we need to do, yet respond appropriately when notified. We’re not inundated with noise or unnecessary messages, so we do not need to tune them out.

When we hear or get a notification, we know that the situation is exceptional, and, depending on the channel, we know whether to jump now or have a few minutes to respond.

Our Sonya has been quiet today, all is well.

Follow the author as @hiltmon on Twitter.

Coding Style Standards in 2017

I’ve been writing software for well over 30 years, I’ve long since passed my 10,000 hours, and I’ve gotten rather good at it. And I still write to a very rigorous coding style standard.

You’re kidding, right? It’s 2017; code style guides are so passé.

Nope. I’m deadly serious.

Get off my lawn

Some of us remember when coding styles were de rigueur. When you climbed off your commuter dinosaur and joined a coding team, the first document they gave you was the coding style guideline. It was a thick, three-ring binder that covered everything: naming, spacing, commenting, the position of braces, white space rules, spaces or tabs, and the line length rule.

And, when you tried to submit your first code, the code you were so proud of, the team destroyed you in review for not following their stupid guidelines. There you sat, knowing your code worked, wondering why these people were being so anal about spaces and where your effin brackets were and why you could not use the letter m as a variable name. “The code works, dammit,” you thought to yourself, “what is wrong with these people!”

The reality was that these folks knew something we rookies did not. That it was easier for them to read, review and smell-check code written the way they expected than to try to decipher yet another programmer’s conventions. It saved them time, saved them effort and allowed the pattern matching engines in their brains to take over to enhance their understanding.

Back then, code quality depended on other people reading, reviewing, understanding and smell-testing your code. It was up to humans to see if things could go wrong, find the issues and get you to fix them before the system failed. This was how the Apollo code was done.

The coding style guideline made that job a whole bunch easier.

The Bazaar

The rise of open source, good semantic tools, formatters and linters, and the rise of the code ninja, have led, in many cases, to the demise of the coding style standard.

Most open source projects are a hodgepodge of coding styles because there is no leader, no team-boss and no benevolent dictator. Some, like LLVM and Python, do have such a character, and therefore a style guide. Most do not.

Some languages, like Go, have an opinionated style and provide a formatter. And some teams use formatters to make their code look “better”.

And don’t get me started on projects that intermix various open-source code-bases that use conflicting styles. Aside: generated code has to be excluded, as it gets regenerated on each compile. I’m looking at you, Google protobuf!

The big issue is that code these days is less readable by humans, and is less frequently reviewed by humans. Much code is written using a mishmash of open source code, pastes from StackOverflow and a bit of programmer code. Paying homage to some random, management-mandated format using a tool does not improve the quality, readability and maintainability of the code.

The great debates

Those of us who do care believe that our coding styles are the best. Of course we do, we came up with them. We use them daily. We find code to our style easier to read. Writing to our style is now a habit.

Bring in another programmer and the war begins. Arguments erupt on line lengths, tabs or spaces to indent, indent sizes, brace positions, early exit, and function lengths. Of course, the other programmer is wrong, their habits are bad, they are idiots and their code looks terrible.

The truth is that these arguments are stupid since no side is “correct”. It’s a taste thing, a personal preferences thing, a habit thing and sometimes a power play.

At the end of the day, it really does not matter what you choose, as long as you all agree and all adhere to the agreement. Win some, lose some, it does not matter.

What matters is being able to read, review, and fix each other’s code with ease. And that requires a coding style standard.

So why still standardize in 2017

Because:

  • Code is meant to be read, reviewed, modified and refactored by humans first, and compiled second. Code written to an agreed style is way easier for humans to process.
  • When not sure what to do or how to write something, the standard steps in. When to switch from a long parameter list to a parameter object, how far you can take a function before refactoring to smaller functions, and where in the code-base to place a certain file are all decided by the style standard.
  • Naming in code is a nightmare. Styles define how to name things, and what case to use, making it easier to choose the right name. Most importantly, the reader can jump to the right inference when reading a name in the code.
  • We don’t care who wrote the buggy line, blame is not what we do. But everyone in the team should be able to read, diagnose and fix it. If you want to find the fastest way to speed up maintenance times and bug detection times, write to an agreed style.
  • The debates are over, we can start to form habits, and we can all focus on writing great code.

So I guess you use an old standard?

Nope, we update ours every year. Most of the time it changes little. The original space vs indent vs line length stuff remains mostly the same. Those debates are over and the habits formed.

But languages change, language practices change, and these lead to changes in the standard. We learn more about our own code over time. Misunderstandings in naming inferences change the naming conventions, identified bugs lead to file layout changes and better patterns identified by team members get added. And old, unnecessary guidelines are removed.

For example, our 2016 standard added the requirement that single-line block statements must be wrapped in braces in C++, so a “goto fail”-style bug would never affect us. It finally allowed the use of function templates now that our tools can handle them properly — and we programmers finally got a handle on them. It changed the file extension on all our C++ headers to “.hpp” because our compilers treated them differently. And it moved function parameter lists to their own lines in headers so we could improve header file commenting and document generation. Nothing earth-shaking, but still huge improvements to readability.

So all code is to standard?

Yes, and no. All new code is to standard. We do not stop all work and go back to correct old code; there is too much of it and we have better things to do.

But, each team member knows that if they ever need to touch, change or fix old code, they need to refactor to the new standard at the same time. We do not commit non-standard code. Over time, the old code changes to match the new standard.

Ok, summarize this for me

  • Code is meant to be read by humans first.
  • Code written in an agreed style is way easier for humans to find, read, understand, diagnose and maintain.
  • Moving to a new standard takes time to build the habit, but once it becomes a habit, writing to standard becomes just part of the flow.
  • The standard needs to change as languages change, as programmers get better and as new members join the team.
  • All new code is written to the latest standard, all code committed is to the new standard, all old code is refactored on touch.
  • Coding style guidelines are just as important in 2017 as they were when we rode dinosaurs to work.

Follow the author as @hiltmon on Twitter.

MathJax in Markdown

Adding mathematical formulae to HTML pages is easy these days using MathJax. But I create all my documents in Markdown format on my Mac. This post shows how to add mathematical formulae to your Markdown documents on the Mac and have them preview and export to PDF correctly.

MathJax in Markdown

Adding mathematical formulae to a markdown document simply requires you to use the MathJax delimiters to start and end each formula as follows:

  • For centered formulae, use \\[ and \\].
  • For inline formulae, use \\( and \\).

For example, the formula:

\\[ x = {-b \pm \sqrt{b^2-4ac} \over 2a} \\]

Renders like this from markdown:

$$ x = {-b \pm \sqrt{b^2-4ac} \over 2a} $$

Or we can go inline where the code \\( ax^2 + \sqrt{bx} + c = 0 \\) renders as \(ax^2 + \sqrt{bx} + c = 0 \).

Preview: iA Writer, Byword, Ulysses

Most Markdown Editors have a Preview function, but do not include MathJax by default. To add MathJax rendering in iA Writer, Byword, Ulysses and most others, you need to create a custom template to render the document (I assume you have done this already - see [Letterhead - Markdown Style](https://hiltmon.com/blog/2013/05/23/letterhead-markdown-style/) for an example).

For iA Writer, for example, go to Preferences, select the Templates tab and click the plus below Custom Templates, and choose Open Documentation to learn how to create your own template. Or copy an existing one and rename it.

In the main html file, called document.html in the iA template, add the MathJax javascript header line:

<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>

My template file is very simple:

<!doctype html>
<html>
<head>
    <meta charset="UTF-8">
    <link rel="stylesheet" media="all" href="normalize.css">
    <link rel="stylesheet" media="all" href="core.css">
    <link rel="stylesheet" media="all" href="style.css">
    <script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
</head>
<body data-document>&nbsp;</body>
</html>

Next time iA Writer, Byword or Ulysses loads its preview pane and renders the page, the javascript will run and render the MathJax as mathematical formulae.

Note: Occasionally the preview will fail to render the MathJax, either because the MathJax is invalid or because the refresh fails to reload the Javascript. If the formulae show up as raw MathJax markup, just right-click on the preview pane and click Reload. That forces the preview pane to reload both the rendering template and the page.

Preview: Marked 2

On the other hand, if you use the magnificent Marked 2 program to render your HTML, well, it has MathJax support built-in. Under Preferences, choose the Style tab and check Enable MathJax.

Note: Marked 2 does not have the intermittent problem of failing to render MathJax properly while you are editing the document.



So there it is: simply add the MathJax delimiters to your Markdown file and update the previewer to render it.

Follow the author as @hiltmon on Twitter.

On the New MacBook Pros

Much has been written, tweeted and complained about the new MacBook Pros released by Apple last week. Complaints about the 16GB limit, all-in switch to Thunderbolt 3 (USB-C), the removal of the SD-card and MagSafe, the new keyboard, the aged CPUs, the slow GPU, dongles, that they are not “Pro” level machines, and more. More and more influential folks are writing that Apple has forgotten the Mac, that the Mac is doomed or dead.

They are probably right. The new MacBook Pros are not ideal, nor are they made for them.

I believe the real issue for these folks is not what Apple did not do, but that there is no viable alternative product out there for them that has the one feature they need and all the benefits of the Mac.

Linux on the Desktop is getting better, even Windows is improving, but it’s not macOS. The applications professional-level users prefer run better in the Apple ecosystem. Several only exist in the Apple ecosystem. And even if the primary applications are cross-platform, the tools, utilities, and productivity applications we use are not available elsewhere.

If there were a better alternative to Apple’s ecosystem, professional users and developers like myself would have already switched.

In the meantime, Apple released new MacBook Pros that are, according to Apple’s base, horrendously compromised.

It’s all kremlinology whether this was intentional on Apple’s part.

Some believe Apple compromised now because the line was aging, they needed to do something, and Intel was falling too far behind. But Microsoft released the Surface the day before on the same platform, with nothing newer inside (except for a better GPU in a thicker, heavier body).

Some believe Apple intentionally made the shift to a new design and ports now, just as they did with USB, floppies and CDs before. Their first machines with the new ports were always compromised, but they got better.

And some believe Apple simply does not care about the Mac. That one does not compute with me. The new design, the new Touch Bar, and the effort that went into the look, weight and feel of the device prove otherwise.

I am a professional programmer, writing multithreaded, complex, big-data applications. I should be using a Mac Pro at work (and another at home and in the coffee shop) with maximum cores and RAM in order to compile and run my code and maximize my productivity. But I am also a graphics and print designer, a web developer, a writer, an amateur-photographer and a productivity nut. The MacBook Pro, as imagined by the complainers, should be the perfect machine for me.

The reality is that the perfect machine does not exist for professional you or professional me, it never has and never will. I have always wanted a UNIX-based operating system with a GUI, more cores, more RAM, faster storage, better battery life, a better screen and a thin and light laptop because I walk and work everywhere. You may need a machine with certain ports, more RAM, longer battery life, bigger screen, whatever. Our needs are mostly the same, but differ in significant areas.

I have tried Dell Windows machines, MacBook Pros, Mac Pros, MacBook Airs, Lenovos running Linux and combinations thereof, and the one computer that has met most of my needs – but never all of them – has been the MacBook Pro. I am writing this on my maxed-out, trusty 15-inch Mid-2014 MacBook Pro. The cores have been plenty fast, the RAM sufficient, the SSD good enough, the display great, the battery life the best ever, the ports just fine. But it never was my ideal computer. It was, and remains, the best I could get to meet most of my needs at the time.

I have ordered the new 15-inch MacBook Pro, with upgraded CPUs, faster GPU, new larger SSD and the same, but faster, RAM. I do not expect the new laptop to be perfect, none ever has, but I do expect a reasonable performance improvement in compile times and database reads, way better color on my display and a lighter load when walking. It may not sound like a lot, but these small improvements when added up over days and weeks turn into major productivity gains.

What I will not be doing is giving up on the stuff that already makes me so productive. The operating system that I love, the tools that I know and love, and the processes and workflows and muscle memories that help me fly. I see nothing else on the market right now that I can change to that can help me perform better.

I also think that Apple, Microsoft and Google are all being frustrated by Intel, which in turn is being frustrated by issues with its latest process. Knowing Intel, we know they will solve this, sooner rather than later. And so I expect that next year all of Apple’s, Microsoft’s and Google’s PCs will move to the next generation of Intel chip-ware and meet more of professional users’ needs.

Until then, I intend to enjoy the beautiful new MacBook Pro and its productivity improvements when it arrives, and use a few adapters to replace the ports I need to keep going. But I also will look closely at the 2017 MacBook Pros when they come out. And keep an eye on the pro-level iOS/Pencil/Keyboard solution in case it becomes part of a better solution for my needs.

Follow the author as @hiltmon on Twitter.

The Gentleman James V

Last evening a package arrived from Amazon. A package that neither I nor my wife had ordered. A mysterious, enigmatic package. From the outside, there was no indication of its content or provenance.

We discussed where it could have come from. What could it be. Should we open it. Maybe Amazon sent the package to the wrong person. Yet the delivery address was certainly mine.

Finally, I opened it.

It contained a bubble wrapped box, a bunch of packing bubbles and three slips of paper. The first slip was a packing slip describing the content of the bubble-wrapped box. The second was a return slip. Where this package came from remained a mystery.

It was the third slip of paper upon which we hoped to gain the key clue, the source of this package, our mysterious benefactor.

It did not help.

It contained a personal note.

An unsigned personal note.

A note clearly written by someone who knows me and how I live my life.

Someone who spotted an emptiness in my existence that I was unaware of.



The bubble-wrapped box contained a gift, a perfect gift. One borne of great kindness and understanding of my lifestyle and of unknown unstated needs.

From the note and content, it was clear that the sender knew me well. It was also clear that the sender was considerate, kind, wise and understanding. They had taken the time to observe that necessary items, those in the bubble-wrapped box, were missing from my life. That the quality of my life and that of many others would be improved immensely by this gift. They had taken the time to research and select the perfect gift to fill this unknown unstated void. And they had executed, purchased and shipped it.

With a personal, yet unsigned note.

Who could this wonderful, kind, generous person be? Why had they not signed the note? How does one accept such a magnificent gift from an anonymous source without the opportunity to express the heartfelt gratitude and soul-filling joy such a gift brings?



A mystery was present, the game was afoot.

This angel of awesomeness was to be unmasked and gratitude expressed.

Whatever it took.

However long it would take.

All leads would be followed.

As far as they would lead.

This mysterious messenger, this masked angel, would be found.

And unmasked for all to see their true generosity.



In the end, an email was sent. A sleuth engaged. A night passed. And an email received.

I knew who the culprit was.

I had unmasked the angel of awesomeness.

And had a good night’s sleep.



The message on the third slip of paper in full:

Hilton… something for your office. I couldn’t bear the thought of you drinking from shitty plastic cups. Enjoy!

The bubble-wrapped box contained four stunning glass tumblers. They were presented in the style of crystal Manhattan glasses. The perfect complement to the office whiskey collection. The perfect implement to hold and enjoy the Scottish Nectar at the end of a hard day’s work.



A simple call and thank you is not enough.

A note on Facebook neither.

This kind, considerate person needs to be immortalized.

A plaque perhaps.

Maybe have something named after them.

A bridge, a ship, a building, a space shuttle.

I have none of those things.

But I do have a bar.

One I attend regularly.

It is stocked with quality whiskey and bourbons.

All comers are welcome.

It is a place of relaxation, conversation and comfort.

It brings joy to many regulars and guests.



I hereby declare The Gentleman James V bar open.

All glasses will be raised in his honor.

His name will be whispered with reverence.

His contribution to quality of life and joy known and remembered.

And his presence at The Gentleman James V is much desired.



For those of you who got this far and do not know who I am talking about, allow me to introduce James V Waldo. He is a man with the ferocious visage of a viking biker, an arse that emits a toxic hellstew of gasses not present on the periodic table, the soul of a poet, the intellect of a debater, the wit of a writer, and a heart the size of the moon. A husband. A dad. And a very good and special friend.

TL;DR: Thanks for the Whiskey Tumblers, Jay.

Follow the author as @hiltmon on Twitter.

The Annual Dependency Library Upgrade Process

At work, we write a lot of code. In order to remain productive, we reuse the same proven dependent libraries and tools over and over again. Which is fine. Until we start seeing end-of-life notices, vulnerabilities, deprecations, performance improvements and bug-fixes passing us by. At some point we need to update our dependencies for performance and security.

But it’s not that easy.

Take some of the libraries we use:

  • Google’s Protocol Buffers are amazing. We’ve been on 2.6.1 for ages, but 3.1.0 is out and it supports Swift and Go, two languages we surely would like to use. But the proto2 format we use everywhere is not available in the new languages. We need to migrate.
  • ZeroMQ moved to a secure libsodium base in 4.1, making it much safer to use. But the C++ bindings from 4.0.5 are incompatible. We need to migrate.
  • g++ on CentOS 6 is ancient, version 4.4.7 from 2010. We’ve been using the devtoolset-2 edition, 4.8.2 from 2013, to get C++11 compatibility, with a few library hacks. But that version of g++ produces horribly slow and insecure C++11 code. We skipped devtoolset-3 even though its g++ 4.9 was better. devtoolset-4 is out, using g++ 5.2.4 from 2015, still not the latest, but much better at C++11 (without our hacks), more secure and faster. Yet it is ABI incompatible. We need to migrate.

The amount of work seems staggering given we have well over 100 protobufs used across our code base, ZeroMQ everywhere and everything is compiled for production using devtoolset-2. The old libraries and tools are a known, proven platform. The current code is stable, reliable and fast enough. It ain’t broke.

The benefits are also hard to measure. Given all the effort to upgrade, do we really get that much faster code, that much more secure code? And what about the code changes needed to support new interfaces, formats and ABIs? What does that get us?

For most IT shops, the discussion stops there. “It ain’t broke, don’t fix it!”, or “All pain for no gain, not gonna play.” They stay on the known tools and platforms forever.

For my IT shop, things are different. We want to use new tools, new languages, new platforms yet remain compatible with our existing services. We need to be secure. And we really do need to eke out each additional microsecond in computing. No, if it ain’t broke, break it!

So, once in a while, generally once a year, we update the platform. Update the libraries. Update the tools. Update the databases.

And we do it right.

Firstly we try the new libraries on our development machines. Homebrew installs make that easy for the dependencies. Rake tasks make it easy to upgrade our Ruby dependencies and Rails versions. We build and test our code in a migration branch and make sure it all works, changing to new interfaces and formats where necessary.
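For the Ruby side, the upgrade step is little more than a Rakefile task along these lines (the branch name and commands are illustrative, not our actual scripts):

# Rakefile (sketch)
desc 'Create the migration branch, update the gems and prove the build'
task :annual_upgrade do
  sh 'git checkout -b migration-2017'
  sh 'bundle update'         # pull the latest allowed versions from the Gemfile
  sh 'bundle exec rake test' # everything must pass before this goes near production
end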

We then spin up a virtual machine on our production operating system (CentOS 7 now), install the new compiler and dependencies, and rebuild all there. Given that issues are mostly resolved in development, we only find g++ quirks in this test.

And then one weekend, we run the scripts to update our production servers to the new tools and dependencies and deploy the new versions.

And since we do this every year, it runs like a well-oiled machine.

It helps that we have tools to recompile, run and test our entire code base. It helps that we have tools to stage and deploy all our code automatically. And it helps that we have done this before, and will do it again.

Long term, the benefits are amazing. We can try new platforms with ease. Our code gets better, faster and more secure all the time. The need for workarounds and hacks and platform specific coding becomes less and less. The ability to nimbly move our code base grows each time.

Many of the projects we want to take on become possible after the annual upgrade. That’s why we do it.

If it ain’t broke, break it.

And it really is not that much work!

Follow the author as @hiltmon on Twitter.

Minimal Project Management - 6 Months Later

Just short of six months ago, I wrote about how I was transitioning to Minimal Project Management as my team was growing at work. So, how did it go? Did it work? Any Problems?

In short, after a few false-starts getting our heads around the intent of the Statement of Work document, it went — and continues to go — very well. Projects we used to start and never finish are now completing and shipping. Communication within the team and with our users is better. And our throughput is up.

In fact, now that the progress and task lists are more visible to management and users alike, assignment and prioritization are also better. The Management Team is more aware of the complexities in projects, from the Statements of Work, and of why they sometimes take so long to get done. We are also less likely to kill an ongoing, assigned piece of work when the progress and involvement are clear. We also think a bit more, and harder, about what we really need to get done next instead of just assigning everything to everyone on a whim.

The Statement of Work has evolved into a thinking tool first, and a communication tool second. My team now uses the time writing the Statement of Work to think through the options, the details, the knowns and unknowns, the questions needed to be asked and answered. They are spending more and more time working the document up front instead of diving into coding and “bumping” into issues. Just the other day, one of the developers commented that the programming for a particular project would be easy now that the way forward was so clear.

I do also see our productivity going up. We may take more time up-front to write a document, but we are taking way less time when coding, testing and shipping as we’re not futzing around trying to figure things out. The total time to ship is dropping steadily as we become more habitual in thinking things through and writing them down clearly.

Our users also look these over. This leads to discussion, clarification, and the setting of expectations as to what will actually be shipped. It also leads to more work, but we track these add-ons as separate change requests or future projects. When we ship, our users are far more aware of what the changes are and how it impacts them.

The weekly review is also easier because, since the whole team reads all Statements of Work, we all know very well what each other team member is working on. For fun, I sometimes get team members to present each-other’s deliverables for the week, a task made easier by the process we follow.

Some things have not changed much. We still get a large number of interruptions, but my team is far more likely to triage the request and decide whether to accept the interruption and fix the issue, delay it until they get [out of the zone](https://hiltmon.com/blog/2011/12/03/the-four-hour-rule/), or push it off as a future project to deal with later. We still get a large number of scope changes, and these too get triaged better. And we do get fewer priority changes, mostly because those that change the priorities see the work already done and are loath to interrupt.

Of the issues faced, most have been resolved through practice.

Programmers would rather code than write documents. So the first few Statements of Work were treated as a speed bump on the way to coding up a solution, a necessary step to please the “old man”. After running through a few iterations, the benefits of doing the thinking, checking and discussion up front became quite clear. Writing is still a drag, but the benefits are now clear and there is more enthusiasm in writing and reviewing these Statements of Work within the team.

The other issue, the level of detail to be written, is also being resolved through practice. Initially they wrote very high-level Statements of Work, hoping to resolve assumptions and misunderstandings during coding — the old way. But as the early reviews by me and by users showed them, their readers were confused, identified missing components, and pointed out areas not documented and therefore not thought about (or thought through), and some sections were just plain wrong. The next iterations were more detailed, and the next more detailed still in the areas where details were needed. We’re still working out where and when to dive deeper in a Statement of Work and where not to, but the documents are certainly getting better and the coding process way faster.

The result of the change to [Minimal Project Management](https://hiltmon.com/blog/2016/03/05/minimal-project-management/) is easy to see: more projects shipped correctly and more quickly, with better discussion and problem solving up front and faster coding to the finish line. And our communication and prioritization processes run smoother.

Follow the author as @hiltmon on Twitter.