Hiltmon

On walkabout in life and technology

Working From Home ... 88 Days Later

Follow on post to Working From Home … Here We Go! written in March.

88 Days ago in New York we all started working from home. 88 Days ago we stopped commuting, sat down for the first time at our fledgling home workspaces, launched our first Zoom or Slack meetings and started figuring out how best to do our jobs from there. 88 Days later, we’re still doing it and will be for the foreseeable future.

My entire company has been remote the whole time, spread across New Jersey, Connecticut and Long Island, with a few stalwarts remaining in the city. A lot has happened since, but most importantly, we’ve learned a few key lessons about our home environments that we wish we’d known before.

And I thought I’d share them with those of you who are still on the fence about investing in a better home workspace.

Get a good chair

By far the most common issues faced by my team have been back and shoulder problems from sitting on bad chairs hunched over tiny workspaces or kitchen counters. We all know that if you sit for 8-10 hours a day at a desk, good ergonomics are critical. But most folks never really used their home setups except for the odd hours here and there, and so did not invest in them. Cheap (or dining room) chairs, old desks or kitchen tables and a tiny workspace were common.

Step one is to get a good chair. There are plenty of second-hand, refurbished high-quality office chairs on the market, and they all deliver and assemble with ease. Those who bought chairs have not regretted the decision. Those who have not still suffer back pain. If you spend money on only one thing for your home office, make it a good chair.

Upgrade your internet

The second most common issue was internet reliability and capacity. And this one has been the hardest to deal with. Most folks had a basic internet plan with the one and only internet provider to their homes, which meant terribly limited bandwidth, unreliable routers and plenty of outages when it rained.

Those that had changed providers or services in the past few years seemed to fare better, but the majority, who had fallen off the end of their “honeymoon” contracts, all had problems. In most cases, calling the provider rarely helped. It took a month or two of lockdown before they would start responding, and none have been able — yet — to get their connections fixed or upgraded.

In short, do not let your internet contract lapse: either negotiate a new one whenever you can or, better yet and if possible, switch providers for both a better deal and better service. None of the providers we use seem to care at all for customers with long service records, only about gaining new customers. Your loyalty is worthless to them.

I recommend you get on it now. It may take a while to happen, but we’ll be doing this for a long time yet, and better, more reliable internet is critical.

Get an external monitor or laptop stand

As with the chair, working on a laptop is fine for a few hours a day. But eventually the act of hunching over the keyboard and squinting at the tiny display will cause neck, wrist and eye problems.

In this case you have choices: put the laptop on a raised stand on your desk (if its screen is large enough), or get an external monitor. In either case, you will need to buy a mouse and keyboard. These are very cheap and any will do.

Raising the screen of the laptop to eye level will fix the ergonomics of your desk, and save your neck and shoulders. It will also place the laptop camera up to a good face level for video calls. Place the keyboard and mouse below on the desk.

Modern external monitors are ridiculously cheap these days. If you can afford it, get a 4K 24" or 27", but honestly any new computer monitor will do. It will raise your eyes and head, making you more comfortable and give you way more screen real-estate than a teeny laptop, adding to your productivity.

If you do go external monitor, I also recommend running the laptop in clamshell mode (closed) or raised on a laptop stand to bring its screen up to the level of the monitor. I am aware of issues (and slowdowns) with clamshell mode on some older computers, but having the screens up at eye level and not having to stoop your head to look at data on the laptop screen is far better for you.

Use an old iPad for Zoom / Teams / Slack calls

Most people’s laptops or desktops have terrible sound and video (if any at all), and often lose video quality, drop sound and lose the connection when the computer is being used at the same time as a Slack, Teams or Zoom call. Which makes it frustrating for the user and for those on the call who cannot understand what’s going on.

All iPads have great microphones and cameras, even the ones from years ago whose batteries have failed. Stand the iPad next to your computer on the desk, plugged in to the charger, with the front-facing camera towards you and use that for all video and voice calls. If you don’t have an old iPad, an old mobile phone is second best. This device will provide dedicated video and audio throughput while your computer chugs along with its work.

Headphones with microphones, or speaker mode

This one is all over the place. Some folks using AirPods, wired headsets or headphones with built-in microphones seem to be OK. But not all the time. Sound quality is sometimes good, sometimes terrible. The newer AirPods and recent over-ear headphones do seem better, but only while their batteries last. Lots of folks now run iPads or iPhones hands-free on speaker mode; it does let ambient noise in (kids, pets and sirens), but ironically, the voice quality seems best that way — and we’re all used to hearing background noises on work calls now.

Whichever solution you use, make sure you can always swap to a different audio setup. If one gets bad or fails, switch early in the conversation. I have the iPad on speaker for the main group call, and use headphones with the laptop or iPhone for secondary calls - and when that gets bad - I go to speaker mode there too.

Finally, step away when done

Most importantly, step away from your workspace when eating, taking breaks and when you are done working. Change your Slack status to away for breaks. Step out of the group call (you can always come back). And walk away from your workspace when the day is done. Treat it as a separate office, a consulate where different rules apply. Let the family know that when you leave your desk, you are back home and theirs again.

If you need your laptop or iPad after hours, unplug it, go somewhere else and use it there.

Thoughts

We may be re-opening soon, but I feel that work-from-home is going to remain a big part of our lives. Whether there are more waves of the virus, or it’s just more productive to skip the commute on days when you have no special reason for being in the office, or this is the new norm, I feel we will be working a lot more from home. And having a good work environment at home is critical to your health and productivity.

If you have not done it by now, be about it. This is not the end of work-from-home, it’s just the beginning.

Follow the author as @hiltmon on Twitter.

Working From Home ... Here We Go!

It’s been many years since I worked from home and much is different since I ran Noverse from here. First, my wife has taken over my workspace as hers, so the desk, good keyboard, mouse and monitor are her domain now. Secondly, it’s likely this is not a one-day disaster recovery test, so I need to find a way to settle in instead of using the laptop on the dining table. And thirdly, Slack has changed the way we communicate internally, which makes it as easy to maintain contact as it is in our open plan office.

If I am going to be working from home for a while, I want my somewhat temporary workspace to be as good - productivity wise - as my office desk. There I have an Apple 27” Thunderbolt Display, an older Magic Keyboard and a well worn-in gaming mouse. So, in preparation, I ordered a refurbished 27” monitor (no need for a full-priced retail monitor), a new Magic Keyboard and a cheap, but good, similarly shaped gaming mouse. I can now productively operate my trusty MacBook Pro in clamshell mode the same way at home and at work.

The other thing needed is a good chair. We purchased new work chairs a year ago on sale, so mine has now replaced the fourth chair at the dining room table. Our table end abuts the heater and outside windows of the apartment, and we normally eat on the other end. So I have covered the generally unused half of the table in a cloth and set up a temporary office. From where I now sit, I have the monitor in front of me and my apartment view to the left.

Today was the first - and possibly only - day that I will use this setup, and it worked mostly well. The monitor is good, but I had to jack the brightness up during the morning as the light streamed into the apartment. The keyboard is the same, no change there. And the mouse feels funny as I am far too used to my work mouse.

To my right, across our small lounge, is the TV. Instead of having the bustle and noise of the office, I left the TV on to create background noise. You forget just how quiet an empty apartment is, even in the center of New York City! And how the silence can actually distract you.

And there’s Slack. We had two Slack voice meetings today. In the first, most folks struggled to get their microphones and speakers working, but eventually they were ready. We all then learned quickly to mute our microphones so the clacking of our keyboards, barking of dogs and squealing of happy children did not interfere with the conversation. Ironically, I feel we all communicated a bit more effectively, possibly because of the medium in use, or because more ears were listening in, or maybe because we were all so excited to have skipped the commute.

Let’s see how this works out.

Follow the author as @hiltmon on Twitter.

The Theater Was Empty

Johannesburg, 1977

It was a crisp, sunny Saturday morning and my grandmother was in town. As we always did when together, we went to the movies. The theater nearby was old, run down but comfortable and in walking distance. I’d go there later again many times on many Saturday mornings.

For the first time, though, I was more excited about the movie than I was about going with my grandmother. I was about to turn 10. And I was about to see a Star Wars movie, the original one, for the very first time.

The theater was empty as we sat down. I do not remember the previews or waiting.

And then it began. The crawl came up. And then the stars swirled. And a space ship flew over my head. And then a larger one, that seemed never to end, flew over my head chasing the first one. My mind was blown, my imagination expanded and I was drawn in.

It was the best movie ever. Not because of the story, visuals, or universe, though they were very cool. But because it sparked my creativity and imagination in ways I had never, well, imagined.

And it changed me.

New York, 2019

It was a cold and dreary day in New York and my wife had quietly booked us tickets to the movies. This time we’d try the Dolby theater with the amazing sound and reclining chairs.

I was about to see the last Star Wars movie for the first time.

The theater was empty as we sat down. The previews endless.

But I was just as excited as the first.

This time I felt the same awakening of creativity and imagination, a memory of the past, a connection to almost 10 year old me. The child sitting in the seat on this day was the same child that sat next to his grandmother on that first day.

A few days later

It does not matter whether the Star Wars series was good or bad, whether the plots held, the characters evolved, the acting improved, the jokes funny, or the CGI got better. It does not matter which movie was better or worse, or even how the story meandered.

What matters is the imagination, a fictional universe of dirty old starships, shiny light sabers, and odd grungy planets. A place where a scruffy Wookiee can exist and sound eloquent, a princess kicks arse, soldiers wear white, and a tiny green mannequin is the most intelligent and powerful being in it. A universe where personal transport has not invented roofs (or windshields) yet runs on anti-gravity. A technology that enables faster than light travel, supported by artificially intelligent droids, yet still requires manual pilots and people to aim lasers.

Yet still familiar. People eat, love, fight, cheat, fail, commit crime, laugh, and live. They have homes, and dirty clothes, and drink in bars. A place where a culturally familiar story, a band of ronin that saves it all, can and does happen.

What matters is that it opened our minds to new possibilities of design, thought, imagination and creativity. It changed our culture, and the stories that followed.

In my case, I became interested in technology, art, futurism, sci-fi stories, even economics and politics. I found there was more to life than food, sleep, school and football. That there were endless possibilities and paths to follow.

A huge awakening for an almost 10 year old.

And a reminder of that once again.

Follow the author as @hiltmon on Twitter.

An iPad and a Pencil

In 2018, I switched to using an iPad Pro and Apple Pencil when not using my computer, replacing notebooks, scraps of paper, Post-It notes, and ink-leaking pens. After a year of being digital, here are some of the processes and habits I have picked up.

Testing C++17 Projects in Xcode With XCTest

The vast majority of development I perform is in C++17 on an Apple Mac computer using Xcode. For a while now, I have been using Catch2 as my unit testing framework, and it’s absolutely excellent. But it’s not integrated into the Xcode IDE, and I wanted the ability to use Xcode’s excellent testing and test-debugging tools to improve my productivity and workflow.

In this post I will show you how to set up a simple standard C++17 library project in Xcode 10 and then add XCTests to it.

You can find and download the sample project from Github at HiltmonLibrary-with-XCTest.

On Removing Comments

Today I removed the comments from hiltmon.com for one reason and one reason only — the comment service, Disqus, that I used — was tracking you across a multitude of sites and is selling your data to strangers without your (or my) permission. I no longer want hiltmon.com to be one of those collection points.

I’m going to miss the comments though. Your comments had been insightful, gracious and a wonderful way to connect with my readers, and to allow my readers to connect with each other. Unlike more popular sites, I never had to deal with comment spam, bad players or any of the common nastiness on the internet, just lucky I guess.

I have deleted my Disqus account and regenerated this site. I have no backup nor copy of any of your information, and now, neither do they. If you want to comment on any post, please do, I love them, just tweet me at @hiltmon.

I’m not 100% privacy clean, yet. Hiltmon.com still uses Google Analytics for site analytics, and now that Carbon has been sold to BuySellAds, I may, very soon, be removing the ad as well.

Follow the author as @hiltmon on Twitter.

Stop and Think

When I started out as a developer and designer, I knew I was clever. When folks asked me to design and develop a software product, I would ask a few questions to confirm that I understood what was asked of me, listen to their answers, then set about making the product. Request, build, ship. Easy!

My mentor, who was definitely smarter than me, used to yell at me to Stop and Think.

When this first happened, I assumed he meant that I had to stop and think about the software I was going to design and create. But that made no sense to me. Why? Because even back then I knew to run a software development process, not to jump straight into coding. And that process would take care of this undefined stopping and thinking. I would have to think to write down what was asked (as Requirements), what was expected (Deliverables) and what I was going to implement to make that happen (Design). All of those required thinking. What was he on about?

Over drinks, he explained it to me.

The stop and think was not about the product asked for, he agreed that the development process would take care of that. The stop and think was to step back and look at the bigger picture, the context, the motivation, the larger workflow or process that this requested product would become a component of and who or what would be impacted by this project.

I had, naïvely, assumed that the client always knew and deeply understood these things, had done their homework and had come up with the request, requirements and deliverables based on their perfect understanding of the context. The reality is that they had felt a need, guessed a partial or bare solution and had asked for it, with no bigger picture or further thought.

That bigger picture, that further thought, that was on me, the designer and developer.

Stop and think was meant to make me ask questions outside the regular development process, questions like

  • How will the deliverables be used, who will use them, and what are their needs?
  • Why is this product needed at all?
  • Is there a reason that the product needs to be done the requested way?
  • What business problem, no, what real problem, are they trying to solve?
  • And how important, in the flow of things, is this project?

The answers to these non-product questions are what he needed me to stop and think on. You see, if the answers supported the requested product and its deliverables, then we’re good. But more often than not, the answers to the bigger picture questions did not match what was asked for in the product — because the person asking for it had not Stopped and Thought either. By asking these questions, the need, context, purpose, scope and nature of the product would clarify, solidify and change.

For example, a simple data load project sounds straightforward: get the data, write it somewhere. Most folks would spin up a script to load and dump the data into a table just as it came in. It’s fast, works, and is how most folks do it. But if the developer of the loader knows who will read the result, how they want it and why, the developer can design data cleansing and transformation routines and better naming conventions, and remove extraneous columns, loading the data to better suit the needs of downstream teams, saving both teams time and money — for the same development cost.

What really comes out of stop and think are better product definitions, better ideas on how to design and implement them, and a better resulting product.

My points:

  • Knowing the bigger picture helps design a product that fits that bigger picture instead of creating more tech or business debt for later on.
  • Knowing the bigger picture will often change what the product does, saving rework later when it does become clear, and makes everybody happier.
  • Knowing the bigger picture also helps to prioritize projects and project dependencies, and will help when planning and scheduling work.
  • Conversely, not knowing the bigger picture, or assuming the client does, leads to worse software, confusion, miscommunications, and the litany of problems all IT teams face daily.
  • The intellect and knowledge of the software designer is underrated. Maybe the software folks have a better way of getting to the same goal with different deliverables and a different product build.

Follow the author as @hiltmon on Twitter.

Migrate Octopress / Jekyll Posts to Ulysses

I wanted to move my published writing stashed in my Octopress/Jekyll site into my current writing workflow environment, Ulysses. Dragging and dropping the files from the _posts folder was not an option, because:

  • The file names were messy
  • There is no title in the file body; it’s in the Markdown metadata
  • I wanted to keep the publication date on the imported files

So, I wrote a horrible script to do it.

The script takes a _posts Octopress or Jekyll folder and processes each file into an OUT_PATH as a set of clean Markdown files, making a few changes along the way. It’s a mix of Ruby to handle the YAML front matter and shell commands to do the actual work. And it runs as follows:

  • For each file in the IN_PATH
  • Parse the YAML front matter to get the title and publish date
  • Create a new file using the title and write a Markdown H1 to it
  • Append the existing file data to the new file
  • Depending on how the publish date is formatted (old was a string, new is a time), touch the new file to set the original publish date

After that, I just dragged and dropped the Markdown files into Ulysses which kept the formatting and dates.

The script itself is below. It’s terrible and you should not use it as-is. But maybe it has some ideas to help you run your own conversion.

#!/usr/bin/env ruby

require 'rubygems'
require 'yaml'
require 'time'

# IN_PATH = "/Users/hiltmon/Projects/Personal/HiltmonDotCom/source/_posts/"
# IN_PATH = "/Users/hiltmon/Projects/Personal/NoverseDotCom/code/noverse/source/_posts/"
IN_PATH = "/Users/hiltmon/Downloads/_posts/"
OUT_PATH = "/Users/hiltmon/Desktop/Blog/"

Dir.new(IN_PATH).each do |path|
  next if path =~ /^\./

  basename = File.basename(path)
  puts "Processing #{basename}..."

  posthead = YAML.load_file(IN_PATH + path)
  title = posthead['title'].sub("'", '')

  # Create a new file and add the H1
  cmd2 = "echo \"# #{posthead['title'].strip}\n\" > \"#{OUT_PATH + title}.md\""
  %x{#{cmd2}}

  # Append the original Markdown
  cmd1 = "cat #{IN_PATH + path} >> \"#{OUT_PATH + title}.md\""
  %x{#{cmd1}}

  # Mess with the file time
  if posthead['date'].is_a?(Time)
    cmd3 = "touch -t #{posthead['date'].strftime("%Y%m%d%H%M")} \"#{OUT_PATH + title}.md\""
    %x{#{cmd3}}
  else
    file_date = Time.parse(posthead['date'])

    cmd3 = "touch -t #{file_date.strftime("%Y%m%d%H%M")} \"#{OUT_PATH + title}.md\""
    %x{#{cmd3}}
  end
end

The result: a folder of clean, correctly dated Markdown files, ready to drag into Ulysses.

Follow the author as @hiltmon on Twitter.

Notification City

It is best for your technology stack to tell you what went wrong as soon as it goes wrong, so that problems get the right level of attention in the right timespan.

I run a massive technology stack at work, filled with multiple servers, plenty of web applications, loads of C++ programs and massive numbers of scheduled and recurring tasks, and I do it with an insanely tiny team and no DevOps folks. Almost all of the time, our software works as designed and the entire network of systems runs just fine. Just like yours.

When systems fail, we get notified.

Depending on the nature of the failure, we get notified in different ways. This helps us quickly decide whether we need to stop what we are doing and react, or wait to see if more notifications follow.

And it works.

In this post, I will stay out of the technology and explain our thinking and implementation of notifications, how we send them, monitor them, use them and manage the volume so we are not, in any way, overloaded or subject to unnecessary noise.

Crashes and Notifications

As a deliberate design decision, my team writes software that intentionally crashes when things are not right. We do not catch and recover from exceptions; we crash. We wrap database changes in transactions so the crash is safe. We do not, under any circumstances, run systems that continuously and expectedly fail and quietly self-restart.

We rely on notifications to quickly tell us of the crash so we can see what went wrong and rectify the issue. We ensure, where possible, that our error messages are both clear and identify where the crash happened.

This design is justified because our systems are a network of interdependencies, so a failure in one process over here can impact, or require reruns, over there. Since we are a small team, building a DevOps infrastructure to map and auto-recover on all of these paths, which are constantly changing, is not optimal. We’d spend all our time doing it.

And so we do it simply. Almost all our processes are launched from simple shell wrappers or rake tasks. When an execution fails, the shell wrapper captures the error, and fires off the appropriate notification to the appropriate channel, then logs it and pops it in the right chat room.

Aside: This works because we also design all our processes to carry on where they left off, so even the most critical real-time systems just carry on from where they left off on a restart after a crash. How we do that could fill a bunch of posts.
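A wrapper like this can be very small. Here is a minimal Ruby sketch of the idea; the post_to_channel helper and the channel names are hypothetical stand-ins for however you actually deliver messages (chat webhook, logger, voice alert):

```ruby
#!/usr/bin/env ruby
# Minimal sketch of a notify-on-failure wrapper. The post_to_channel
# helper is a placeholder: wire it to your chat room, log store, etc.

def post_to_channel(channel, message)
  # Placeholder delivery: print; in practice, POST to a webhook or log.
  puts "[#{channel}] #{message}"
end

# Run a command; on success, log quietly; on failure, fire a notification
# to the error channel with the exit code and the last line of output.
def run_and_notify(name, command)
  output = `#{command} 2>&1`
  status = $?.exitstatus
  if status.zero?
    post_to_channel(:log_only, "#{name} succeeded") # saved, never alerts a human
  else
    post_to_channel(:errors, "#{name} failed (exit #{status}): #{output.lines.last}")
  end
  status
end

run_and_notify("nightly-load", "false") # "false" always fails, so this notifies
```

The wrapper returns the exit status, so a scheduler calling it can still chain or rerun dependent tasks as needed.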

Errors and Failures

No matter how good your software quality, things will go wrong. Programmers introduce bugs, bad data causes failures, hardware fails, and external systems are not always there. Some of these issues are easily dealt with, the rest need human intervention.

For example, a large proportion of our software gets data from a remote location, munges it and bungs it into the database (or the other way around). More often than not, that data is remote and third-party. And reasonably frequently, their server is down, their data is late, or the data is bad.

Of course our code “attack dials” when servers are unavailable or data is not present, so we do not get notified of these events — that would flood us with useless notifications. But, if the process has been dialing a while, or the data is not available in the dial window, then we get a notification. And if the data is bad, the program that munges it will crash, sending a notification.
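The “attack dial” loop might be sketched like this in Ruby; the fetch_data placeholder and the window lengths are hypothetical, standing in for whatever remote source your process polls:

```ruby
# Minimal sketch of retry-within-a-window ("attack dialing"). The
# fetch_data below is a placeholder source that pretends the remote
# side is down for the first two attempts, then delivers.
$attempts = 0
def fetch_data
  $attempts += 1
  $attempts < 3 ? nil : { rows: 1200 }
end

# Quietly retry until the window closes. No notifications fire while
# dialing; only the caller, on a nil return, sends the single
# "data not available" notification.
def dial_for_data(window_seconds: 600, interval: 30)
  deadline = Time.now + window_seconds
  while Time.now < deadline
    data = fetch_data
    return data if data # success: no notification needed
    sleep interval      # still down or late: retry without alerting anyone
  end
  nil                   # window exhausted: caller notifies, once
end
```

Keeping the notification out of the loop is the point: one failed window produces one alert, not one alert per retry.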

Processes that depend on this data do not notify that the data is missing.

Why not?

We already know that the data is missing from the first notification, no need to pile on more notifications saying the same thing. Their failures are also logged, but in a different location and chat room from the primary. This model helps recovery and reduces confusion in identification.

Aside: We also have a Wiki that tracks data dependencies, so we know which processes to rerun after we correct a failure. This wiki lists all the commands to paste, so it’s easy. Whenever we face a new situation, we update the wiki.

Success and Last Runs

Clearly we do not want an alert when an expected process runs successfully; that would create an insane flood of notifications. We still send them (with some third-party software we cannot stop them), just to a different destination. These notifications are saved so we can review them if we want, but they do not alert the team.

Note that failures are also saved, so we can go back and see where and when what fails more often.

Live Process Monitor

Real-time C++ programs are more difficult to manage. We write them to be as “bullet-proof” as possible, but they too can fail, and are expected by design to do so. Known bad-data situations are dealt with, but we want unusual situations to take them down.

For these, we, the humans, need to drop everything and act. For this we run a mix of open-source and our own home grown process monitors. As soon as a monitored program fails, we get a notification on a bunch of channels:

  • Our Sonya, an electronic voice on an ancient Mac Pro, loudly tells us what failed. Having a “Bitching Betty” voice state the nature of the problem really gets our attention. Aside: Sonya as in “gets on ya nerves”, thanks British Rail.
  • We get an iMessage, on our phones and watches, for when we cannot hear Sonya.
  • The error Notification Center chat room gets a message as well, which pops up a UI alert.

The live process monitor also watches our “Coal-mine Canary” processes. There are a few threads we run that crash early and quickly when things go wrong, oftentimes quickly enough for us to be on it when the important stuff gets ready to fail. These also get the Sonya alerts.

For example, we have a process called “universe” that runs all day long and it depends on a large number of our systems and services, so it’s the first to die when things go wrong, a perfect “Coal-mine Canary” candidate. When Sonya squawks that “The universe has collapsed”, we know bad things have happened.

Ongoing Notifications

If we see the same notification and deal with the same interruption over and over again, then we know we have an ongoing problem. In this case, we stop all work and get it resolved. The cost in time and lost productivity of dealing with the same issue over and over again is not worth it. Especially in a small team of developers. Taking the time to get it fixed is always worth it.

To be clear, we do not muffle notifications and silently restart “known” failures. We fix them, over and above all other work. Silence means all is well, not “all is well except for the known failures”.

It also ensures that when we do get a notification, we cannot and do not ignore it. The notification signals a real issue. We have no reason to tune out notifications, and therefore no reason to start ignoring them.

Regular System and IT Activities

Of course, being a “normal” tech team, we also leverage the notification infrastructure for regular, non-failure mode notifications. We just send these to a system that logs them, a system we can glance at when we need to see what happened. These notifications are not sent to us humans in the regular chat rooms, so do not bother us. This includes:

  • Hardware monitors reporting successful checks
  • Runs that succeeded
  • Programmer commits
  • System deploys
  • Software updates

Notification Volume Management

Most notification systems projectile-vomit notifications; they are as chatty as a flock of seagulls over a bag of chips or a lawyer in court. The negative is that no one can deal with the noise and still spot the real issue, and eventually they tune the noise out.

So how do we manage the flood of notifications and keep them to a manageable trickle of things we need to respond to?

  • Rule number one: we do not notify humans for informational purposes or success. That is all noise and we do not send it out, only log it. If the notice is expected or does not require immediate human response, do not send it to people, just save it.
  • Use different channels for different importances. If immediate attention is needed, set off Sonya and the iMessage alerts. If not, send it to the monitored chat room to be dealt with later. And if no response is needed, log only.
  • Notify once and once only; flooding the chat room with a bunch of notifications that were triggered by a single failure adds noise and makes it harder to find what caused the cascade. Trust the humans to know what needs to be done to recover from an event.
  • Get an intelligent repeating voice alert, like our Sonya, on the job for systems that must be up to transact business, and keep her repeating the issue every few seconds until the system is back up. It’s noisy and annoying, but others can hear when things are wrong and when they get back to normal. Oh, and do not send these notifications by the normal channels, so they do not fill up your chat rooms.
  • Use a chat room for failure notifications. Firstly, you can get alerts when new messages come in, but more importantly, the responder can mark which notifications have been dealt with by responding to the messages. So, if more than one person is looking, that chat room will tell them which issues have been handled, and by whom. That way, not everyone gets involved when an alert comes in. It also allows us to scroll back to see common failures and note what was done to rectify them.
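These rules boil down to routing each event by severity. A minimal Ruby sketch; the channel names (and the voice and iMessage hooks they stand for) are hypothetical:

```ruby
# Minimal sketch of severity-based notification routing. Returns the
# list of channels an event of the given severity should go to.
def route_notification(severity)
  case severity
  when :critical # real-time system down: voice alert, iMessage and chat room
    [:sonya_voice, :imessage, :error_chat_room]
  when :failure  # needs a human, but not this second: chat room only
    [:error_chat_room]
  else           # informational or success: log it, never alert a person
    [:log_only]
  end
end
```

Keeping the routing in one place makes it easy to audit which events can interrupt a human, and to demote noisy event types to log-only.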

Notification City

In our Notification City:

  • When Sonya starts talking and our iMessages start pinging, we jump. She tells us which real-time system has failed and we go and see why, fix it and restart.
  • When the “Coal-mine Canary” processes fail, Sonya and iMessage let us know as well. We look at the chat room to see what dependency triggered it.
  • When a regular thread fails, it gets posted to the chat room, and we get a UI notification that it happened. We can then see what went wrong, make the necessary calls, get it going again, run the additional processes to recover, and respond in the chat room that the issue was resolved.
  • When all goes well, we get no notifications at all, nothing in the chat room and no interruptions, and we can focus on our work. Later on, we can look at the logs and status screens to see all was well.
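The Sonya behaviour above boils down to a simple loop: keep announcing the failure until the health check passes again. A hypothetical sketch (the “speech” is captured in a vector so the loop can be tested; the real alert would speak aloud and sleep a few seconds between repeats):

```cpp
#include <functional>
#include <string>
#include <vector>

// Sonya-style repeating alert: announce the failure over and over
// until the health check reports the system is back up. Returns the
// list of announcements made, in place of actually speaking them.
std::vector<std::string> repeatUntilHealthy(
    const std::string& message,
    const std::function<bool()>& healthy) {
    std::vector<std::string> spoken;
    while (!healthy()) {
        spoken.push_back(message);  // real version: speak, then sleep
    }
    return spoken;
}
```

Because she stops the moment the check passes, everyone in earshot hears both the failure and, by her silence, the recovery.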

This allows us to focus on what we need to do, yet respond appropriately when notified. We’re not inundated with noise or unnecessary messages, so we do not need to tune them out.

When we hear or get a notification, we know that the situation is exceptional, and, depending on the channel, we know whether to jump now or have a few minutes to respond.

Our Sonya has been quiet today, all is well.

Follow the author as @hiltmon on Twitter.

Coding Style Standards in 2017

I’ve been writing software for well over 30 years. I’ve long since passed my 10,000 hours and gotten rather good at it. And I still write to a very rigorous coding style standard.

You’re kidding, right? It’s 2017; code style guides are so passé.

Nope. I’m deadly serious.

Get off my lawn

Some of us remember when coding styles were de rigueur. When you climbed off your commuter dinosaur and joined a coding team, the first document they gave you was the coding style guideline. It was a thick, three-ring binder that covered everything from naming, spacing, and commenting to the position of braces, whitespace rules, spaces or tabs, and the line length rule.

And, when you tried to submit your first code, the code you were so proud of, the team destroyed you in review for not following their stupid guidelines. There you sat, knowing your code worked, wondering why these people were being so anal about spaces and where your effin brackets were and why you could not use the letter m as a variable name. “The code works, dammit,” you thought to yourself, “what is wrong with these people!”

The reality was that these folks knew something we rookies did not. That it was easier for them to read, review and smell-check code written the way they expected than to try to decipher yet another programmer’s conventions. It saved them time, saved them effort and allowed the pattern matching engines in their brains to take over to enhance their understanding.

Back then, code quality depended on other people reading, reviewing, understanding and smell-testing your code. It was up to humans to see if things could go wrong, find the issues and get you to fix them before the system failed. This was how the Apollo code was done.

The coding style guideline made that job a whole bunch easier.

The Bazaar

The rise of open source, good semantic tools, formatters, and linters, along with the rise of the code ninja, has led in many cases to the demise of the coding style standard.

Most open source projects are a hodgepodge of coding styles because there is no leader, no team-boss and no benevolent dictator. Some, like LLVM and Python, do have such a character, and therefore a style guide. Most do not.

Some languages, like Go, have an opinionated style and provide a formatter. And some teams use formatters to make their code look “better”.

And don’t get me started on projects that intermix various open-source code-bases that use conflicting styles. Aside: generated code has to be excluded, as it gets regenerated on each compile. I’m looking at you, Google protobuf!

The big issue is that code these days is less readable by humans, and is less frequently reviewed by humans. Much code is written using a mishmash of open-source code, pastes from StackOverflow and a bit of programmer code. Paying homage to some random management-mandated format using a tool does not improve the quality, readability and maintainability of the code.

The great debates

Those of us who do care believe that our coding styles are the best. Of course we do, we came up with them. We use them daily. We find code to our style easier to read. Writing to our style is now a habit.

Bring in another programmer and the war begins. Arguments erupt on line lengths, tabs or spaces to indent, indent sizes, brace positions, early exit, and function lengths. Of course, the other programmer is wrong, their habits are bad, they are idiots and their code looks terrible.

The truth is that these arguments are stupid since no side is “correct”. It’s a taste thing, a personal preferences thing, a habit thing and sometimes a power play.

At the end of the day, it really does not matter what you choose, as long as you all agree and all adhere to the agreement. Win some, lose some, it does not matter.

What matters is being able to read, review, and fix each other’s code with ease. And that requires a coding style standard.

So why still standardize in 2017?

Because:

  • Code is meant to be read, reviewed, modified and refactored by humans first, and compiled second. Code written to an agreed style is way easier for humans to process.
  • When not sure what to do or how to write something, the standard steps in. When to switch from a long parameter list to a parameter object, how far you can take a function before refactoring to smaller functions, and where in the code-base to place a certain file are all decided by the style standard.
  • Naming in code is a nightmare. Styles define how to name things, and what case to use, making it easier to choose the right name. Most importantly, the reader can jump to the right inference when reading a name in the code.
  • We don’t care who wrote the buggy line, blame is not what we do. But everyone in the team should be able to read, diagnose and fix it. If you want to find the fastest way to speed up maintenance times and bug detection times, write to an agreed style.
  • The debates are over, we can start to form habits, and we can all focus on writing great code.

So I guess you use an old standard?

Nope, we update ours every year. Most of the time it changes little. The original space vs indent vs line length stuff remains mostly the same. Those debates are over and the habits formed.

But languages change, language practices change, and these lead to changes in the standard. We learn more about our own code over time. Misunderstandings in naming inferences change the naming conventions, identified bugs lead to file layout changes and better patterns identified by team members get added. And old, unnecessary guidelines are removed.

For example, our 2016 standard added the requirement that single-line block statements must be wrapped in braces in C++, so a “goto fail”-like issue could never affect us. It finally allowed the use of function templates, now that our tools can handle them properly and we programmers have finally gotten a handle on them. It changed the file extension on all our C++ headers to “.hpp” because our compilers treated them differently. And it moved function parameter lists onto their own lines in headers so we could improve header file commenting and document generation. Nothing earth-shaking, but still huge improvements to readability.
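The brace rule is easiest to see by the shape of bug it prevents. A minimal sketch of my own, not an excerpt from our standard:

```cpp
// Unbraced, a later edit that adds an "indented" second line is a
// silent bug -- only the first statement is actually guarded, which
// is exactly the shape of Apple's "goto fail" SSL bug:
//
//     if (x < 0)
//         x = 0;
//         logClamp(x);   // looks guarded, runs unconditionally
//
// With mandatory braces, the guarded block is explicit and an extra
// line cannot silently escape the condition:
int clampToZero(int x) {
    if (x < 0) {
        x = 0;
    }
    return x;
}
```

A reviewer never has to squint at indentation to know what the condition covers; the braces say it outright.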

So all code is to standard?

Yes, and no. All new code is to standard. We do not stop all work and go back to correct old code; there is too much of it, and we have better things to do.

But, each team member knows that if they ever need to touch, change or fix old code, they need to refactor to the new standard at the same time. We do not commit non-standard code. Over time, the old code changes to match the new standard.

Ok, summarize this for me

  • Code is meant to be read by humans first.
  • Code written in an agreed style is way easier for humans to find, read, understand, diagnose and maintain.
  • Moving to a new standard takes time to build the habit, but once it becomes a habit, writing to standard becomes just part of the flow.
  • The standard needs to change as languages change, as programmers get better and as new members join the team.
  • All new code is written to the latest standard, all code committed is to the new standard, all old code is refactored on touch.
  • Coding style guidelines are just as important in 2017 as they were when we rode dinosaurs to work.

Follow the author as @hiltmon on Twitter.