On walkabout in life and technology

The Annual Dependency Library Upgrade Process

At work, we write a lot of code. To remain productive, we reuse the same proven libraries and tools over and over again. Which is fine. Until we start seeing end-of-life notices, vulnerabilities, deprecations, performance improvements and bug fixes passing us by. At some point we need to update our dependencies for performance and security.

But it’s not that easy.

Take some of the libraries we use:

  • Google’s Protocol Buffers are amazing. We’ve been on 2.6.1 for ages, but 3.1.0 is out and it supports Swift and Go, two languages we surely would like to use. But the proto2 format we use everywhere is not available in the new languages. We need to migrate.
  • ZeroMQ moved to a secure libsodium base in 4.1, making it much safer to use. But the C++ bindings from 4.0.5 are incompatible. We need to migrate.
  • g++ on CentOS 6 is ancient: version 4.4.7, from 2010. We’ve been using the devtoolset-2 edition, g++ 4.8.2 from 2013, to get C++11 compatibility, with a few library hacks. But that version of g++ produces horribly slow and insecure C++11 code. We skipped devtoolset-3 even though its g++ 4.9 was better. devtoolset-4 is out, using g++ 5.2.4 from 2015; still not the latest, but it is much better at C++11 (without our hacks), more secure and faster. Yet it is ABI incompatible. We need to migrate.
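The proto2-to-proto3 jump mentioned above is, at the .proto level, a syntax and semantics change. A rough sketch of what the migration looks like (the message and field names here are hypothetical, not from our code base):

```proto
// proto2 file: field presence rules are explicit.
syntax = "proto2";

message Quote {
  required int64 id      = 1;
  optional string symbol = 2;
  repeated double prices = 3;
}
```

```proto
// proto3 equivalent (a separate file): "required" and "optional" are gone;
// scalar fields always have default values and presence is no longer tracked.
syntax = "proto3";

message Quote {
  int64 id               = 1;
  string symbol          = 2;
  repeated double prices = 3;
}
```

The removal of required/optional and the change in default-value semantics are why, at the time, proto2 definitions could not simply be carried over to the proto3-only toolchains of the new languages.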

The amount of work seems staggering given we have well over 100 protobufs used across our code base, ZeroMQ everywhere and everything is compiled for production using devtoolset-2. The old libraries and tools are a known, proven platform. The current code is stable, reliable and fast enough. It ain’t broke.

The benefits are also hard to measure. Given all the effort to upgrade, do we really get that much faster code, that much more secure code? And what about the code changes needed to support new interfaces, formats and ABIs? What does that get us?

For most IT shops, the discussion stops there. “It ain’t broke, don’t fix it!”, or “All pain for no gain, not gonna play.” They stay on the known tools and platforms forever.

For my IT shop, things are different. We want to use new tools, new languages, new platforms yet remain compatible with our existing services. We need to be secure. And we really do need to eke out each additional microsecond in computing. No, if it ain’t broke, break it!

So, once in a while, generally once a year, we update the platform. Update the libraries. Update the tools. Update the databases.

And we do it right.

First, we try the new libraries on our development machines. Homebrew installs make that easy for the dependencies. Rake tasks make it easy to upgrade our Ruby dependencies and Rails versions. We build and test our code in a migration branch and make sure it all works, changing to new interfaces and formats where necessary.

We then spin up a virtual machine on our production operating system (CentOS 7 now), install the new compiler and dependencies, and rebuild everything there. Since most issues are resolved in development, we only find g++ quirks in this test.

And then one weekend, we run the scripts to update our production servers to the new tools and dependencies and deploy the new versions.

And since we do this every year, it runs like a well-oiled machine.

It helps that we have tools to recompile, run and test our entire code base. It helps that we have tools to stage and deploy all our code automatically. And it helps that we have done this before, and will do it again.

Long term, the benefits are amazing. We can try new platforms with ease. Our code gets better, faster and more secure all the time. The need for workarounds and hacks and platform specific coding becomes less and less. The ability to nimbly move our code base grows each time.

Many of the projects we want to take on become possible after the annual upgrade. That’s why we do it.

If it ain’t broke, break it.

And it really is not that much work!

Follow the author as @hiltmon on Twitter.

Minimal Project Management - 6 Months Later

Just short of six months ago, I wrote about how I was transitioning to Minimal Project Management as my team was growing at work. So, how did it go? Did it work? Any problems?

In short, after a few false starts getting our heads around the intent of the Statement of Work document, it went — and continues to go — very well. Projects we used to start and never finish are now completing and shipping. Communication within the team and with our users is better. And our throughput is up.

In fact, now that the progress and task lists are more visible to management and users alike, assignment and prioritization are also better. The Management Team is more aware of the complexities in projects — from the Statements of Work — and why they sometimes take so long to get done. We are also less likely to kill an ongoing, assigned piece of work when the progress and involvement are clear. We also think harder about what we really need to get done next instead of just assigning everything to everyone on a whim.

The Statement of Work has evolved into a thinking tool first, and a communication tool second. My team now uses the time writing the Statement of Work to think through the options, the details, the knowns and unknowns, and the questions that need to be asked and answered. They are spending more and more time working the document up front instead of diving into coding and “bumping” into issues. Just the other day, one of the developers commented that the programming for a particular project would be easy now that the way forward was so clear.

I also see our productivity going up. We may take more time up front to write a document, but we take far less time coding, testing and shipping, as we’re not futzing around trying to figure things out. The total time to ship is dropping steadily as we become more habitual in thinking things through and writing them down clearly.

Our users also look these over. This leads to discussion, clarification, and the setting of expectations as to what will actually be shipped. It also leads to more work, but we track these add-ons as separate change requests or future projects. When we ship, our users are far more aware of what the changes are and how they impact them.

The weekly review is also easier because, since the whole team reads all Statements of Work, we all know very well what every other team member is working on. For fun, I sometimes get team members to present each other’s deliverables for the week, a task made easier by the process we follow.

Some things have not changed much. We still get a large number of interruptions, but my team is far more likely to triage the request and decide whether to accept the interruption and fix the issue, delay it until they get [out of the zone](https://hiltmon.com/blog/2011/12/03/the-four-hour-rule/), or push it off as a future project to deal with later. We still get a large number of scope changes, and these too get triaged better. And we do get fewer priority changes, mostly because those that change the priorities see the work already done and are loath to interrupt.

Of the issues faced, most have been resolved through practice.

Programmers would rather code than write documents. So the first few Statements of Work were treated as a speed bump on the way to coding up a solution, a necessary step to please the “old man”. After running through a few iterations, the benefits of doing the thinking, checking and discussions up front became quite clear. Writing is still a drag, but the benefits are now clear and there is more enthusiasm in writing and reviewing these Statements of Work within the team.

The other issue, the level of detail to be written, is also being resolved through practice. Initially they wrote very high-level Statements of Work, hoping to resolve assumptions and misunderstandings during coding — the old way. But as the early reviews by me and by users showed them, their readers were confused, identified missing components, pointed out areas not documented and therefore not thought about (or thought through), and some were just plain wrong. The next iterations were more detailed, and the next more detailed still in areas where details were needed. We’re still evolving where and when to dive deeper in a Statement of Work and where not to, but the documents are certainly getting better and the coding process way faster.

The result of the change to [Minimal Project Management](https://hiltmon.com/blog/2016/03/05/minimal-project-management/) is easy to see: more projects shipped correctly and more quickly, with better discussion and problem solving up front and faster coding to the finish line. And our communications and prioritization processes run smoother.


Attractive Multi-Line SQL Statements in C++

I often need to embed SQL statements in C++ code. Unfortunately, many of mine are long and complex. Even the simple ones are wide. This leads to the following ugly code:

std::string s8("SELECT id FROM lst_quotes WHERE route_id = $1 AND lst_request_id = $2 AND quote_id = $3;");

… which means I need a massively wide screen to view it (and it violates my 80-column rule), it’s not formatted legibly and even with wrap, it’s hard to understand — and this is a simple example. It’s a maintenance nightmare.

Going multi-line, which is how SQL is usually written, makes things worse:

std::string s8(
  "SELECT id "
  "FROM lst_quotes "
  "WHERE route_id = $1 "
  "AND lst_request_id = $2 "
  "AND quote_id = $3;");

C++ compilers helpfully merge the strings, but I need to put in the quotes around each line (and have a space at the end of each line before the closing quote).

Or this monstrosity:

std::string s8("\
  SELECT id \
  FROM lst_quotes \
  WHERE route_id = $1 \
  AND lst_request_id = $2 \
  AND quote_id = $3;");

… where the end-of-line backslashes still need to be added or the compiler gets upset.

Ugly. Hard to maintain. Hard to read. Impossible to copy and paste. Unmaintainable.

I want to be able to paste in SQL. Just SQL. As Is. From my database query tool.

The Solution

The solution is a simple C++ variadic macro placed at the top of the file:

#define SQL(...) #__VA_ARGS__

When used, this macro concatenates all lines between the parentheses, gets rid of newlines and, as an additional bonus, converts multiple white spaces into single ones. So this code (note the sexy formatting and excellent use of white space):

std::string s8( SQL(
    SELECT id
    FROM lst_quotes
    WHERE route_id = $1
    AND lst_request_id = $2
    AND quote_id = $3;
));

Looks and works great. I can format, make legible and paste SQL in as necessary — the way SQL was meant to be.

When compiled, the resulting string is:

SELECT id FROM lst_quotes WHERE route_id = $1 AND lst_request_id = $2 AND quote_id = $3;

… which is the desired compact version to pass on to the server.

Legible. Easy to maintain. Easy to read. Simple to copy and paste. Very maintainable.


Apple Watch - After 1 Year

The Apple Watch turns 1 year old next week. If you follow the popular press, you’d think the device was rubbish and a complete failure.

I vehemently disagree.

It may be a limited version one, but it is a flawlessly engineered timepiece that is conducive to small, yet significant, life hacks.

I wear, and continue to wear, my Apple Watch every day. I have done so since the day it arrived, the result of a late-night wake-up alarm, a few taps on the Apple Store app and a return to sleep on opening night.

The Rings

If there is one reason — by far the top one — that I wear the Watch as much as possible, it’s those perky exercise rings. I have set the calorie and exercise time goals just above my average for a day with gym exercise. After a year, I am still gamed into walking the long way home and feeling bad on those lazy Sundays when the rings gain almost no color.

The Apple Watch has quietly encouraged me to move and exercise in ways that I have never been able to do myself. I have watched my average heart rate for a 30 minute walk drop to normal levels. And the stand-up reminders, which I use to get up and refill my water glass, seemed to keep me healthier this past year.

The Notifications

I could easily replace the Apple Watch with a dedicated fitness device to drive the rings that rule them all, but nothing saves me more time than having notifications on my wrist.

As described in last year’s How the Apple Watch Has Changed My Behavior for the Better, the process to view notifications is much faster (and less error-prone) and the need I feel to react to them is much smaller when viewing them on the watch.

But over the year I have done something few people have: I have added new notification sources to my world without noticeably increasing my notification volume. Most of the new sources are internal to work, notifying me when systems fail or have issues. I feel these vibrations on my wrist and know — just know — whether to interrupt what I am doing and respond. The result: fewer business issues and faster responses when they do occur.

The Time

I was a watch wearer before, and will be for life. The device on my wrist needs to be an excellent timepiece in design, feel and engineering, and in allowing me to glance and “know” the time. Prior to the Apple Watch, I wore a Titanium Special Pilot Edition Citizen Eco-Drive watch, a solar-powered engineering marvel. Its face was as familiar to me as my own.

For much of the year, I used the Utility face on the Apple Watch to ease the transition with an analog display. It took no time to get used to. And the wrist flick needed to activate the screen works every time for me. It’s the same movement I guess I used with the old watch. I still switch to Utility for dress-up.

These days I run modular to see more data in the same glance. And reading that, too, has become habit.

The Next Appointment

It’s not unusual for people to pop appointments into my calendar at work. My fault: I gave them access to my calendar for just that purpose. When deciding what task to work on next, I need to know when my next appointment is. If it’s far away, I will select a programming challenge and enter the zone. If it’s nearby, I will work on something smaller requiring less focus.

Before the Apple Watch, I would go to my computer and launch the Fantastical menu bar applet to see what’s next. But even that requires the eye to scan down the list to find the next appointment. Fantastical fades past appointments to make this process easier.

On the Watch, I just flip my wrist up and the next appointment is in the middle of my Modular view. Way quicker.

The Weather

I live in a high-rise in Manhattan. The best weather report comes from looking out the window. But I used to have no idea of the temperature outside, as the building is heavily heated. A bright sunny day may look warm from a heated room, but be blisteringly cold.

Having the current temperature (in Celsius, I am not an animal) complication on the watch face has saved me many times from going out without an appropriate coat on freezing sunny days.

The Apple Pay

It took a while to figure out the double-tap needed to trigger Apple Pay on the watch. But once I had it figured out, I started using it more than Apple Pay on the iPhone. Even late at night after a few drinks, I can Apple Pay for a Yellow Cab with ease.

Aside: I just wish more retailers in the USA supported contactless payments. Some, like my local supermarkets, do. Many, like my local big-chain chemist (which has it on the scanner but has stupidly disabled it), restaurants and take-out food places, do not. Would someone please drag these neanderthal companies into the twenty-first century to join the rest of us?

The Band

I purchased the black Apple Watch Sport with the black sport band on day one — the nerd version. While that was awesome, I missed having a metallic band like my old watch. I seriously considered purchasing the black link bracelet but felt it was just too expensive for my tastes. I loved the look and feel of the Milanese Loop, but the silver looked terrible with the black Watch.

Recently, Apple released the black Milanese Loop. I tried my luck, and the Grand Central Apple Store had one the next day, which I purchased. And it’s amazingly great from a quality, feel and engineering perspective. My plan was to wear the Milanese Loop for work and switch to the elastomer band for the gym. I did that once. The black Milanese is now the permanent band. Er, for now.

The Apps

As expected, I rarely use Watch apps unless I want to drill deeper into a notification. The slow launch times have improved with watchOS 2, but are still too slow.

I do use the Exercise app most days and love that each exercise is now saved and shown on the iPhone.

I also, rarely, answer the phone on the Watch. It works great, but holding my wrist up awkwardly while talking feels weird.

And that’s about it.

A Grand Start

For a device that requires charging every night, has the slowest setup and app launch times, and is a tad bulky, it is still a grand start for a product. Its capabilities and utility far outweigh its first-version flaws. The press has it wrong about it being rubbish.

It also sold more than all Swiss watches in Q4 2015 and would probably make it into the Fortune 500 as a stand-alone, one-product business. And it’s only been one year. The iPhone business was in a worse state at the same stage of its evolution. The press has it wrong about it being a failure.

I am very happy with my Apple Watch, as much with the device as with how it has immeasurably improved my quality of life and behavior. No other device, including the iPhone, has hacked my ways as quickly, efficiently and unobtrusively as the Apple Watch. And this is just at version one in year one. Failure, my arse!


Spotlight Only - Nine Months Later

I think one should review one’s productivity tool load-out every once in a while. Operating system updates, other productivity tool updates and your own work practices change over time. Your tool load-out should too. Changing the muscle-memory, it turns out, is surprisingly simple, quick and easy. And your productivity usually increases.

I am a huge fan of keyboard launcher/productivity applications like LaunchBar, Alfred and, back in the day, Quicksilver. They were amongst the first applications installed on any new system, and I believed I could not work productively without them.

Nine months ago, I rebuilt my 15" MacBook Pro for some forgotten reason and decided to see if I could operate productively with only Apple’s built-in Spotlight for the core features that LaunchBar and Alfred provided.

To make it clear though, my use-cases for these products were basic, mostly using them as shortcut launchers. I never used the advanced scripting features, rarely added plugins, forgot about the additional actions on results and never touched the clipboard histories provided. Mostly because I had Keyboard Maestro juiced up to take care of those functions and more.

It’s been nine months, and I am just as happy and productive as ever. Apple did a great job with the Yosemite Spotlight power-up, and the El Capitan update made it just that much better.

So here’s a core set of Spotlight features — it’s a short list — and how it compares with Alfred or LaunchBar:

Application Launcher

Spotlight launches applications just as well as the others, including with abbreviations. For example, to launch Navicat Premium Essentials, a Spotlight of npe puts it at the top as expected.

Result: Just as good and quick.

Text/File Finder

Type a few words and it finds matching files and their contents very quickly. Unlike the commercial applications, Spotlight returns far fewer results in the HUD screen, but you rarely need more than the top four to find the file you want. Also, since El Capitan, it now searches on partial strings. Note that I also needed to add a Markdown plugin to make it work perfectly for me.

Result: Mostly the same, a longer and customizable result list would be nicer. I know you can resize the Spotlight screen, but I want more results per category, not a larger screen showing more categories.


Contacts

Type the first few letters of a person’s name and Spotlight shows their contact card. Move the mouse over an email address or phone number to get a click-through icon to send a message, etc. The commercial applications are much better here, allowing you to keep your hands on the keyboard and select an action from the card.

Result: Not as good, but enough for me to see the phone number I need to punch in.

Web Search

Spotlight does have the ability to search the web via Bing (shudder). I do not use this. If Spotlight could use Google or DuckDuckGo, it would be a different story. Instead, I have a keyboard shortcut in Keyboard Maestro that launches Safari and lets me search DuckDuckGo in one keypress. So I turned this off on day one; Bing search is rubbish.

Result: The third party applications do this way better.

Actions on Results

One thing Spotlight does not do is provide further actions once a result is found. You cannot do anything more with a found result except open it; you cannot even select the application to use or run a macro on it. Since I never used that feature, I don’t miss it.

Result: If this is your primary way of using Alfred or LaunchBar, and I suspect that’s how most of you use them, this missing feature is a showstopper.


Quick Lookups

I rarely use Spotlight to search for a stock price, weather, sports score or local movie time; these things are all far more conveniently available on my iPhone (and I have notifications set up for the important stuff).

Result: Same, same.

I am sure there is a lot of functionality that I could be missing out on, but since I am pretty much all-in on Keyboard Maestro, Apple’s built-in Spotlight works just fine for my launching and searching needs. Anything more complex gets a keystroke macro in Keyboard Maestro.


Text Expansion Using Keyboard Maestro (First Cut)

This post presents how I have set up Keyboard Maestro to replace basic text expansion from TextExpander … so far. This post covers (more to follow at some point):

  • Basic text expansion
  • When to use copy vs typing
  • Limiting to applications
  • Basic variables

Basic Text Expansion

The basic text expansion macro looks like the macro on the right.

  • It is triggered when a string is typed, in this case ;gma
  • It has a single action, Insert text by Typing, containing the text to be typed, in this case git pull; make clean; make -j 8.

That’s all there is to it. Nice and simple.

Type ;gma anywhere and Keyboard Maestro makes the replacement.

Insert text by typing vs pasting

Almost all the time, Insert text by typing is the right way to go. It’s fast enough and does not affect the system clipboard. However, for long strings, typing may be too slow.

In these rare cases, Insert text by pasting is way faster. But you need to add another step to the macro: add a Set Clipboard to Past Clipboard action after the paste to reset the clipboard back by one in Keyboard Maestro’s history. (Thanks to @TheBaronHimSelf for this tip.)

Limit To Application

Many of my snippets apply only to specific applications. To limit snippets to an application (or set of them), I create a new Group and make it available in a selected list of applications.

The snippets in this group only expand in Xcode.

Basic Variables

Keyboard Maestro has many of the same variables and abilities as TextExpander (and a whole bunch more, of course), including

  • Position Cursor after typing %|%
  • The current clipboard %CurrentClipboard%

So, for example, to create a Markdown Link using a URL on the clipboard and place the caret (the text insertion cursor) in the description area, I can use my ;mml macro.


Or to create a date heading in an Assembly Note, I can use my ;mmd macro.

This types:



You can format the date any way you like, of course.

To see what variables are available, click the Insert Token dropdown on the right of the action panel. As you can see, there is a huge number available.

I have managed to replace the majority of my TextExpander snippets using the basic text expansion macro described here, and it’s working great.

Next up: doing these with sounds and with more advanced abilities.

Hope this helps.


Apple at 40

Apple turned 40 this week, and it got me thinking about the past 40 years of our individual computing experiences.

In many ways, my own journey to now parallels that of Apple.

And I’m willing to bet your journey is similar.

The 1980s - Youthful Experimentation

In the early 1980s, Apple was young, surrounded by a wide range of competitors and the Apple II was it. Everybody who could, had one. They used them to work, to play, to learn programming and to experiment.

When the Apple Lisa project was announced and plastered all over Byte magazine, we all devoured each word written about it. We argued whether the Apple III or the Lisa was better (I was a Lisa), but both disappointed.

In 1984, Apple released the Macintosh. And changed the world.

In 1987, Apple released the Macintosh II. If there was ever a computer I wanted in the 1980s, that was it. That plus the LaserWriter recreated the entire publishing industry.

I view the 1980s Apple as a time of youthful experimentation. They experimented with several new platforms, took major risks, created unique products (some great, some horrible) and set out to change the world. The world fell in love with the GUI and the mouse.

In parallel, I was doing the same. I had Sinclair computers back then (talk about a unique platform) which were the only ones we could afford. When I went to university, I built my first PC clone, ran MS-DOS to learn programming using Turbo Pascal, and Xenix (later Minix) for everything else. I fell in love with computing and UNIX.

The 1990s - Suit Wearing Corporate Life

The 1990s were Apple trying to be corporate and becoming quite miserable about it. Time after time, Apple produced the same boring beige boxes and boring updates to the operating system, and struggled to compete against IBM-compatible systems and the Microsoft juggernaut.

Apple was trying, as all young folks do in their first jobs, to fit in to a society they did not understand and felt powerless to change. They simply did what they thought the world expected of them. They tried to act like grown-ups and play the corporate game against older, powerful, entrenched interests, and had their spirits crushed.

It’s not that Apple did not create great things in the 1990s, it’s just that they were few and far between. The PowerBooks of 1994, the Newton and System 7 (IMHO) stand out in my mind.

In parallel, I started programming, managing projects and consulting — and wore a business suit every day. Since the corporate world was on PC compatible systems, that’s what I used. MS-DOS at work, Minix at home, Windows 95 at work, System 7 at home. I did this because I thought that was what was expected of me. To act like a grown-up, settle down, suit up and play by the rules of others. It crushed my spirit, and I was miserable.

By the late 1990s, Apple was doomed. Something needed to change.

By the late 1990s, I was miserable. Something had to change.

The 2000s - Finding the Bliss

The return of Steve Jobs via the reverse acquisition of NeXT was the trigger for Apple to Think Different again. Its moment of change had come. The new iMac design language took hold, from the Bondi blue model in the late 1990s, through the beautiful iMac G4 lampshade model, to the current slab design on the desktop. The powerful Power Mac G4 Quicksilvers with their unique handles led to the amazing all-metal G5 models, and the PowerBook G4s to the later MacBooks.

And more. OS X was introduced and blossomed. The Intel transition happened. And the iPod became the most iconic, must-have product for our generation.

Apple’s products became Apple’s again. They had found their bliss. And the market found it with them. Apple changed to doing what it wanted to do, what it loved and that showed. It found its market wanted the same and shared their love of great design, music, experience and reliability.

In parallel, so did I. I replaced the suit and meetings and Windows PC with jeans, an IDE and a Titanium PowerBook G4. I changed countries (twice) and worked on the products that I wanted to work on and make great.

I had found my bliss. I was doing what I loved and was free to also live my life surrounded by people I loved doing fun things at work and especially at play.

By the late 2000s, Apple was a successful and confident organization. It had proven itself to itself and the world and was surrounded by friends. It was ready to expand its reach. And it did so in the most incredible way, by launching the amazing iPhone. No other firm could have done it; it required the unique kind of creativity and operational chops that only a happy, confident Apple could deliver. The iPhone became the one icon to rule them all.

As was I. Successful, I mean. It’s because of this bliss that I was able to move to New York, do the work I wanted to do, create some of my best products and run my own consulting business here.

All using Apple products.

The 2010s - Living the Life

By the start of the 2010s, Apple was confidently living the life. The passing of Steve Jobs and the handover to Tim Cook did not change who or what Apple was. Apple had gotten better at things it was traditionally terrible at, like services, and even better at things it was good at, like design, manufacturing and innovation. Yet it was still finding more bliss. The iPad, Apple TV and Apple Watch may not be seen as super-successful products compared to the iPhone, but each on its own would be a Fortune 500 company!

Apple has gotten confidently comfortable with who they are, what they do and how they go about it. They continue to innovate in other areas, continue to press forward, continue to enjoy what they love. They have not stagnated or settled down. They continue to youthfully experiment yet deliver like a mature firm.

In parallel, so have I. I live where I want to, do what I love to do, use the products I love to use. I am continually working on becoming an even better software designer, programmer and person. And finding more bliss. Like writing and traveling again.

I am confidently comfortable with who I have become, what I do and how I go about it. But I am not ready to settle down. I continually try new tools, languages and approaches. I continue to youthfully experiment yet deliver like a pro.

Onwards and Upwards

Apple at 40 (and, in parallel, myself a few years older) is a master of many things; it has put in its 10,000 hours. But being a master of one, two or even ten things is not good enough for either of us. We continue to experiment, to try, to put 10,000 more hours into new ideas, experiences and technologies.

I cannot see Apple slowing its pace of innovation, change and expansion. It’s who Apple is now and who Apple always wanted to be. The path to here was long and winding, and full of bumps. The path forward will be too. And because Apple Thinks Different, it will always be different and misunderstood and underestimated. Apple at 40 does not care what others think, it has found its bliss and will continue to push forward, writing its own story.

I intend to do the same. Et vous?


Dependency Limited and Conflict Free C++

TL;DR: Beware of libraries you need to compile yourself and copy-pasted code, the performance, maintenance and other hellscapes you create are not worth it in the medium and long run:

  1. Do not use dependencies that have dependencies that you have to compile.
  2. Do not use, anywhere else in your code, the libraries your dependencies depend on.
  3. Solve your own problems and understand the solutions. Do not copy-paste from the web.
  4. Always write your own code where performance and maintenance is critical.

This post specifically targets C++, but it is really a general set of rules and advice. See NPM & left-pad: Have We Forgotten How To Program? for a NodeJS scenario these rules would have prevented.

I write a boatload of C++ code these days, developing on the Mac and deploying to Linux servers. All of it is dependency limited and conflict free. What does this mean? It means that I never have to deal with multiple versions of dependency libraries, or with dependencies that have their own conflicting dependencies. At best, I code in straight-up C++11, use the STL and rely on vendor precompiled libraries that are dependency limited. The result is fast code that I and my team can read and maintain, deploy and run on all platforms with ease.

Dependency and Conflict Hell

When I started writing these products, I went mad and used a ton of third-party libraries. The theory was that these libraries were out there, already written and tested, everyone used them, and I could leverage them to short-cut my development time and effort.

And it worked.

For a very short while.

Within weeks of starting, I found one of the libraries I was using stopped compiling. It started throwing really odd errors, yet nothing had changed. It turns out that this library relied on code in an old version of another library that had been deprecated in later versions. I had added another dependency that needed a newer version of the same dependent library, and all hell broke loose.

The UNIX operating system solves dependency hell by allowing you to link against specific versions of a library, so I could link, for example, Boost v1.30.1 for the first dependency and Boost 1.52.0 for the other — as long as they were compiled into separate libraries! Which means maintaining two installs of Boost and two environments just to get my dependencies to compile. And if I add another dependency that, say, requires a third version of Boost, the complexity increases further.
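In practice that separation might look like the following build fragment. This is a sketch only: the prefixes `/opt/boost-1.30.1` and `/opt/boost-1.52.0`, the wrapper libraries `libdep_a.a` and `libdep_b.a`, and the source file names are all hypothetical, standing in for the two dependencies and their two Boost versions:

```make
# Each dependency is walled off in its own static library, compiled
# against its own Boost install, so the two Boost versions never meet
# in a single translation unit.
BOOST_OLD := /opt/boost-1.30.1
BOOST_NEW := /opt/boost-1.52.0

libdep_a.a: dep_a.o
	ar rcs $@ $^

dep_a.o: dep_a.cpp
	g++ -c -I$(BOOST_OLD)/include $< -o $@

libdep_b.a: dep_b.o
	ar rcs $@ $^

dep_b.o: dep_b.cpp
	g++ -c -I$(BOOST_NEW)/include $< -o $@

# The main application links both wrappers and sees only their narrow
# interfaces, never their Boost internals.
app: main.o libdep_a.a libdep_b.a
	g++ main.o -L. -ldep_a -ldep_b -o app
```

Every new Boost-hungry dependency adds another prefix, another wrapper library and another set of rules, which is exactly the kind of complexity this post is complaining about.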

There are many problems with this:

  • When it comes to design and architecture, I need to split up my dependencies into separate libraries that compile with their own dependencies and then link to them in the main application, or use static linking, which is not my preferred option.

  • When it comes to maintenance, I need to document where each dependency is, where it’s used, and somehow describe the Gordian Knot to myself and my team for use in six months’ time, without context.

  • When it comes to setting up a development environment, I need to somehow save old versions of dependencies and make complex Makefile trees to generate the correct versioned libraries.

  • When it comes to compiling, I have to compile a lot more components or use static linking to ensure that the right library is linked and the right functions called, increasing executable size, memory use and complexity.

  • And when it comes to deployment, I have to build this hellish mess for each platform.

Aside: Debugging and Maintenance Hell

Solving for the above takes time. It’s not hard, and once it’s been done, you could argue it’s smooth sailing from there. I expect this is how most teams do it.

Until something goes wrong and you start to debug. I don’t know about your tools, but mine always seem to find the “Jump to Definition” code in the wrong version of the wrong dependency every time. Which means that trying to find where something fails becomes that much harder. Is the error in the dependency, the dependency version or in my code? Ouch.

Or until time passes, say six months, when a new error is thrown in Production. Six-month-later-me does not remember what current-me knows now, leading to maintenance hell. Not only do we have a production problem and unhappy users, but I will have forgotten all the little tricks and hacks needed to recover my dependency-hell knowledge.

And most importantly, I have lost the chance and ability to know and understand the application.

Dependency Limited and Conflict Free

So how do I do this? I follow these rules:

  1. I do not use dependencies that have dependencies that I have to compile. That means using vendor and open-source precompiled libraries that require no additional software installs to use them.
  2. I do not use libraries used by dependencies that may conflict. If a vendor library uses another library, I avoid using that other library in any of my code anywhere.
  3. Where necessary, I solve my own problems rather than relying on third-party, unmaintained code I find on Stack Overflow or Github.
  4. I always write my own clean code when performance or maintenance is critical.

I am not saying I do not use dependencies or never look to Stack Overflow or Github for ideas. I’m not that smart. I simply limit my exposure and maximize my ability to read and understand my code and environment, now and in the future, with these limiting rules.

Looking at Rule 1

For example, let’s talk about one of my database library dependencies. It is written using Boost. Which means that the client library that I need to compile against has a dependency on Boost. Following the first rule, I use their precompiled libraries, not their source code, and since Boost is a dependency, I do not use Boost anywhere else (Rule 2). It’s up to the lovely folks at the database company to deal with their dependencies and create good stable libraries and installers for all platforms, and all I do is use their stable binary versions. A nice clean separation of my code from theirs, easy to maintain and easy to deploy.

Looking at Rule 2

Since we are on Boost, let me stay on Boost. These days, it’s almost as if you cannot do anything in C++ without Boost. Every answer to every “I’m stuck” C++ question on Stack Overflow seems to be the phrase “Use Boost”.

I’m not saying Boost is a bad library, it’s not. It’s so awesome that the C++11 standards team nicked all the good stuff from Boost to make the STL for C++11 and C++14.

But every darn library out there, every example, every potential dependency seems to use different versions of Boost in different ways. And eventually they conflict, because some code somewhere uses a deprecated call or side-effect that conflicts with the same call elsewhere. Following rule 2, I do not use Boost precisely because everyone else seems to. Half of my problems came from other people’s code that used Boost badly.

To reiterate, my problem is not with Boost, it’s awesome, my problem is with how badly it’s used and abused, and rule 2 protects me.

Looking at Rule 3

We all know the piles of code that are just too tedious to write and have been done over and over again. Loggers, data structures, parsers, network interfaces, great ideas, and simple Stack Overflow solutions. It’s so tempting to just copy and paste that code, get it to compile and move on. I mean seriously, why rewrite a logger class! [1]

Just use one that’s out there and move on, no?

My experience with these has been a case of short-term gains with long-term pains. Oh sure, I can get it going with less work on my end. But when things go wrong, as they always do? Or when the application starts to perform so slowly that nothing seems to fix it? Or it’s six months later and the pasted code starts to act funny in production?

Rule 3 ensures I avoid these situations.

Keep in mind, example code or Github projects were written with no context, or to solve the writer’s specific problem, scenarios that almost certainly do not apply in my environment or yours. And when things do go wrong, we have no understanding of the code and no idea how to fix it. Understanding code is more important than saving a few hours or days of developer time.

Looking at Rule 4

I develop real-time applications for Finance, hence the C++; performance and memory management are critical. The products process vast amounts of data, which means tight control over RAM, CPU caches and even cache lines is important, and any wasted compute cycles add up to performance degradation. All my vendor dependencies have been tested and put through this wringer, so I can trust them to be as fast and memory safe as possible, or I would have selected a different vendor.

But not much else is, mostly because it was not written or tested to be that way. Under rule 4, the only way I know how to get the fastest application is to write it myself. That way I can see, and most importantly understand, where the bottlenecks are, where the memory is going crazy, where threads are running wild and fix it. Copy-pasted code or Github code rarely cuts it.
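To make the cache-line point concrete, here is a tiny sketch of the kind of control rule 4 buys you. The names are hypothetical, not from the author's code; it forces two counters written by two different threads onto separate 64-byte cache lines so they cannot "false share" a line and ping it back and forth between cores:

```cpp
#include <atomic>

// Two counters written by different threads will false-share if they
// sit on the same 64-byte cache line: each write invalidates the other
// core's copy of the line, even though the data is logically unrelated.
struct alignas(64) PaddedCounter {
    std::atomic<long> value{0};
    // alignas(64) pads each counter out to a full cache line, so one
    // thread's writes never invalidate the other thread's line.
};

struct Counters {
    PaddedCounter produced;   // touched only by the producer thread
    PaddedCounter consumed;   // touched only by the consumer thread
};
```

This is the sort of detail copy-pasted code never gets right, because the original writer had no reason to care about your thread layout.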

My Situation seems Unique. It’s Not.

I do understand that my situation seems to be reasonably unique. My applications are large and complex and need to interact with many systems and technologies which means dependency management is critical. The large code base and tiny team environment means that a simple development setup is best. Maintainable and understandable code is more important than getting it written quickly. Production issues will cost us a fortune, which means readable, simple and understandable code is critical to being able to detect and correct issues quickly. And the application needs to be fast and correct and reliable.

For most of you, many of these attributes seem not to apply. Crazy deadlines mean that dependencies and copy-paste code are perceived as the only way to get there. Maintenance is probably not your problem. Apps and requirements are straightforward, hardware is ubiquitous and cheap, and if it takes a few seconds longer to process, who cares. Good for you if that’s your environment.

But mission critical systems need rock solid foundations, architectures and maintainable code. And any additional dependencies, any additional complexities, anything that slows down deployments or maintenance need to be eliminated mercilessly. Sure, it will take longer to write and test. But the cost and time to build dependency limited and conflict free systems pays off handsomely in reliability, maintenance speed and application performance.

No matter your situation, if you cannot clearly understand your development environment and all the application code, you’ll never figure it out when the production system gets slow or goes down. Especially after time has passed and you have been working on other projects.

Not so unique after all.

In Summary

If you are writing anything mission critical, where future maintenance, performance and teamwork is critical, brutally limit dependencies to simplify the development environment, deployment process and maximize your ability to debug and maintain the product. Ensure that all code added to the project needs to be there and is fully understood, and that it does not conflict with any other code in the system.

It means that you will have to write a few more modules yourself, but that investment pays off incredibly later on.

  1. A Logger Class: I referred earlier to a logger class as an example of code that has been done hundreds of times over, and who the heck is silly enough to write another one? Me, that’s who. Why? Because I needed a real-time logger that would not in any way slow down the processing on the primary threads of the application. Almost all logging classes are synchronous: you have to pause the thread while the logger spits its results out to screen and file system. That makes sense when you have the time to wait, and it ensures that log entries are saved before moving on and potentially crashing. Async loggers often collect log entries, then pause everything to batch send (even if they are on alternate threads). But in a real-time system, the milliseconds (1/1,000 of a second) needed to append to a file or send a log message kill performance that is measured in microseconds (1/1,000,000 of a second). I needed to know exactly how the logger impacts thread performance, and needed to optimize its performance too, and that’s why I wrote my own logger.
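The core idea behind such a logger can be sketched in a few dozen lines: the hot thread does nothing but copy the message into a preallocated single-producer ring buffer, while a background thread drains it to the (slow) console or file system. This is an illustrative sketch only; the class and constant names are hypothetical rather than the author's implementation, and a production version would use fixed-size message slots to avoid the string allocation on the hot path:

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <string>
#include <thread>

// Sketch of an asynchronous, non-blocking logger: one real-time producer
// thread, one background writer thread, a lock-free ring buffer between.
class AsyncLogger {
public:
    AsyncLogger() : writer_([this] { drain(); }) {}

    ~AsyncLogger() {
        done_.store(true, std::memory_order_release);
        writer_.join();  // flush whatever is left in the ring
    }

    // Called from the real-time thread: no locks, no I/O, just a copy.
    // If the ring is full it drops the message rather than ever stalling
    // the caller. Single producer assumed.
    bool log(const std::string& msg) {
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % kSize;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                       // full: drop, never block
        ring_[head] = msg;  // a real version avoids this allocation
        head_.store(next, std::memory_order_release);
        return true;
    }

private:
    // Background thread: does the slow I/O off the hot path.
    void drain() {
        while (!done_.load(std::memory_order_acquire) ||
               tail_.load(std::memory_order_relaxed) !=
                   head_.load(std::memory_order_acquire)) {
            std::size_t tail = tail_.load(std::memory_order_relaxed);
            if (tail == head_.load(std::memory_order_acquire)) {
                std::this_thread::yield();      // nothing to write yet
                continue;
            }
            std::fputs(ring_[tail].c_str(), stderr);
            std::fputc('\n', stderr);
            tail_.store((tail + 1) % kSize, std::memory_order_release);
        }
    }

    static constexpr std::size_t kSize = 1024;  // hypothetical capacity
    std::array<std::string, kSize> ring_;
    std::atomic<std::size_t> head_{0}, tail_{0};
    std::atomic<bool> done_{false};
    std::thread writer_;  // declared last so it starts after the buffer exists
};
```

Destroying the logger flushes and joins the writer thread; nothing on the hot path ever waits on the file system.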

Hiltmonism - Talk to Drivers, Not Mechanics

How many people really know how their motor vehicle works, or even care to? Very few.

But they all drive.

And when their car breaks down or makes a noise or that ridiculous engine light comes on, they need mechanics. Nobody, except other mechanics, understands the explanation of what’s wrong with the car. And therein lies the problem.

Mechanics need to learn to talk to drivers, not mechanics.

Techs are the Mechanics

Technology people are perceived to be painfully shy. I guess it’s a movie meme. They are not. Just observe a bunch of technology folks get into it on a topic they understand. You’ll never get them to stop talking, arguing, jousting and challenging. Mechanics are speaking to mechanics.

Technology people are also perceived as disconnected, strange, different, hard to speak to and harder to understand.

Unfortunately, this is not a meme. It’s true.

But not because techs are disconnected, strange, unintelligible folks. Or painfully shy for that matter.

It’s because techs simply communicate differently from the way their audience does. And since techs do not speak to people the way they normally prefer and understand, this perception is supported by the evidence. Which makes it real.

Mechanics are speaking to drivers as if they were mechanics.

In business, successful technology teams understand this disconnect and learn to speak to their audience, to talk and relate the way their audience in the business does.

Successful mechanics speak driver-to-driver as fellow drivers.

Mechanics vs Drivers

Let’s take a closer look at tech talk (the language of the mechanic) vs audience talk (the language of the driver) to see why this disconnect exists:

1. Techs talk in details, the audience talks in generalities. As a result, techs talk too much about the detail of what they are explaining and either confuse or bore the audience. Who cares that brakes have linings that soften, wear out and burr? We all know when they squeak. Techs need to adopt necessary generalizations to address their clients properly and to have a shot at understanding them.

2. Techs also get lost in being accurate and pedantic, something the audience never does — they have better things to do. Whether there are N items or N+M items makes a difference to techs yet makes no difference to the audience. For a mechanic, the engine timing, tuning, air flow and seals are critical, for the driver, having a working car is all that matters. Techs need to loosen up and focus on how it will be used and not how it works. The tech and the client can always come back and drill down into the details later.

3. Techs use specific language to communicate, our audience uses common language and relies on context or experience to share what they are talking about — and understands that the right terminology does not matter as long as the core points of the conversation are understood. To a driver the doohickey is rattling, to a mechanic, that could be anything and the rattle a symptom, a result or something else! We techs get confused when the language is not our own, missing the gist of the conversation, which is what the audience wants us to understand. Techs need to learn their language patterns, and to focus on the gist of what is being said, on what the client is trying to say, not to imagine what the client may mean or what may be happening and what they just missed the client saying while doing all that imagining.

4. Techs explicitly express assumptions, the audience barely registers that they are making assumptions in conversation. Mechanics feel the need to explain the purpose of tappets and push-rods and how they react to different octane fuels which have different explosive properties, and that is why the car pings and feels sluggish. The driver wants the car to just go well. This one is hard for techs learning to speak to their audiences because they need to know what assumptions the audience usually makes. Working with your audience, listening to them interact, and asking them questions is a good start.

5. Finally, techs seek rigid exacting perfection, as is necessary to make correct digital programs. The audience thinks and lives differently in an analog world where things change, move, shift, adjust and make — or fail to make — sense in unusual ways. Techs need to understand their audience’s analog nature, senses, rate of change and direction, finding ways to communicate and adjust in analog while still operating in digital space.

Seeing the Signs

It’s easy to spot the signs when this communication breaks down. If the audience starts to repeat itself, if the eyes glaze over or slow-blink, or they start pacing or making impatient motions, then the communication has failed. Just picture a frustrated driver trying to explain to a mechanic what is wrong with the car.

It works both ways. If the tech rambles on too long, finds themselves needing to say something, stops listening, or says “I understand” just to get rid of the audience, that too is bad. Just picture a mechanic detailing all the possible moving parts that could be the rattling doohickey to a puzzled driver!

Keep in mind that, unlike mechanics, techs do deal with different audiences. Each audience is vague or detailed in its own way, uses its own terminology, has its own assumptions and its own measures of success or failure. Each different audience has its own norms. Yet none of these audiences has the time or patience to discuss or learn all the dark details. They may seem different, but they are all essentially drivers.

The tech team needs to understand this about their audience to become part of it. They need to know how to speak to each audience in the language they understand, using the terms and levels of accuracy the audience expects.

To talk like drivers to drivers.

Mechanics can be Drivers too

The tech team needs to know what to tell their audience, and most importantly, what not to tell them. Explaining how a program or technology works, what an error message means, why something cannot or does not work, why this one case in 100 is possible and needs to be solved now, is interesting to techs, and not at all interesting to the audience. Drivers want a working vehicle, they do not need an explanation of why it’s not working.

The tech team needs to know when to shut up. To the audience, perception being reality means they build their own mind-model of how a thing works. Letting them live in their own model is hard for techs because we need to deeply understand our own models and assume, incorrectly, that others do too. The driver does not need a lesson on internal combustion engine thermodynamics when knowing it turns on and makes a “vroom” sound is good enough.

And finally, the tech team needs to know when to speak up. Especially when the audience draws the wrong conclusions. If the driver is operating the vehicle incorrectly or using the wrong fuel, the mechanic needs to find a way to reach them in a way the driver can understand. The tech team needs to know how to effectively communicate the issue without going into boring details or terms, and draw the audience back in, regain their trust and understanding.

Talk to Drivers, not Mechanics

Finding that balance, the balance between detail and vagueness, between the correct term and the common one, between enough information and too much information, between saying more and shutting up is hard for tech teams.

But a good team can find this balance as long as it knows what the communication issues are.

And how to deal with them.

To talk to Drivers as Drivers, not Mechanics.

I use this Hiltmonism, “Talk to Drivers, not Mechanics”, to remind me and my team how to listen and communicate with those we design software for and how to build better product in the future. After all, great product is what they want and need too. We all need to be somewhere.

Oh, and to make the tech team seem a little less weird, strange and alien.

Click to see other [Hiltmonisms](https://hiltmon.com/blog/categories/hiltmonism/) in the ongoing series.

Follow the author as @hiltmon on Twitter.

Minimal Project Management

With my team starting to grow at work, it’s time to add some Project Management to our process. However, I do not want this to add any additional time, meetings or burden on them (or myself), and so all of the popular formal processes are no good for my needs.

In this post, I will outline the Minimal Project Management process, its steps and how it works. I will also cover the issues of change and interruptions. This process only works, however, because of the quality of my team.

The Team

My team is a group of experienced, smart folks who know how to design, program and ship product. Most of all, they understand the business, its priorities (our one weekly meeting covers that) and how their deliverables will impact it. And they know how to manage their own time, interruptions and schedules.

They do not need to be micro- or even macro-level managed, they do not need systems and tools to tell them what to do when, they do not need someone else creating stories, setting tasks and getting on their cases when things slip. They know how to manage themselves, how to figure out what is needed, how to communicate effectively, and how to get it done.

That’s why I hired them.

I can assign work and know it will get done, done right, done on time with minimal overhead. And I know they will communicate progress and issues as needed. They need almost no project management from me.

Minimal Project Management

But as the team grows, so the number of projects and tasks that can be performed grows. When it was two of us, a quick conversation could cover all topics. When we grew to three, a weekly meeting and a single live-document sufficed.

Now we are five. Capacity is up. And the list of tasks and projects that can be tackled in parallel grows rapidly. I, as manager, need to stay on top of a growing mountain of this stuff.

This is when I level-up to Minimal Project Management as I have done on so many previous occasions.

Minimal Project Management consists of only a few steps (and one change process):

  • Assignment and prioritization of work.
  • A Statement of Work to define the task or project.
  • A single weekly review of progress.
  • A Management of Change process.

That’s all there is to it. The team gets to manage their own time. They know the priority and impact of the work, the dependencies, the business needs and their key deliverables, as it’s all discussed in the meeting. They know the standards expected of their work, the tests to run to be sure their deliverables are correct and the pressure we are all under.

That is, again, why I hired them.

Let’s look at each component.

Assignment and Prioritization of Work

As the manager of the group, it is my responsibility to determine what work is needed, what the priority of that work is and who will do it.

I do not do this alone.

The Management Team of the business meets weekly to discuss issues, plans and challenges, and to review strategy. It is at these meetings that I contribute what tech is bringing to the table. The Management Team agrees on business priorities, and I convert that into prioritized work for my team to perform.

My team meets weekly too, and one part of this meeting is a review of the current business and work to be done — so they see the big picture. They too advise on dependencies and priorities.

With all this advice and help, I can easily determine what work needs to be done first, what next, what can remain on-hold, and can assign work to the team.

It’s not hard to determine what work implies a large, complex project and what work is small and easy. And there is no need for complex estimations or deadlines, we all know what’s at stake.

Each team member gets at least one large project and several smaller ones. This is intentional. There is much downtime on a single project as developers wait for data, people or dependencies. By loading each developer with several projects, they can work on another while waiting on one. Not only that, they can schedule their time to switch between projects to keep their work fun and interesting. It is my experience that developers with several projects “on the go” tend to mix-and-match their time and yet deliver and ship great code on time. Developers with nothing to do become disruptive or lazy.

Once work is assigned, each assignee is responsible for getting that work done. And they start with a Statement of Work.

The Statement of Work

A Statement of Work (SOW) is a short document that defines the objective, tasks and deliverables of a task or project. It can optionally contain assumptions and dependencies, and it may have details added later (designs, appendices, sample data, notes). But in essence, it’s a document that defines what needs to be done and how we know when it has been done.

My Statement of Work template is a simple Markdown text document as follows:

Kind: Statement of Work
Title: <Task or Project Name>
Author: Hilton Lipschitz
Affiliation: <Company Name>
Date: 2016-03-04
Version: 0.1

# SOW: [%Title]

## Overview

<What the objective/goal of this work is>

## Tasks

<A list of the tasks to be performed — very high level, clearly written>

## Deliverables

<The deliverables of the project, so we know when it is complete>

Optional additional sections are:

## Dependencies

<SOWs that need to be done first for this to be viable>

## Assumptions

<Any assumption made in why this work needs to be done>

That’s all there is to it.

A good Statement of Work is less than one (1) page long, complex ones get to a massive 2 pages, never more. It limits the scope of work and defines the tasks to be performed and deliverables to create. It does not detail each task, merely adds them as simple checklist items to be sure they get completed along the way. It does not say how, when or who will do the work. The how is up to the developer, the when and who is up to me, the Project Manager.

Most importantly, the person tasked to do the work writes the Statement of Work, not the Project Manager, not the user, not some Business Analyst.

There are many reasons for this:

  • The person doing the work needs to understand what needs to be done, what needs to be delivered, to whom and what is going on if they are to do it right. The best way for others to know if they have this knowledge and understanding is to get them to write the Statement of Work themselves and then review it.
  • The writer of the document also takes ownership of the project. It’s their scope, their tasks, their deliverables to make, their problems to solve.
  • The writer of the document usually starts with no assumptions. Asking an analyst or a user to write it leads to information missing because they assume the developer will know things. If the developer is writing it, they assume nothing.
  • And honestly, it frees me up to advise, coach, discuss, approve and work on other things, taking on my own Statements of Work. I’d rather be programming too.

Once the Statement of Work is completed (and reviewed, filed and approved by me), the person starts executing the tasks and we move into a work and review phase.

Weekly Review

Once a week, the team meets.

More than that and we’ll be interrupting their ability to get into the zone and work on projects, reducing their productivity.

The weekly meeting never goes beyond an hour. In it, we:

  • Update each other on the projects we are doing and their progress.
  • Discuss new potential work that has come up.
  • Discuss any issues, ideas and changes that apply.
  • Discuss any key interruptions and bugs found.
  • Close completed projects.
  • Hand out any new assignments.
  • And with the remaining time, talk general technology and ideas, this being the part we like the best.

A weekly, semi-formal meeting does not mean we do not talk about work during the week. On the contrary, we are always discussing what tasks, documents and deliverables we are working on. If any issues arise, the team member goes to the people they need (especially me) to discuss, plan and resolve. While this is going on, the rest of the team is free to remain in the zone and continue to deliver on their work.

Dealing with Change

One thing you can always be sure of is that things change. The business changes, priorities change, even small project scopes change. Whether the change is external (Management, User) or internal (Tech Issues, Dependencies), the Minimal Project Management process needs to deal with it.

And I boil it down to two basic change types: Scopes change and Priorities change.

Scope Change

In scope change, the tasks and deliverables of a project as documented in a Statement of Work change. Users may ask for more features “while you are at it”, or, as the developer learns more about a topic, the nature of the tasks and deliverables change — what we thought we needed when writing the Statement of Work does not match what we actually need.

Scope change is managed by versioning the Statement of Work. A new version is generated by the developer and reviewed by me. If the scope change fits priorities, I will approve it, and the programmer will start working off the changed SOW. If not, or if the work should be a separate task or project, then a new task is generated and put aside for the next weekly prioritization and assignment.

If the scope change stops work on that project, no problem: each programmer in the team always has several projects “on the go”, and can switch to working on another in the meantime.

Priorities Change

Like all programmers, I prefer to finish what I start. Given the limited scope of Statements of Work, this is usually and regularly achievable. But in the real world, priorities change.

This is where the manager needs to step in. Projects may need to be stopped mid-stream, with deliverables incomplete and undelivered, so that other projects can get some time and resource. Partial code branches need to be committed in case we return to this task. That’s all the team member needs to do, they then move on to other work.

I, on the other hand, need to track what was not done and why. I need to archive the SOW, with a note as to why it was stopped, by whom and when, and I need to be sure the partial work is safe. I usually add this information as a notes appendix to the document.

And, when the time comes, I can later assign this work back to the original programmer to continue and complete. Note that I rarely assign a SOW to anyone other than the writer of the SOW as they know best.


Dealing with Interruptions

One thing that does ruin the simplicity of Minimal Project Management is interruptions. Users find bugs and interrupt, folks ask questions and interrupt, systems fail and interrupt, or some urgent, must be done now, yes bloody now, the place is on fire, task interrupts.

This is where the caliber of the team kicks in and where, as manager, I need to be patient and understanding (and occasionally a hard case). The people I hire and work with expect interruptions, they are realists and are experienced enough. In most cases, they deal with the interruption and move on, managing their own time and getting back to their work, without anyone being the wiser or anyone else being interrupted (until the weekly meeting where these are discussed).

If the interruption is a bigger thing, or happening too frequently, they make a judgment call and bring me in. Between the causer of the interruption, the programmer and myself, we’ll quickly discuss the issue. If it is something urgent, we’ll agree to perform the interruption, knowing work will be delayed, to get it resolved. But if the interruption is a bigger issue, and it is up to me to determine that, then it’s also up to me to get the programmer back to work and the cause documented, scheduled for later resolution and the interrupter satisfied — which means I need to be a hard case.

Mostly though, the team can and does deal with interruptions all day, and can still get their work done in spite of them.


Archiving

When a task is completed and the deliverables shipped, I archive the Statement of Work. Often, I will add a Notes section at the bottom to discuss how the task went, who did it and any issues that we faced.

This library of archived projects helps me to assign work in the future. I can see who is good at what, and who needs to learn what. We as a team can learn from the issues of past projects how better to think about, document and plan future ones.


In Summary

So, as my team grows, I am moving from a semi-formal weekly meeting to a semi-formal weekly meeting plus Statements of Work: the Minimal Project Management model.

I will use the same task recording and prioritization tools as before. I will hold the same weekly meeting as before. But now each team member will be responsible for managing their formal Statements of Work and more formally communicating, managing and tracking change.

Yet we’ll remain the fast, nimble, and exceptionally productive tech team as always, doing what we do best, delivering systems that multiply our firm’s capabilities and competitive edges.

Follow the author as @hiltmon on Twitter.