As a buyer of software, I always focus on what the software does do. If it does what I need, and does it well, and the price is right, I buy it, use it and gain the productivity benefits from it.
As a seller of software, I find potential customers focus on what the product does not do, and use this as a reason to avoid purchasing. That’s fine as long as the missing feature is something they can and will actually use. But it makes no sense to me when the missing feature is something they already have and do not use, don’t know how to use or have no reason to use.
As a result, they remain with a painful process or outdated system, one that does not do what mine does, the very thing they need (and the reason I’m selling to them in the first place).
For example, with regard to my Kifu product, these two conversations have happened many times:
Kifu does not have a report writer module. Instead, it does have a comprehensive set of reports. Several potential customers have stated that without a report writer, they will not buy Kifu. However, their existing software does have a report writer which they have never used and cannot figure out, and it did not come with any reports. They had to pay extra to hire the vendor to create reports for them (all of which Kifu already provides). We did not create a report writer for this exact reason: no one except programmers can use report writers, and most clients need the same reports! But no sale, because no report writer.
Kifu also does not have a user accessible query generator to enable users to create their own database queries. Several potential clients stated that the competitor’s product which they are using and wish to replace (which is why they are talking to us) does have this feature, so no sale. But only one out of about twenty actually used the feature or even knew how to find it. The remainder did not purchase because of a feature they themselves admitted they could not use. Go figure.
One could argue that these are both just-in-case features. But the reality is that these are features for developers, not normal users, and their existence is a sign that the product is not feature complete. A report writer indicates that the vendor does not understand the reporting needs of their clients; a query engine implies the vendor does not understand the information needs of their clients. In both cases, though, the vendor gets called in to use these features on behalf of the client because the client cannot. And if the features don’t exist, the vendor gets called anyway. So what, really, is the difference?
I was always taught to talk about the benefits of a product, and to be honest about what it does not do. What I don’t understand is the decision to reject a better product because it lacks a feature you cannot use and will never use.
Follow me on App.net as @hiltmon or Twitter @hiltmon and share your war stories.
Run ./slogger -o Google again to launch a browser, authenticate and retrieve an auth_code. Paste that into slogger_config under auth_code.
Run ./slogger -o Google a third time to get an access_token and create the first entries.
Detailed Installation Instructions
If you are here, I assume you already have Slogger installed and running. As of writing this, I am on version 2.14.2.
Open a terminal and cd to your slogger folder (in my case that’s ~/Scripts/Slogger). Run all commands from there.
Install the Google API Gem
In terminal, if you use RVM:
gem install google-api-client
If you are running the system ruby, you need to sudo it instead. You can tell if you are running the system ruby by running which ruby; if the answer is /usr/bin/ruby, it’s the system one.
sudo gem install google-api-client
Either way, you should see:
Installing ri documentation for google-api-client-0.5.0...
Installing RDoc documentation for google-api-client-0.5.0...
Note that this is a pre-release gem, but it’s close to final.
Install the Plugin
Download and extract the googleanalyticslogger.rb plugin file from Gist 4072068. Then move the googleanalyticslogger.rb file to your Slogger plugins folder.
Or you can also just create a new googleanalyticslogger.rb in your plugins folder and paste the raw gist code in.
Note: this is critical; the plugin will not work without this patch.
Open slogger in your favorite programmer’s editor and go to line 172; you should see:
Replace it with:
if plugin['updates_config'] == true
  # Pass a reference to config for mutation
  …
else
  # Usual thing (so that we don't break other plugins)
  …
end
An explanation for this patch is in the “How it Works” section below.
Save and close slogger.
Optionally: Create your own Google API Client keys
You may skip this step and use the Google API Client codes that I have already set up. I’ve not been able to test this on anything but my own account, so please let me know if it works.
Just in case, I have included instructions on how to create your own as an appendix to this post.
Create the slogger_config file entry for this plugin
As for all plugins, the first thing you need to do is run Slogger to create the slogger_config for it. The -o Google parameter forces Slogger to only run this plugin (and not run all your other plugins and create duplicate Day One entries):
./slogger -o Google
You should see:
Initializing Slogger v2.0 (126.96.36.199)...
> 11:00:45 GoogleAnalyticsLogger: Google Analytics has not been configured or a feed is invalid, please edit your slogger_config file.
Add the Client ID and Secret
Open slogger_config in your favorite text editor and scroll down to the GoogleAnalyticsLogger section.
Paste in my client ID and secret key (or use your own).
This plugin should now run every time your scheduled Slogger run occurs.
How it works
This plugin uses the Google Analytics API to retrieve stats for web properties using OAuth 2.0 security.
OAuth 2.0: Google Style
The first thing you need to do is create a Google API Client registration at Google (See the Appendix below on how to do this). The most important thing is to tell Google that this is an Installed Application. That way, Google will generate a refresh_token that can be used to enable the application to refresh its own access when the regular access_token expires.
Even though it’s an installed application, the first time around Google OAuth 2.0 requires a user sitting in front of a browser. So I set up the plugin to help with this process.
If the client_id is not set, the plugin assumes this is the first run, pops a warning and does nothing.
If the client_id is set, it checks the auth_code. If the auth_code is not set, this must be the second run. The plugin creates an OAuth 2.0 authentication URL and launches the user’s browser. The URL is configured such that the resulting auth_code is visible to the user and can be copied and pasted. At some point, I could possibly write code to monitor the browser and get the auth_code, but that’s too much for now.
Note that the auth_code is a single-use code; once it has been used, it’s useless. We need to convert it to a longer-term token. So, if the client_id is set and there is an auth_code, check the access_token. If that is blank, get a new one. Since this is assumed to be the first time, we know that Google also returns the refresh_token. These are saved to the config file (see mutable config below).
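The staged checks above can be sketched as a simple decision function (a minimal sketch; the method and symbol names are hypothetical, not the plugin’s actual code — the field names mirror the slogger_config keys):

```ruby
# Decide which stage of the OAuth 2.0 bootstrap we are in,
# based on which slogger_config fields have been filled in so far.
def next_oauth_step(config)
  return :warn_and_exit       if config['client_id'].to_s.empty?    # first run: not configured
  return :launch_browser_auth if config['auth_code'].to_s.empty?    # second run: get an auth_code
  return :exchange_for_tokens if config['access_token'].to_s.empty? # third run: trade it for tokens
  :use_access_token                                                 # normal operation
end
```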
It then checks the access_token to see if it has expired. If so, it asks for a new one. This code has not yet been tested, and will probably fail. Two days only!
The default Slogger plugin gets a ruby class-level copy of the main Slogger config data structure. The problem is, I needed to be able to save the access_token and refresh_token as and when they change, without user involvement. If you change the copy, it does not change the original, and when Slogger finishes its run and saves the updated config, these changes will be lost.
I did look at creating a client_secrets.json file as per the gem documentation, but I felt that having more than one configuration file for Slogger was not a good idea.
So instead, I needed access to the original config data structure, not the class copy. Hence the patch. Now, slogger looks for an updates_config attribute in the registration and, if it is true, passes the config to the plugin directly; otherwise it runs the plugin as usual. This plugin sets 'updates_config' => true in its registration.
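The effect of the patch can be sketched like this (a simplified stand-in for Slogger’s dispatch, not the real code; the plugin hash shape and the runner lambda are hypothetical):

```ruby
# Plugins that register 'updates_config' => true receive the live
# config object, so any tokens they write back survive Slogger's
# final config save. Other plugins get a copy, as before.
def run_plugin(plugin, config)
  if plugin['updates_config'] == true
    plugin['runner'].call(config)       # pass a reference for mutation
  else
    plugin['runner'].call(config.dup)   # usual thing, so other plugins are unaffected
  end
end
```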
I’m uncertain whether this is a good or right way to go, but it works for now in the alpha. Note that any updates to Slogger will trash this patch, which means a two-file approach may be better.
Once the OAuth 2.0 is done, the plugin “discovers” the Google Analytics API. This is needed to access it.
The plugin then uses the Analytics Management API to download and cache a set of all the web properties accessible to this account. If anything went wrong in OAuth 2.0, we’ll find out about it here.
The Google Analytics API does not seem to make timestamps available. It does need a start_date and end_date to get data. If you just use these, though, the API sums all the data between the two dates and returns it as one row. Fortunately, it does have a date dimension that can be used.
Since I want the journal in Day One to have the full set of stats for a date, I set up the plugin to operate up until yesterday, and to do nothing if the last run was after yesterday. That way, you should never see a journal entry with partial-day stats. But you can backfill if you want.
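The two ideas above — the date dimension and the run-until-yesterday cutoff — can be sketched as follows (helper names and the exact parameter hash are my own illustration; the real plugin drives these through the google-api-client gem):

```ruby
require 'date'

# Query parameters for one profile: without the 'ga:date' dimension,
# the API would sum the whole range into a single row.
def visits_query(profile_id, from, to)
  {
    'ids'        => "ga:#{profile_id}",
    'start-date' => from.to_s,
    'end-date'   => to.to_s,
    'metrics'    => 'ga:visits',
    'dimensions' => 'ga:date'   # one row per day instead of one summed row
  }
end

# Only log complete days: run up to yesterday, and do nothing
# if the last run already covered yesterday.
def backfill_range(last_run, today)
  yesterday = today - 1
  last_run > yesterday ? nil : (last_run..yesterday)
end
```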
The plugin then runs for each web site for which you have a Google Analytics UA code. It uses the cached properties list to convert that code into an internal Google site code and to get the site name (used in the journal header). If it cannot match the UA code to an entry in the cache, it does nothing.
For each matched site code, it runs the queries. In the alpha, I have these nice and separate for testing, but they can be batched later on.
Since the date is a dimension field, the Google Analytics API returns a row for each date and each other dimension. For example, for visitors it returns a row for new visitors and another row for returning visitors for the same date. This means I need to take the returned data and consolidate it by date.
I created a content hash that is date-keyed, and add to each date an array of markdown-formatted strings in the order I’d like them to appear in the journal. It’s simple, and it works.
Once all the API queries are done, I loop through the content dates, grab each array of strings, concatenate them into a body and use Slogger to create a new Day One entry for that date as of 11:59PM.
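The consolidation step can be sketched like this (a minimal sketch with hypothetical helper names; the real plugin then hands each body to Slogger’s Day One entry writer):

```ruby
# Group markdown lines per date: API rows arrive one per
# date-and-dimension combination, so the same date appears many times.
def consolidate(rows)
  content = Hash.new { |h, k| h[k] = [] }
  rows.each { |date, line| content[date] << line }
  content
end

# Join each date's accumulated lines into one journal body.
def entry_bodies(content)
  content.map { |date, lines| [date, lines.join("\n")] }.to_h
end
```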
Feel free to look at the code and let me know what you think. I try to make my early code more explicit to aid debugging, and plan to return later to optimize and idiomize the code.
Appendix: Optionally create your own Google API Client keys
So just in case, here’s how to create your own Google API client keys:
As part of my research into AAPL vs AMZN Performance Madness, I noticed a lot of negative press on AAPL’s share price. So I decided to wait until after the election, then sample the press again to see whether it was a blip or a trend.
In short, the evidence points to a concerted effort to create a negative impression of Apple and push the stock price down. Let’s take a look, shall we?
One thing I like to do is to have different themes for different file types in my text editor. That way, at a glance, I can guess what kind of file a text-filled window contains, especially when zoomed out using Mission Control. I’ve been using Custom Language Preferences in BBEdit preferences to set up the color scheme for each file type there. TextMate 2 users, check out Multiple Themes in TextMate 2.
Turns out, it’s easy. The file on the left is Ruby, the one on the right is Markdown (The sample code is Slogger by Brett Terpstra).
To achieve this, first install all the themes you may need. Obvious, I know!
Then set the default theme using Preferences / Color Scheme from the Sublime Text 2 menu. This sets the theme in the default preferences file which resides in ~/Library/Application Support/Sublime Text 2/Packages/User/Preferences.sublime-settings.
Next, open a file type where you would like to use a different theme, for example, open a Markdown file. It will open using the default theme. Now choose Preferences / Settings - More / Syntax Specific - User from the Sublime Text 2 menu. Sublime Text 2 will create a new settings file with the selected file type as its name (in my case, the markdown settings file is Markdown.sublime-settings). If the file already exists, Sublime will open it for editing.
Set the theme in this file, as well as any other settings you like for that file kind. For example, my Markdown.sublime-settings is:
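A syntax-specific settings file along these lines does the trick (the theme path here is only an example; point color_scheme at whichever theme you installed, and add any other per-type settings you like):

```json
{
    "color_scheme": "Packages/Color Scheme - Default/Solarized (light).tmTheme",
    "word_wrap": true
}
```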
I rarely use Microsoft Office. There, I said it. And it’s true. There are electronic cobwebs on my copy. You may now run out of the room screaming.
For many, this is like saying I rarely bathe. I rarely use Microsoft Office because I have absolutely no reason to use it except for two specific cases.
Email, not Outlook
There are several reasons why I don’t use Outlook for email:
Messages are stored in a proprietary format that is not cross platform.
I don’t have an Exchange server (and even when I did, I enabled IMAP and did not use Outlook).
I believe in using an email client for email, a contact manager for contacts and a specialist calendar application for calendaring.
It’s slow, bloated, buggy as hell and not Mac-like at all.
Email to me is a means of communicating; its purpose is to engage people offline using written content. Outlook is a management tool for filing and managing documents, not communicating.
Writing, not Word
I write a lot. Blog posts, product documentation, proposals, notes, project logs, reports and the occasional letter. All writing. Writing is the activity of converting deep thoughts into readable and understandable sentences. I don’t use Microsoft Word for writing.
The results of my writing are shared on the web in HTML form and everywhere else in PDF form. I never, ever send an editable document file to a client, because I have no idea what they will do to it and then send it on as if it was sent by me.
Microsoft Word can be good for formatting documents, but Apple’s Pages is cheaper, faster and way easier to use for this. Having said this, I mostly write in Markdown and use Marked with my own CSS to format and convert the document to PDF. Zero effort formatting.
Calculations, not Excel
If I need to make a quick calculation that I cannot do in my head, I use my trusty HP 12C calculator that’s always within reach. If the calculation is more complex, I use Soulver. In fact, most of the basic calculations, estimates and models I make are done in Soulver. I don’t need a full spreadsheet to do a few calculations with some “what-if” scenarios.
If I need to manipulate data, I use BBEdit for data that arrives in text format, or a database for larger sets. That’s what databases do, using a spreadsheet for a database is wrong. In fact, the majority of Excel files I get are just data tables that would be better off sent as CSV files.
If I do receive an Excel file, I use Numbers to open and view it. Numbers is not a fast spreadsheet product, but it is growing on me. On the rare occasions where I need to create a spreadsheet model for a client, which happens about once a year, I use Numbers, and send the model as a PDF.
Keynote, not Powerpoint
I make presentations, not Powerpoint decks. You’ve all seen them: Powerpoint documents printed out and bound as books. They look awful. If I need to make a booklet deliverable, I write the content in a writing tool and create the booklet in a proper text layout tool such as Adobe InDesign or Pages.
When I need to make a presentation, I create my slides in Keynote. This product is so much faster and looks better than Powerpoint. I also present using Keynote through a projector. If I need to make a handout to go with the presentation, that goes through the booklet process. I do not just print the slides and walk away. It looks terrible. And slides are only meaningful in the context of what I, the presenter, am saying at the time the slide is shown, and useless afterwards. I do not write my presentation in slides and read that out, that’s also wrong.
But there are times when Microsoft Office is needed, and over the past 2 years, these are the only two reasons I have used it:
Testing output from programs. I do have a few clients where I have written programs which generate Excel files for them. To ensure the Excel file works, I need to open it in Excel. One can never be sure that another spreadsheet product will be sufficiently compatible.
Dealing with Lawyers. Lawyers love to send agreements with change tracking on to show us, their clients, what work they have done. You really have no choice but to open these files in Word.
That’s all I found.
Just because I almost never use Microsoft Office does not mean I am less productive. Instead, I find myself being way more productive. I manage my email better in Apple’s mail.app with plugins. I write better using Markdown and writing tools. I generate calculation models faster and better in Soulver. I create my presentations faster in Keynote. And I share everything in HTML or PDF.
I simply don’t need Microsoft Office. That’s $200 saved. And I see no reason to purchase the iOS version when it comes out next year.
Productivity is doing less stuff to get more stuff done, and managing your context is the key to becoming more productive. Here is a framework to establish and manage your productivity context when using your computer, based on what I have been doing to manage my own on my Mac. Hang in there, this is a long one.
We’ve all heard about Inbox Zero, where you try to reclaim your email, your attention and your life by clearing out your email inbox. I needed to reduce the number of inboxes first.
Those of us who have been on the Internet for decades have amassed a large number of email addresses on a variety of services. As time passed and new services emerged, we added more. Most of us are loath to close any of these addresses down, just in case we get an email there. That means setting them up every time we set up a new device, and checking them regularly. That is not productive!
Me, I have ten (10). Ten different mail servers where I can receive email. Not only that, but several of these implement many name variants, so I have many more actual email addresses, e.g. firstname.lastname@example.org and email@example.com.
Ten is too many. So I reduced them to four (4). How?
Easily. You see, most email servers and services can auto-forward any emails received to another address. Most also allow you to delete the email once forwarded, so you do not run out of space. All I did was set up forwarding.
My oldest, still active, email address from hotmail (now called outlook) now forwards to gmail.
My second oldest account, which I never use, email address from yahoo also forwards to gmail.
My family domain emails from lippies also forward to gmail.
My last hedge-fund’s emails, agamascapital, have not been relevant all year, so they no longer forward anywhere, and I no longer monitor it.
The company we set up after the last hedge-fund, envisioncp forwards to my current company email noverse.
This site’s emails, hiltmon, forward to my company noverse as well, since I now treat this site as a writing platform and no longer a family photo site.