A Foodie’s Guide to Energy Gels Update

Sunday, 14 July 2019

After a long hiatus (some of it running-related, but mostly finding-the-time-to-update-this-site-related), I’m happy to start working through a long backlog of reviews for my Energy Gel Tasting Notes (which you can find linked on the Projects page).

I added the first 7 out of the 30-something I’ve got queued up, so stay tuned – I’ll be adding the rest progressively over the next few days.

Fitbit Acquires Pebble

Thursday, 8 December 2016

So the news is out: Fitbit is acquiring Pebble. Quite a few people have been asking me about this, given that I backed Pebble’s initial Kickstarter and that my Product School project proposed Fitbit develop third-party support on their Blaze smartwatch.

Personally, I think this is the smartest move for both companies – in fact, I advocated this as the best solution during my final presentation in Product School. It just makes sense: Pebble honestly had no place to go on their own as they got squeezed by the big players; Fitbit desperately needs a better smartwatch roadmap than the (seemingly shortsighted) one they’ve got, as they too get squeezed.

Starting a third-party platform is no joke, especially as competing platforms have already gotten the drop on you (even if they’re contending with their own challenges with the form factor). Which isn’t to say that acquisition integrations are a cakewalk either. Beyond the usual corporate acquisition headaches, you’ve got to contend with how to blend the two competing product lines in the best possible way and not lose that third-party developer community along the way.

Of course I’m not privy to any inside knowledge, but doing a little digging provides some additional insight into how things are likely going to shake out. The latest post on Pebble’s Developer Blog has this to add to the conversation:

  • Pebble expects no future software development on existing Pebble firmware.
  • Existing services for developers will continue to run for the foreseeable future but will be eventually phased out, “providing the ability for the community to take over, where possible.”

…and yet, “Third-party Pebble developers have a massive opportunity to drive how a Fitbit developer ecosystem will take shape.” So it’s pretty clear that Fitbit is being smart about this acquisition, but I’m perplexed why they’re not going to keep the existing services going indefinitely (or, at least, why they’re not avoiding saying anything about EOL plans). If anything, this might lead one to suspect that Fitbit is planning to narrow down the platform so that it hews closer to the niche-ness of the Blaze – which doesn’t make much sense.

In any event, it will be interesting to see what unfolds. Hopefully Fitbit follows a UX fundamental that Pebble ignored.

How to Migrate Your Email Hosting to MXRoute

Thursday, 13 October 2016

Coming back to a technology topic, I’ve been having some nagging & kind of bizarre email issues with my hosting provider, DreamHost. While I continue to be really happy with the web hosting and DreamObjects products, I just couldn’t shake my annoyance with the email service.

After doing some research, I came across MXRoute. I finally decided to pull the trigger and found that migrating everything over, while fairly straightforward, had a few bugaboos that might trip someone up. And so I wrote HOWTO: Migrate Your Email From DreamHost to MXRoute in the event anyone else wants to do the same.

A Foodie’s Guide to Energy Gels

Tuesday, 27 September 2016

And now for something completely different, I posted my Energy Gel Tasting Notes, which you can find linked on my Projects page.

Calling it a “project” is admittedly a bit much, but ultimately I wanted to share my findings (surprising, pleasing, and stomach-churning) as I’ve been frustrated with the shopping experience of staring at a wide variety of gels with tantalizing flavor names only to find out mid-run that I had been led woefully astray. I can’t imagine I’m the only one. Hopefully these notes (which I’ll be updating with new gels as I try them) will be of some use – please learn from my mistakes!

API Lessons from APIWorld 2016

Friday, 16 September 2016

Marc Andreessen claimed in 2011 that Software Is Eating the World, and as technology has progressed ever onward it’s now more specifically a case of APIs Are Eating the World – or at least APIs Are Fueling the Software That’s Eating the World. Whatever your particular view on the matter, APIs are extremely and increasingly important in software and, hence, the world at large. With that in mind, I spent the last few days at APIWorld, the “largest vendor-neutral API conference and expo”. Below are some insightful learnings1 I picked up during a couple of sessions that might be helpful in providing an API – whether you’re creating, maintaining, or actively growing one.

Negotiating an API: Crafting Endpoints So Developers Still Like Each Other

A great talk by Travis Jungroth (@travisjungroth) from HouseCanary, which could have been titled “5 Lessons in Leveraging UX Best Practices to Create a Better API”.

  • Lesson 1: Empathy
    • Design the client side first in order to make the “ideal client”.
    • First write the 20% of the calls that will get 80% of the usage; these set the patterns that will dictate how the remaining 80% of the calls are designed.
    • Take this time to negotiate with yourself the tradeoffs between ease of the provider (you, when creating/maintaining it) and ease of the client (your users, developers using the API for their products).
    • Get feedback early and often.
  • Lesson 2: Be Consistent, But Flexible
    • While seemingly at odds, more specifically, be consistent in your implementation but flexible in function.
    • Simplicity will often get you both at the same time, mostly for free.
  • Lesson 3: Endpoint Design
    • Make sure you’re hitting all the REST super-basics; avoid elaborate and unique implementations.
    • Jungroth noted that he prefers “coding like a lawyer”, preferring to look for and replicate prior art when he’s coding.
    • By following standard conventions & best practices, you’re providing your potential users the best opportunity to understand the API because they’re already familiar with the core basics of it.
  • Lesson 4: Get Feedback
    • It’s best to talk to actual users. When doing this, he prefers not to write it all down, and instead makes sure that he personally talks to all of the users approached, for a few reasons:
      • Logging all feedback with the plan to query it later almost never happens; the feedback just sits somewhere never to be looked at again.
      • When the same person is doing all the user interviews, that person will be able to tune into the most important issues as they will surface simply due to repetition from a large group of users.
      • Usage & Logging Analytics are the #1 feedback source (in addition to endpoint usage), so make sure you’re fully logging errors.
  • Lesson 5: Be Careful When Making Changes (the hardest thing to do)
    • An API is essentially a contract with your users; when they choose to use it, they’re putting their trust in you that it will work and work consistently. Breaking the API is breaking their trust.
    • Of course, adding endpoints is OK - the trick is how to remove or change them.
    • You set yourself up for failure if you add versioning in the URL (e.g.,…); instead, add versioning info in the headers.
    • Documentation is hugely helpful; he’s a huge fan of Swagger.

Jungroth also shared a few Do’s and Don’ts:

  • Do
    • Try to achieve flat endpoint design.
    • Use user personas when designing the client.
    • When you need to validate, split validation out into a second “is_valid” call.
    • If you need to provide large files, consider passing through a link for the developer to download directly (such as from Amazon S3).
  • Don’t
    • Use query/endpoint flags that dramatically alter default behavior of the endpoints.
    • Allow endpoints to use different services for different things.
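Jungroth’s advice to keep versioning out of the URL and put it in the headers could look something like the following – a minimal Python sketch, where the header name, endpoint, and handlers are all hypothetical (not from the talk):

```python
# Minimal sketch of header-based versioning: the URL stays stable
# (e.g. /listings) and the client opts into a version via a request
# header, defaulting to the latest when no header is sent.

LATEST = 2

def handle_listings_v1(params):
    # v1 returned a bare list of results
    return ["123 Main St"]

def handle_listings_v2(params):
    # v2 wraps results in an envelope so metadata can be added later
    # without another breaking change
    return {"results": ["123 Main St"], "count": 1}

HANDLERS = {1: handle_listings_v1, 2: handle_listings_v2}

def dispatch(headers, params=None):
    """Route a /listings request based on a hypothetical 'Api-Version' header."""
    version = int(headers.get("Api-Version", LATEST))
    return HANDLERS[version](params or {})
```

A client pinned to v1 sends `Api-Version: 1` and keeps getting the old shape; everyone else gets the latest – and the URL your users bookmarked and hard-coded never has to change.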

The Fundamentals of Platform Strategy – Creating Genuine Value with APIs

This talk by Steven Willmott (@njyx) from 3scale / Red Hat could’ve benefited from another 20 minutes or so; fantastic information that unfortunately got rushed through at times, as the conference ran a pretty tight ship in maintaining the schedule.

While Willmott has provided his deck on Slideshare, here are the points I was able to scribble down:

  • Platforms are both a huge opportunity for value and often a wasted resource.
    • They’re big initiatives, often mission-critical, and are for a genuine need.
    • The challenges: the APIs are not used (or don’t work), and the team responsible often isn’t properly resourced.
  • “The Jeff Bezos Moment” was a defining milestone.
    • But even if you do have one yourself, make sure you’re focused on the value you’re delivering piece by piece.
  • Plan the Value:
    • Individual APIs can have tremendous value.
    • Underpin your products, enable new business channels, provide abstraction layers.
    • Core value almost always revolves around agility.
  • Common Errors:
    • Trying something peripheral to your main business to lower your risk.
    • Replacing multiple core systems all at once.
    • Having your platform team build everything themselves; instead, share the burden among other teams.
    • Top-down designing a uniform approach.
  • Your best time to begin:
    • When you have IT driving an initiative.
    • When IT teams are ready to rethink some infrastructure.
    • When the goal isn’t too abstract.
  • Focus on the true value and identify the true users:
    • …of the API
    • …the end user on the very end of the API
    • Ask the usual Product Management questions:
      • Who is the customer?
      • Where is their value?
      • If we create this, will you use it?
      • What is the complete use case?
  • Deliver value one API and one use case at a time.
  • The Developer Experience is overrated:
    • How do the APIs need to be consumed?
    • Who else might be consuming the API?
    • Where do their effects propagate to?
    • What additional value can be added?
  • The API is not the key thing – the Value it delivers is.
    • The platform’s value is the result of the value exchanged between providers and consumers.
  • Measuring Value:
    • Delivering a platform is very hard & unglamorous work.
    • It’s not a big bang.
    • People don’t like the rules and restrictions.
    • But it can be hugely rewarding.
    • Recommends measuring by business(es) enabled – both the number of them and the revenue created by them.

10 Mistakes to Avoid When Building Your API (And How We Learned the Hard Way)

Another great snapshot of learnings from Chris Paul (@idiosynchris) of HelloSign.

  • Mistake 1: Don’t Write the Documentation First
    • Writing the documentation before a single line of code helps you avoid inconsistencies created during development.
  • Mistake 2: Don’t Notify Your Users of Change
    • Paul shared a story about a rebrand HelloSign underwent – and didn’t preannounce. While seemingly innocuous and inconsequential to their API users, the rebranding did break the experience for those users that chose to design their UI & UX consistently with HelloSign’s (prior) design.
    • Ultimately, HelloSign’s users really wanted the heads-up and ability to test the changes ahead of time.
  • Mistake 3: Don’t Provide SDKs
    • HelloSign took a while before providing SDKs and found that their API’s adoption rate jumped when they did; many users confirmed the assumption and cited the SDKs as a deciding factor to sign up.
    • If you don’t provide an official SDK, your community will fill the void with their own of varying quality and eventually you will be expected to support them.
  • Mistake 4: Ignore Task Automation Services
    • HelloSign initially didn’t put much thought into supporting task automation services, but eventually co-developed their particular implementation with Zapier.
    • Now they see 1 out of 10 API requests being made through task automation services.
  • Mistake 5: Don’t Use Your Own API
    • Paul joked that HelloSign’s Marketing team requested shorthanding this to “Drinking Your Own Champagne” instead of the common “Eating Your Own Dogfood” analogy.
    • Here Paul also referenced the aforementioned “Jeff Bezos Moment” and pointed out that if HelloSign used their own API it would’ve saved them a lot of development and testing effort.
    • Using your own API will also tell you whether or not your developer experience is first rate.
  • Mistake 6: Build One-Size-Fits-All Rate Limiting
    • In particular, HelloSign found that doing this made upgrading problematic. Further, customers that hit the limit often had outstanding time-sensitive documents that needed signing.
    • Now, the HelloSign support team has the ability to increase individual customer rate limits immediately.
  • Mistake 7: Ignore Duplicate API Requests
    • You can probably file this under “Don’t ever think you know everything your user is going to do with your product”; HelloSign initially took a pretty conservative view on duplicate requests but eventually found they were catching false positives resulting in customer complaints.
    • Instead, let the user define what a duplicate is. He suggested implementing an “idempotency key” user-defined parameter, pointing to Stripe’s API as an example.
  • Mistake 8: Don’t Plan for Support
    • HelloSign’s developer team found that a lot of their time was being consumed supporting the API, as their Customer Support team was oriented around end-user support.
    • They eventually built out a separate API Support team when one of their CS reps interested in programming stepped up and created the team.
  • Mistake 9: Don’t Provide Your Users Data Insight
    • Users expected this level of service, which resulted in a lot of dev team hours being used answering CS escalations about failures, etc.
  • Mistake 10: Be Inflexible
    • Paul shared a story about one of HelloSign’s customers, a business assisting people pursuing adoption. The original architecture never anticipated their use case – PDFs with a large number of editable fields – and this customer was running into time-outs and other errors because their PDFs exceeded the POST field maximum (each editable PDF field is one POST field). The customer turned out to be very generous with their time, working with HelloSign to come up with an acceptable solution.
    • Paul referenced Postel’s Law: “Be conservative in what you do, liberal in what you accept from others”.
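The idempotency-key approach from Mistake 7 can be sketched in a few lines – this is an illustrative Python mock-up in the style Stripe documents, not HelloSign’s actual implementation (all names here are made up):

```python
# Sketch of a user-defined idempotency key: the client picks the key,
# and the server replays the stored response for any retry carrying the
# same key, instead of the server guessing which requests are duplicates.

_responses = {}  # idempotency_key -> previously returned response
_counter = 0     # stand-in for real resource creation

def create_signature_request(payload, idempotency_key=None):
    global _counter
    if idempotency_key is not None and idempotency_key in _responses:
        # Same key seen before: replay the original response, don't re-execute
        return _responses[idempotency_key]
    _counter += 1
    response = {"id": _counter, "payload": payload}
    if idempotency_key is not None:
        _responses[idempotency_key] = response
    return response
```

With this shape, a client that times out and retries with the same key gets the same response back (and only one request is actually created), while two genuinely separate requests with different keys create two resources – even if their payloads happen to be identical.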

Consequences of an Unhealthy API – What Businesses Need to Know

Rounding out the “API lessons learned” was this talk by Patrick Malatack (@patrickmalatack) of Twilio, although it really should’ve been titled something to the effect of “3 Lessons We Learned to Develop Great APIs” as he never really discussed any consequences that resulted from an unhealthy API.

  • Lesson 1: APIs Are For Humans
    • Make it easy to read (by a human).
    • Opt-in to complexity; introduce complexity to the user at their time of need for it. Malatack showed examples from the Twilio API where a simple message’s straightforward code was incrementally extended to introduce first an image attachment, then functionality to support international messaging.
    • Hackathons aren’t just for fun; don’t just sponsor them for branding awareness but participate to learn from the contestants too. Sit down with them while they’re working and get their feedback.
  • Lesson 2: API Docs Are Marketing for Developers
    • Malatack also advocates for writing the docs first, before any code, as a way to do rapid prototyping; he likens the process to wireframing in Design.
    • Here too, solicit feedback from users early on.
    • “Helper Libraries” are the primary interface: they account for >80% of Twilio’s API requests.
      • This is basically the manifestation of looking where your desired users are and supporting them there; provide all documentation in every given language/framework.
      • Most importantly: make sure the documentation for the helper libraries is as good as the docs for the endpoints.
      • Support standard HTTP status codes; don’t get creative.
  • Lesson 3: Move Fast and Don’t Break Things
    • Users put a lot of trust in you and your API when they choose to use it.
    • “An API Is Forever” – a play on “A Diamond Is Forever”: once users depend on it, you can’t take it back.
    • Never have maintenance windows.
    • Consistency trumps speed:
      • Your API is just one part of what comprises your users’ product; you have no way of knowing the consequences for breaking your API to your users.
      • Measure your P99s.
      • A little latency is probably OK as long as it’s consistent (i.e., your users can expect it and design their products accordingly).
    • “Flags” are your friends:
      • Twilio created “Flag” attributes at the account level that enable them to change anything on a particular account.
      • Once you recognize that a design decision is problematic, you can introduce a new flag to isolate all the affected users and address the issue(s) going forward.
      • Flags also enable you to easily test with subsets of users.
    • Allow for independent releases; version each product/feature independently to minimize risk.
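The account-level flags idea can be sketched very simply – this is a hypothetical Python illustration (flag names, account IDs, and the error-format example are mine, not Twilio’s):

```python
# Sketch of account-level flags: per-account overrides on top of global
# defaults. Once a design decision proves problematic, a new flag isolates
# the affected (grandfathered) accounts so behavior can be corrected for
# everyone else without breaking anyone.

DEFAULT_FLAGS = {"legacy_error_format": False}

ACCOUNT_FLAGS = {
    # Hypothetical grandfathered account that depends on the old error shape
    "AC123": {"legacy_error_format": True},
}

def flag(account_id, name):
    """Look up a flag for an account, falling back to the global default."""
    return ACCOUNT_FLAGS.get(account_id, {}).get(name, DEFAULT_FLAGS[name])

def render_error(account_id, code, message):
    if flag(account_id, "legacy_error_format"):
        return f"ERR {code}: {message}"  # old, problematic shape
    return {"error": {"code": code, "message": message}}  # corrected shape
```

The same mechanism doubles as a cheap way to test a change with a subset of accounts before flipping the default for everyone.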

While there are some slight variances here and there, it’s reassuring that there’s a consistent story among these API practitioners: remember your API is for people, and design it with both your developer users and their end-users in mind. Staying user-focused will set a sound foundation for success.

  1. For those looking for something interesting and maybe controversial, you might want to check out Owen Rubel’s The New API Pattern deck, in which he argues for a more robust design pattern to better handle distributed architectures. 

Never Forget Your Core Function

Tuesday, 16 August 2016

I do want to preface this rant by saying that I’ve been a mostly-happy Pebble user since their launch on Kickstarter. The initial ship date window was woefully underestimated, but the watch eventually arrived and delivered pretty well on its promise. There were some hardware hiccups along the way, but the support team at Pebble has been great and kept me motoring along with my Kickstarter Edition without (unresolved) complaint. Further, Pebble continues to support the original hardware with their software updates even though it’s clearly long in the tooth and their development cycles are pretty aggressive.

That said, I’ve noticed a fairly recent change in how their firmware handles significant error conditions.1 While I’m happy to say that I don’t have a tremendous amount of experience with this undesired scenario, I recall that previously the watch would default to a non-interactive, bare-bones watch mode until you were able to address the error through a firmware reinstall (or other problem-solving effort). Recently, however, I’ve been presented with this:

An unusable error condition in the Pebble smartwatch firmware

This change effectively makes the watch a brick, uselessly strapped to the user’s wrist until they’re able to troubleshoot.2 If the user presented with this behavior isn’t doing anything at the time and can immediately sync the watch with their phone to install a firmware update (or reinstall it altogether), you can make the argument that it’s not a big deal.

However, we all know reality doesn’t work that way. For example, the user could be in back-to-back meetings and might have no other way to inconspicuously check the time. In this scenario, Pebble has lost sight of its core function.

A Humble Suggestion

If at all possible, in these sorts of significant error conditions Pebble should revert to a “low-feature mode” – a simple digital watch-face that delivers on the core function of the device: displaying the time. The annoyance of losing the super-set of smartwatch features is inescapable, but why completely disrupt the user’s experience if you can avoid it?

  1. For the purpose of this argument, I’m defining “significant error condition” as something that prevents the watch from operating normally but shy of a “fatal error condition” that I would expect to be a complete, unresponsive power-down sort of scenario. 

  2. It should be noted that I have no indication as to whether there are different stages of these significant error conditions, which might explain why I’ve seen these two different screens. All I know is that before July 1st, I never saw this particular screen and haven’t seen the bare-bones watch mode since. 

Ars Technica: HTTPS is not a magic bullet for Web security

Monday, 18 July 2016

A great overview for anyone interested in learning more about the nuance of HTTPS: how it is secure, and how it can be misconstrued as secure. While there is some jargon in the article, it’s pretty easily parsable, so there’s no reason to avoid reading it. Very much recommended for anyone using the web (despite the irony of Ars not offering blanket HTTPS connections).

My Experience Implementing HTTPS on SFFW

Gilbertson, the author, makes a point of sharing his personal traumas trying to get HTTPS up and running on past projects he’s worked on. While I don’t dispute that, I do want to offer my own experience if you’re interested in implementing HTTPS on your own site(s), as I’m happy to report that it was quite the opposite for me. Late last year, DreamHost announced that it would offer the one-click install of Let’s Encrypt SSL certificates that Gilbertson said might eventually arrive – which I took full advantage of back in February, shortly after the beta program started. It’s basically a one-click install followed by a short wait for everything to get up and running – definitely a stress-free experience requiring no sophisticated IT knowledge to implement.

Courtesy of this DreamHost FAQ, a quick addition to my site’s .htaccess file:

              # Enable the rewrite engine, then redirect all HTTP requests to HTTPS
              RewriteEngine On
              RewriteCond %{HTTPS} !=on
              RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

…and all HTTP connections to my server would be redirected to use HTTPS. With practically no effort on my part, SSL Labs’ security test rates my site an “A”.

HTTPS Everywhere

I would be remiss if I didn’t put in a recommendation for the EFF’s HTTPS Everywhere browser plugin (Android, Chrome, Firefox, and Opera only at the moment). Simply stated, whenever you browse to a site that offers HTTPS but doesn’t do the aforementioned auto-redirection, the plugin will do it for you. Easy-peasy.

Antoine Valot: Nine Nasty UX Truths

Wednesday, 13 July 2016

A lot of good nuggets here, but not much in the way of surprises. If anything, it’s an opportunity to confess my conflicted feelings about having chosen the Golden Gate Bridge flavor of “International Orange” as the link color on this site; it’s on the reddish side of orange, which brings my choice uncomfortably close to Valot’s position on red, suggesting that I “hate you and want to make you angry”.1

You should be forewarned that taking his advice literally might earn you a trip to the ranks of the unemployed, though (even if the core argument – honing the UI/UX to help the user as a way to simplify navigation for their ultimate benefit – is a worthy one to consider):

On a project, do the sitemap and navigation last. Actually, never do them. Start with the most important object or screen: the one that helps the user achieve their goal. Waste all the project time and budget on making that screen perfect. Obsess over every detail. Lavish hours to the appearance of each pixel. Indulge every fancy and enjoy every minute of it.

Once there is no more time or budget, your client/boss will get very angry, and scream at you that you didn’t do all the other bullshit they wanted to cram down the user’s throat. Play dumb, apologize, and earn yourself a reputation as a flake who never finishes anything… but still, don’t design any of it.

  1. I assure you that my desire to tie in this branding decision with one of San Francisco’s greatest landmarks has no such ulterior motive. I mean, the bridge is the background image, after all – and choosing instead to go with the concrete color of the Transamerica Pyramid is … uninspired (and, honestly, not particularly smart from an accessibility point of view). 

Product School Project

Thursday, 7 July 2016

A while back I completed the Product School 8-week, part-time program that earned me a Software Product Management (SPM)® certificate. While the program is geared to giving people interested in transitioning into a Product Management role a crash course on what’s involved, I decided to enroll to do a gut-check on my experience as I became a PM the old-fashioned way: on-the-job trial-by-fire. The course is designed around going through the process of creating a Product Requirements Document (PRD) for a new feature to an existing product, culminating in giving a 5–10 minute presentation proposing the feature to the class.

While my presentation has been available for download on the Projects page for a while now, I’ve also added the entire PRD to this site. So if you’re interested, please swing by and check it out.1

  1. I will warn you that this PRD is a bit of a monster as the proposed feature would need to touch three different platforms (smartwatch wearable device, native mobile app(s), and web) and two different user types (consumer/user and third-party developer). 

Speckyboy: 5 Lessons of Japanese Web Design

Tuesday, 31 May 2016

Differences in cultural norms have always fascinated me. As someone not well-versed in Japanese culture, I found this article interesting – even with the apparent conflict between lessons 1 & 2 and lesson 5. But as in meatspace, what initially appears as a difference sometimes really isn’t when you get down to the basics of it.

In other words, you don’t need to be in the Japanese market to have your product succeed with the help of these “lessons learned”.

Leaving Google Analytics: Piwik vs Open Web Analytics

Tuesday, 24 May 2016

First things first: this site is a zealotry-free zone. A lot of people love Google and Google Analytics (GA) specifically – all the more power to ’em. GA is a tremendously useful and powerful product; I’ve used it for many years both personally and professionally and have a lot of respect for the great people that work on it.

But I’ve been concerned about their sometimes curious choices to sunset useful products (Reader, Feedburner). Don’t get me wrong; the products are theirs to make those decisions (and I’m sure there are logical internal business decisions for why they chose to sunset them), but it doesn’t make the choices any less problematic for those products’ users.1 Between this concern, a desire for real-time analytics, and just a general curiosity to see what else was out there, I started looking around for another web analytics solution for my personal use. Hey, it’s also a great opportunity to learn some new tools, right?

What Am I Looking For?

Originally, my only real requirements were functionality comparable to Analytics, from someone who isn’t Google and whose business model isn’t predicated on user data. In terms of specific functionality, I wanted geolocation and campaign tracking, as well as the ability to easily check in on the go (a native app certainly wasn’t required, but responsive web was the bare minimum).

I initially tried GoSquared and Heap Analytics which, like Google, run professional hosted services. Both are primarily targeted at businesses but offer free tiers for those of us too inconsequential to fit into that target market. In the end, however, their feature sets are so tailored to their target users that they just didn’t pair up well with my personal needs for this humble little site.

As my research and dabbling continued, I really warmed to the thought of a self-hosted solution. I don’t have anything against hosted services per se, but having complete ownership of my data is compelling. Of course “free” (as-in-beer, but free-as-in-speech too would be better) is a great feature, which dovetailed with the desire for self-hosting.

Mint

Live Demo

Screenshot of Mint dashboard

Way back in the day I remember thinking that if I ever wanted to try something other than GA, I planned to use Shaun Inman’s Mint. Back when Google Reader’s demise was announced, I had bought a license to Inman’s Fever and was impressed with his work. While Mint isn’t free (beer or speech), it’s inexpensive (USD$30 per site), the licensing terms are acceptable to me, and the software must be self-hosted. It’s also extensible, offering both 1st and 3rd party “Pepper” plugins (although some of the extensibility I’m looking for doesn’t appear to be offered by Mint or any Peppers). Unfortunately, much like Fever, Mint doesn’t appear to have been receiving much love lately – they both work, but Inman’s focus and interest are clearly elsewhere. Given this, I couldn’t justify spending the money on Mint (despite being happy with Fever).

Open Web Analytics (OWA)

Live Demo

Nosing around on DreamHost’s admin panel, I found that they offered a one-click install for Open Web Analytics – software I wasn’t familiar with, but the thought of offloading the minor maintenance tasks onto my hosting provider certainly made it compelling enough to give it a whirl. Taking a look at their site, it’s easy to see its appeal to someone who’s coming from GA:

Screenshot of OWA dashboard

Upon further inspection however, the features offered are limited and the extensibility through the module (plugin) architecture is … not really popular. For example, the geolocation module was pretty lackluster especially when compared to Piwik’s; not only did it require manually downloading & installing the city database from Maxmind (which is updated monthly), but there is no option to get visitors’ network provider information.2

While I’m sympathetic to Peter Adams’s schedule and interests aside from being the solitary maintainer of OWA, ultimately I want a tool that’s in active development and gets enhancements from outside parties.

The Pros:

  • UI familiar to GA users
  • Installation & maintenance easily provided by DreamHost
  • Baked-in RSS tracking functionality
  • Country/State/City geolocation functionality
  • Free (beer & speech)

The Cons:

  • Hasn’t been updated since February 2014
  • Pretty inflexible, and admin settings are pretty light
  • Seemingly non-existent 3rd-party module support
  • Geolocation module requires manually downloading / installing / updating a database
  • Mobile experience is non-responsive and miserable; this is particularly inexcusable when there is no native mobile app.
  • Doesn’t appear to honor “Do Not Track” functionality on browsers

Piwik

Live Demo

Screenshot of Piwik dashboard

Piwik appears to be where it’s at for self-hosted and free (beer and speech) web analytics solutions. It’s actively being developed by a company that makes its money from premium services on the product; it’s a clear and proven business model that doesn’t leverage my data in order to sell to advertising customers. The feature set is pretty comparable with GA, even if they made some strong decisions in differentiating the UI and experience – it’s not entirely my cup of tea, but it doesn’t get in my way of using the product.3

The Cons:

  • UI is a little rough around the edges for my taste (but serviceable)
  • No ability to track RSS (but Piwik’s developers are open to working on it if funding is secured)
  • I’m on the hook for maintenance (but I consider this minor, as it’s to be expected with self-hosted solutions)

Conclusion

Unsurprisingly, I decided to go with Piwik.5 There really was no contest here, honestly. Almost by the lone virtue of the dust covering it, OWA was a non-starter. Heap and GoSquared are simply over-designed for my needs.6 If my personal web experiments ever become complex enough to warrant it, I’ll definitely give Heap and GoSquared another look (as well as any players new to the scene), but I’m guessing that between Piwik’s current features, plugin ecosystem, and active development, I may never need to.

  1. That said, it’s extremely hard to imagine Google choosing to do the same to Analytics – especially given how valuable the data they collect through GA is to their actual Search business model. 

  2. And even after all of that, I never was able to get the geolocation data to consistently work. 

  3. It’s worth noting that I’m not holding up GA UI/UX as a gold standard here either. 

  4. I can’t believe I didn’t originally include this – as you might have already inferred given it being a self-hosted solution, there’s no waiting around for your data. 

  5. Full disclosure: As I noted earlier, I’m weaning myself off Google products – not cutting cold-turkey. I’m continuing to use GA redundantly with Piwik for the time being until I’m confident in the latter's numbers and performance. 

  6. Some follow-up research surfaced Logaholic, and the fact that Mixpanel offers a free tier. The former looks interesting, although the massive number of features that only unlock once you start paying was a little off-putting, given that I’m not looking to enroll in a paid subscription for my analytics tool at the moment (maybe later, should my needs change). As for the latter, while my site easily qualifies for the 25,000-data-point ceiling of Mixpanel’s free tier, I’m a little unnerved at the thought of tipping over into their lowest paid tier purely because of an unusually popular month (plus, I suspect it might be over-designed for my needs, like Heap and GoSquared). 

Mike Davidson’s “3 Years in San Francisco”

Thursday, 12 May 2016 

An interesting read1 from Mike Davidson spanning a few different topics, including the relocation experience, working at Twitter, people management, and the notion of what makes a good Product Manager:

There is a contentious ongoing debate in our industry about what the requirements and role of a product manager should be. One side says they must be deeply technical (i.e. ex-engineers), while the other says they needn’t be. I’m told the deeply-technical mindset came from Google, where one day a long time ago, they decided they needed a PM role at the company so they took some of their best engineers who were already widely respected at the company, and made them PMs. Makes total sense, especially considering the early problems Google was trying to solve (mainly search quality), but unfortunately it has caused a wave of copycatting in Silicon Valley that is bad for products, bad for diversity, and bad for business.

This is a bit of a peeve of mine, speaking as a technology Product Manager who not only was never a professional engineer but also knows a number of other solid technology PMs with non-engineering backgrounds.

Davidson also makes an excellent argument for disposing of the popular shorthand for describing PMs as “mini CEOs”. While I understand the intended sentiment, Davidson’s correct that among other things, “the term CEO is so loaded with preconceived notions, that it's just not a safe place to even start your job description”. If you’re one to insist on shorthands, personally I’m a fan of one I picked up from a fellow student in my Product School cohort – that a Product Manager is more analogous to a symphony conductor.

  1. I swear it’s not a “3” theme this week, given Sunday’s post about Sunil Gupta’s 3 Paths to Essence – just a curious coincidence. I had no idea that Davidson had not only left Twitter but also returned to Seattle to boot; I only caught it on my Twitter feed the other day. I’m looking forward to seeing what he moves on to next. 

SlideShare Error Fail

Monday, 9 May 2016

I have to confess I really feel for the people who do support for LinkedIn’s SlideShare product. I don’t pretend to understand the circumstances that led to this error message, but I can only imagine what the support rep’s first thought must be after getting a case assigned to them by a user who encountered it.

An uninformative SlideShare error

I linked my LinkedIn account to SlideShare in order to upload and host my Product School deck. Unfortunately, I didn’t get much further than setting up the account: this error was the only explanation offered for why my upload attempt failed. Realizing that the file might not conform to SlideShare’s upload requirements, I sanity-checked:

  • .pdf, .odp, .ppt/.pps/.pptx/.ppsx/.pot/.potx formats: Check
  • Less than 300 MB: Check
  • It’s not a direct video upload: Check

(I was trying to upload a PDF less than 10 MB in size)
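That kind of sanity check is easy enough to script. A quick sketch of my checklist (the limits are SlideShare’s published requirements at the time; the function and names are my own):

```javascript
// Check a prospective SlideShare upload against the published limits:
// accepted document formats and a 300 MB size cap. (My own quick
// sketch, not anything from SlideShare's tooling.)
var ALLOWED_FORMATS = ['pdf', 'odp', 'ppt', 'pps', 'pptx', 'ppsx', 'pot', 'potx'];
var MAX_BYTES = 300 * 1024 * 1024; // 300 MB

function checkUpload(filename, sizeBytes) {
  var ext = filename.toLowerCase().split('.').pop();
  var problems = [];
  if (ALLOWED_FORMATS.indexOf(ext) === -1) {
    problems.push('unsupported format: .' + ext);
  }
  if (sizeBytes > MAX_BYTES) {
    problems.push('file exceeds 300 MB');
  }
  return problems; // an empty array means the file looks fine
}
```

My sub-10 MB PDF sails through a check like this, which is exactly why the error message was so frustrating.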

Knowing that the mysterious gremlins that prevent intended behavior sometimes get bored and leave, I let it sit and tried again two days later. No change.

So… what now? No other support documentation shed additional light on what the problem might be (neither Uploading Content to SlideShare from Desktop nor SlideShare File Taking A Long Time to Upload), which leaves me contacting that poor support rep with no real worthwhile information to share.

A Humble Suggestion

In the interests of not turning off users (especially first-time users like myself) and maintaining your co-workers’ sanity, I would hope that thrown errors are at least properly identified internally. That would make it possible to parse the error and deliver a useful message to the user, in end-user-friendly language. Giving users the ability to self-diagnose and, ideally, self-resolve only makes things easier for everyone.
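Concretely, even a simple lookup from internal error identifiers to human-readable guidance would go a long way. A sketch of the idea (the codes and copy here are entirely hypothetical, not SlideShare’s):

```javascript
// Map internal error identifiers to end-user-friendly messages with a
// suggested next step. Codes and copy are hypothetical illustrations --
// the point is that an *identified* error can be translated for the user.
var ERROR_MESSAGES = {
  UPLOAD_TOO_LARGE:   'Your file is over the 300 MB limit. Try compressing it and uploading again.',
  UNSUPPORTED_FORMAT: 'That file type isn\'t supported. Please upload a PDF or PowerPoint file.',
  CONVERSION_FAILED:  'We couldn\'t process your file. Re-exporting it as a PDF often fixes this.'
};

function friendlyError(code) {
  // Fall back to a generic, but still actionable, message for anything
  // unidentified -- including the code so support can trace it.
  return ERROR_MESSAGES[code] ||
    'Something went wrong on our end. Please contact support and mention code ' + code + '.';
}
```

Even the fallback beats a bare “something went wrong”: the user gets a next step, and the support rep gets a code to chase.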

Sunil Gupta: 3 Paths to Essence

Sunday, 8 May 2016 

It seems fitting to start this site’s posting feature by sharing a link to Sunil Gupta’s “3 Paths to Essence” post. It’s about four years old at this point, but the wisdom it conveys is timeless:

  1. Create, then Edit – but not both at once
  2. Build half, not half-ass
  3. Discipline your schedule

While #2 is directly valuable to a product manager and their development team (and what led me to the post), #1 and #3 are worthwhile reminders for everybody.

As a product manager in an Agile / Scrum environment, I’m very cognizant of scoping sprint development into those “thin vertical slices”1 that enable the full use (and testing) of a feature and building from there.

It should come as no surprise however that the other two paths are extremely helpful not only in product management but virtually any discipline. For practically my entire life of putting words to page (literal or virtual), I’ve followed the inefficient create-and-edit-in-tandem routine. I have just recently started to retrain myself to instead get everything on the page first, then edit. I still have a ways to go, but I’m already seeing the benefits.

And being ruthless about my schedule / task list? It’s an excellent habit to get into as a product manager, but I can’t recommend it highly enough for anyone who wants to improve on their delivery.

  1. For the uninitiated, a popular Scrum teaching analogy is to imagine developing a tandem bicycle by first releasing a unicycle, then a bicycle, and finally the tandem bicycle. At each release stage, you have a standalone, usable product that is iterated toward the planned final offering.