After a long hiatus (some running-related, but mostly finding-the-time-to-update-this-site-related), I’m happy to start working through a long-standing backlog of reviews for my Energy Gel Tasting Notes (which you can find linked on the Projects page).
I added the first 7 out of the 30-something I’ve got queued up, so stay tuned – I’ll be adding the rest progressively over the next few days.
So the news is out: Fitbit is acquiring Pebble. Quite a few people have been asking me about this, given that I backed Pebble’s initial Kickstarter and that my Product School project proposed Fitbit develop third-party support on their Blaze smartwatch.
Personally, I think this is the smartest move for both companies – in fact, I advocated this as the best solution during my final presentation in Product School. It just makes sense: Pebble honestly had no place to go on their own as they got squeezed by the big players; Fitbit desperately needs a better smartwatch roadmap than the (seemingly shortsighted) one they’ve got, as they too get squeezed.
Starting a third-party platform is no joke, especially as competing platforms have already gotten the drop on you (even if they’re contending with their own challenges with the form factor). Which isn’t to say that acquisition integrations are a cakewalk either. Beyond the usual corporate acquisition headaches, you’ve got to contend with how to blend the two competing product lines in the best possible way and not lose that third-party developer community along the way.
Of course I’m not privy to any inside knowledge, but doing a little digging provides some additional insight into how things are likely going to shake out. The latest post on Pebble’s Developer Blog has this to add to the conversation:
…and yet, “Third-party Pebble developers have a massive opportunity to drive how a Fitbit developer ecosystem will take shape.” So it’s pretty clear that Fitbit is being smart about this acquisition, but I’m perplexed why they’re not keeping the existing services going indefinitely (or, at the very least, why they aren’t simply staying quiet about any EOL plans). If anything, this might lead one to suspect that Fitbit is planning to narrow down the platform so that it hews closer to the niche-ness of the Blaze – which doesn’t make much sense.
In any event, it will be interesting to see what unfolds. Hopefully Fitbit follows a UX fundamental that Pebble ignored…
Coming back to a technology topic, I’ve been having some nagging & kind of bizarre email issues with my hosting provider, DreamHost. While I continue to be really happy with the web hosting and DreamObjects product, I just couldn’t shake my annoyance with the email service.
After doing some research, I came across MXRoute. I finally decided to pull the trigger and found that migrating everything over, while fairly straightforward, had a few bugaboos that might trip someone up. And so I wrote HOWTO: Migrate Your Email From DreamHost to MXRoute in the event anyone else wants to do the same.
And now for something completely different, I posted my Energy Gel Tasting Notes, which you can find linked on my Projects page.
Calling it a “project” is admittedly a bit much, but ultimately I wanted to share my findings (surprising, pleasing, and stomach-churning). I’ve long been frustrated with the shopping experience of staring at a wide variety of gels with tantalizing flavor names, only to find out mid-run that I had been led woefully astray. I can’t imagine I’m the only one. Hopefully these notes (which I’ll be updating with new gels as I try them) will be of some use – please learn from my mistakes!
Marc Andreessen claimed in 2011 that Software Is Eating the World, and as technology has progressed ever onward, it’s now more specifically a case of APIs Are Eating the World – or at least APIs Are Fueling the Software That’s Eating the World. Whatever your particular view on the matter, APIs are extremely and increasingly important in software and hence the world at large. With that in mind, I spent the last few days at APIWorld, the “largest vendor-neutral API conference and expo”. Below are some insightful learnings1 I picked up during a couple of sessions that might be helpful if you provide an API – whether you’re creating, maintaining, or actively growing one.
A great talk by Travis Jungroth (@travisjungroth) from HouseCanary that could have been titled “5 Lessons in Leveraging UX Best Practices to Create a Better API”.
Jungroth also shared a few Do’s and Don’ts:
is_valid”.
This talk by Steven Willmott (@njyx) from 3scale / Red Hat could’ve benefited from another 20 minutes or so; fantastic information that unfortunately got rushed through at times, as the conference ran a pretty tight ship in maintaining the schedule.
While Willmott has provided his deck on SlideShare, here are the points I was able to scribble down:
Another great snapshot of learnings from Chris Paul (@idiosynchris) of HelloSign.
Rounding out the “API lessons learned” was this talk by Patrick Malatack (@patrickmalatack) of Twilio, although it really should’ve been titled something to the effect of “3 Lessons We Learned to Develop Great APIs” as he never really discussed any consequences that resulted from an unhealthy API.
While there are some slight variances here and there, it’s reassuring that there’s a consistent story among these API practitioners: remember your API is for people, and design it with both your developer users and their end-users in mind. Staying user-focused will set a sound foundation for success.
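To make that concrete, here’s a minimal sketch of the difference between an error response designed for machines and one designed for the people consuming your API. This is purely my own illustration (in TypeScript) – every field name and URL here is hypothetical, not taken from any of the talks:

// A hypothetical API error response designed only for machines:
// terse, numeric, and useless without a trip to the documentation.
const opaqueError = {
  code: 4012,
  ok: false,
};

// The same failure designed for people: self-describing names (note the
// readable boolean is_valid), a plain-language message, and a pointer to
// the fix. All field names and the URL are illustrative.
const humaneError = {
  is_valid: false,
  error: "expired_api_key",
  message: "The API key you supplied has expired.",
  documentation_url: "https://api.example.com/docs/authentication",
};

console.log(JSON.stringify(humaneError, null, 2));

Nothing about the second response is more work to produce, but it spares the developer a context switch – exactly the kind of UX thinking these talks were advocating.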
For those looking for something interesting and maybe controversial, you might want to check out Owen Rubel’s The New API Pattern deck, in which he argues for a more robust design pattern to better handle distributed architectures. ↩
I do want to preface this rant by saying that I’ve been a mostly-happy Pebble user since their launch on Kickstarter. The initial ship date window was woefully underestimated, but the watch eventually arrived and delivered pretty well on its promise. There were some hardware hiccups along the way, but the support team at Pebble has been great and kept me motoring along with my Kickstarter Edition without (unresolved) complaint. Further, Pebble continues to support the original hardware with their software updates even though it’s clearly long in the tooth and their development cycles are pretty aggressive.
That said, I’ve noticed a fairly recent change in how their firmware handles significant error conditions.1 While I’m happy to say that I don’t have a tremendous amount of experience with this undesired scenario, I recall that the watch previously defaulted to a non-interactive, bare-bones watch mode until you were able to address the error through a firmware reinstall (or other problem-solving effort). Recently, however, I’ve been presented with this:
This change effectively makes the watch a brick, uselessly strapped to the user’s wrist until they’re able to troubleshoot.2 If the user presented with this behavior isn’t doing anything at the time and can immediately sync the watch with their phone to install a firmware update (or reinstall it altogether), you can make the argument that it’s not a big deal.
However, we all know reality doesn’t work that way. For example, the user could be in back-to-back meetings and might have no other way to inconspicuously check the time. In this scenario, Pebble has lost sight of its core function.
If at all possible, in these sorts of significant error conditions Pebble should return to falling back on a “low-feature mode” – a simple digital watchface that delivers on the core function of the device: displaying the time. The annoyance of losing the superset of smartwatch features is inescapable, but why completely disrupt the user’s experience if you can avoid it?
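To illustrate the argument (and only that – Pebble’s actual firmware is obviously not TypeScript, and I have no knowledge of its internals), here’s a sketch of the fallback behavior I’m advocating; every name in it is hypothetical:

// Hypothetical severity levels for on-watch error conditions.
type ErrorCondition = "none" | "significant" | "fatal";

// Hypothetical display modes the watch could fall back to.
type DisplayMode = "full_smartwatch" | "basic_watchface" | "error_screen";

// The core idea: a "significant" error should degrade to a bare-bones
// watchface that still shows the time, rather than a dead-end error
// screen. Only a truly fatal condition justifies giving up entirely.
function resolveDisplayMode(condition: ErrorCondition): DisplayMode {
  switch (condition) {
    case "none":
      return "full_smartwatch";
    case "significant":
      return "basic_watchface"; // still delivers the core function: the time
    case "fatal":
      return "error_screen"; // nothing left to salvage
  }
}

console.log(resolveDisplayMode("significant")); // "basic_watchface"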
For the purpose of this argument, I’m defining “significant error condition” as something that prevents the watch from operating normally but shy of a “fatal error condition” that I would expect to be a complete, unresponsive power-down sort of scenario. ↩
It should be noted that I have no indication as to whether there are different stages of these significant error conditions, which might explain why I’ve seen these two different screens. All I know is that before July 1st, I never saw this particular screen and haven’t seen the bare-bones watch mode since. ↩
A great overview for anyone interested in learning more about the nuances of HTTPS: how it is secure, and how it can be misconstrued as secure. While there is some jargon in the article, it’s pretty easily parsable, so there’s no reason to avoid reading it. Very much recommended for anyone using the web (despite the irony of Ars not offering blanket HTTPS connections).
Gilbertson, the author, makes a point of sharing his personal traumas of trying to get HTTPS up and running on past projects. While I don’t dispute that, I do want to offer my own experience if you’re interested in implementing HTTPS on your own site(s), as I’m happy to report it was quite the opposite for me. Late last year, DreamHost announced that it would offer the one-click install of Let’s Encrypt SSL certificates that Gilbertson said might eventually arrive – which I took full advantage of back in February, shortly after the beta program started. It’s basically a one-click install followed by a short wait for everything to get up and running – a stress-free experience requiring no sophisticated IT knowledge to implement.
Courtesy of this DreamHost FAQ, a quick addition to my site’s .htaccess file:
# Enable mod_rewrite (if it isn't already on) and send HTTP requests to HTTPS
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
…and all HTTP connections to my server would be redirected to use HTTPS. With practically no effort of my own, SSL Labs’ security test rates my site an “A”.
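If you’d like to sanity-check your own redirect, a request to the plain-HTTP URL should come back as a 301 pointing at the HTTPS equivalent. A minimal TypeScript sketch using the fetch API (run it in a server-side runtime like Node 18+, since browsers hide manual redirects; example.com stands in for your own domain):

// Request the plain-HTTP URL without following redirects, then confirm
// we received a 301 whose Location header points at the HTTPS version.
// "example.com" is a placeholder for your own domain.
async function checkRedirect(host: string): Promise<void> {
  const res = await fetch(`http://${host}/`, { redirect: "manual" });
  console.log(res.status, res.headers.get("location"));
  // Expected: 301 https://example.com/
}

checkRedirect("example.com");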
I would be remiss if I didn’t put in a recommendation for EFF’s HTTPS Everywhere browser plugin (Android, Chrome, Firefox, and Opera only at the moment). Simply stated, whenever you browse to a site that also offers HTTPS but doesn’t do the aforementioned auto-redirection, the plugin will do it for you. Easy-peasy.
A lot of good nuggets here, but not much in the way of surprises. If anything, it’s an opportunity to confess my conflicted feelings about having chosen the Golden Gate Bridge flavor of “International Orange” as the link color on this site; it’s on the reddish side of orange, which brings my choice uncomfortably close to Valot’s position on red – suggesting that I “hate you and want to make you angry”.1
Be forewarned, though, that taking his advice literally might earn you a trip to the ranks of the unemployed (even if the core argument – honing the UI/UX to help the user and simplifying navigation for their ultimate benefit – is a worthy one to consider):
On a project, do the sitemap and navigation last. Actually, never do them. Start with the most important object or screen: the one that helps the user achieve their goal. Waste all the project time and budget on making that screen perfect. Obsess over every detail. Lavish hours to the appearance of each pixel. Indulge every fancy and enjoy every minute of it.
Once there is no more time or budget, your client/boss will get very angry, and scream at you that you didn’t do all the other bullshit they wanted to cram down the user’s throat. Play dumb, apologize, and earn yourself a reputation as a flake who never finishes anything… but still, don’t design any of it.
I assure you that my desire to tie in this branding decision with one of San Francisco’s greatest landmarks has no such ulterior motive. I mean, the bridge is the background image, after all – and choosing instead to go with the concrete color of the Transamerica Pyramid is … uninspired (and, honestly, not particularly smart from an accessibility point of view). ↩
A while back I completed the Product School 8-week, part-time program that earned me a Software Product Management (SPM)® certificate. While the program is geared toward giving people interested in transitioning into a Product Management role a crash course on what’s involved, I enrolled to do a gut-check on my experience, since I became a PM the old-fashioned way: on-the-job, trial-by-fire. The course is designed around going through the process of creating a Product Requirements Document (PRD) for a new feature for an existing product, culminating in a 5–10 minute presentation proposing the feature to the class.
While my presentation has been available for download on the Projects page for a while now, I’ve also added the entire PRD to this site. So if you’re interested, please swing by and check it out.1
I will warn you that this PRD is a bit of a monster as the proposed feature would need to touch three different platforms (smartwatch wearable device, native mobile app(s), and web) and two different user types (consumer/user and third-party developer). ↩
Differences in cultural norms have always fascinated me. As someone not well-versed in Japanese culture, I found this article interesting – even with the apparent conflict between lessons 1 & 2 and lesson 5. But as in meatspace, what initially appears as a difference sometimes really isn’t when you get down to the basics of it.
In other words, you don’t need to be in the Japanese market to have your product succeed with the help of these “lessons learned”.
First things first: this site is a zealotry-free zone. A lot of people love Google and Google Analytics (GA) specifically – all the more power to ’em. GA is a tremendously useful and powerful product; I’ve used it for many years both personally and professionally and have a lot of respect for the great people that work on it.
But I’ve been concerned about their sometimes curious choices to sunset useful products (Reader, FeedBurner). Don’t get me wrong; the products are theirs, and those decisions are theirs to make (I’m sure there were logical internal business reasons for sunsetting them), but that doesn’t make the choices any less problematic for those products’ users.1 Between this concern, a desire for real-time analytics, and just a general curiosity to see what else was out there, I started looking around for another web analytics solution for my personal use. Hey, it’s also a great opportunity to learn some new tools, right?
Originally, my only real requirements were functionality comparable to Analytics, from someone who isn’t Google and whose business model isn’t predicated on user data. In terms of specific functionality, I wanted geolocation and campaign tracking, as well as the ability to easily check in on the go (a native app certainly wasn’t required, but responsive web was the bare minimum).
I initially tried GoSquared and Heap Analytics which, like Google, run professional hosted services. Both are primarily targeted at businesses but offer free tiers for those of us too inconsequential to fit into that target market. In the end, however, their feature sets are so tailored to their target users that they just didn’t pair up well with my personal needs for this humble little site.
As my research and dabbling continued, I really warmed to the thought of a self-hosted solution. I don’t have anything against hosted services per se, but having complete ownership of my data is compelling. Of course, “free” (as in beer, though free as in speech would be even better) is a great feature, which dovetailed with the desire for self-hosting.
Way back in the day, I remember thinking that if I ever wanted to try something other than GA, it would be Shaun Inman’s Mint. Back when Google Reader’s demise was announced, I had bought a license to Inman’s Fever and was impressed with his work. While Mint isn’t free (beer or speech), it’s inexpensive (USD$30 per site), the licensing terms are acceptable to me, and the software must be self-hosted. It’s also extensible, offering both 1st- and 3rd-party “Pepper” plugins (although some of the extensibility I’m looking for doesn’t appear to be offered by Mint or any Peppers). Unfortunately, much like Fever, Mint doesn’t appear to have been receiving much love lately – they both work, but Inman’s focus and interest are clearly elsewhere. Given this, I couldn’t justify spending the money on Mint (despite being happy with Fever).
Nosing around DreamHost’s admin panel, I found that they offered a one-click install for Open Web Analytics – software I wasn’t familiar with, but the thought of offloading the minor maintenance tasks onto my hosting provider certainly made it compelling enough to give it a whirl. Taking a look at their site, it’s easy to see its appeal to someone coming from GA:
Upon further inspection, however, the features offered are limited and the extensibility through the module (plugin) architecture is … not really popular. For example, the geolocation module was pretty lackluster, especially when compared to Piwik’s; not only did it require manually downloading & installing the city database from MaxMind (which is updated monthly), but there was no option to get visitors’ network provider information.2
While I’m sympathetic to Peter Adams’s schedule and interests aside from being the solitary maintainer of OWA, ultimately I want a tool that’s in active development and gets enhancements from outside parties.
Piwik appears to be where it’s at for self-hosted and free (beer and speech) web analytics solutions. It’s actively being developed by a company that makes its money from premium services on the product; it’s a clear and proven business model that doesn’t leverage my data in order to sell to advertising customers. The feature set is pretty comparable with GA, even if they made some strong decisions in differentiating the UI and experience – it’s not entirely my cup of tea, but it doesn’t get in my way of using the product.3
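For anyone curious what the integration looks like, it’s the usual analytics pattern: a small snippet in your page template that queues tracking calls and loads Piwik’s script asynchronously. Here’s a sketch of the standard snippet from Piwik’s docs, rendered as TypeScript, with the tracker URL and site ID as placeholders for your own values:

// Piwik queues commands on a global _paq array, which piwik.js consumes
// once it loads. The tracker URL and site ID below are placeholders.
const w = window as any;
w._paq = w._paq || [];
w._paq.push(["trackPageView"]);      // record the page view
w._paq.push(["enableLinkTracking"]); // also track outbound links/downloads

const trackerBase = "//example.com/piwik/";
w._paq.push(["setTrackerUrl", trackerBase + "piwik.php"]);
w._paq.push(["setSiteId", "1"]);

// Load the tracker script asynchronously so it never blocks rendering.
const script = document.createElement("script");
script.async = true;
script.src = trackerBase + "piwik.js";
document.head.appendChild(script);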
Unsurprisingly, I decided to go with Piwik.5 There really was no contest here, honestly. Almost by the lone virtue of the dust covering it, OWA was a non-starter. Heap and GoSquared are simply over-designed for my needs.6 If my personal web experiments ever become complex enough to warrant it, I’ll definitely give Heap and GoSquared another look (as well as any players new to the scene) but I’m guessing that between Piwik’s current features & plugin ecosystem and active development I might never need to do that.
That said, it’s extremely hard to imagine Google choosing to do the same to Analytics – especially given how valuable the data they collect through GA is to their actual Search business model. ↩
And even after all of that, I never was able to get the geolocation data to consistently work. ↩
It’s worth noting that I’m not holding up GA UI/UX as a gold standard here either. ↩
I can’t believe I didn’t originally include this – as you might have already inferred given it being a self-hosted solution, there’s no waiting around for your data. ↩
Full disclosure: As I noted earlier, I’m weaning myself off Google products – not cutting cold-turkey. I’m continuing to use GA redundantly with Piwik for the time being until I’m confident in the latter's numbers and performance. ↩
Some follow-up research surfaced Logaholic, as well as the fact that Mixpanel offers a free tier. The former looks interesting, although the massive number of features that only become enabled when you start paying was a little off-putting, given that I’m not looking to enroll in a paid subscription for my analytics tool at the moment (maybe later, should my needs change). As for the latter, while my site easily qualifies for the sub-25,000-data-point ceiling of Mixpanel’s free tier, I’m a little unnerved at the thought of tipping over into their lowest paid tier purely due to an unusually popular month (plus, I suspect it might be over-designed for my needs, like Heap and GoSquared). ↩
An interesting read1 from Mike Davidson spanning a few different topics, including the relocation experience, working at Twitter, people management, and the notion of what makes a good Product Manager:
There is a contentious ongoing debate in our industry about what the requirements and role of a product manager should be. One side says they must be deeply technical (i.e. ex-engineers), while the other says they needn’t be. I’m told the deeply-technical mindset came from Google, where one day a long time ago, they decided they needed a PM role at the company so they took some of their best engineers who were already widely respected at the company, and made them PMs. Makes total sense, especially considering the early problems Google was trying to solve (mainly search quality), but unfortunately it has caused a wave of copycatting in Silicon Valley that is bad for products, bad for diversity, and bad for business.
This is a bit of a peeve of mine, speaking as a technology Product Manager who not only was never a professional engineer but also knows a number of other solid technology PMs with non-engineering backgrounds.
Davidson also makes an excellent argument for disposing of the popular shorthand for describing PMs as “mini CEOs”. While I understand the intended sentiment, Davidson’s correct that among other things, “the term CEO is so loaded with preconceived notions, that it's just not a safe place to even start your job description”. If you’re one to insist on shorthands, personally I’m a fan of one I picked up from a fellow student in my Product School cohort – that a Product Manager is more analogous to a symphony conductor.
I swear it’s not a “3” theme this week, given Sunday’s post about Sunil Gupta’s 3 Paths to Essence – rather a curious coincidence. I had no idea that Davidson had not only left Twitter but also returned to Seattle; I had just caught this on my Twitter feed the other day. I’m looking forward to seeing what he moves on to next. ↩
I have to confess I really feel for the people who do support for LinkedIn’s SlideShare product. I don’t pretend to understand the circumstances that led to this error message, but I can only imagine what the support rep’s first thought must be after getting a case assigned to them by a user who encountered it.
I linked my LinkedIn account to SlideShare in order to upload and host my Product School deck. Unfortunately, I didn’t get much further than setting up the account as this error was the only thing communicated regarding why I was unsuccessful in my attempt to upload the deck. Realizing that the file might not conform to SlideShare’s upload requirements, I sanity-checked:
(I was trying to upload a PDF less than 10 MB in size)
Knowing that sometimes mysterious gremlins that prevent intended behavior can get bored and leave, I left it and tried again 2 days later. No change.
So… what now? There wasn’t any other support documentation that shed additional light on what the problem might be (not Uploading Content to SlideShare from Desktop, nor SlideShare File Taking A Long Time to Upload), which leaves contacting that poor support rep with no real worthwhile information to share.
In the interest of not turning off users (especially first-time users like myself) and maintaining your co-workers’ sanity, I would hope that the errors being thrown are properly identified. That would make it possible to parse each error and deliver a useful message to the user (in end-user-friendly language). Giving users the ability to self-diagnose and, ideally, self-resolve will only make things easier for everyone.
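As a sketch of what I mean (entirely hypothetical – I have no insight into SlideShare’s internals, and every identifier and message below is invented), properly identified errors can be mapped to messages that tell the user what to do next:

// Hypothetical internal error identifiers; the point is that each failure
// is distinguishable rather than collapsed into one generic "error".
type UploadError =
  | "file_too_large"
  | "unsupported_format"
  | "conversion_failed";

// Map each identified error to plain language the user can act on.
// All messages here are invented for illustration.
const userMessages: Record<UploadError, string> = {
  file_too_large:
    "Your file is over the upload size limit. Try compressing it first.",
  unsupported_format:
    "We can't read this file type. Please try a supported format like PDF.",
  conversion_failed:
    "Something went wrong converting your file. Re-exporting it as a PDF often helps.",
};

function explain(error: UploadError): string {
  return userMessages[error];
}

console.log(explain("unsupported_format"));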
It seems fitting to start this site’s posting feature with sharing a link to Sunil Gupta’s “3 Paths to Essence” post. It’s about 4 years old at this point, but the wisdom conveyed is timeless:
- Create, then Edit – but not both at once
- Build half, not half-ass
- Discipline your schedule
While #2 is directly valuable to a product manager and their development team (and what led me to the post), #1 and #3 are worthwhile reminders for everybody.
As a product manager in an Agile / Scrum environment, I’m very cognizant of scoping sprint development into those “thin vertical slices”1 that enable the full use (and testing) of a feature and building from there.
It should come as no surprise however that the other two paths are extremely helpful not only in product management but virtually any discipline. For practically my entire life of putting words to page (literal or virtual), I’ve followed the inefficient create-and-edit-in-tandem routine. I have just recently started to retrain myself to instead get everything on the page first, then edit. I still have a ways to go, but I’m already seeing the benefits.
And being ruthless about my schedule / task list? It’s an excellent habit to get into as a product manager, but I can’t recommend it highly enough for anyone who wants to improve on their delivery.
For the uninitiated, a popular Scrum teaching analogy by Collab.net is to imagine your development of a tandem bicycle starting with the release of a unicycle, followed by your next release of a bicycle, then finally the tandem bicycle. At each release stage, you have a standalone usable product that is slowly iterated into the planned final offering. ↩