Who am I?

The first negative surprise of switching to product management was my inability to explain to my parents what exactly I do at work. They honestly went through every job they knew, trying to imagine what my typical working day looks like. But no, Mom and Dad,

I’m not a software developer – I don’t own source code. Developers do.

I’m not a project manager — I don’t own schedules or resources. Project Manager does.

I’m not a software tester — I don’t own product quality. Colleagues from QA do.

I’m not a team lead — I don’t have staff responsibility. Team Lead does.

I’m not a designer — I don’t own design. Designers do.

I’m not a business analyst – I don’t own a spec, I only tell user stories.

I’m not a marketing manager – I don’t need to care about the marketing of my products.

I’m not a translator — I don’t need to care about translations.

I’m not an evangelist – I don’t own product promotion.

I’m not a PR specialist – my colleague owns PR.

I’m not a statistician — I don’t own reports. Colleagues from BI do.

I’m not a customer – I don’t have any project budget. My managers do.

I’m not a support specialist – I don’t own customer support. Colleagues from the Helpdesk do.

I’m not a mini-CEO of my product – my managers are.

So what, then, do I do at work?

Hey, I’m a Product Owner!

Simple team productivity model

Let’s play a little with the rule of thumb that every team member you need to communicate with reduces your Ideal time by some percentage.

Let’s assume that all developers in the team are equally productive, and everyone gets the same Ideal time per Project day. First, we start with a team where every developer has to communicate with every other developer in the team:

Let’s calculate the productivity of such teams for 5%, 10% and 15% communication overhead:

On the vertical axis: team productivity as a factor of a single developer’s productivity. On the horizontal axis: the number of team members. The interesting result of the calculation is that, according to the model, even with unrealistically low communication overhead (5% of 4 Ideal hours per day is just 12 minutes of communication per day), the largest feasible team is around 10 members, and it is only about 5.5 times as productive as a single developer.

Now let’s restructure the team. We split the software to be developed into several parts and define explicit, well-documented, carefully designed and slowly changing APIs (or better yet: data exchange formats) between the parts. A team of 10 can then consist of three independent groups and a team lead. Each group has a developer lead, communicating with the team lead. Inside the group, every member (including the developer lead) communicates with every other member:

In such a team, there are 6 developers who only have to communicate with two other team members, and 4 developers who communicate with three other team members. This makes the team unbelievably more effective: while the fully connected team with 10 members is (at 10% communication overhead) only as productive as a single developer, the hierarchically structured team is 7.6 times as productive as a single developer. At least, according to this simple model.
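For the curious, here is a minimal sketch of the arithmetic behind these numbers. The clamp at zero and the contact counts are assumptions of this toy model, not measurements:

    # Toy model: each active communication contact costs a fixed share of Ideal time.
    def productivity(contact_counts, overhead):
        # Team productivity as a multiple of a single developer.
        return sum(max(0.0, 1.0 - overhead * contacts) for contacts in contact_counts)

    # Fully connected team of n members: everyone talks to (n - 1) others.
    def fully_connected(n, overhead):
        return productivity([n - 1] * n, overhead)

    # Hierarchical team of 10: 6 developers with 2 contacts each (their own group),
    # plus 3 group leads and 1 team lead with 3 contacts each.
    def hierarchical_10(overhead):
        return productivity([2] * 6 + [3] * 4, overhead)

    for overhead in (0.05, 0.10, 0.15):
        print(f"{overhead:.0%}: fully connected {fully_connected(10, overhead):.1f}x, "
              f"hierarchical {hierarchical_10(overhead):.1f}x")
    # At 10% overhead: fully connected 10 -> 1.0x, hierarchical 10 -> 7.6x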

Discussion

Can introducing good APIs and structuring the team around them really improve team productivity by an order of magnitude?

How to estimate

Just like Merging, estimation is one of those skills that benefits from regular training but resists it because the consequences of errors are strongly negative. Fortunately, there is some common advice that shifts the burden from intuition to rationality, and therefore makes estimation easier to learn.

Read the fucking spec

This sounds obvious, but to increase estimation quality you need to fully understand what is going to be built. In software, everything can be connected with everything else. This means that estimating only a small fraction of the whole spec, with no knowledge of the other parts, is risky: a single requirement somewhere in the seemingly unrelated parts might dictate a specific technology or framework you wouldn’t otherwise assume, rendering your estimation (based on another technology) useless.

Therefore, read the whole spec. Read each page, each requirement, each sentence. Look at every available design comp, and understand what each of the UI elements is doing there and why. Ask questions. Ask a lot of them. Understand what areas are not exactly defined. Understand why they are still undefined, and how they could be defined in the future. Know exactly what you’re going to release at the end of the project.

Grade the parts

From what you understand about the spec, grade the product parts in the following four areas:

  • Did that before 
  • Know how to do 
  • Optimistic challenge 
  • The Horror

The Did that before area might have a misleading name, because software developers never develop an exact copy of some older software. But still, there are a lot of factors that don’t affect the effort. Writing a CSS style for an orange button will take the same time as writing a CSS style for a green button, if the only difference is the color. Making a 1:2 column layout is not that much different from a 2:1 column layout. Creating a SQL database storing furniture web shop products and writing ORM code for it is roughly the same as creating a database for a car dealer (when using proper time units, in this case weeks, and rounding up).

Let’s define two software parts to be effort-equivalent if they differ only in factors that don’t influence the effort needed to create them.

To improve your estimation skills, you need to build a catalog of typical effort-equivalent parts, along with information about how long it took to implement them before. In my case, I haven’t written this information down – I just remember that “Interfacing to a new web API (PayPal, Credit Card Processing, Facebook, etc.)” would normally take me 3 days for the first working version. “Creating a simple UI screen” needs one day and “Creating a complex UI screen” needs 3 days, both meaning the code complete stage before bug-fixing. And “Writing a requirements document” goes at 2 to 3 pages per day.

And the estimation of the Did that before area looks like this: you split it into the effort-equivalent parts you know and sum up the efforts. The result is in units of Project time.
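To illustrate, here is a hedged sketch of such a catalog-based estimation. The catalog entries and the spec parts below are invented for the example; the point is only “split into known effort-equivalent parts and sum up”:

    # Hypothetical personal catalog of effort-equivalent parts, in Project days.
    catalog = {
        "new web API integration": 3,   # PayPal, credit card processing, Facebook, ...
        "simple UI screen": 1,          # code complete, before bug-fixing
        "complex UI screen": 3,
        "requirements page": 0.4,       # 2 to 3 pages per day
    }

    # The "Did that before" portion of an imaginary spec.
    spec_parts = [
        ("new web API integration", 1),
        ("simple UI screen", 4),
        ("complex UI screen", 2),
    ]

    estimate = sum(catalog[name] * count for name, count in spec_parts)
    print(f"Did that before: {estimate:.1f} Project days")   # -> 13.0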

Time measures

There are four different time measures used during estimation:

  • Calendar Time. This is really the calendar time. If we start developing on March 3rd, and the estimated calendar time efforts are 15 calendar days, we plan to be ready on March 18th. This means, the Calendar Time includes weekends, the probability of official holidays, the probability of vacations and sick days, average number of projects developed at the same time, all kinds of interruptions and unproductive time, etc.
  • Working time. This is Calendar time without weekends, holidays, vacations and sick days, but including several projects at once and all kinds of interruptions and unproductive time. Basically, every day you show up at work counts toward Working time.
  • Project time. This is Working time spent on one project. If you’re only doing one project at all, it is the same as Working time. If you have a contingent of 1 day per week for a second project, you might spend 5 Working days in some week, but only 4 Project days in the same week.
  • Ideal time. This is Project time without any interruptions: no meetings, no small talk in the tea kitchen, no helping other teammates, no work-related communication with other colleagues, no writing documentation or managing the bug tracker. This is the most limited resource. As a software developer, I used to have only 2 to 6 hours of Ideal time per day on average. There were days I didn’t have a single hour. Most days, I managed around 4 hours. The rest was filled with the so-called “unproductive” things, which most of the time were of course absolutely necessary and reasonable things to do, but still didn’t contribute to writing code.

Note that the time measures constitute an onion structure, with Ideal time at its core. Eight hours of Ideal time corresponds to two Project days, about 3 Working days, and about one calendar week. As for the exact conversion factors, YMMV, but the overall idea is the onion. Also because of the onion, you would usually use different time units for different time measures: Ideal time is usually measured in hours, Project and Working time in days, and Calendar time in weeks.
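Using the example factors from the previous paragraph (8 Ideal hours ≈ 2 Project days ≈ 3 Working days ≈ 1 calendar week), the onion can be written down as a trivial conversion chain; the exact ratios are assumptions you should calibrate against your own history:

    # Onion of time measures, using the example ratios from the text (YMMV).
    IDEAL_HOURS_PER_PROJECT_DAY = 4          # 8 Ideal hours ~ 2 Project days
    WORKING_DAYS_PER_PROJECT_DAY = 1.5       # 2 Project days ~ 3 Working days
    CALENDAR_WEEKS_PER_WORKING_DAY = 1 / 3   # 3 Working days ~ 1 calendar week

    def ideal_hours_to_calendar_weeks(ideal_hours):
        project_days = ideal_hours / IDEAL_HOURS_PER_PROJECT_DAY
        working_days = project_days * WORKING_DAYS_PER_PROJECT_DAY
        return working_days * CALENDAR_WEEKS_PER_WORKING_DAY

    print(ideal_hours_to_calendar_weeks(8))    # -> 1.0 calendar week
    print(ideal_hours_to_calendar_weeks(40))   # -> 5.0 calendar weeks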

Different stakeholders are interested in different time measures. Product and Marketing are interested in Calendar time. Project management is interested in Working and Project time. Developers value Ideal time.

The safe zone

The Did that before and Know how to do constitute the safe zone. As previously discussed, with the Did that before you would typically estimate in Project days (or weeks or hours), because this is the time you can easily remember when building your own catalog of effort-equivalent parts.

The Know how to do part is different. This is the part you’ve never done before, and you don’t have any effort-equivalent part that is similar enough. But you can still immediately create a possible solution, or at least an approach to solving the task, using the programming language constructs and frameworks you already have and know. A typical example might be parsing some unknown but documented file format (e.g. MP3 to obtain the ID3 tags). You know that the ID3 tags are written somewhere in the file, and that there should be some file markers to detect their exact location. So you plan to learn how to open the file in binary mode, print out the ID3 documentation, put it on your table, and then code the walk through the bytes of the MP3 until you’ve read all the tags, and oh, by the way, you will define some data structure to hold the results. Nothing too fancy, especially if there are no requirements to make it as fast as possible, or to support broken files, etc.

So, to estimate the Know how to do part, you create a possible solution, then mentally go through each coding (and unit testing) step and intuitively decide how many Ideal hours you’d need for it. Then you add some buffer just to be sure, and here you go, you have the estimation. This time, though, it is measured in Ideal time.

The fear zone

In the fear zone, there are the areas of the Optimistic challenge and The Horror.

An Optimistic challenge is something you’ve never done before and can’t immediately create a solution for, but you are still optimistic about being able to estimate it with some reasonable (but not too high) precision. For example, I have never developed Flash apps, but I know it is client-side programming (and I’ve done WinForms, HTML/JavaScript and Silverlight) and I know it is from Adobe (and I’ve done Adobe InDesign scripting). So I just add some time for getting into the technology details, installing the needed tools and finding authoritative information sources on the Internet, and estimate the rest as if it were a Silverlight app. I also search the Internet for things similar to the one we want to build, and if I find any, I have confirmation that the task is doable. And if it is doable, I can do it, because I have more experience than many Flash developers.

The time measure of an Optimistic challenge is a little complicated. In the example above, I estimate it as if it were a Silverlight app. If I have done effort-equivalent things before, my estimation will be in Project time; I then add some Project time for getting into the technology and get the result in Project time. If I merely know how to do a similar Silverlight app, my solution will be in Ideal time; I then have to convert it to Project time and add the Project time for getting into the technology.

This means I must know how to convert from Ideal time to Project time. This is a factor (similar to Velocity in agile methods) that you can easily calculate from past activities – take a past activity, estimate its Ideal effort, and compare it with the Project time actually spent. The difference to Velocity is that a) you do it for yourself, not for the whole team, and b) you get Project time, not a number of Sprints, as the output.
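A small sketch of how this personal factor could be derived; the history entries are invented for illustration:

    # Past activities: (estimated Ideal hours, Project days actually spent).
    history = [
        (12, 4.0),
        (6, 2.5),
        (20, 7.0),
    ]

    ideal_hours = sum(hours for hours, _ in history)
    project_days = sum(days for _, days in history)
    velocity = ideal_hours / project_days        # Ideal hours per Project day

    def to_project_days(estimated_ideal_hours):
        return estimated_ideal_hours / velocity

    print(f"{velocity:.1f} Ideal hours per Project day")
    print(f"16 Ideal hours -> {to_project_days(16):.1f} Project days")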

The Horror area is completely different. You’ve never done something like this before, and you don’t even know what it takes to implement it. This is like implementing a new video codec when you only know how to develop web pages. Or implementing an HTML5 mobile web app when you only know how to program video codecs. When developers meet The Horror area, they normally become very aggressive and cynical and start bashing everyone around them. Perhaps this behavior is due to pressure to perform? We want to be able to estimate anything, and not being able to do it in The Horror area is a blow to our self-image and self-expectations.

The interesting thing, though, is that very often stakeholders don’t need a time estimation “at zero price”. Therefore, the correct thing to do is to describe exactly where your Horror area lies and why. This will result in a conversation with the PM, and some of the possible outcomes might be:

  • Getting another team member who is knowledgeable in this area.
  • Removing this feature from the project scope.
  • Giving you some days or weeks to get the know-how and to try out some things.
  • Sending you to some courses or training to buy the know-how.

All of these outcomes are infinitely better for your organization than making a random guess and estimating The Horror area merely to arrive at some number. Do not estimate The Horror. Talk about it.

Spec estimation

So, you’ve read the full spec, you’ve graded it, you’ve determined The Horror part and discussed it with the PM. The next step is to estimate all the other parts and to convert them to some common time measure. As a developer, I prefer to convert everything to Project time: estimating Working or Calendar time might be infeasible, because the estimation might happen at a time when some important data, such as the number of parallel projects and the project start date, is still unknown. When everything is summed up, I add some buffer just to be sure. This buffer cannot save me in 100% of cases, but it still seems to be a good idea, because I might forget to estimate some software parts or activities (such as deployment or data migration).

Team estimation

So far we’ve discussed estimation as a personal activity. When the estimation has to be delivered by a whole team, there are two factors to consider.

The first is the variation of estimations. It is very easy to explain: not everybody is equally good at estimation. This is because not everybody has read this blog post (to the end) ;)

And even if everybody were equally good at estimation, everyone has different professional experience: things that are The Horror for one team member might be in the Did that before area for another. What is Know how to do for one is an Optimistic challenge for another.

This is also why it is not the best idea to get estimations from every team member and then average them out. The better way is to let the team members speak and check whether a lower estimation is due to underestimating some tasks, or a higher estimation is just because someone is too far into their fear zone. Combining estimation and task assignment is also a good idea. It doesn’t help if team member A estimates a task at 3 Project days (and actually has the experience to do it in that time) but doesn’t get the task, while team member B, who later actually gets it, estimates (and actually needs) 8 Project days for it.

So, for the best team estimation:

  • have each team member grade each software part
  • discuss the grades and assign parts to members so as to minimize their fear zones
  • get an estimation from each task owner
  • sum the estimations up and add the second buffer

The first buffer is added by every developer to his own estimation. The second buffer is different – it is added to compensate for eventual shifts in the team (sick days or other changes) that mean a task may end up being implemented by someone other than its original owner.
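Here is a rough sketch of that procedure. The grades, the assignment rule (give each part to the member with the smallest fear zone for it) and the 20% team buffer are assumptions made purely for illustration:

    # Grades ordered from safest to scariest.
    GRADES = ["did that before", "know how to do", "optimistic challenge", "the horror"]

    # Hypothetical grading: part -> {member: (grade, estimate in Project days)}.
    grades = {
        "payment API":   {"alice": ("did that before", 3), "bob": ("the horror", 15)},
        "report screen": {"alice": ("optimistic challenge", 8), "bob": ("know how to do", 4)},
    }

    total = 0.0
    for part, votes in grades.items():
        # Assign each part to the member with the smallest fear zone for it.
        owner, (grade, estimate) = min(votes.items(),
                                       key=lambda item: GRADES.index(item[1][0]))
        print(f"{part}: {owner} ({grade}), {estimate} Project days")
        total += estimate

    team_buffer = 0.2 * total    # the second buffer; 20% is an arbitrary example
    print(f"total: {total + team_buffer:.1f} Project days including the team buffer")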

Another thing to consider during team estimation is the team size. The larger the team, the higher the communication costs. This means every team member gets fewer Ideal hours per Project day. My personal experience with geographically distributed teams makes me believe it costs 10% of your Ideal time for every other person in the team you have to communicate with actively. So, in a team of six persons, each of them will spend 50% of their Ideal time on communication. Which means you need twice as many Project days for the same Ideal hours. Or: the total productivity of the 6-person team is only 3 times that of a single developer. And yes, teams of more than 10 persons (who have to constantly speak with each other and develop in the same set of files) are counter-productive.

Phase estimation

So far, in the examples, I’ve used only the software implementation phase. But you can estimate everything, as long as it is not in your Horror zone. And the trick is, you don’t really need to do or implement something yourself to be able to estimate it. For example, after the code complete milestone, the testing phase starts. It is actually similar no matter whether you use Waterfall, Agile or something else; the differences lie only in when testing starts and how much there is to test at once. But generally, you can watch how much time testers need to test the part you’ve implemented (and how much time you need to bug-fix it until it passes all tests). And you can add that to your catalog. In my catalog, there is just one simple record: the “testing and bug-fixing phase” takes the same time as the implementation phase. YMMV.

Another typical phase, after or in parallel with testing, is the customer feedback phase. It might consist of some more testing (in realistic scenarios), as well as some change requests or improvements that didn’t make it into the spec or were only recognized after using the actual software for the first time. In my catalog, there is the record “Customer feedback per UI screen”: on average two to three smallish change requests, costing around 3 extra Project days.

Estimation Training

Estimation training is no different from any other mental training. Write down your estimations. Implement the software. Track the time actually spent. Compare. Repeat.

Eventually you’ll learn which factors are stable and can be reliably predicted, and which are volatile. You’ll become a better estimator.

And in my experience, being a better estimator is highly correlated with being promoted.

Legacy Code

Engineering and Product have very different concepts of legacy code.

For developers, there are two kinds of legacy code. One is called legacy code, and it is everything made by somebody else. The other is called the existing platform, and it encompasses all the code they have written themselves.

The “legacy code” legacy code is the nightmare of each and every developer. They are ready to rationalize any expenses, to participate in any politics and to state any thesis, just to avoid dealing with it. When forced to deal with it, they lose motivation, become clinically depressed, and finally either quit the job or become alcoholics.

On the contrary, the “existing platform” legacy code is the dear baby of software developers. They love it with all their heart, and will rationalize any expenses, participate in any politics and state any thesis, just to help their baby stay alive and keep developing it. When forced to kill it, they lose motivation, become loud, cynical and poisonous, then clinically depressed, and finally either quit the job or become alcoholics.

PMs don’t make that distinction. For Product, legacy code is product obesity. When you have too much of it, nothing in the product fits anymore. It walks rather than runs, and it smiles instead of dancing. Taking careful measures replaces winning, because it has to be foresightful rather than spontaneous. It looks solid rather than sexy. It takes double the space and time rather than being invisible, it requires custom rather than stock solutions, and it costs twice as much to build.

And legacy code is almost impossible to get rid of.

Can everything be broken?

One argument against DRM that pirates often repeat is that “every piece of software out there has been or can be broken”. I hate this argument not only because it is factually wrong, but because most of the time it is repeated by people who know nothing about software security, yet they use the same tone as if they were saying “the sky is blue” or “Microsoft is evil”.

In China, many people’s answer to the Great Chinese Firewall was to use VPNs. This is also the typical example a typical copyleft fighter would give you: the communication inside the VPN tunnel is protected from spying by (still) reliable cryptography, therefore our freedom of information is guaranteed.

Well, it was guaranteed. Until China started to block VPN connections. Establishing a VPN connection is a protocol that can be automatically detected on intermediate routers, and the corresponding TCP connection can be torn down, or optionally shaped down to barely usable speeds.

If you also remember that any Internet user in China has to authenticate himself with a passport (also a change introduced in 2012), and you play the situation forward, you’ll finally come to the conclusion that Internet freedom is not technically guaranteed in China, and it never was.

For those who don’t believe it: you can move VPN to different ports or replace VPN with another protocol altogether – it doesn’t matter. The Chinese government can mandate that only HTTP 1.1 and SMTP are allowed in their land, and close all other ports and protocols by default. You can install a proxy server listening on port 80 and tunnelling the Internet over HTTP. The Chinese government can parse this traffic, detect fragments of Facebook markup and break the TCP connection. You can add SSL to your proxy server. The Chinese government can prohibit SSL usage in their land; an SSL handshake is also a thing that can easily be detected automatically. As a replacement, they can develop their own fork of the SSL/TLS protocol that allows the proper agencies to filter the traffic, and roll this protocol out in their land so that their web shops can still process payments. You can develop your own non-scannable fork of SSL and use it with your proxy. The Chinese government can add the handshake sequence of this new protocol to their blacklist, and, to prevent a further arms race, introduce a regulation that severely punishes people who install this new version of SSL. You can use steganography and insert some information into harmless educational videos. The Chinese government can create a regulation under which possessing steganography software automatically makes you a western spy, punishable by death. Would you risk it just to get another portion of cat photos from your Facebook news feed?..

The Chinese government always has more pull here. The only things preventing them from going deeper are economics and politics. And those constraints can sometimes be overcome.

Besides, I think Chinese civilization has a track record of several thousand years of smart and sophisticated solutions to global problems. I mean global problems, not just filtering some dumb web traffic.

I’ve read a report that usage of some specific western site is now blocked over VPN. This means you can establish the VPN session and access other western sites over the tunnel, but you can’t access this specific site. My first reaction was: that’s technically impossible! But after a second thought: what if the Great Firewall uses DNS poisoning? You first try to go to Blogger.com over the Chinese Internet. A Chinese DNS server gives you a spoofed IP, which is cached in your local DNS client. You see that Blogger.com is blocked, establish the VPN, try again – using the same spoofed IP. And it still doesn’t work. Voila. Except: many VPN implementations I’ve seen would reset the local DNS cache and route DNS traffic over the VPN tunnel once the VPN is established. So it still remains a mystery.

Being the Chinese government, you could surely invent even more interesting ways to firewall your citizens, without even paying for traditional firewall software and devices. Most Chinese PCs I’ve seen have at least one EXE file downloaded and installed from a Chinese web page – for example QQ or PPTV or some game. Do you think it would be hard to add a piece of code to these programs that detects Facebook home page markup in your browser and does a little whistle-blowing to the Chinese version of Homeland Security?..

Therefore, I think it is very important for everybody to stop pretending that Internet freedom rights can be technically guaranteed. Only by eliminating this dangerous misconception can we see the real issues and start working on meaningful answers to this problem – in China, Russia, Iran, or elsewhere.

And yes, the answer I personally think is most promising is collaboration with the Great Firewall. People who just want to post photos of magnificent Beijing on their Facebook timeline shouldn’t be prevented from doing that. And political activists who want to use the Facebook platform to promote their agenda shouldn’t be able to hold all other people hostage — no matter what we think about their political program.

For all of you copyleftists who frown on the word “collaboration”, one last thing to consider. The Chinese segment of the internet has a very elaborate and successful ecosystem of social networks of various kinds, paid online video and music rentals, online auctions and basically everything the big Internet has. The largest web services have user bases of 400 to 800 million each. Don’t you think they, at least partially, owe their success to the protective effect of the Chinese Firewall? Would the local social networks be happy to see a Chinese version of Facebook freely available? And the local search engines would not necessarily benefit if Google resumed full operations.

Not about monads in JavaScript

After watching the first 5 minutes of a video about monads in JavaScript (I normally stop listening when someone says “this helps you to reason about your code”, and they managed to say it that early in the talk), I remembered my old hypothesis that academia and computer scientists are in fact not evil mad scientists, but simply have no idea what the industry is all about, and therefore most academic programming languages solve the wrong problems.

In this post, I’ll try to define a real problem — in one specific area of user interaction development.

Let’s say you’re writing a web app, and this web app has a lot of different interactions with the user. Two of those many interactions are entering the user’s email and the user’s job. So I have corresponding forms for users to do that. Luckily, this information is often present on Facebook, and Facebook provides an API. Therefore I decide to simplify the user’s life by allowing him to fill in this information from his Facebook profile. As a nice side effect, I can notify him on Facebook whenever something interesting happens in my web app.

Fine. So I design my user flow this way: on the first screen, I perform Facebook authentication and request permissions from the user to access his information. On the second screen, I use the information about his job to give him some meaningful interaction related to the web app. In the third step, I direct the user to his page, where he can proceed using and enjoying my web app.

Now I develop this feature. In the base class for all screens of my web app, I define a property holding the permission set needed for that specific screen to work properly. By default this set is “email, job”, and only for the first screen of my user flow (where we don’t have permissions yet) is this set empty. Then I write generic permission-set handling that is the same for each screen: before rendering, the screen detects the permission set actually granted to us by the currently logged-in user, compares it with the permission set defined for the screen, and redirects the user to the Facebook authentication dialog if something is missing. Simple, logical and generic. Now, even if the user starts his voyage through my web app in the middle of my user flow, his missing permissions will still be properly handled.
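A minimal sketch of that generic handling; the class and helper names are made up, and no particular web framework is assumed:

    # Stubs standing in for the real Facebook API and the HTTP redirect.
    def facebook_granted_permissions(user):
        return set(user.get("granted", set()))

    def redirect_to_facebook_auth(missing):
        return f"redirect to Facebook auth dialog asking for {sorted(missing)}"

    class Screen:
        required_permissions = {"email", "job"}      # default for every screen

        def render(self, user):
            missing = self.required_permissions - facebook_granted_permissions(user)
            if missing:
                return redirect_to_facebook_auth(missing)
            return self.render_body(user)

        def render_body(self, user):
            raise NotImplementedError

    class WelcomeScreen(Screen):
        required_permissions = set()                 # first screen: nothing granted yet

        def render_body(self, user):
            return "welcome"

    class JobScreen(Screen):
        def render_body(self, user):
            return f"something meaningful about the job: {user['job']}"

    print(WelcomeScreen().render({"granted": set()}))                        # welcome
    print(JobScreen().render({"granted": set()}))                            # redirect ...
    print(JobScreen().render({"granted": {"email", "job"}, "job": "nurse"})) # job screen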

I release the app and invite users to use it. To my horror, 90% of users drop off on the first screen – they just don’t give me permission to use their Facebook profile. Unfortunately, I don’t have the means to ask them why. My hypothesis is that they don’t trust me just yet – perhaps they have never seen my web site before. If this hypothesis is correct, asking for Facebook authentication so early in the user flow is premature.

So I go and design a new user flow. The Facebook users are invited to my standard homepage. When (or if!) they use the functionality related to jobs, I ask them to fill in their data, and provide an alternative way to do that using Facebook. When they click on the Facebook button, I display the authentication dialog, get the permissions and then use the Facebook profile data for my interaction.

Now, I want to develop this small change, but have to admit that I need to throw away my generic Facebook permission-handling code. What I actually need is the ability to display the Facebook authentication dialog on some more or less arbitrary screen of the web app, get the permissions and use the data. But this is so different from my first user flow implementation!

Why does it have to be so different!? I mean, what we have here is a very typical flow:

  1. Input first data item from user
  2. Input second data item from user
  3. Use first data item
  4. Use second data item

And what I need is just to reshuffle this flow a little:

  1. Input first data item
  2. Use first data item
  3. Input second data item
  4. Use second data item

And suddenly, it means a huge change in my architecture and in my code!

Here is another example of the very same problem, just to demonstrate that this problem is not some rare special case. When a user creates a new directory in Windows Explorer, Finder or Nautilus, he has to enter two mandatory pieces of data: the parent of the future directory, and its name. This is not just a coincidence of bad user interaction design: for some unknown reason, all file system APIs in the world are designed so that both data items are required. You cannot create an unnamed directory. You cannot create a named directory without a parent either.

Now that’s a pity. Imagine how much better it would feel for users if both interactions were possible. The possibility of creating unnamed directories would eliminate directories named “1”, “tmp”, “asdf”, “New Folder (74)” and similar shit every computer in the world has on its hard drives. This shit happens when I have created some useful piece of data and have to close the editor quickly, so I need to save the file, and no, I don’t want to save it to my common documents folder where it can get lost among hundreds of other files. If I could create a new unnamed directory and store this file there, I would be able to return to that directory later and give it a reasonable name.

Parentless directories also make sense. You download a zip archive from the internet with some source code. It contains a dozen subdirectories. You want to unpack it somewhere, build the software, install the binaries on your system, and then leave this parentless “somewhere”, thereby allowing the file system to reuse it for other data (think automatic garbage collection).

So, let’s say you’ve decided you want to change the way this works right now. Will it be easy or complicated? Well, it’s not user-configurable – it has to be developed. With Microsoft and Apple you’ll have no luck persuading them to implement this change. But even in the world of OSS, the first thing you’d need to do is agree on and issue a new version of POSIX that defines an mkdir function allowing unnamed directories to be created. And even if you succeed with that, your next step will be to patch the several hundred different file system drivers already implemented, so that they support this new function. And only then will you be able to patch your Nautilus to enable the new user interactions. To summarize: this problem is intractable.

Seriously? I mean, just reshuffling the usage order of two simple arguments in a single trivial function is an intractable problem?

Well, solve it, and you’ll create the new mainstream language of the new generation.

Going all LED

Just like most of us, I believe that we should all take care of nature and are responsible for protecting the environment so that it remains livable. But I won’t vote for the green party in the next elections. The reason is their explicit disdain for science. And I believe that, if anything, science is the way to achieve a better environment, a better society and better politics.

Unfortunately, many Germans don’t really think so, and the premature shutdown of nuclear power plants is the best testimony to that. It is natural to fear deadly things you can’t see or smell, but making economic decisions based only on this fear is something else.

But I digress. Actually, I wanted to rant about the so-called energy-saving light bulbs. If you open any mass-media publication about them, even normally authoritative sources like “Stiftung Warentest”, you will see a simple savings calculation: normal light bulbs take X watts, energy-saving ones need only Y watts, and given H hours per year with the light on, you’re gonna save (X−Y)·H watt-hours of energy.

Now this is a piece of crap. One of the fundamental laws of physics (some scientists believe it is the fundamental law) states that energy cannot disappear or be created, it is only transformed from one form to another. Most of the time, it is transformed into heat. When you have a normal light bulb and feed it 60 watts, 3 watts of this energy will be transformed into light, and 57 watts into heat. When you replace it with an energy-saving option consuming 6 watts, 3 watts will go to light, and 3 watts to heat. This means you will lose 54 watts of heating in your living room. In summer this is fine, but then again, days in summer are long and the lighting duration is short. And in winter, you will have to compensate for the missing 54 watts with your normal room heating. So, in reality, you haven’t saved those 54 watts of energy, you’ve just moved the heating source from your ceiling to your walls.

Now, even if you use electric heaters, a watt-hour of heating energy is normally cheaper than a watt-hour of lighting energy. Using gas or oil or other sources is normally even cheaper. So, by switching to energy-saving light bulbs you do in fact save, but to calculate your actual savings you have to take the difference between the cost of a watt-hour of normal electric energy and the cost of a watt-hour of your heating energy, and only then multiply it by the 54 watts from the previous example. As a result, you will normally get values around 10% of those calculated by “experts” and mass-media magazines. Another important consequence is, of course, that you not only save less money, you also save less energy, and therefore do less for the environment than claimed.
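To make the difference concrete, here is a small sketch of both calculations, following the simplified reasoning above. The prices, the yearly lighting hours and the heating-season share are invented example values; the resulting percentage depends entirely on them:

    # Naive vs. heat-corrected savings; all numbers are example assumptions.
    OLD_W, NEW_W = 60, 6             # bulb power draw in watts
    HOURS_PER_YEAR = 1000            # hours per year the light is on
    HEATING_SEASON_SHARE = 0.8       # share of lighting hours in the heating season
    PRICE_ELECTRICITY = 0.25         # euro per kWh
    PRICE_HEATING = 0.08             # euro per kWh of heating energy (gas, oil, ...)

    saved_kwh = (OLD_W - NEW_W) * HOURS_PER_YEAR / 1000    # the famous 54 watts

    naive = saved_kwh * PRICE_ELECTRICITY
    # In the heating season, the missing heat has to come from the room heating instead.
    corrected = naive - saved_kwh * HEATING_SEASON_SHARE * PRICE_HEATING

    print(f"naive savings:     {naive:.2f} euro/year")
    print(f"corrected savings: {corrected:.2f} euro/year")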

So, saving money and energy wasn’t the reason I replaced all the light bulbs in my apartment with LEDs. The actual reasons were comfort and safety. Most of my light bulbs were halogen, and for some reason, every month or so one of them would break, and you have to replace it, and go buy replacement bulbs — this is not how it should be! Light should just work. I wanted to install it once and never care about it again.

Another reason was the exploded bulb depicted on the right. I don’t want to stand in a rain of sharp glass and hot metal parts.

To give you an overview, here is a full list of all light bulbs in my apartment (one month ago):

  • Living room: 6 halogen spots with 50 watt each, E14, and 2 halogen spots with 50 watt, GU10
  • Kitchen: 2 halogen spots with 50 watt, E14
  • Bedroom: 3 halogen spots with 50 watt, E14
  • Bathroom: 2 halogen spots with 50 watt, GU10, and one energy-saving daylight bulb equivalent to 60 watt, E27
  • Floor: one normal 60 watt bulb, E27

As you can conclude, I like it when it is bright. With all the lights turned on, it consumed 870 watts. And this is also the reason why I hadn’t switched to energy-saving bulbs. They are not as bright, especially when just turned on, and they normally have much larger physical dimensions, while I only have very small lamps (as you can conclude from the heavy usage of E14 and GU10 sockets). Nevertheless, I tested one energy-saving lamp in the bathroom, and I wasn’t fully satisfied with the results.

Now, replacing all that with LEDs is a challenge. My primary goal was achieving the same bright light. I spent some time and money looking for an optimal solution, and finally found one that is still not perfect, but acceptable, so I want to share my findings.

The first and primary gotcha when going LED is not taking the spot angle into account. I don’t know why it is not such an issue with normal and halogen bulbs, perhaps it is standardized there by law, but with LEDs it varies wildly. The spot angle is, as you might think, the angle at which light leaves the bulb. To give you some values: 360° means it lights up all around, just like a normal non-halogen light bulb. A value of 220° still looks like an undirected bulb, while 120° already looks like a bigger spot, for example a halogen E14. A value around 38° is the smallest spot, typically used for table or reading lamps – this corresponds to a GU10 halogen spot bulb.

Values below 38° are just shit. Don’t buy these, at all. See the OSRAM example on the right: according to the adverts it looked like an equivalent of a 30-watt halogen E14 spot, but this equivalence is only achieved within a very small 25° angle, so, basically, you will have a dish-sized bright spot on your wall in a dark living room. If you don’t see the spot angle value on the package or the bulb itself, don’t buy it either, no matter what they claim about how many watts it is equivalent to.

Another gotcha is the color temperature. Naturally, LEDs are manufactured with a wide range of color temperatures. The color temperature doesn’t mean much per se – after just a week of regularly using some color temperature you will get accustomed to it and automatically consider it white. The actual issue here is the “all or nothing” principle. You cannot just replace some light bulbs in your apartment or house and leave the old ones in place. The old ones will look yellow and the new ones will look blue, and this will drive your aesthetics crazy. You also cannot use one kind of LED for one room and another kind for another room. I mean, mixing manufacturers and socket types should not be an issue, but you should absolutely match the color temperature. So, basically, you have to select the color temperature you want, and then buy only LEDs in this range. My favorite is what is called “warm-white”, between 2800 and 3000 Kelvin. Note that even though it is called “warm”, it is still a much higher color temperature than my halogen bulbs.

And the last thing to know is very simple – to get a feeling for how bright an LED is, divide its luminous flux (Lichtstrom, measured in lm) by 10, and you’ll get the equivalent wattage of a normal or halogen light bulb. Sometimes manufacturers state another value, measured in cd. This value is called luminous intensity (Lichtstärke), and it combines both the spot angle and the luminous flux. Because you still need to know the spot angle separately, you can ignore this value. If the manufacturer only gives you the luminous intensity but not the other two values, just don’t buy this LED – perhaps they want to hide its small angle.
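For completeness, here is a sketch of both conversions: the lm/10 rule of thumb from above, and recovering the luminous flux when a manufacturer only states the intensity (cd) and the spot angle. The divide-by-10 factor is the post’s rule of thumb, not a physical constant, and the cone formula assumes the light is spread evenly over the spot angle:

    import math

    def watt_equivalent(luminous_flux_lm):
        # Rule of thumb: lumens / 10 ~ watts of a normal or halogen bulb.
        return luminous_flux_lm / 10

    def flux_from_intensity(intensity_cd, spot_angle_deg):
        # Solid angle of a cone with the given full apex angle, in steradians.
        solid_angle = 2 * math.pi * (1 - math.cos(math.radians(spot_angle_deg) / 2))
        return intensity_cd * solid_angle

    print(watt_equivalent(410))            # 410 lm -> ~41 W halogen equivalent
    print(flux_from_intensity(700, 25))    # a narrow 25-degree spot: only ~100 lm in total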

All in all, this means the following: switching to LED is not just a simple action, it is a project. You will most probably buy some LEDs, see how they work for you, learn something (e.g. what color temperature you like, what angles you need where, what maximum physical dimensions your lamps support, etc.), then send them back and get others. Consulting a competent electronics dealer nearby might be a good idea. I used another approach: just order things on the Internet and use the 14-day return right.

And here is my solution. I decided to use the LEDs made by Bioledex. First, they were the very first bulbs I found that looked as bright as normal bulbs (before that, I didn’t even fully believe that an LED could be so bright). Second, it is a German company based in Augsburg, making (at least part of) their LEDs in Germany. Theoretically, I can still assume that OSRAM, Philips or Toshiba can also manufacture LEDs this bright, but I’ve never seen such bright LEDs from them. For some reason, Saturn, MediaMarkt, Hornbach and real prefer to offer only shitty LEDs from those other manufacturers, so dim they are barely suitable to play the role of tea lights.

Bioledex LEDs can be ordered from a number of internet shops; you’ll find the whole list on the company’s web site. I particularly prefer spar-helferchen because of the low prices and because, when paying with PayPal, they typically hand your package over to DHL the next business day (I ordered on January 1st, and the package was on the road early in the morning of January 2nd). Full disclosure: I’m not affiliated with them in any way, apart from the two packs of gummy bears I found in my parcels.

My E14 50-watt halogen spots I replaced with the Bioledex RUBI LED spot. It takes 7 watts, costs around 17 euros and gives 410 lm at 120° and 2900K. As you can see, it is also much bigger than the halogen spot, so it sticks out and looks pretty weird in most of my lamps. Had I waited another 3 to 5 years, I’d perhaps be able to buy an LED the same size as the old bulb, but I was too tired to wait any longer.

The GU10 50-watt spots I simply replaced with the PERO LED spot; they take 4.5 watts, cost around 11 euros, and give 250 lm at 38° and 3000K. The difference between 2900K and 3000K is not perceivable to me. As you can see, they are only half as bright as the original 50-watt spots, but I compensated for that by placing them closer to the table. Alternatively, Bioledex also has much brighter GU10 LEDs, but they are also much larger and would look very weird in my lamps.

The E27 light bulbs created the most confusion. I first thought that, having the largest socket available, their corresponding holders and lamps would also be big enough to hold just about any bulb. So I first ordered the huge and very bright Bioledex LIMA bulbs, which take 17 watts and give a whopping 1200 lm at 240° and 2900K. This is the best and biggest socket bulb by Bioledex, and it costs 38 euros. In the picture, it is the left-most LED. Unfortunately, they were too big, so instead of buying new lamps, I additionally ordered the smaller Bioledex BEON bulbs, which take 8 watts, cost 18 euros, and give 600 lm at 270° and 3000K. They are shown on the right. In the middle, my previous E27 bulbs for comparison.

There is yet another interesting fact to consider: the turn-on behavior of these LEDs differs. RUBI and PERO turn on instantly, just like you’d expect from LEDs. LIMA shows some flicker similar to daylight lamps, but much shorter. BEON takes half a second to turn on! And when you turn a BEON off, it doesn’t go off instantly, but dims slowly for a second. Well, perhaps I’ll replace the BEONs with something else: NUMO and NIDI also look promising.

And finally, the price. Yeah. (Just to remind you, I had just spent 700 euros on a new iPad and an Apple TV, essentially just to buy AirPlay support.) So, the price of LED comfort is 308 euros plus shipping. Included in the price is one spare LED of each type used, kept as a hot swap. Actually, I spent 50% more than that, because of trying out various other LEDs. But I’m not going to return the huge LIMA bulbs, because their 120-watt equivalent of light is really impressive. Perhaps I’ll give them to some friend who has big enough E27 lamps :)

Now, if I turn on all the lights, they consume 111 watts and not 870 watts as before. But actually, I don’t really care :) And now I’m wondering when exactly the first LED is gonna break. Well, hopefully not next month :)

Communication

A hunter once wanted to go hunting in the taiga. Since the area was unknown to him, he took a guide along. And so they went: the guide walked in front with a long axe, clearing the way through the bushes, and the hunter followed close on his heels.

Suddenly a bear comes out of the thicket towards them. Both parties are surprised; the guide freezes on the spot just as he is, and so does the bear.

The guide then says very quietly to the hunter, without turning around:

— Come here…

And he hears nothing at all from behind. So he repeats, a little louder:

— Come here!

Again nothing. The guide, getting nervous, now very loudly:

— SHIT, ARE YOU COMING?!

And then he hears the hunter whisper right into his ear:

— Are you out of your mind? Why are you calling the bear over?

Apple User Experience, part II

Last year I gave my mother an iPad and shared my mixed experiences with it. It turned out that most of the time my mother used it as a YouTube client: my parents don’t have a Russian TV subscription on their cable network, so she was watching the Russian content available on YouTube.

This year, I decided to improve her experience and make it possible for her to watch YouTube on their (older) Sony TV set. Well, technically, she already could, because they have an XBOX, and this year a YouTube app appeared there. But this would mean my mother has to manipulate the XBOX controller (which has twenty times more buttons than the iPad) or use Kinect, which is too hard for her. So I bought an Apple TV. Using the Apple TV remote control didn’t seem to me much easier than using the XBOX controller, so the actual reason was AirPlay.

Unfortunately, it turned out that the iPad 1 doesn’t support AirPlay (I didn’t check, but the Internet holds this opinion). Which is very suboptimal. DLNA is a relatively straightforward and simple protocol that can be supported by almost any hardware, even by cheap standalone hard drives or DSL modem/routers. But no, Apple absolutely had to invent its own proprietary AirPlay protocol with less functionality, incompatible with many things. For example, with this poor iPad 1. Therefore, I also bought some new iPad. I have no idea what exactly it is, because Apple doesn’t have an easy-to-grasp versioning scheme for iPads. It is not an iPad Mini for sure, but I have no idea whether it is an iPad 2, 3 or 4, and frankly speaking, I don’t care. All I wanted it to be was an iPad 1 with AirPlay support.

Well, and because I bought this newer iPad, I also bought this magnetic cover thing. Because, well, it made me feel less foolish about paying a ton of money just to gain AirPlay support. Now my mother can also turn on the iPad display by opening the cover. She smiled when she saw this for the first time, and that has at least partially compensated for the 500 euros I paid for the iPad.

In a sense, this has worked out well for Apple: they have earned 700-something euros from me. But I hated this customer experience, which means I will disloyally turn my back on them as soon as there is a possibility to do so.

Especially considering the fact that the Apple TV doesn’t come with an HDMI cable included. This was an almost shocking discovery, and luckily I had opened the package beforehand. Otherwise, imagine: my mother would cut open her Christmas gift and find a black box and a power cable, so that you can turn it on, and the only outcome of this present would be a white LED on the box. My mother certainly wouldn’t have been able to appreciate that.

So I bought an HDMI cable beforehand and connected the Apple TV to my parents’ TV set. As usual, all the devices were immediately ready to use (in this case, the Apple TV remote control already had batteries inserted, and the iPad came fully charged), which is a very positive thing and should definitely be copied by all manufacturers, unless Apple has already patented it.

I started with the Apple TV. I switched the user interface to Russian and proceeded with the usual settings, including setting the time zone to Germany. When it finished, it showed a couple of tiles. I remember seeing Movies, Music, Podcasts, Flickr, YouTube and WSJ. The latter was actually an early warning sign. What the hell is an English-language magazine doing on an Apple TV bought in Germany and switched over to Russian? But I missed the sign and went straight to YouTube. Then I selected the Search navigation element, and the Apple TV displayed an on-screen keyboard. The user interface itself (e.g. the buttons “delete”, “clear” and “apply”) was still all in Russian, but the keyboard only displayed the Latin character set. This immediately rendered YouTube unusable, because of course you need Cyrillic characters when searching for Russian titles on YouTube. I then went to the Movies section. Some Hollywood movies appeared in the selection (not a single German or Russian movie). My sister spotted some Twilight movie, so I selected it, and a movie detail page appeared. What I saw next was simply shocking. The movie description, as well as all the other additional information, was in English. I started the movie trailer; it buffered quickly and started to play smoothly, but the audio track was also in English.

People. Apple TV is a disaster. It is the best example of how to do internationalization and localization. NOT.

Well, you might say that the Apple TV is targeted only at the well-educated German audience, who have no problem consuming content in English (or possibly even prefer it). But I tell you, I bought this device at Saturn, and there were two shelves fully stocked with them. There were really tons of them, definitely more than of, for example, any single Android phone model. So it doesn’t seem to be positioned as a niche product.

So, I put the remote control far away and said: “So, mother, forget what you’ve just seen, you don’t need to learn it. I’ll show you now how you’re going to use your new devices.”

I took her iPad and went through the initial setup procedure. It is funny: while Windows asks fewer and fewer questions from version to version, Apple asks more and more. The first iPad didn’t try to upsell me iCloud, for example. And I’m pretty sure that this initial setup couldn’t be done by my mother alone. How is she supposed to answer the question about iCloud properly? How is she supposed to remember and correctly enter her Apple ID and the corresponding password?

Why the hell does she need any ID or password at all? I remember the times when phones weren’t smart and you didn’t need to know or understand the concept of a password. You just turned on your phone and could start using all its functions. I’m sure it is possible to create a totally password-less experience for tablets, and I believe this would be very important for my mother. This whole dirty Internet hack of IDs and corresponding passwords – she cannot grasp it.

When her health insurance sent her a snail-mail letter stating that they were re-launching their web site and that she had to reset her password, it ruined her day; she was very upset and worried. She believed her future health insurance coverage depended on properly understanding and acting on this letter, and she didn’t even know what the letter meant!

Anyway. I finished the initial setup and, as always, removed the English keyboard (a known plague that both Microsoft and Apple share: when you install a German OS and select Russian as your language, you get three keyboards installed automatically: English, German and Russian. Again, who the hell needs the English keyboard here in Germany?).

And then I handed it over to my mother, telling her: so, now everything is JUST like your first iPad. No need to learn anything new. Now, just start YouTube and search for some sample video clip. Just like you always did, mother.

Yes, you’ve guessed it.

She didn’t find the YouTube app. Because there was no pre-installed YouTube app!

This is how easily your device can lose all the user trust that its previous version managed to gain. Just don’t pre-install an app that 99.9999% of users would install and use anyway.

I installed the app and handed it over to my mother again. She found some video clip and started playback, and now I wanted to turn on AirPlay. Up until this point, I didn’t know how to do that.

This is what happened:

1) First, I looked for some button resembling AirPlay.
2) I found the “share” button, tapped on it, but didn’t find any AirPlay option.
3) I then tapped on the video itself and found a new icon there.
4) And then I told my mother: “So, mother, now you’re gonna tap on this icon, and this will bring your video to the TV.” And I tapped on the icon.
5) The Apple TV immediately showed a loading sequence, and two seconds later the video was playing on the TV set!

This was, like, an epiphany. Especially in contrast with the previous UX disaster.

Generally, there are three positive things about AirPlay:

  • You don’t need to pair devices, which is better than Bluetooth. I suppose you can still limit access to your Apple TV, but by default any device in the local network can stream to it. This is the proper way to do it.
  • AirPlay is not an app by itself, it is just a mode that can be present in several different apps. You just look for the familiar AirPlay icon, tap on it, and it works similarly everywhere. This is better than how DLNA was implemented in Android 2.x, where you have to use the iMediaShare app, or whatnot. I’m not sure about the 4.x versions of Android, though; I saw some very similar DLNA-based functionality built in, but I haven’t had the opportunity to test it yet. Martin T., if you’re reading this, I’m waiting for a SW version with DMP ;)
  • The Apple TV is quite remarkably tuned specifically for AirPlay; player startup and buffering time together are below two seconds. For comparison, you have to wait much longer when the Apple TV starts the Flickr app.

The next day, I wanted to bring some web page onto the TV screen. I launched Safari and searched for an AirPlay icon. Nothing there. How can that be, I thought. So I googled. It turns out there is another mode of AirPlay where it just mirrors the screen onto the TV. On the Internet, they write “just mirroring” as if it were something clear and easy to understand, but actually, what the hell is the difference? Why do I as a user have to differentiate between “send that video to the TV” and “send that screen to the TV”? I expected AirPlay, while proprietary, to be at least as smart and advanced as Microsoft’s RDP, so that it could determine which parts of the screen are playing video and handle that situation differently. But OK, this is suboptimal, and I thought I’d have to live with it. I still didn’t know how to turn on this AirPlay mirroring mode. What I found was “hilarious”. You have to double-press the hardware button! Then a strip of previously run apps appears. Now that’s nice, and Android has had this feature for ages, with one exception: Android apps are actually running in the background. But so far, this has nothing to do with AirPlay. Now, wait for it: you have to swipe your finger from left to right over the app icons. Then an AirPlay button appears, so that you can tap it. Whew! Two absolutely unintuitive actions for a simple task.

And this is something I see quite often in Apple products. It is like ejecting a disk by dragging it into the waste bin. Or performing some special magic by holding certain known keys while the Mac is booting. Quite often in these cases, when their UX designers faced a tradeoff between making the UI more crowded and making the operation impossible to find, they chose the latter. When I saw an iPod for the first time, I spent 5 minutes trying to understand how to use it, and I failed. I hadn’t watched any videos beforehand, so it didn’t occur to me that you could let your finger follow the sensor wheel, and that this movement could mean something meaningful in terms of interaction. I suppose the first iPod versions had a physical wheel, so one could understand it better, but I missed those too. Therefore, my sentiments about the AirPlay mirroring function might also be skewed: who knows, perhaps in the world of Apple fans, double-pressing the single button has always been used to perform cool and meaningful operations…

I might report on our further experiences once my mother actually starts using her new iPad.

Bridging the gap, step one

It is not a secret that there is a gap between the engineering and business worlds. They speak different languages, and they are often in conflict. Switching sides and going from engineering to product management, I was looking forward to seeing the world from the business point of view. I was sure I would gain a lot of insightful information – the kind of information that might help to bridge the gap between engineering and business. Today I want to write about the first piece I’ve learned.

This fall, I was walking together with my father, who is an electrical engineer. We were talking about his and my jobs. I asked: what is your typical feedback delay? He was, like, “what are you talking about?”. I explained: say, you have a circuit, and you need to improve it, and at some point in time you snap your fingers, because a promising idea has come into your mind. So, you first change your drawing, re-calculate and simulate the new circuit in software, then you solder the changes into your actual physical development board, then you turn it on, fix soldering problems, ensure it works, measure its parameters – and now! Finally, now you know if your idea was good or bad. So, what is the typical time span between snapping your fingers and knowing whether the idea was good or bad?

He responded calmly: well, it’d take a couple of days. If everything goes well, perhaps one can even make it in one day.

I was shocked. I had expected something in the range of several hours. I said: as a software developer, I consider any feedback delay above 5 minutes to be hell and absolutely unacceptable. Longer delays will surely knock me out of “the Flow” state of mind, which means I become several orders of magnitude less effective. My typical feedback delays in web development used to be 2 to 15 seconds. In TV set software development, I initially had a delay of 40 minutes (especially when a full re-build was required), and I invested a lot of energy to reduce it to 20 seconds in the best case, and to a (barely acceptable) 6 minutes in case kernel code had to be re-compiled and flashed. I also explained to my father that there are some state-of-the-art IDEs in web development that achieve a feedback delay below a human-perceptible fraction of a second, and the people who have used them say it means another order-of-magnitude improvement in their efficiency.

My father was bored. Reducing feedback delay had never occurred to him as something important. Most of the time, he told me, he requires some components that are not available in the internal stock and have to be back-ordered. This is where the couple of days of delay comes from, he said, and you just switch and do something different in the meantime.

I had just started as a PM back then, and I thought this might be useful advice. The way a lean PM works, at least in theory, is very similar to any research, including software development or engineering. You snap your fingers because you have an idea of how to improve your product. You spec the idea, shape it together with all stakeholders, then you kick off the development project, and the idea gets developed, and at some point you get it for testing, and the software gets bugfixed, and at some point you release it live, and users start using it, and you wait some more to gather enough data for it to be statistically relevant, and you analyze the data, and finally you learn if and how your idea has worked.

I knew that as a PM I couldn’t have a feedback delay of seconds, minutes or hours. I was thinking in weeks. So I figured I would just learn how to run several ideas in parallel and switch between them to bridge the unavoidable feedback delay.

What I wasn’t prepared for is a world where my feedback delay might be up to 6 months, or even longer.

Up to six months.

This is 180 days, or 4,320 hours, or 259,200 minutes, or 15,552,000 seconds of waiting for an idea to be tested.

Man, people change significantly in 6 months. The world changes significantly too. By the time my idea is released into production, I might be fully disconnected from it, because I will have learned and changed so much in the preceding 6 months.

Also, put it this way. As a software developer, I usually need around ten feedback loop iterations to develop even the simplest thing (like an additional simple checkbox on a web form). I didn’t become smarter by switching to PM, so presumably I would still need my ten iterations before I find something useful. Ten iterations at 6 months each would mean investing 5 years of life, just to find one small working product!?

Luckily for me, this math is irrelevant. Every hour I spend on specifying means a day of work for designers, and perhaps a week of work for several developers. Performing ten 6-month iterations to find product/market fit will most probably cost more than any expected earnings from the product. So, the program will be terminated much earlier.

This all boils down to the following equation:

P ~ N = Inv / Ci

The probability P of finding a reasonable product is proportional to the number of iterations N, which equals the investment Inv we are willing and/or able to make in this program, divided by the average cost of a single iteration Ci.

In reality, this equation is a bit fuzzy, because Inv isn’t, strictly speaking, a constant, but the overall relationships remain. Which inevitably brings us to the point: if we want to do everything we can to improve our chances of finding a product, we have to reduce iteration costs.
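To make this tangible, here is a tiny back-of-the-envelope sketch in Python. All the numbers in it are invented for illustration; the only thing taken from above is the relationship N = Inv / Ci.

    # Back-of-the-envelope sketch of N = Inv / Ci.
    # All figures below are invented for illustration only.

    def iterations(investment, cost_per_iteration):
        """How many product iterations the budget allows: N = Inv / Ci."""
        return investment / cost_per_iteration

    investment = 600_000  # hypothetical total budget for the program, in euros

    # "Heavy" iterations: 6 months of polished releases, say 300k each.
    heavy = iterations(investment, 300_000)  # -> 2 iterations

    # "Lean" iterations: 2-week, barely-good-enough releases, say 25k each.
    lean = iterations(investment, 25_000)    # -> 24 iterations

    print(heavy, lean)
    # If it takes roughly ten iterations to stumble upon a working product,
    # only the lean approach even gets a chance before the money runs out.

Same budget, an order of magnitude more chances to learn.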

I’ll repeat the last sentence in the engineering dialect: at the beginning of a new product, we have to write code that is just (barely) good enough. That is: not commented, not unit-tested, not necessarily able to handle more than a handful of users simultaneously, not always readable, not maintainable in the long term, handling only 80% of all cases (i.e. not handling any corner cases properly), maybe a little slow, and perhaps eating a gigabyte of RAM for trivial tasks. But written as quickly as possible, and providing a perfect user experience (for those 80% of use cases).

To put it another way, we have two choices. Either we work for six months producing a single release with cool source code, which will then be thrown into the garbage bin, because we didn’t manage to find the product. Six months of life wasted. Nothing gained. Or we work for six months producing twelve releases of good enough source code, but finally find a working product. And then we can gradually improve our source code. In a couple of years, nothing will remain of the initial code the new product was based on, and we can all be proud of having created a new product.

Back in 2006, my choice was clear: never compromise on quality. I was so full of this shit every software engineer is fed at university and in most books – the holy grail of craftsmanship, that you have to be proud of the quality of your work. I overestimated the complexity of coding, and underestimated the complexity of marketing and product. I used to say, “you’re the business guy, you have to ensure your product idea works even before you bother me with a kickoff meeting. Just as I’m responsible for proper implementation, you’re responsible for proper business value.”

In 2006, we got our first project involving some new technology, and the business guys told me that other similar projects might come. Therefore, I took over the role of a technical product manager: already in the first project, I extracted the parts that might be reused and implemented them in a generic way, laying the foundation of the future product. I was already proficient in creating reasonable software architectures quickly and efficiently, so it didn’t take too much additional time and went well.

When a second customer with similar requirements appeared, though, two things happened at once. First, I discovered that the first version of my product, however generic, could not handle all the requirements of the second customer. This was expected. From the beginning, I had followed the YAGNI and KISS principles to avoid creating a mega-configurable monster that could still fail to be generic enough but would require a lot of effort up-front. Instead, I planned product development in a modern, agile way: start from some reasonable point and iterate, continuously extending the genericity of the product, driven by actual customer needs, until upcoming projects can be implemented by merely reconfiguring the product.
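As a purely hypothetical sketch of that approach (the names and rules below are invented, this is not the actual 2006 code): in the first project a rule is simply hard-coded, and only when a second customer actually needs a different rule is it pulled out into per-customer configuration.

    # Hypothetical sketch of YAGNI-driven genericity; not the real 2006 code.

    # Project 1: the rule is hard-coded, because there is only one customer.
    def shipping_fee_v1(order_total):
        return 0.0 if order_total >= 50 else 4.90

    # Project 2: a second customer needs different numbers, so only now is the
    # rule generalized into per-customer configuration. Nothing more generic
    # than the customers actually require gets built.
    CUSTOMER_RULES = {
        "customer_a": {"free_from": 50, "fee": 4.90},
        "customer_b": {"free_from": 100, "fee": 2.50},
    }

    def shipping_fee(order_total, customer):
        rule = CUSTOMER_RULES[customer]
        return 0.0 if order_total >= rule["free_from"] else rule["fee"]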

What was totally unexpected, though, was that I was fully occupied in another project with tight deadlines, and neither I nor my management could afford to delay it. In my opinion, the situation was clear: because I was unavailable, we should delay the implementation for the second customer, or even turn him down and wait for the next opportunity. But the business guy who owned the second customer decided differently. He just copied the source code of the first project into another folder, changed the code in the simplest way possible to satisfy the requirements of the second customer, and rolled out this web app. (Did I mention this business guy was also an engineer?)

This naturally led to a heated discussion between us. My points were:

  • He had missed an opportunity to improve our baby-product.
  • He had created a second instance of similar software, which had to be bugfixed and maintained too.
  • Evolving a generic product step by step out of customer projects was a technically challenging, non-trivial task I was happy to perform. But if our strategy was rather to copy the source code of a previous project and change it from customer to customer, then an experienced engineer like me was overqualified for this task, and the business guy should go find some junior developer and stop wasting both the company’s resources and my own limited lifetime on this product.
  • The mess he had created wasn’t worth the few thousand euros he could earn from the second customer.

The business guy was shocked and just said that he didn’t, and still doesn’t, see how he could have done things differently.

From my current perspective, I suppose he was just carefully testing the waters: is there a feasible market for a product? This was perhaps a communication problem, as other business guys had told me to work on productization, but actually, there was no point in investing in software productization unless there was some objectively measurable, or at least subjectively perceivable, market potential. He had one data point and was handling the second one, and perhaps after the third one we could see where it was going. While we were still testing the waters, the only things that mattered were time-to-market and as little effort as possible.

And here is the key learning of this story: we never got a third customer.

This resolved our tension in a natural way. But suppose we had gotten another customer; what would have been the proper strategy for me, knowing everything I know today? I didn’t do much wrong in the first project. Well, I introduced a not-very-necessary admin UI, wasting a day or two of effort, but most of the time this was just good enough code with a focus on time-to-market. The issue with the second customer wasn’t a problem with my strategy, but rather a coincidence (my unavailability). Therefore, I should have continued to evolve the source code of the first project, just barely enough to support the requirements of the third customer – basically, proceeding with my agile strategy, leaving the solution for the second customer aside (accepting it and counting it as a necessary evil).

If things had gone well, eventually we’d have had a continuous stream of new customers wanting our baby product, and that would have been the proper time to come to the business guy and discuss whether we were going to invest more in the product itself, what our main features were, building and training a team to maintain the running web apps, change and release management, public versioning and a changelog, a cloud solution, marketing activities, and all the other stuff pertinent to slowly maturing products.

The reason why we never got a third customer: there was almost no market for it back in 2006, and there is none at all today. As of today, three huge corporations each have a free-to-use product, rendering our idea useless.

So, to summarize: when working on a new product, it might be useful to think of it in the following phases:

1) Market Exploration phase (refer to slide 26). Here, develop as quickly as possible. You are allowed to develop dirty. You’re also allowed to develop less dirty – but only if this doesn’t measurably increase development time.

2) Productization phase. Here, the more or less dirty code is gradually converted into more generic code; new features are mostly developed in a generic way, but the code still only has to be of good enough quality, that is, barely able to handle the requirements.

3) Maturing phase. We start focusing on source code quality, scalability, security, performance, maintainability, processes and so on.

Hell yeah, I know they say that security cannot be added later, it must be built in from the very beginning. Or that scalability cannot be added later, it must be built in from the very beginning. And, and, and… This is a challenge of course, and I’d like to write more about these topics in the future. But yet again, if you spend time on those properties in the first two phases, you’ll probably just waste your time and your team’s time on a stillborn product.