On corporate politics

My father lived in the USSR for 54 years before moving to Germany. In all that time, he has owned only two cars.

Owning a car in the Soviet Union was something only for people with big balls. The story started with the sheer impossibility of buying one. You could not just save money, go to a shop, and buy a car. There were simply no dealerships. New cars were among the scarcest goods in the country, so they weren’t sold but distributed. Many state-owned companies received a quota of several cars per year, and the local trade union committee distributed them personally among the most politically active employees.

And no, “distributed” didn’t mean they were free – only the right to buy was free; for the car itself, you still had to pay the full price. Which was around four to six years of a lead engineer’s salary – an exorbitant price. Nevertheless, there were more people who wanted to buy a car than the yearly quota allowed, so a waiting list was organized.

When I was 10 years old, I asked my father why we didn’t have a car yet, and he told me he was on the waiting list and that, given the current situation, we would get the right to buy a car in around 10 years. Somehow, this was a satisfying answer for me. First, we had enough time to save money. Second, I figured out that I would be around 20 years old when we got the car, so if I got my driving license at 18, I’d only have to wait two years.

Of course, there were also used cars in the Soviet Union. But first, they were only available on an illegal black market (for ideological reasons, the government didn’t like the idea), and second, their price was not much lower than a new car’s, considering it was virtually impossible to buy new cars.

But even after buying a car, your story had only just begun. The assembly quality was awful. After buying a brand-new car, you had to go through each and every part of it and fix it, because many parts weren’t properly installed, or nuts weren’t screwed all the way in. But there is more: the car design was even worse. Easily corroding materials were used without proper coating. Parts that had to be serviced regularly were not designed to be easily removed and reinstalled. For other parts, lifetime-increasing improvements were developed by car owners and popularized within the car owners’ community. Therefore, the usual procedure after buying a car was to remove many of its parts (including dismounting and opening the engine), check them over, fix the defects, apply the improvements, coat all surfaces with an anti-corrosion agent, and assemble everything back together – this time properly.

The improvements, as well as the proper procedures for dismounting and mounting parts, were popularized by the magazine “Za Rulem”, which was the one and only car magazine in the USSR and had 4 million subscribers. For many car owners, this was the only way to use a car – that is, to service it themselves. Well, there were some government-owned car services in the USSR, but using them was even more challenging than buying a car. You had to wait months to get an appointment. And at the appointment, you were typically told that some scarce spare part needed to service your car was currently not in stock, so you either had to pay an exorbitant bribe (around a monthly salary) so that the part would be “magically” found in stock, or bring your own part, obtained illegally on the black market.

For these reasons, my father avoided the service altogether and serviced his car himself in the garage. To be able to do that, he had a welding machine, a lathe, a milling tool, a car lifting and tilting device, and all kinds of saws, drills, hammers, and wrenches, as well as all kinds of liquids, raw materials, and spare parts. His garage neighbors came to my father whenever they needed a tool, and my father went to his neighbors whenever he needed another pair of helping hands.

If the term “garage neighbor” doesn’t mean anything to you: to get to the garage from where we lived, my father had to walk 30 minutes to the nearest bus stop, take a bus for 20 minutes, then walk another 20 minutes until he reached a big industrial park. In one part of it, several thousand garages had been built. One of them belonged to my father. He would typically unlock the door, drive out the car, check it over, quickly fix whatever new problem he’d found, then drive the car all the way back to our house to pick up my mom and me, and then we were off to whatever destination we were heading to (e.g. a food market, to buy groceries for the next week). After arriving home, the reverse procedure had to be done.

On every car ride, my father had to spend around 3 hours walking, taking the bus, and servicing the car. In all the time our family had a car, I can’t remember a single day my father had time for hobbies, sports, culture, or any other recreation. When I speak with him about it, he becomes very sorrowful and regretful and complains that his whole life was miserably wasted in the effort to make a decent living, including “having a car”.

I love my father, and with all my heart, I hate everything responsible for his sorrow. One of the factors is communism. Communism means the state is organized like a huge corporation. There is no room for a free market, nor for any agile mechanics, under communism. Living in a communist state means being part of a huge bureaucratic corporation, with a lot of politics going on. And corporate politics is the second, most important factor I hate. Everyone who has ever lived in a corporate state can immediately give a lot of very explicit examples of how corporate politics directly translates into a miserable life.

It is not that a planned economy cannot even theoretically provide a reasonable car supply. Just plan enough cars, invest enough money, and carefully overproduce to account for all kinds of defects as well as demand spikes. Everything sounds manageable.

The problems start when some middle-level boss in the weapons ministry starts playing power games against his colleague from the car ministry, and wins, and therefore the state spends more money on producing tanks than on producing cars. Not that the weapons boss genuinely thinks his motherland urgently needs more tanks than cars. It is all about his power versus the power of the car boss. And because, as a middle-level boss, he already owns the best possible car, he is not personally interested in any more being produced. And the people? The people don’t play any role in his corporate politics game. In a communist, corporate state, the people don’t have any say.

Corporate politics is, in my opinion, the ultimate source of unhappiness, dissatisfaction, depression, health problems, and deaths caused by improper treatment of patients; the source of all kinds of waste, including waste of non-renewable energy and materials, all kinds of cultural and knowledge losses, ecological dangers (Fukushima was not a technical problem, it was a corporate one), and many, many more.

The only people who think they profit from corporate politics are the ones playing the game; but statistically, most of them lose the battle most of the time. And even the ones who win only get more power and more money, not more happiness or more life. Because you can’t be truly happy until your conscience is clean, and theirs is not.

陈勋奇 – 醉生梦死


How minimal should an MVP be?

Minimum Viable Product is now mainstream. But what exactly does it mean?

In my opinion, the MVP is just an example of a more generic principle: Fail Fast. In other words, if you have to fail, it is better to fail at the very beginning, reducing the amount of burned investment.

If my idea is good, using an MVP is counterproductive: some early adopters will get a bad first impression due to the lack of advanced features or the overall lack of polish, and we will need to spend much more money later just to make them give us another chance.

If my idea is bad, the MVP will save us a lot of money.

Because there is no sure way to know beforehand whether my idea is good or bad, it is safer to assume it is bad and go with the MVP.

But how exactly minimal should the product be? Do we reduce the feature set? Not care about usability? Save on proper UX and design? May it be slow, unresponsive, unstable? Can its source code be undocumented and unmaintainable?

Well, the point of an MVP is reducing the overall investment. The principle behind it is to invest just enough to achieve a sound and valid market test, and not more. This means that when scoping an MVP, you tend to cut the areas that cost you most.

For example, let’s assume we have a product development team that needs only 1 day to design a screen, 3 days to develop the backend for that screen, and 10 days to develop the frontend. It is natural that MVPs produced by this team would tend to have great visuals combined with an awful and buggy UI and a very good backend.

Let’s now assume a team that needs a week to design one screen, 1 day to develop the frontend, and 5 days to develop the backend. MVPs of that team would tend to have an ugly but responsive and user-friendly UI that often has to show a loading animation because of a sluggish backend.

What does it mean?

This means that a double advantage goes to teams capable of designing and fully developing one screen per day: not only will their MVP be released sooner (or, alternatively, have more features, better looks and performance, and a more user-friendly UI), but it can also be a well-balanced and therefore mature-looking product (and looking mature is an advantage).

And this also means that if you want to identify where your business has capacity issues, just look at your typical MVPs: if some of their areas are substantially worse than others, you know which parts of the product team can be improved.

張靚穎 – 印象西湖雨

Client Driven Development

When I first tried out test-driven development (it was around 1998, I think), I was fascinated by how it helped me design better APIs. My unit tests were the first clients of my code, so my classes acquired a logical and easy-to-use interface quite automatically.

Some time later I realized that if you have a lot of unit tests, they can detect regressions and therefore support you during refactoring. I implemented two projects, each taking a couple of years, and wrote around 200 unit tests for each.

And then I stopped writing unit tests in such large numbers. My unit tests did detect regressions from time to time – around 5 times a year. But the effort of writing and maintaining them was much higher than any advantage of detecting a regression before manual testing.

But I still missed the first advantage of TDD: the logical and easy-to-use interfaces. So I started doing Client-Driven Development.

The problem with unit tests is that they don’t have any direct business value per se. They might help business goals, but only very indirectly. I replaced them with client code that does have direct business value.

For example, say I’m developing a RESTful web service. I roughly know what kinds of queries and responses it must support. I start by creating an HTML page. In it, I write an <a> tag with all the proper parameters for the web service. I might then write some text around it, documenting the service’s parameters. Then I open this page in the browser and click the link, which predictably gives me a 404 error, because the web service is not yet implemented. I then proceed with implementing it, reloading my page and generally using it in place of a unit test.

Of course, this approach has the drawback that, unlike a unit test, it doesn’t check the returned values, so the page cannot be run automatically. If you want, you can still replace the link with an AJAX call and check the returned values – I personally don’t believe these efforts pay off at the end of the day. More important is that this page has immediate business value. You can use it as rough, unpolished documentation for your web service. You can send it to your customer, or to another team writing a client, etc.

If the web service is designed in a way that is hard to exercise with <a> and <form> tags, I write some JavaScript or Silverlight code to call it properly. In this case, the page might take on more business-relevant functions. For example, on load it might request and display data from the web service in a sortable and scrollable grid, and allow you to edit it, providing a very low-level “admin” interface to the service.

This approach is not constrained to web development. I’ve used it, for example, for inter-process communication, and if my code has not yet been refactored away, it is flying in passenger airplanes and running inside TV sets in many living rooms right now. In this variant, I start developing the inter-process communication by creating a bash script or a trivial console app that sends messages to the other process. I implement corresponding command-line options for it. When it’s ready, I start developing the receiving part, inside the running process. This has a similar effect on API design as unit testing, but with the advantage that you can use the client during debugging, or even in production, for example in startup scripts.
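A minimal sketch of such a first client, in Python rather than bash. The message format (newline-terminated JSON with a “cmd” field), the command names, and the socket path are all illustrative assumptions, not the actual protocol from those projects:

```python
# Sketch of a "first client" for an IPC interface, in the spirit of
# Client-Driven Development: the client is written before the receiver
# exists, and its command-line options double as API documentation.
import argparse
import json
import socket

def build_message(cmd: str, **params) -> bytes:
    """Encode a command as one newline-terminated JSON line (assumed format)."""
    return (json.dumps({"cmd": cmd, **params}) + "\n").encode("utf-8")

def parse_message(raw: bytes) -> dict:
    """The receiving side, once written, will decode messages with this."""
    return json.loads(raw.decode("utf-8"))

def main() -> None:
    parser = argparse.ArgumentParser(description="Client for the player process")
    parser.add_argument("cmd", choices=["play", "pause", "seek"])
    parser.add_argument("--position", type=int, default=0)
    parser.add_argument("--socket", default="/tmp/player.sock")
    args = parser.parse_args()

    msg = build_message(args.cmd, position=args.position)
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(args.socket)   # fails until the receiver exists -- that's the point
        s.sendall(msg)

# Usage (once the receiver exists):  python client.py seek --position 42
```

Like the HTML page, this script keeps its value after development: it can be called from startup scripts or used to poke the running process while debugging.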

I’m not the inventor of this approach – indeed, I often see it in open source projects – but I’m not aware of any official name for it.


Smart TV application software architecture

Someone came to my blog searching for the phrase in the title of this post. To avoid disappointing future visitors, here is the gist of what that architecture looks like.

First of all, let’s interpret “architecture” very broadly as “the most important things you need to do to get cool software”. With that in mind, here is what you need to do:

1) Put a TV set on each developer’s desk. And no, not “we have one TV set in the nearby room, he can go test the app when needed”. And no, not “there is a TV set on a table only 3 meters away”. Each developer must have their own device.

2) Get development firmware for the device so that you have access to all the log files (and ideally, access to the command line). A TV set is a Linux box running a WebKit or Opera browser.

3) Most Smart TVs support CE-HTML and can play H.264/AAC video in an MP4 container. Just read the CE-HTML standard and create a new version of your frontend. Alternatively, you might try HTML5, because many Smart TVs translate remote control presses into keyboard arrow key presses, and some Smart TVs support the <video> tag.

4) If you’re interested in tighter integration with the TV, e.g. displaying live TV in your interface, switching channels, or storing something locally, you need to choose a target ecosystem, because unfortunately there is no standard app API spec today.


For some reason, I meet people every day who don’t agree with my MUSE framework, or who at least implicitly hold a different priority framework in their minds. Usually it looks like this:

“Let us conceive, specify, develop, bugfix, and release the product, and then ask the marketing guys to market it.” Well, what if we first asked marketing what topics could bring us the cheapest leads, and then conceived the product around them, or at least not against them?

“Solution A is better than solution B from the usability standpoint, therefore we should do A.” Well, B is better for motivation, because it looks more beautiful, and beauty sells. I don’t care if something is easy to use if it looks so ugly that nobody wants to use it.

“So does this bug prevent users from using the feature, or is it just optics, with the functionality all in place and working?” Well, users first need a reason to use our product, and second must be able to understand it. Until these two requirements are satisfied, it doesn’t matter whether the functionality works. This is different from, say, enterprise software, where users are in a work setting and have to use the software. In the entertainment market, nobody has to read the book, listen to the song, or watch the movie to the end. Or use our web site.

“MVC is a great idea, because it allows us to decouple logic from view. Let’s quickly find and use some MVC framework for HTML5!” Yes, MVC is a great idea for enterprise software, because it makes the UI easier to test, allows designers and developers to work in parallel, and provides reusable components in a very natural and effortless way. But one of MVC’s drawbacks is that you can easily hit performance issues in the UI, because the architecture prevents you from squeezing 100% of the performance out of the UI technology. Besides, HTML5 MVC frameworks often wrap or abstract away DOM objects and events, and therefore make it complicated to define exactly what is shown to the user. Another MVC drawback is that it encourages the belief that reusing exactly the same UI components throughout the app is always a good idea, because, hey, it is good for implementation and good for usability. But a seductive, beautiful design is more important – even if it looks a little different on different screens.

And sometimes it is quite hard for me to understand these opinions, because MUSE sounds so natural and logical to me that I can’t imagine any other consistent priority framework.

MUSE – an attempt at a product priority framework

Steve Jobs said that product management is deciding what not to do.

But how do you decide?

This is the priority framework I’m trying to live by today. It works like Maslow’s pyramid: until the first level is solved, it is not possible (or not necessary) to solve the second.

Motivation. People must be motivated to start using the product. If they don’t even see a reason to start using it, nothing else matters.

  • Good marketing or packaging.
  • Product / Market fit.
  • Seductive look (design for conversion).

Usability. People must be able to use the product, and have fun using it. If they fail to use the product, nothing else matters.

  • It must not be too hard to learn how to use the product.
  • People must understand the product.
  • Any unnecessary task people have to do must be eliminated.
  • If possible, limit the product to functions that are easy to make easy to use. Or at least start with those features.

Service. This is where the functionality and features come.

  • Not just a function, but a service, solving a user’s problem.
  • Not just a service, but a trusted, first-class, high-quality service, with a sincere smile and the good feeling of having made the right choice.

lovE. This is the ultimate dream.

  • People who are attached to your product.
  • People who identify part of themselves with your product.
  • People who not only recommend your product, but start flame wars defending its drawbacks.


Software Architecture Quality

What is a good software architecture?

Too many publications answer this question by introducing some set of practices or principles considered best, or at least good. The quality of an architecture is then measured by the number of best practices it adheres to.

Such a definition might work well for student assignments, or in academia and other non-business settings.

In the software business, everything you ever spend time or effort on must be reasonably related to earning money, or at least to an expectation of earning or saving money.

Therefore, in business, software architecture is a roadmap for achieving the functional and non-functional requirements, such as run-time performance, availability, flexibility, time-to-market, and so on. Practices and principles are just tools that may or may not help achieve those goals. By deciding which principles and practices to use, and which tools to ban, a software architect creates the architecture; and the better it is suited to implementing the requirements, the better its quality.

Like everything in nature, good things don’t come without an equivalent amount of bad. If a particular software architecture has a set of advantages helping it meet the requirements, it must also have a set of drawbacks. In an ideal world, the software architect communicates with the business so that the drawbacks are clearly understood and strategically accepted. Note that every architecture has disadvantages. The high art and magic of successful software development is choosing the disadvantages nobody cares about, or can at least live with.

For already implemented software, the architecture can be “reverse engineered” and its advantages and disadvantages determined.

And here is the thing. Implemented software is very hard to change, and so is its architecture. Thus, its set of advantages and disadvantages remains relatively stable. But the external business situation, or even the company’s strategy, might change, and change abruptly. Features of the architecture that were advantages before might become obstacles. Drawbacks that were irrelevant before might become showstoppers.

The quality of a software architecture can change from good to bad in a single moment, without a single bit of the software being changed!

Here are two examples of how different software architectures can be. Both describe an abstract web site with some public pages, a protected area for logged-in users, some data saved per user, a CMS for editorial content, and an analytics and reporting module.

Layered cake model

On the server side, there is a database, middleware generating HTML, and a web server.

When an HTTP request comes in, it gets routed to the separate piece of middleware responsible for it. This piece uses the authentication layer (if needed) to determine the user, the session layer to retrieve the current session from a cookie, and the persistence layer to get data from the database (if needed), and then renders the HTML response back to the user.

Because of these tight dependencies, the whole middleware runs as a single process, so that all layers can be used immediately and directly.

If AJAX requests are made, they are handled the same way on the server side, except that JSON is rendered instead of HTML. If the user has to input some data, a form tag is used in the HTML, which leads to a POST to the server, handled by the server-side framework’s “action” layer, which extracts the required action and the form variables. The middleware logic then writes the data into the database.

The CMS writes editorial data into the database, where the middleware can find it and use it for HTML rendering.

A SQL database is used, and the tables are defined with all the proper foreign key constraints, so that data consistency is always ensured. Because everything is stored in the same database, analytics and reporting can be done directly on the production database by running the corresponding SQL statements.
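As a toy sketch, the request flow of this model looks roughly like the following. The layer functions, their signatures, and the data shapes are made up for illustration; a real framework would provide them:

```python
# One monolithic middleware process calling the authentication, session,
# and persistence layers directly, then rendering HTML on the server side.

def authenticate(request):            # authentication layer
    return request.get("user")        # None means anonymous

def load_session(request):            # session layer: session from a cookie
    return {"session_id": request.get("cookie", "none")}

def load_articles(db, user):          # persistence layer
    return [a for a in db if a["public"] or user is not None]

def render_html(articles):            # server-side HTML rendering
    return "<ul>" + "".join(f"<li>{a['title']}</li>" for a in articles) + "</ul>"

def handle_request(request, db):
    # All layers are called in-process, immediately and directly.
    user = authenticate(request)
    session = load_session(request)   # retrieved but unused in this toy
    return render_html(load_articles(db, user))

db = [{"title": "Welcome", "public": True},
      {"title": "Members only", "public": False}]
print(handle_request({"cookie": "abc123"}, db))                  # anonymous user
print(handle_request({"cookie": "abc123", "user": "bob"}, db))   # logged-in user
```

The point of the sketch is the coupling: every handler reaches straight into every layer, which is exactly why the whole middleware must run (and be deployed) as one process.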


Advantages:

  • This architecture is directly supported by many web frameworks and CMSes.
  • Very easy to code due to high code reuse and IDE support, especially for simple sites.
  • Extremely easy to build and deploy. The middleware can be built as a single huge executable. Besides it, just a couple of static web and configuration files have to be deployed, and a SQL upgrade script executed on the database.
  • AJAX support in the browser is not required; the web page can easily be implemented in the old Web 1.0 style. Also, no JavaScript development is required (but it is still possible).
  • Data consistency.
  • No need for a separate analytics DB.


Drawbacks:

  • Because of the monolithic middleware, parts of the web site cannot be modified and deployed separately from each other. Every time the software is deployed, all of its features have to be re-tested.
  • On highly loaded web sites using the Web 1.0 model, substantial hardware resources have to be provisioned to render the HTML on the server side. If AJAX is introduced gradually and as an afterthought, the middleware often remains responsible for HTML rendering, because it is natural to reuse the existing HTML components, so the server load doesn’t decrease.
  • Each and every request generates tons of SQL reads and writes. Scaling up a SQL database requires very expensive hardware. Scaling out a SQL database requires complicated and non-trivial sharding and partitioning strategies.
  • The previous two points lead to relatively high latency, because on a loaded web site each HTTP request tends to take seconds to execute.

Gun magazine model

The web server is configured to deliver static HTML files. When such a static page is loaded, it uses AJAX to retrieve the dynamic parts of the user interface. The CMS generates those in the form of static HTML files or, e.g., mustache templates.

The data for the user interface is also retrieved with AJAX: the frontend talks to web services implemented on the server side, which always respond with pure data (JSON).

There are different web services, each constituting a separate area of the API:

  • Authentication service
  • User profile service
  • One service for each part of the web site. For example, if we’re making a movie-selling site, we need a movie catalog service, a movie information service, a shopping cart service, a checkout service, a video player service, and maybe a recommendation engine service.

Each area of the API is implemented as a separate web application. This means it has its own middleware executable and its own database.

Yes, I’ve just said that. Every web service is a fully separate app with its own database.

In fact, they may even be developed in different programming languages, and use SQL or NoSQL databases as needed. Or they can simply be third-party code, bought or taken freely from open-source components.

If a web service needs data it doesn’t own – which by design should be a rare situation – it communicates with the other web services using either a public or a private web API. Besides that, a memcached network can be employed to store the sessions of currently logged-in users, usable by all web services. If needed, a message queue such as RabbitMQ can be used for communication between the web services.


Advantages:

  • UI changes can be implemented and deployed separately from data and logic changes.
  • Different web services can be implemented and deployed independently of each other. Less re-testing is required before deployment, because the scope of changes is naturally limited.
  • Different parts of the web site can be scaled independently of each other.
  • Other frontends can be plugged in (a mobile web frontend, native mobile and desktop apps, as well as a Smart TV-optimized web frontend).
  • Static files and a data-only communication style ensure the lowest possible UI latency. This is especially true for returning users, who will have almost everything cached in their browsers.


Drawbacks:

  • Build and deployment are more complicated: at the very least, some kind of package manager with automatic dependency checking has to be used.
  • Middleware code reuse is more complicated (but still possible).
  • Data might become inconsistent, and the software has to be developed so that it still behaves in a reasonable and predictable way in case of inconsistencies.
  • AJAX support on the client side is a requirement.
  • A separate data warehouse, or at least a SQL mirror database, is required to run cross-part and whole-site analytics.

Tabu search

There is no simple recipe for achieving the best, or at least a good, conversion rate. Usually, one just makes an assumption, develops a page, tracks user behavior, analyzes the data — and then makes another assumption. This repeats until one finds a satisfying conversion rate, or runs out of time.

Interestingly enough, there is quite a large body of mathematical work on a similar problem. In mathematical jargon, it is called the global optimization problem.

Mathematically, conversion rate optimization can be described as finding a local (or, at best, the global) maximum of a function f(X), where X is a vector of factors influencing the conversion rate. Unlike in a typical mathematical optimization problem, the function f is not known beforehand, and both f and the factors included in X might change after each optimization step.

There are a lot of metaheuristics formulating different strategies for such optimization. I’ve read about only a small subset of them, and I find the strategies very interesting. Especially their names. Who said mathematicians are not creative?

This is my interpretation of them, as applied to the conversion rate optimization problem.

Gradient descent
You make a small change to all factors in X at the same time, in the direction you believe maximizes the conversion rate. Then you measure. Then you try to understand which factor changes worked positively and which negatively, and revert the changes to the wrong factors while increasing the changes to the right ones. Iterate until a local maximum is found.

Hill climbing
Start with a baseline of factors X. Now change the first factor x1 in X. Measure the conversion rate. Revert x1 and change the second factor x2. Measure. Repeat until all factors in X are tested. This can also be done in parallel using multivariate A/B testing. However the testing is done, at the end of the day we find out which winning factor xi made the largest improvement in conversion rate. Now establish a new baseline X’, which is just like X but with the improved winning factor xi. Continue iterating from this new point until a local maximum is found. It is quite probable that in each iteration a different factor will be improved.
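As a toy sketch of this procedure (the conversion function f here is a made-up smooth stand-in for real A/B measurements, which of course you can only sample, not write down):

```python
def hill_climb(f, x, step=1.0, max_iters=100):
    """Hill climbing: in each iteration, try changing one factor at a time,
    then move the baseline to the single change that improves f the most."""
    for _ in range(max_iters):
        best_f, best_x = f(x), x
        for i in range(len(x)):                 # test each factor separately
            for delta in (-step, step):
                cand = list(x)
                cand[i] += delta
                if f(cand) > best_f:
                    best_f, best_x = f(cand), cand
        if best_x == x:                         # no improvement: local maximum
            return x
        x = best_x                              # new baseline X'
    return x

# Stand-in for the unknown conversion rate: a single smooth peak at X = (3, -1).
f = lambda x: -((x[0] - 3) ** 2 + (x[1] + 1) ** 2)
print(hill_climb(f, [0.0, 0.0]))   # converges to [3.0, -1.0]
```

With a single peak this always succeeds; the drawbacks below are about what happens when f is less friendly.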

These two strategies have the following drawbacks:

  • Walking on ridges is slow. Imagine the N-dimensional space of factors X, and the function f as a very sharp ridge leading to a peak. You might constantly land on the side of the ridge, so that you are forced to move toward the peak in a zigzag. This costs valuable time.
  • Plateaus are traps. Imagine that no change to the current baseline X is measurably better than X. You know you’re on a plateau. Perhaps it is also the highest possible point. Or perhaps, if you take a few more steps, you’ll find an even higher point. The problem is, in which direction should you go (i.e., which factors should you change)?
  • Only a local maximum might be found. If your starting point is nearer to a local maximum than to the global maximum, you’ll reach the smaller peak and stop, because any small change to the current baseline X will be measurably worse than X.

To fight these problems, a number of strategies have been developed.

In shotgun hill climbing, when you’re stuck, you simply restart the search from a random position sufficiently different from your latest one.

In simulated annealing, you are allowed to move in directions that actually show worse conversion rates, but only for some short time. This can help jump over chasms in the function f, if they are not too wide.
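A sketch of that idea, using the textbook acceptance rule and a geometric cooling schedule; the function f, step size, and temperatures are assumptions for illustration:

```python
import math
import random

def simulated_annealing(f, x, step=1.0, t0=5.0, cooling=0.95, iters=500, seed=42):
    """Random single-factor moves; a worse move is accepted with probability
    exp(delta / T), so early on (high T) the search can cross chasms in f,
    and later (low T) it settles into a maximum."""
    rng = random.Random(seed)
    t = t0
    best_x, best_f = list(x), f(x)
    for _ in range(iters):
        cand = list(x)
        i = rng.randrange(len(x))
        cand[i] += rng.choice((-step, step))
        delta = f(cand) - f(x)
        if delta > 0 or rng.random() < math.exp(delta / t):
            x = cand                       # accept: sometimes even a worse point
        if f(x) > best_f:                  # remember the best point ever visited
            best_f, best_x = f(x), list(x)
        t = max(t * cooling, 1e-9)         # cool down
    return best_x

# Stand-in conversion function with its peak at x = 4.
f = lambda x: -((x[0] - 4) ** 2)
print(simulated_annealing(f, [0.0]))
```

In a real conversion-rate setting, each call to f is a whole A/B test, so the "for some short time" part matters: the temperature schedule caps how long you tolerate worse results.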

And in tabu search, when one is stuck, one explicitly ignores all the previously positive factor changes and explores all the other factors (those that didn’t play a big role before).
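A sketch of the tabu idea: recently changed factors are declared “tabu” for a few iterations, which forces the search to explore the factors that didn’t play a big role before. The neighborhood, the tenure bookkeeping, and the toy f are my assumptions; real tabu search implementations are considerably more elaborate:

```python
def tabu_search(f, x, step=1.0, tenure=1, max_iters=50):
    """Tabu search: always move to the best allowed neighbor, even a worse
    one, but a factor changed recently is tabu and must be left alone for
    `tenure` iterations. `tenure` should stay well below the factor count."""
    best_x, best_f = list(x), f(x)
    tabu = {}                                   # factor index -> iterations left
    for _ in range(max_iters):
        candidates = []
        for i in range(len(x)):
            if tabu.get(i, 0) > 0:
                continue                        # recently changed: ignore for now
            for delta in (-step, step):
                cand = list(x)
                cand[i] += delta
                candidates.append((f(cand), cand, i))
        if not candidates:                      # everything tabu: give up
            break
        fx, x, moved = max(candidates, key=lambda c: c[0])
        tabu = {i: t - 1 for i, t in tabu.items() if t > 1}
        tabu[moved] = tenure                    # lock the factor we just changed
        if fx > best_f:                         # remember the best point ever seen
            best_f, best_x = fx, list(x)
    return best_x

# Stand-in conversion function with its peak at X = (2, -1).
f = lambda x: -((x[0] - 2) ** 2 + (x[1] + 1) ** 2)
print(tabu_search(f, [0.0, 0.0]))   # converges to [2.0, -1.0]
```

Note that, unlike hill climbing, the search keeps moving even when every allowed neighbor is worse; the best point ever visited is tracked separately and returned at the end.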