This Week in Twitter

  • Have you already seen how beautiful the candlelight is? But really seen it? #
  • Note how people communicate with self-created flawless infographics and typography. I wish I learned it at school. http://t.co/MZ24goGL #
  • On December 18th another 230 tonnes of radioactive water leaked at Fukushima. Mass media not interested (?) #
  • A very interesting statistic: http://t.co/xRkvNrUE #
  • Checked out Firefox for Android: very quick and pretty, font size just right (small but not unreadable), and cool gestures. #like #
  • I liked a @YouTube video http://t.co/0NRW8lgt Journey – Don't Stop Believing Cover (Piano/Electric Guitar/Voc #

Powered by Twitter Tools

Adventures in embedded C land

Preface

I don’t think of myself as a software developer.

My motivation to learn and work in software development was based on the idea of building things. But if you think about it, software alone is not a “thing”; it is just a component of a software product. Even worse, it is an invisible component nobody really cares about, as long as it is not overly buggy.

Software products interact with users through their UI. The UI, together with the use cases supported by the product and its marketing platform, creates a user experience, which IS the software product from the user’s point of view. Note how the software itself is not part of this formula :)

If software is poorly written, it is hard to evolve in future versions. Unfortunately, more often than not the responsible managers either don’t care or can’t tell clean, good software from dirty, poor software. So even this internal aspect of a “thing” – its evolvability – is often neglected.

Having understood that software per se is neither a “thing” for the users nor a “thing” for product owners, I’m moving to positions that allow me to define what really matters – the UI, usability and the product as a whole. But because I have much more experience as a programmer, I still have, and want to keep, some down-to-earth, in-the-trenches software development. At least part-time.

Currently I’m discovering embedded programming in C and want to share my impressions.

Horror

I have used various high-level programming languages over the last 19 years, and the last 12 years were spent exclusively in the high-level area. Of course, I used assembler and C for some of my first programs some 20 years ago; I also had some university coursework to write in C, and in 1999 I created commercial firmware for a telecom device in assembler.

But returning to C after having obtained all the experience and knowledge of other languages is something different. C was the third or fourth programming language I learned in my life (after BASIC, ASM and Pascal). I was 15 at that time. At that age, when you learn a new language, you just accept it as it is and only care about how to bend your mind around it to produce a compilable program. This time, having learned the designs of quite different programming languages (such as Smalltalk, C#, JavaScript, etc.), I could actually evaluate the C language design and the paradigms behind it.

And my first impression was horror.

C doesn’t have reasonable integer types
This one was perhaps the biggest negative surprise for me. I mean, C is currently used mostly to write low-level platform and/or performance-critical stuff. Often, the exact bit size and alignment of your variables is critical. And still, C’s built-in integer types are absolutely unusable. When you write int in C#, you have your 32 bits. Guaranteed, fixed, always the same for any platform from a tiny mobile phone to a powerful Azure server cluster, and this will never ever change. When you write int in C, you get something, depending on your platform. The only thing you know for sure is that its bit size is greater than or equal to that of char, and less than or equal to that of long int. Well, thanks for nothing!

Because the built-in integer types are so unusable, there are efforts to create appropriate pre-processor directives that try to figure out the current platform’s native bit sizes and #define useful types. Unfortunately, there are several such efforts, so in real-life down-to-earth C source code you will see variables declared as int, int32_t and gint32, all meaning the same thing and used in the very same function. This happens especially often when your software uses several other components, which are open source.
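
Just for illustration, a minimal sketch of what I mean, assuming a C99 toolchain that ships <stdint.h> (the variable names are mine):

#include <stdint.h>

int32_t counter;   /* exactly 32 bits, signed, on any conforming platform */
uint8_t flags;     /* exactly 8 bits, unsigned */
int     whatever;  /* at least 16 bits; everything else depends on the platform */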

C doesn’t have byte and bool
This is another WTF moment. C is often used in constrained memory conditions, so you would expect very powerful bit and byte manipulation facilities. But there is nothing.

Instead of byte, I saw char being used (as well as uint8_t, guint8 and BYTE). Such a byte of course does not support bit manipulation out of the box (which is even worse than some assemblers!), so you spend hours trying to figure out which values you have to & and | with some int to get its bits 3 through 18.
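
To give an idea, here is the kind of hand-rolled masking you end up writing (my own sketch, not taken from any real codebase):

#include <stdint.h>

/* extract bits 3 through 18 (counting from 0) of a 32-bit value by hand */
static uint32_t get_bits_3_to_18(uint32_t value)
{
  return (value >> 3) & 0xFFFF;  /* shift bit 3 down to bit 0, then keep 16 bits */
}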

As for bool, it is often #defined to be char or int. Sometimes, boolean types together with the false and true constants are #defined several times per module (one #define sitting in some indirectly included header while another is directly in the file). But this definition of boolean is quite a lousy one, because if and while are happy to accept any integer, so the compiler doesn’t give you any static checking support. You can forget to dereference a pointer to your “bool” variable, and the if will happily accept it as a true value; you will not even get a warning from the compiler!
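
A contrived sketch of the pitfall I mean (the BOOL #define and the function are hypothetical):

#define BOOL int

void finish_if_done(BOOL* done)
{
  if (done)   /* meant *done; this tests the pointer, not the value it points to */
  {
    /* runs even when *done == 0, and the compiler stays completely silent */
  }
}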

Generally, C compile-time checks are lousy
To demonstrate the point, let’s look at this code:

int main (void)
{
  printf("This is a test!\n");
  return 0;
}

Just put it into the file test.c and then execute

gcc -o test test.c
./test

Now, as a naïve ex-C# developer, I would expect the first command to fail with an error telling me that the function printf is undefined. Right, gcc links with libc by default, but I forgot to #include <stdio.h>.

Instead! Instead, gcc would tell you the following:

test.c:3:3: warning: incompatible implicit declaration of built-in
function ‘printf’ [enabled by default]

How on earth can this be a warning, you think. Then you check your working directory and see the executable test there. And then you execute it with the second command, and what does it do? Crashes? No, it prints out the given string! Are you amazed?

It turns out that when C detects something looking like a function call, and the function is undeclared, it just assumes this function takes an int as its argument and returns an int back.

No, I’m not kidding!

And no, I don’t know why it is an int->int and not float->void, for example. And why the hell this automagic is required at all…

But wait, okay, okay, whoa! We’re passing a string to printf, so even after assuming its signature is int printf (int), C should print an error and stop, for Lord’s sake? Well, you see, a string is just a char*, and that is a pointer, and a pointer… well, from where C is sitting, a pointer is just an int.

Sooo, let’s try it out:

int main (void)
{
  int ret = foobar(5);
  return 0;
}

int foobar (int a)
{
  return a + 1;
}

then

gcc -Wall -o test test.c
test.c: In function ‘main’:
test.c:3:3: warning: implicit declaration of function ‘foobar’ 
[-Wimplicit-function-declaration]
test.c:3:7: warning: unused variable ‘ret’ [-Wunused-variable]

So, in C, an arbitrary assumption that undeclared functions are int->int has the same severity level as the detection of an unused variable. If not for the -Wall option (almost the highest warning level of gcc), it wouldn’t print any warnings at all!
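
As far as I can tell, you can at least promote this particular warning to a proper error yourself (this worked with the gcc versions I tried; treat it as a sketch, not gospel):

gcc -Wall -Werror=implicit-function-declaration -o test test.c

Now the first example refuses to build instead of happily producing an executable.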

Let’s now go wild and explore the situation a little further. The following source code could easily occur after / during a slight refactoring of function signatures:

#include <stdio.h>

int main (void)
{
  int i;
  int num_records;
  char* input;

  init();
  input = read_input_data(&num_records);
  for (i = 0; i < num_records; i++)
  {
    process_data(input, i);
  }
  return 0;
}

void init(int security_token)
{
  printf("Initializing with token %d\n", security_token);
}

int read_input_data(char* out_buf, int security_token)
{
  printf("Reading with token %d to buffer %d\n", security_token, out_buf);
  out_buf[0] = 'a';
  out_buf[1] = 'b';
  out_buf[2] = 'c';
  out_buf[3] = '\0';
  return 3;
}

void process_data(const char* buf, char* out_ptr, int num, int security_token)
{
  int i;

  printf("Processing from buffer %d to buffer %d %d items with token %d\n",
  buf, out_ptr, num, security_token);

  for(i = 0; i < num; i++)
  {
    *out_ptr++ = *buf++;
  }
}

When you compile it, gcc will print a lot of warnings but never an error, and produce the executable. When executed, it might print something like this before it crashes:

./test
Initializing with token 134513456
Reading with token 0 to buffer -1076015340
Processing from buffer 3 to buffer 0 11529179 items with token 12862244
Segmentation fault

So, apparently this implicit int->int function declaration is an overly simplistic description of how C handles unknown functions, which of course increases the number of wonderful cases where you have to fix a sudden segfault. I have no idea where the values of the missing parameters come from, and what awesome security implications might arise from just commenting out some existing function and defining another one with an additional void* parameter, which would allow you to write... where? On the stack?

Generally, it is hard to write future-proof code in C
I like the following example:

#include <stdio.h>

typedef struct
{
  int  id;
  char* name;
  int age;
} Employee;

int main (void)
{
  Employee ceo = {0, "Bill Jobs", 55};

  printf("%s is %d\n", ceo.name, ceo.age);
  return 0;
}

It works as expected. Now, let's say, we want to add department to the Employee struct. Piece of cake, right?

#include <stdio.h>

typedef struct
{
  int  id;
  char* name;
  char* department;
  int age;
} Employee;

int main (void)
{
  Employee ceo = {0, "Bill Jobs", 55};

  printf("%s is %d\n", ceo.name, ceo.age);
  printf("%s works in %s\n", ceo.name, ceo.department);
  return 0;
}

The same example implemented in C# would print "Bill Jobs works in", because the department is not initialized and is null. In C, the second printf will segfault, because the department string gets initialized with the CEO's age. There is just one step between perfectly working software and a sudden segfault. Either you live with that, or you only ever extend existing structs by adding new fields at the end.
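
One way to protect yourself, assuming a C99 compiler, is to use designated initializers, so that values are matched to fields by name rather than by position (a sketch):

/* C99 designated initializers: adding department does not silently shift the values */
Employee ceo = { .id = 0, .name = "Bill Jobs", .age = 55 };  /* .department stays zeroed */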

I made these discoveries in just my first several weeks of working with C. I'm looking forward to posting even more similar war stories here. But horror was not the only feeling I had. Curiosity and a sudden recognition of how C is designed were great fun for me.

Fun

Modularity concept

In the OOP world, we think in classes. It's a habit. I was never content with the silly tradition of C++ / C# / Java to store source code in files, because files add avoidable complexity. The best OOP languages like Smalltalk don't need files and store the source code in a database. Therefore I was fascinated when I first heard that Microsoft's TFS was going to store source code in a database... Well, TFS turned out to be one of the most disappointing Microsoft products to me, but that's another story.

In C, they think in files, and they really use and need files. A file is a first-class concept in this language. Files are means of modularization. There are two kinds of them - the .c and the .h files.

In a .c file you normally put one or several functions. This is your module. Functions that belong together are stored in the same .c file. Some of them are exposed for usage from other modules (.c files), others are private (in C you use the keyword static to mark such functions).

Now, to call public functions of module A from another module B, you have three options:

1) You can manually declare exported functions in the beginning of the module B.
2) You can manually declare them in a .h file and then #include it in the module B.
3) You can use implicit declaration for your int->int functions as described above.

This makes .h files roughly an analog of interfaces or public class members in OOP. The difference is that you are not constrained by any formal rules. For example, you can combine the exposed functions of several modules in one .h file, or have different .h files for the same module, or even do all that for code you don't own (and which might already be compiled). This is more flexible.

So, generally, when I write a new module, first I write the .h file to define its public interface, and I #include into that .h file only those .h files that are needed for my function declarations (mostly typedefs of missing built-in types). Then I write the .c file, #include the corresponding .h file to forward-declare the public functions, then forward-declare the private functions, and then #include the .h files of all the other modules I need for implementing my module.
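
A minimal sketch of that layout (the module and function names are made up):

/* logger.h – the public interface of the module */
#ifndef LOGGER_H
#define LOGGER_H

void logger_init(void);
void logger_write(const char* message);

#endif

/* logger.c – the module itself */
#include "logger.h"     /* forward-declares the public functions above */
#include <stdio.h>      /* needed only by the implementation */

static void open_log_file(void);  /* private: invisible to other modules */

void logger_init(void)
{
  open_log_file();
}

void logger_write(const char* message)
{
  printf("%s\n", message);
}

static void open_log_file(void)
{
  /* ... */
}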

This is radically different from how I did it in C++ ten years ago, when I used to #include all possible header files into every other file, because I was pissed off by this manual file management and wanted to think in terms of classes and interfaces only, considering files as one (and the worst) of many possible source code storage backends.

Program for the compiler
When programming in modern languages, you have two very distinct modes: run time and compile time. My comeback to C has made me think of any program as a double program: one for the compiler (executed at compile time), and another one for run time.

In the modern languages, the compile-time program is purely declarative without side effects. In C#, if you write

public class Point
{
  public int X;
  public int Y;
}

this is in essence a declarative instruction to the C# compiler to create a new class with the given members. It is declarative, because it doesn't matter if this class appears in the source code before or after some other class; and the order of its members also doesn't matter. The declarative compile-time style of modern languages makes it easy to think about it, so that you can distribute more of your focus to the run-time.

Not in C. There, the best way to think about a program is a double-helix DNA where procedural compile-time instructions are intertwined with procedural run-time instructions:

int open (void);
int write_data (int fd);

int main (void)
{
  int fd;

  fd = open();
  write_data(fd);
}

Reading this source code as a compile-time program: first there is a command to put "open" and "write_data" with the corresponding signatures into the name table, then a command to put "main" into the name table, then a command to start emitting the compiled code of "main", then a command to put "fd" into the local scope's name table, then a command to compile a function call to "open" and add it to the object code of "main", and so on.

Thinking about it this way makes it easier to grasp the behavior of the language. Especially when you start using macros (and in C, you have to). And it explains quite naturally the need for forward function declarations and the importance of struct member order.
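
A tiny sketch of what I mean (the names are invented): the #define below is a statement of the compile-time program, and its position relative to the struct matters just like statement order matters at run time.

#define MAX_NAME 32        /* compile-time: remember this macro from here on */

typedef struct             /* compile-time: record this layout, in this order */
{
  char name[MAX_NAME];     /* textually becomes char name[32]; at this very point */
  int  age;
} Person;                  /* moving the #define below this line would break the build */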

In a sense it reminds me of how Smalltalk (also an ancient language) works; there, new classes or methods are also defined by calling a procedural method. The difference is that in Smalltalk the compile-time syntax is almost the same as the run-time syntax, so you don't have to learn two languages instead of one.

C with classes: GLib
C++ is often called "C with classes", but that is not the whole truth; pure C has its own OOP implementation: in GLib they've managed to do it without modifying the programming language itself. And boy, it is a funny hack, I must say.

Consider the following code:

typedef struct
{
  int  id;
  char* name;
} BaseObject;

typedef struct
{
  BaseObject parent;
  int age;
  int department_id;
} Employee;

Employee*  ceo;

They use the fact that, according to the C standard, the fields of the parent struct are stored in declaration order at the beginning of the Employee structure, so its memory representation looks like this:

struct Employee_in_memory
{
  int  id;
  char* name;
  int age;
  int department_id;
};

This allows you to cast a pointer to Employee to a pointer to BaseObject because, hey, its fields are right at the beginning of the memory. This enables polymorphism like this:

typedef struct
{
  BaseObject parent;
  int num_employees;
} Department;

#define MAX_REPOSITORY_OBJECT_COUNT 1000
BaseObject* repository[MAX_REPOSITORY_OBJECT_COUNT];

int read_repository(void)
{
  int stream;
  int i = 0;

  stream = open();
  while(!is_eof(stream))
  {
    switch(get_next_type(stream))
    {
      case EMPLOYEE:
      {
        Employee* emp = deserialize_employee(stream);
        repository[i] = (BaseObject*) emp;   /* upcast: BaseObject is the first member */
        break;
      }
      case DEPARTMENT:
      {
        Department* dept = deserialize_department(stream);
        repository[i] = (BaseObject*) dept;
        break;
      }
    }
    i++;
  }
  return i;
}

void dump_repository(int top)
{
  int i;
   
  for(i = 0; i < top; i++)
  {
    printf("%d,%s\n", repository[i]->id, repository[i]->name);
  }
}

As for methods, you could just add function pointers to the structs, but that would mean copying them into each object instance, which is a big waste of memory (at least according to the C ideology); besides, it would allow different methods per instance of the same class, which is normal for languages such as JavaScript but just too weird for conservative C. Therefore, in GLib, their very base object GObject holds a pointer to another structure, the class structure, which contains the function pointers of the class methods. This has the added benefit of run-time reflection, because the class structure has a couple of fields allowing you to read the class name, query for properties and so on.
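
Hand-rolled, heavily simplified and without any real GLib/GObject names, the idea looks roughly like this:

#include <stdio.h>

typedef struct EmployeeClass EmployeeClass;

typedef struct
{
  const EmployeeClass* klass;  /* shared per class, not copied into every instance */
  int   id;
  char* name;
} Employee;

struct EmployeeClass
{
  const char* class_name;                /* poor man's reflection */
  void (*print) (const Employee* self);  /* a "virtual" method */
};

static void employee_print(const Employee* self)
{
  printf("%s (%d)\n", self->name, self->id);
}

static const EmployeeClass employee_class = { "Employee", employee_print };

/* a "virtual call" then looks like: emp->klass->print(emp); */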

But because of this, as soon as you have any non-trivial hierarchy with virtual methods and so on, it becomes too complex to use directly in plain C (because of the constant casting and the use of obj->parent and obj->class), so GLib has added a lot of #defines hiding this complexity (but also preventing an easy understanding of what exactly is happening). Here is a good example of how simple OOP code with GLib looks. All in all, it feels like programming in C++ with all its black box covers removed. But that is yet another story.

This Week in Twitter

  • Cool looking mix of Apple and HTC phones for just 200 euro http://t.co/XkunpXFz #
  • Acer, Lenovo, HTC and Samsung – all but ASUS – plan 4core tablets early next year. ?? is more important for them than Christmas? A shift? #
  • Apple, Facebook, Twitter, Google and co. are the new Hollywood, bring American lifestyle to the world. #
  • Against commercialization of christmas? Your presents are too good! Mine are completely useless and therefore are only of spiritual value. #

Powered by Twitter Tools

This Week in Twitter

  • Unbelievable! http://t.co/yxvuIOZR Thanks, GfK Retail: Well then, my goal is to turn 1 percent into 48 :) On the Metz TV #
  • St. Nicholas brought me no chocolate, but the decision to start another weight-loss attempt. At 130 kilos, it's high time! #
  • 45m3 water with 260GBq leaked from a Fukushima cleaning apparatus. 150 l. are already in the ocean, rest in puddles http://t.co/zHC1JyCC #
  • YESS!!! B-) http://t.co/NAHW3vGU #
  • Now this is what I call absolutely amazing web design, the very essence of how the modern web (should) look like: http://t.co/CPyBSNbd #

Powered by Twitter Tools

iScreen

Before we start, I’d like to remind you that this post, like all other posts on this site, reflects only my personal opinion, not that of my employer.

There are more and more rumors that Apple will announce a new TV set around March next year, and the press speculates about its features and look.

Well, I think, TV sets have no future, and Apple will announce a TV set killer, not a better TV set.

Because, what is a TV set? It is a TV tuner plus a big screen. Modern devices have many additional features, including various inputs, network and Internet access, time shift, etc. But none of these features is defining. Remove time shift, and it is still a TV set. But remove the tuner, and it is just a monitor. In a broader sense, though, a TV set is not only a device. It also defines how the television industry is structured. And it is also the way viewers perceive the role and place of television in their entertainment and information workflows.

The life of a modern television user is hard. To operate the TV, he has to (or is expected to) understand

  • the differences between analog and digital TV
  • the differences between DVB-T, DVB-S and DVB-C
  • the difference between free TV and pay TV, and understand the need for set-top boxes and CI slots and cards, and has to be a guru to understand which set-top boxes are compatible with which pay TV stations
  • what do SCART, HDMI, VGA, DVI, S-Video, etc. mean and what adapters are needed or possible
  • the difference between PAL, SECAM and NTSC
  • How teletext, EPG and Hbb-TV are different from each other, and how to use all of them (differently for each TV station)
  • CD, DVD, DVD+-RW, Blu-Ray, USB sticks, SD and CF cards: when to use them, and how to play their contents on TV
  • Files: Xvid, DivX, WMV, AVI, MP4, MPG, MOV, MKS, RT, TS, VOD, M2TS… and how to play them on TV
  • What is a media center, and why there are different media center concepts: a set-top box implementing media center functions? A Blu-Ray player with integrated media center? Media center in a hard drive? Or in the Internet router? Or in a NAS storage? Or inside a low-noise PC besides the TV set? And if using PC, what software to use: Windows Media Center, xbmc, … Or better an XBOX besides the TV extending the media center on PC?
  • What is a game console, and what is the difference between XBOX, PS3, Wii…

I guess there are a lot of paid jobs out there that require knowing and understanding less than what the TV industry currently demands from its customers, just so they can entertain themselves.

People are not like this. They don’t like knowing and understanding things. They just want to be informed and entertained, and the device in their living room should “just work”. No matter whether they want to play a networked first-person shooter, check the current Dow Jones index, enjoy a movie, or watch a football match – the device must work consistently and straightforwardly.

Apple’s success in other industries suggests that contemporary people are ready and willing not only to pay a huge premium to somebody who allows them not to know technical details, but also to give up a bit of their privacy and freedom for that. This gives Apple the possibility to enter this market – the market’s consumers are ready for the change (they just don’t know it yet).

And they are ready for the change, because the current situation is so unsatisfying (from the usability perspective). The reason for that is the TV industry structure, consisting of stations, networks, and CE manufacturers. A heritage of the times of governmental control, this structure is the primary reason for the current deadlock and the absence of innovation (3D video that causes headaches and requires putting on glasses, like children playing doctor? Come on! That is exactly what most viewers were missing so far!)

Just think about it. CE manufacturers create screens attracting the eyeballs of a huge population for an unbelievable several hours a day, every day. Social network startups capable of attracting a tiny fraction of this love are being sold for billions of dollars. Yet the manufacturers are neither able to monetize it nor know how. As a result they have to live on near-zero margins. No wonder they cannot innovate; the money is just not there.

And even those who have money wouldn’t design their devices to be perfectly usable by the end user; instead, they design them to conform to various industry standards and to be appealing to the sellers. Typically, the earnings of TV set manufacturers depend not on viewer satisfaction, but rather on seller satisfaction. It doesn’t matter how well the device can be used, it only matters how well it can be sold. So the usability of devices is just “nice to have”, and this is the reason for the famous “blinking 00:00 VCR display” issue: you have to bend your mind around just to understand how to set the clock.

Network companies have made terribly, unbelievably high investments in the infrastructure and cannot allow any innovations not compatible with it until the investment pays off. Their earnings do depend on user satisfaction, but they have educated the viewers that television HAS to consist of three components (the device, the cable and the stations), so they feel responsible only for their part and would therefore happily sleep another 100 years monetizing their DVB infrastructure.

Stations have the potential, knowledge, understanding and talent to improve television and make it more immersive and more user friendly. Alas, they already get all the advertising money, so their motivation to improve is altruistic and artistic rather than dictated by the hard rules of the market. Besides, they don’t have the possibility to change the technology used in the infrastructure and in the devices. And developing their own devices and using another infrastructure (for example, the Internet) doesn’t seem to be their core competency. The best they have produced (in Europe) in this area was Hbb-TV.

Apple has the possibility to unlock it. The secret of success for their mobile devices was that they simultaneously:

  • Own the usability and user experience of the device, both in hard- and in software
  • Own the entertainment content distribution
  • Partially control the network by partnering with mobile operators and providing their server backend for iTunes
  • Partially control the marketing, by selling their hardware directly to the customers, on-line and in Apple stores, and selling the software via tightly controlled App Store.

These factors were responsible for providing an entertainment intensity orders of magnitude higher than that of Apple’s competition.

Apple also has experience unlocking such convoluted markets. The mobile market in the pre-Apple era had similar issues: mobile operators invested in the infrastructure and wanted to sleep forever monetizing it, and device manufacturers didn’t sell directly to the customers, but had to satisfy the sellers (mobile operators and electronics chains) and couldn’t monetize the usage of their devices. As history teaches us, people readily paid up to 10 times more for something more usable, more immersive and more entertaining.

So, how could this TV set killer device look? I don’t know. It depends heavily on the talent Apple has, on the results of negotiations with other industry players they might have conducted, on the commercial feasibility of some specific technologies, etc. – I have no idea about all that. If I were in charge and didn’t have any limitations, I would do the following:

1) Create a technology for games similar to XNA that allows writing games for iPhone, iPad, Mac and the new device, all using the same toolchain. And convince some key players in the game industry to port their successful franchises to this platform.

2) Build a live streaming cloud capable of taking broadcast signals and streaming them via the Internet in near real time with high quality, no interruptions and integrated time shift and VoD. And convince some key TV stations to license their programmes (this is where Google TV has failed).

3) Partner with somebody who can help me convince the others, for example with the best ISV in the country.

4) Create a device with a big bright screen with the best video processing (400 Hz, motion compensation, scaling, etc.), a terrific multi-core processor with several TFLOPS but still without any noisy rotating parts (I’m looking at you, XBOX), a modified iOS, a huge and quiet HDD or SSD, the best available Wi-Fi, perhaps some web cameras, but nothing else: no other connectors (except for power), and maybe one power button.

5) Ensure all kinds of content can be streamed to the device via Wi-Fi using the Internet protocol: VoD movies, music and apps from the Apple store servers, live broadcasts from the new cloud service, and the user’s own content from his Apple devices on his local network or from his iCloud. As well as converting locally available signal sources (cable, satellite, VCR, PC) for those who still need them, using an optionally available adapter box.

6) Create a content-centered UX concept. Viewers wouldn’t switch between signal inputs (channels, connectors, sources) as they do now; they would choose between contents. Do I want to rent this movie, or watch that live sport event, or look at my own photos shot with the iPhone and uploaded to iCloud, or would I rather play this game? This is the kind of choice viewers will make. And for lean-back scenarios, a partner TV station network will provide a live channel that is “tuned in” by default.

As for the actual interaction technology, I do believe Apple will invest a lot in it, be it just a remote control, a Kinect-like NUI, Siri-like voice control, or something else. But in my opinion, this won’t be a big variable in the equation. The mere absence of all the stuff users currently have to know to operate a TV would already be a huge difference. Apple’s device will “just work”, i.e. just inform and entertain.

If Apple really does that, and it really works, all the traditional industry players will have a hard time competing. One realistic option would be to jump on the Google TV bandwagon, a strategy similar to the one many mobile players followed with Android. Another is to go with Microsoft, who have recently announced some interesting and revolutionary changes to the XBOX (which, I admit, I haven’t yet had time to check out) and are clearly aiming at the same market. And the last option: give up the TV set market and try to earn money on something else.

So, before I close this very long post, and having predicted the near future, I’d also like to predict the distant future. After unlocking the TV market, where will Apple look next? My bet is that it will be cars and homes. Both industries are stagnating, and both have pretty awful usability (operating 3 pedals and several levers just to drive from A to B, with high risks for your life? Ridiculous! Having to endure bad neighbours just because it is so hard to move a house? Stone age!). So get ready for your iCars and iHomes.

And after solving that, we can then slowly approach what really matters: the human beings, with their bodies and their psyche…

NB. I’m sorry for the typography of this post. “3D” and “PS3” look really awful. Unfortunately, I’m limited here by the standard WordPress editor and don’t know how to improve it.

This Week in Twitter

  • I liked a @YouTube video http://t.co/pM4ccfp2 dont try and rob an asian #
  • Important things like love and death happen not so often, so nobody is a pro. We are puppies, hitting all the walls trying to find a door. #
  • Do you also have to think about OLPC when looking at this? http://t.co/BDcPnNg9 #
  • is wondering if it is possible to make a Skype client using WebRTC: http://t.co/ThXPQd3o #
  • Fireworks show around the largest Christmas tree in South America, LIVE streaming now in 360º video: http://t.co/kg0D9hsY #

Powered by Twitter Tools

Christmas Feeling

Christmas is an accumulator of childish happiness.

Children can be much happier than adults. Adults have experienced the death of dear ones, farewells or illnesses, or the realization that some of the goals they wholeheartedly wanted to achieve cannot be achieved anymore in this life. All of this remains constantly in the head and doesn’t leave enough room for full and absolute happiness. But children do have this room, and therefore can be absolutely happy.

If, as a child, you were made absolutely happy every Christmas (or other big holiday in other cultures), this holiday can become a “tag” for that childish happiness, which can be used later in adult life to remember those happy times and to try to be happy again.

For me, this happiness tag is triggered by the smell of the Christmas tree and of mandarins. Yesterday I was in the subway when I saw a tiny fir tree branch lying on the floor and radiating its smell. And I laughed uncontrollably.

Open Source: past perfect?

What motivates people to create open source software? On the one hand, the effort required for it is greater than the couple of days students spend solving their toy assignments; on the other hand, it is almost impossible to sell, and it is very hard to do a “consulting” kind of business around it. So why is it worth the effort?

Leaving aside those 7% of strangely altruistic people, the remaining 93% of OSS developers apparently develop it for the following reasons:

  • They are being paid for it by companies, who want to compete with closed source companies
  • They want to find a (better) job, so they need both skills and publicity
  • They want to be popular.

Web 2.0 and mobile apps have seriously disturbed the latter two motivations.

Several years ago, if you wanted to be a cool hacker, you created some OSS software worthy of being included in the GNU or Apache repositories. Today, if you want to be a cool startupper, you just create a web service or a mobile app.

Open sourcing web services is useless, because their source is often trivial, and whenever it is not, it cannot be reused, except for creating an exact clone. And web service clones are not interesting, because the original service usually soaks up all the possible user base, and without users, Web 2.0 apps are pathetic.

In this respect, the uselessness of web service source code is quite similar to that of the source code of Adobe Flash Player or Microsoft Silverlight. It doesn’t matter what you can do with these sources; what matters is which version is installed on the most PCs out there.

With mobile apps it is even more interesting. Apple’s license agreement is explicitly not compatible with the GPL, and the Free Software Foundation is understandably scolding them for that. Besides, the source code of most mobile apps should be quite trivial, because most of their added value lies in their interaction design / UX and in the connected web services.

So, if you want to get a better job or become popular, you can just create a web service (and a couple of mobile apps for it). And as a nice side effect of NOT open-sourcing them, you can even earn some money (from the App Store or by selling the app/service to somebody big).

So what is the future of Open Source? Are we witnessing its peak today? Can it be saved? And… do we want to save it?