The Greater Internet

How valuable is Internet technology for human society? Well, HTML is not ideal as an information storage format, because it mixes content with design, doesn’t provide indexes and metainformation, and so on. HTML is also not good for capturing design, because it looks different in every other rendering engine. There are much simpler and more straightforward ways for peer-to-peer communication than HTTP. Cookies are not the best way to distribute application state. And the data model of the WWW doesn’t suit any real-world application except Wikipedia (interestingly enough, even academia doesn’t use it, preferring to publish its findings in PDF). The most useful and interesting results of “Internet as a technology” are the concepts (and practical proof) related to organizing highly distributed and secured application state (e.g. the RESTful principles and TLS/SSL). But other than this, there is not much value in there.

What makes the Internet really valuable (and sometimes priceless) is its reach. The true value of the Internet is created by the fact that the platform reaches billions of people all around the world. It makes sense to provide information or services on the Internet, because they can be successfully monetized thanks to that huge reach. It is reach that turns UGC from a niche phenomenon into the wonders of Wikipedia, Twitter, the blogosphere and so on. Various Google services, social networks, music, video and game streaming: all this has been made possible not because of some ingenious technology (and in fact, sometimes despite the shortcomings of Internet technology), but because the Internet’s reach has created an appropriate economic environment for priceless content and services. And the cool content and services attract even more Internet users. In a sense, the Internet itself is the largest and oldest viral loop.

Just like with any viral loop, maintaining and expanding the reach is critical, and there are a lot of ways to break or seriously disturb it. On the Internet, the inter-compatibility of Internet technology is one of the main factors behind reach (the other two are the speed and quality of ISP services, and the presence or absence of Internet censorship).

Now, when I look at it closely, I find that the Internet reach is very fragmented. Not every Internet user can use every web site. Sometimes their Internet is too slow, sometimes Big Brother blocks web sites, sometimes web sites geo-block themselves due to licensing issues, whatever. Also, from the software point of view, not every site can be used: perhaps because Flash is required and the user cannot or does not want to install it, or perhaps because the web developers block Internet Explorer.

But in reality, the Internet reach might be much more fragmented than this: do we want to consider iOS and Android platforms to be parts of the Internet?

Technologically, they aren’t quite the Internet. Most apps don’t use an Internet connection after installation, and on Android, they can even be installed without the Internet. But conceptually, I want to include them in a broader definition of the Greater Internet. This can be seen especially clearly with mobile editions of multichannel mass media. No matter whether I consume The Economist from the web site or through an app, exactly the same business happens (The Economist provides insightful analysis and forms my opinion in exchange for subscription fees and ads). The same goes for Instagram, and the same goes for Facebook: no matter which app I’m using, the business is the same, and the viral loop is the same (not just similar, but in fact identical), because the engagement of Facebook users on Android cross-pollinates users on the web, iOS, Smart TV and whatever other channels Facebook has; and all engagement has the common base of being stored somewhere on the Facebook servers.

Mobile apps that do not have any WWW counterpart are, in my opinion, also part of the Greater Internet. Dictionaries, calculators, games, photo and text editors, and productivity suites: all these mobile apps, now available for free or for laughably cheap prices, can only exist because their developers plan to use the reach of the corresponding mobile platform. What they lose in price, they want to make up in circulation. This is exactly the same type of viral loop the classic Internet had: providing some things cheap or for free, making it up with huge circulation, and thereby attracting even more users, who provide even larger circulation.

So, what I would actually expect (if the world revolved around us software makers) is that all these parts of the Greater Internet would use the same underlying technology, just to ensure the biggest possible reach with the smallest possible investment. I mean, okay, okay, the smartphone user experience is so different from the tablet/PC one that you have to create different apps; but at least you should be able to use the same toolchain and knowledge to do it, and at least, having written one smartphone app, you should be able to run it on any smartphone in the world, no matter the manufacturer.

Alas, this is not the case. Just like there are web sites I cannot use because they don’t support Internet Explorer, there are iOS apps I cannot use because I don’t own any iOS devices. There are web sites using old-style HTML and web sites using HTML5, and among the latter, there are web sites supporting only WebKit and web sites also supporting other browsers. In the Apple country, there are phone, tablet and desktop apps (and Apple TV apps). As for Android… Well, everybody has seen that scary, huge Android fragmentation infographic.

Now sit down if you are standing, or hold on to your chair if you’re already sitting.

It turns out that Microsoft is currently doing the best job in terms of unification and ensuring the best possible reach across the Greater Internet. Yes, yes, the former evil empire is the saviour of the Internet. Microsoft already has a very satisfying situation on its own devices. Basically, if you know XAML, C# and Visual Studio, you can go ahead and create classic desktop apps, Metro-style apps for their tablets, Windows Phone apps and RIAs. Add some HTML5/CSS3 knowledge and you’ll be able to make web apps (with C# on the server side). All that using the same IDE, the same frameworks and the same infrastructure. As for the other platforms, Xamarin is doing a great job covering iOS and Android.

Another viable approach is using JavaScript everywhere, which is currently available in the form of PhoneGap for mobile platforms and Node.js for the server side.
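
To make this concrete, here is a minimal TypeScript sketch of the JavaScript-everywhere idea: one hypothetical module holding the business logic, reused verbatim by a Node.js server and by the web view of a PhoneGap/Cordova app. All file, function and URL names below are invented for illustration.

```typescript
// shared.ts — hypothetical module, written once, used on every channel.
export interface Greeting {
  user: string;
  text: string;
}

export function buildGreeting(user: string): Greeting {
  // The business logic lives in one place, in one language.
  return { user, text: `Hello, ${user}!` };
}

// server.ts — the same module used by a Node.js backend.
import { createServer } from "http";
import { buildGreeting } from "./shared";

createServer((req, res) => {
  const user =
    new URL(req.url ?? "/", "http://localhost").searchParams.get("user") ?? "guest";
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify(buildGreeting(user)));
}).listen(8080);

// app.ts — the same module bundled into a PhoneGap/Cordova web view.
import { buildGreeting } from "./shared";

document.addEventListener("deviceready", () => {
  document.body.textContent = buildGreeting("mobile user").text;
});
```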

Still, these efforts are not part of the mainstream. And you know, this kind of fragmentation is nothing special or new. It is one of the core issues of capitalism, perhaps even the way it works. Coke or Pepsi? BMW or Audi? McDonald’s or Burger King? Cube bikes or Stevens? Adidas or Nike? So, do we really need all these choices? Some choices are meaningful. For example, black iPhone or white iPhone. Cold bubble tea or hot bubble tea. But developing a whole model line of cars as sophisticated yet as similar as BMW and Audi? Sure, car drivers would promptly correct me, saying these cars are as different as heaven and hell; for me, their difference is just like Coke versus Pepsi. Do we as a society really want to spend double or triple the effort developing such similar products and services, only to see timber, metals, plastic, time and energy wasted producing them, half of them fail to win the market, get sold below cost, and then be scrapped to produce even more garbage? Do we really want to spend expensive electricity on Bing servers while there are plenty of Google servers doing the same thing?

For me, the answer is clearly no, we don’t want to waste resources. The problem is that the only known alternative, communism and the planned economy, has even worse issues, leading to an even more “impressive” waste of resources, not to mention a high correlation with genocide, mass humiliation and death, and the other “fine” features of totalitarianism.

Unfortunately for me, there are a lot of Internet developers who don’t share my beliefs. Just like the communists, they declare rightful and reasonable goals (for example unifying the Internet technology), and just like the communists, they intend to use force, humiliation, discomfort or discrimination to achieve their goals.

I’m looking at you, all the web developers who develop their sites for WebKit only. You have abused IE6 users for no good reason, as most of your web sites could just as well have been written in plain old HTML 3.2 with no loss of functionality. You now plan to put your bloody hands on the throats of IE7 and IE8 users. You believe you are doing the Internet a good service, because no IE means less fragmentation. While this is technically true, the road you have chosen is a very predictable one: first you join forces to eliminate IE; when you succeed, you will suddenly discover that Firefox is the source of all evil. Unlike IE, there are no huge corporations behind Firefox (anymore), so it will die rather quickly and unspectacularly. And then you will start the Safari versus Chrome war, the last one. And the winner will stop evolving, because it won’t be interesting for the WebKit community any more, and the new generations of web developers will hate “that fucking WebKit community that hasn’t been able to fix that silly bug for ages”.

The Internet, and especially the Greater Internet, badly needs unification and consolidation. But force, hatred and ideology are not the way to get there.

This Week in Twitter

  • I liked a @YouTube video http://t.co/3145FTWK Max-margin training and inference on structured models for info #
  • The sad thing is that some of us will probably see passing away the last American women in space. @verge: http://t.co/yAgY3CEg #
  • Can anyone tell me why billigfluege.de needs 2 minutes to search through 6687 flights, while Google in 200ms searches through the billions of … #
  • "I'm a software developer, and therefore tend to explain any human quirks with professional deformation" (from bash.im) #
  • @LinkedIn, you spam me with unsolicited (and unsuitable) job offers, even though I've unsubscribed and tried to contact you several times. #
  • ? http://t.co/J2c7GOuq ????11????????????????????????? #

Powered by Twitter Tools

This Week in Twitter

  • What a weak move, Microsoft. The Windows 8 upgrade should have been free for Windows 7 and Vista users. #
  • @eldarmurtazin Oppo is not the first cool looking Chinese phone. Xiaomi was before. #
  • I liked a @YouTube video http://t.co/7OA4n93U ??? – Adult Crap #
  • ????????????:Yesterday http://t.co/VINAa1lW #
  • What the hell is happening lately? RT @CrackMeIfYouCan: Large gaming site in Germany hacked. Hashes and emails leaked. Over 7 million. #
  • Happy! This was one of the most exciting "first weeks on a new job" I've ever experienced (p < 0.0025). #
  • Google seems to be not yet aware of necessity for technical evangelists http://t.co/iyV3VYnp #
  • ?????????????????????? :) RT @isaac RT @iheartbeijing: ???????????????????????#???? http://t.co/ggWLNSNI #
  • Just posted a photo http://t.co/3KEgRMNv #
  • Stalin invented them 80 years earlier. Google for sharashka @verge: Big dreams and tiny spaces inside 'hacker hostels' http://t.co/NOHvDfXa #
  • @bobuk ??, ??? ?? ??????, ??? DeafObject, ? ?? nil. nil ?????? ?????? Exception ??? ?????? ?????? ??????. #radiot #
  • ???? ????!!! #radiot #
  • ???, GoldEd ?????????? ? ???? markdown #radiot #

Powered by Twitter Tools

This Week in Twitter

  • The best fans of the championship so far http://t.co/78ISRwWw #
  • Gib mir eine Chance http://t.co/46EHnCzt #
  • OMG! Look at the price! http://t.co/DakBxZzr #
  • This guy is also pissed off by the unwritten rules of "open source", and from now on writes them down explicitly :) http://t.co/lrCtSsq9 #
  • Can you believe that iPhone is ONLY 5 years old? #
  • Android Jellybean Camera UI is pretty cool! A slight "borrowing" from WP7 though :) #
  • Jellybean notifications are REALLY AWESOME!!! #
  • OMG, why is nobody in the live audience EXCITED about the notifications? Have I already said it is a game-changer awesome idea? #googleio #
  • Google Now! A little creepy for paranoics, but another crazy game-changer! Wow! #
  • Want. Android. Jelly. Bean. Right. Now. #
  • @eldarmurtazin YYEEEESSS! #
  • Movies and TV shows in Google Play are only in US??? #io12 #
  • Wow! Compass mode in Google Maps! #io12 #
  • While Microsoft and Apple are talking about game possibilities, Google shows amazing games off! #io12 #
  • Google is releasing its own xbox, called nexus q :) #io12 #
  • Google has changed itself significantly. Very strong! #io12 #
  • Updating my Google+ app… #io12 #
  • GOOGLE!!!!! #io12 #
  • Google is true to its principles. The ability to instantly photograph your baby from your point of view is not just a feature, it is more #io12 #
  • The only bummer is that so many of these things are available only in the US #io12 #
  • Nexus 7 looks just like my HTC Flyer. Only quicker. #
  • I think Nexus Q is an interim product to compensate for the slow start of Google TV. Why have yet another box in the living room? #
  • Did you know there is such thing as Google+ Ripples? It looks cool, but VERY hidden. I had to use help page to find it. #
  • Suddenly found integration between Android YouTube App and YouTube TV: http://t.co/sCHkNQ4P #
  • I liked a @YouTube video http://t.co/8CEvegSb ???? ?????? ???? ?? ???? #
  • When you watch movies, you need a black bezel. When you read books, you need a white bezel. Why don't tablets vary their bezel color? #
  • Industrial design in iOS UI :) I wonder if there is already an iOS physics engine for folding different materials. http://t.co/NV42VGbH #

Powered by Twitter Tools

The core marketing

Traditionally, the marketing of gadgets has used a lot of quasi-technical terms, such as gigahertz and terabytes, to provide easily comparable indicators of product quality. Often, the technical specialists who originally conceived these parameters, and who used them in a very specific context, were unhappy with this tendency. While the chip frequency in GHz simply means the frequency used to synchronize specific units in some limited area of the CPU, and is therefore only non-trivially correlated with overall performance, marketing speak has often used it as the one and only indicator of device performance. Normal consumers were satisfied, because it was an easy concept to grasp. Engineers were unsatisfied and pointed out that different CPU instructions require different numbers of cycles to complete, that different parts of the CPU are clocked at different frequencies, and that there are plenty of use cases where CPU performance is irrelevant to overall performance… But nobody cared.
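
To make the engineers’ objection concrete, here is a toy calculation (the chips and numbers are invented for illustration) of the rough rule that throughput ≈ clock frequency × instructions per cycle (IPC): a lower-clocked CPU with better IPC can easily beat a higher-clocked one.

```typescript
// Toy model: clock frequency alone does not determine throughput.
interface Cpu {
  name: string;
  clockGHz: number; // what the marketing brochure prints
  ipc: number;      // average instructions retired per cycle (workload-dependent)
}

function instructionsPerSecond(cpu: Cpu): number {
  return cpu.clockGHz * 1e9 * cpu.ipc;
}

const marketingWinner: Cpu = { name: "3.0 GHz chip", clockGHz: 3.0, ipc: 1.0 };
const actualWinner: Cpu = { name: "2.0 GHz chip", clockGHz: 2.0, ipc: 2.0 };

for (const cpu of [marketingWinner, actualWinner]) {
  const giga = instructionsPerSecond(cpu) / 1e9;
  console.log(`${cpu.name}: ~${giga.toFixed(1)} billion instructions per second`);
}
// The "slower" 2.0 GHz chip retires ~4.0 billion instructions per second,
// the "faster" 3.0 GHz chip only ~3.0 billion.
```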

Apple has changed that a little by using qualitative and emotional parameters in its marketing speak. Maybe this is part of the answer to the question of why so many good software developers (at Google, Facebook, Yandex, etc.) choose Apple devices. Maybe it is because Apple marketing doesn’t piss them off? That simple.

Unfortunately, other manufacturers are not ready to switch and are still using quasi-technical terms to describe their devices. They have a problem, though. Intel and AMD have stopped increasing the CPU clock frequency. The new CPUs do have a new advantage, though: the number of cores. Again, technical specialists know very well that the number of cores is only non-trivially correlated with overall performance. If anything, having more cores is better for system responsiveness rather than raw performance. And again, nobody cares how the engineers feel, and the number of cores is starting to be used as a substitute for performance.

This marketing speak looks especially grotesque in the TV set area. In the last couple of months I’ve seen at least two large TV manufacturers announce dual-core Smart TV sets, implying they would be quicker than usual TV sets.

It is grotesque because a typical modern TV set already has seven to eight cores. They are:

  • An all-purpose MIPS or ARM core to execute the application performing the traditional TV tasks (tuning to a channel, switching between inputs, controlling the display, changing the volume, reacting to the remote control, displaying the user interface and so on)
  • Another all-purpose MIPS or ARM core to implement Smart-TV functions, for example the TCP/IP stack, the UPnP/DLNA stack, a WebKit browser, a media pipeline for Internet formats, the PVR function, and so on.
  • Dedicated MPEG-1, 2, 4, and AVC decoder.
  • Sometimes, an additional dedicated MPEG-2-only decoder. It might not be strictly needed, but can be a leftover from previous hardware platforms that supported only MPEG-1 and 2.
  • Dedicated AAC/AC3/DD decoder.
  • Dedicated transport stream engine with real-time demuxing, filtering and PSI processing of several TS streams in parallel.
  • A slave MIPS or ARM core implementing graphics acceleration, typically some kind of OpenGL ES. This is needed to efficiently blend the TV user interface, the web-page shown by WebKit, and the running full HD video together.
  • A slave MIPS or ARM core implementing real-time full HD video enhancements, including complicated de-interlacing algorithms, high-quality (bicubic?) down- and up-scaling, noise reduction, adaptive dynamic contrast, color and sharpness correction, and, very important on large screens, motion estimation and compensation (including detecting and fixing the 3:2 pull-down).

There is also a stand-by controller, which stays on even when the other cores are powered off and is responsible for interfacing with the IR remote control and for the boot sequence. If you count it as a core, there are nine different cores. And I’m not even counting FPGAs or the other interfacing needed to handle CI+ modules.
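
Just to drive the point home, the same inventory can be written down as data; the list is the one above, only the structure and the counting are mine.

```typescript
// A rough model of the core inventory of a typical Smart TV SoC.
type CoreKind = "general-purpose" | "dedicated" | "slave" | "standby";

interface TvCore {
  name: string;
  kind: CoreKind;
  optional?: boolean; // present only on some platforms
}

const tvSocCores: TvCore[] = [
  { name: "Main TV application core (MIPS/ARM)",           kind: "general-purpose" },
  { name: "Smart-TV core (TCP/IP, DLNA, WebKit, PVR)",     kind: "general-purpose" },
  { name: "MPEG-1/2/4 and AVC video decoder",              kind: "dedicated" },
  { name: "Legacy MPEG-2-only decoder",                    kind: "dedicated", optional: true },
  { name: "AAC/AC3 audio decoder",                         kind: "dedicated" },
  { name: "Transport stream demux/PSI engine",             kind: "dedicated" },
  { name: "Graphics acceleration core (OpenGL ES)",        kind: "slave" },
  { name: "Video enhancement core (de-interlacing, MEMC)", kind: "slave" },
  { name: "Stand-by controller (IR remote, boot)",         kind: "standby" },
];

const guaranteed = tvSocCores.filter(core => !core.optional).length;
console.log(`Cores in a typical set: ${guaranteed} to ${tvSocCores.length}`);
// Prints "Cores in a typical set: 8 to 9" — so "dual-core TV" is not saying much.
```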

Now, when a TV manufacturer announces a dual-core TV set, what exactly does that mean? If it means that their TV user interface is no longer displayed by the same core that renders HTML pages, this is hardly an innovation and is more or less a marketing bubble. If it means that their WebKit or Opera has two cores available for rendering, that is a different story. On the other hand, it is hard to believe that they would dedicate those two cores to WebKit only; other tasks, like the Internet media pipeline or the PVR function, would typically also reuse them. Therefore it is hard to compare the performance and responsiveness of a “single-core TV set” (whatever that means) and a “dual-core TV set” (whatever that means).

I believe TV manufacturers should follow Apple’s example and talk about qualitative parameters instead. What percentage of users perceives a significant lag between action and reaction when using Smart TV functions? A simple, beautiful scalar quality parameter.
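
As a sketch of how such a metric could be computed (the threshold, the “noticeable” share and the sample data below are my assumptions, not anything a manufacturer publishes):

```typescript
// Hypothetical metric: percentage of users who perceive a significant lag.
const LAG_THRESHOLD_MS = 200;   // assumed point where a delay becomes noticeable
const NOTICEABLE_SHARE = 0.10;  // assumed share of laggy interactions a user will notice

// A user "perceives lag" if more than 10% of their interactions exceed the threshold.
function userPerceivesLag(latenciesMs: number[]): boolean {
  const laggy = latenciesMs.filter(ms => ms > LAG_THRESHOLD_MS).length;
  return laggy / latenciesMs.length > NOTICEABLE_SHARE;
}

// The single number to print on the box.
function perceivedLagPercent(perUserLatencies: number[][]): number {
  const affected = perUserLatencies.filter(userPerceivesLag).length;
  return (100 * affected) / perUserLatencies.length;
}

// Made-up field-trial data: per-user action-to-reaction latencies in milliseconds.
const trial = [
  [90, 120, 110, 80],   // snappy
  [100, 450, 700, 130], // laggy
  [140, 160, 150, 170], // snappy
  [300, 900, 250, 500], // laggy
];
console.log(`${perceivedLagPercent(trial)}% of users perceive a significant lag`); // 50%
```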

This Week in Twitter

  • Just posted a photo http://t.co/eQOJ2c8F #
  • Touch Cover for iPad was shown in ??. Type Cover is like Acer Iconia. w/o hw specs and prices it is hard to say if Surface is really cool #
  • RT @verge Italian airline Alitalia adds Motorola Xoom 2 for in-flight entertainment, customer service http://t.co/OJuaF7jB #
  • Windows Phone is bravely marching into its very own fragmentation nightmare. #
  • DIRECT-X SUPPORT ON WINDOWS PHONE. yes, that's big. #
  • Windows Phone Wallet: take that, Passbook. #
  • Is somebody already working on Android VM for Windows Phone 8? #
  • ??????? ????-??????. ????? ????????, ?? :) #
  • I was drinking Bubble Tea even before it appeared at McDonalds #

Powered by Twitter Tools

How free is free?

Linus Torvalds says “fuck you, NVIDIA”. He blames the company for being uncooperative with Linux driver support, and mentions that this is especially sad because the company is selling chips for Android devices, which are based on Linux.

For one day, I believed NVIDIA would not respond. Too bad, they did respond, and I believe it was very unfortunate for them. The best thing they could have done was to ignore that ill-considered emotional outburst of a probably very tired and jet-lagged Torvalds.

This story has made me think about open source.

Clearly, this situation could never happen on the closed source market. If a hardware manufacturer doesn’t see any commercial benefit in providing a working driver for Windows 7, it just says that its device is unsupported on Win7, and that’s it. Nobody would think of blaming it. Just because it has sold (and is still selling) so many devices for Windows XP doesn’t mean it has any obligation to do anything else, including supporting other operating systems.

In a sense, closed source is to open source what Western culture is to Asian culture. In the former, you write a contract setting clear and explicit obligations for both parties; as long as both parties comply with it, they don’t have to worry about anything else. In the latter, you present something as a gift but implicitly expect something in return, and if the other party fails to guess what exactly your idea of proper compensation is, you are disappointed. Nevertheless, you don’t show it directly and only hint at your disappointment, and they don’t get your hint either, and at some point you come to believe the other party is evil or amoral and start a war; and the other party also thinks you are evil, because they have spent so much trying to compensate you and you are still unhappy, so they declare war on you too.

But wait, isn’t open source supposed to be free as in freedom? Meaning, there are neither explicit nor implicit obligations when using it?

Not so easy.

According to the Free Software Foundation, free software is any software complying with the four essential freedoms: free to use, free to study and change the source code, free to redistribute, and free to base your own work on. When I read this list, I don’t see anything that would oblige NVIDIA to provide better support for Linux. Moreover, the first essential freedom suggests the very opposite, i.e. the absence of any obligations. Reading this definition alone, one might think the ultimate goal of people developing free software is, or should be, to give it out for free. The focus is on giving. They don’t expect anything in return; it is the very act of providing freedom for free that is essential.

And this is a very important point, because many people take these four essential freedoms and confuse them with other things, like copyleft or open source software. So, to reiterate: the only goals of free software are to be free and to provide freedom for free.

Now, the very same Free Software Foundation has conceived a different thing called copyleft. They did it out of pragmatic idealism, as they say, but as a matter of fact it is a step towards less freedom. Contrary to truly free software, with which you can do absolutely anything you want, copyleft forces you to license your own software under a no less powerful copyleft license, thus creating a viral effect but restricting your absolute freedom. Which, depending on your situation, may or may not be a huge issue for you. Even the least viral copyleft licenses come with an implicit, moral obligation to give something back in return. Copyleft restricts your freedom to use the software with the goal of forcing you to create more copylefted software, which the FSF considers an ethical and desirable social change.

Now, Stallman never tires of repeating that open source is not free software. According to him, the difference is that open source doesn’t pursue social change or ethical goals. Open source is simply believed to produce better software quality and to avoid vendor lock-in. Nevertheless, its definition is also viral, restricting the full freedom of the software’s users by requiring them to promote the virality. For the purposes of this article, it is in theory no different from copyleft.

In reality, open source also adds some implicit obligations to the users, at least in the mind of Linus Torvalds.

Reconstructing Mr. Torvalds’ logic: NVIDIA earns a lot of money by selling chips for Android. As far as I know, designing and manufacturing the Tegra chipset doesn’t require using any software created by Mr. Torvalds. And if any open source software has been used in this process at all, it is very improbable that the particular copyleft license explicitly required NVIDIA to provide desktop Linux drivers, which are quite a different product. The only logic Mr. Torvalds could have in mind is this: because NVIDIA earns money making chips, which are then used by handset manufacturers to make Android phones, and Android has a Linux kernel, and the Linux kernel is open source software, NVIDIA has implicit obligations towards the open source community in general and Linux in particular, and therefore must invest in a market that doesn’t bring them a cent (namely, developing Linux drivers for desktop operating systems).

This is exactly why so many enterprises are very careful about using any open source software. As soon as we get into moral and implicit obligations, things tend to become very fuzzy. You might be a happy camper using free-as-in-beer open source software, until one day some random guy you don’t even know says “fuck you” straight into the camera, and it turns out this guy is influential, and his video goes viral on YouTube, and your PR is suddenly nuked.

The irony here is that NVIDIA doesn’t even need or require Linux, or this whole open source thing in general. If Android weren’t based on Linux, or didn’t use any open source software, it would be just as popular among end users as it is today, and NVIDIA would be just as happy to design and manufacture chipsets for it.