Product or Platform

The infamous internal-post-accidentally-turned-external by Steve Yegge (now deleted, but saved for history in many places, for example here) is a typical rant. You know, the kind of rant where you start writing about one big topic, then jump to another, unrelated one, and then bounce between the two, trying to come up with a conclusion that combines them plausibly enough to pass for a neutrally tempered educational article rather than an outburst of accumulated emotions.

I like rants. I like passionate people, and I like watching them show their world. And, as a rule of thumb, rants usually deliver a valuable insight or an unexpected point of view.

This time, it was the Product or Platform story. Summed up in a nice bite-sized paragraph, it goes like this.

Great products must hit a nerve with their users. Product managers are bad at predicting where exactly that nerve is before their product hits the market. Thus, they often have to adjust the product (sometimes drastically) in further iterations. Therefore, you should develop not a product in the first place, but a platform for making products. This allows for quick post-launch product adjustments, and for external contributions to killer product features.

Like many right ideas, it is not new per se; what is new is only that it has been written down so explicitly.

Take Axinom, for example. Instead of creating yet another version of their CMS, they did a revolutionary thing: they created a specialized UX platform, a collection of patterns, concepts, and technology that allows them to quickly produce highly ergonomic and immersive information management applications. Having that, they now have all the means to implement really killer apps for their customers. And they invented it well before Mr. Yegge's rant.

With the SilverHD DRM product, we went down a similar path. Based on our know-how about the typical business models used by different VoD shops in the European market, we created concepts and technology that allow entitlements (who is allowed to consume which media, under what restrictions) to be defined securely. The concrete DRM mechanism enforcing these entitlements was purposely left variable: we started with PlayReady and WM-DRM, and it may well be extended in the future with other DRMs such as Widevine or Marlin.
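The core of that design is separating the entitlement model from its enforcement. Here is a minimal sketch of the idea; all class and function names are invented for illustration and are not the actual SilverHD API:

```python
# A sketch of "entitlements separate from enforcement". All names here
# are invented for illustration; they are not the actual SilverHD API.
from dataclasses import dataclass, field


@dataclass
class Entitlement:
    """Who is allowed to consume which media, under what restrictions."""
    user_id: str
    media_id: str
    restrictions: dict = field(default_factory=dict)  # e.g. {"expires": "2012-12-31"}


class DrmBackend:
    """Interface each concrete DRM (PlayReady, Widevine, Marlin, ...) implements."""
    def issue_license(self, entitlement: Entitlement) -> str:
        raise NotImplementedError


class PlayReadyBackend(DrmBackend):
    def issue_license(self, entitlement: Entitlement) -> str:
        # A real implementation would talk to a PlayReady license server.
        return f"playready-license:{entitlement.user_id}:{entitlement.media_id}"


def grant(entitlement: Entitlement, backend: DrmBackend) -> str:
    # The entitlement model stays fixed; only the enforcing backend varies.
    return backend.issue_license(entitlement)
```

Swapping in another DRM means adding one more `DrmBackend` subclass, without touching the entitlement definitions themselves.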

Bungee

Switching jobs is always stressful. But changing from web development with Microsoft technologies to embedded Linux development is like bungee jumping. Not that I've ever bungee jumped. But I like overstated comparisons :)

Seriously, judge for yourself.

Before, I was proud that I had compiled the Linux kernel from source at least once (which is untypical for a hardcore Microsoft fan). Today, I recompile the kernel several times a week.

Before, I was proud that I knew what DirectShow filters and the filter graph are (which is untypical for a normal Silverlight developer). Today, I fix bugs in, and develop my own filters for, GStreamer, the open-source alternative to DirectShow.

Before, I thought HTTP and TLS were part of the operating system. Today, I'm fighting with GnuTLS, trying to cross-compile it properly.

Before, I thought 100 MB of source code was "a lot". Today, I'm working on 6 GB of sources.

Before, I had only heard about TS files: mysterious creatures that came from content providers and had to be transcoded ASAP into a more usual format. Today, TS is my common denominator, and I juggle all these PATs, PMTs, SCTs, PCRs, PIDs and PTSes (per stream).

Before, I thought a 1 GB video file was a full-length movie, 8 Mbps was a lot of bitrate, and 720p was HD video. Today, 1 GB is a short five-minute Full HD clip.

Before, I feared JavaScript, because you inevitably have to deal with HTML when working with JavaScript. Today, I fear JavaScript, because when I cross-compile the WebKit sources with too many optimizations, its JavaScriptCore engine exposes all kinds of weird errors.

Before, I was ironic about how low-level .NET 1.1 and .NET 2.0 were in comparison with Smalltalk. Eventually, Microsoft promoted C# to a reasonably high-level programming language in .NET 4.0. Today, I work in an environment where C++ is considered a high-level language, but overly complicated, while pure C is just the right level.

Before, I thought Windows 7 was on the verge of getting old. Today, my colleagues think Windows XP is not yet outdated.

So, in some respects, this is pretty much an "upside-down" experience, but I hope I will find my place in this new world, just like I found mine in web development seven years ago.
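As an aside, the TS juggling mentioned above is less mysterious than it sounds: an MPEG-TS packet is 188 bytes, starts with the sync byte 0x47, and carries a 13-bit PID identifying its stream. Here is a minimal parser sketch (the sample packet is fabricated):

```python
# Minimal MPEG-TS packet header parser. The 188-byte size, 0x47 sync
# byte, and 13-bit PID are standard (ISO/IEC 13818-1); the sample packet
# below is fabricated for illustration.

def parse_ts_header(packet: bytes) -> dict:
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a TS packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]     # 13-bit packet identifier
    payload_unit_start = bool(packet[1] & 0x40)     # new PES packet / section starts here
    return {"pid": pid, "payload_unit_start": payload_unit_start}

packet = bytes([0x47, 0x40, 0x00]) + bytes(185)  # the PAT always travels on PID 0
print(parse_ts_header(packet))  # {'pid': 0, 'payload_unit_start': True}
```

The PAT on PID 0 lists the PMT PIDs, the PMTs list the elementary-stream PIDs, and the PCR/PTS timestamps ride inside those streams; everything hangs off this one header.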

This Week in Twitter

  • Googled all-vegan restaurants in Zirndorf: 0. In Fürth: 0. In Erlangen: 0. In Nürnberg: 2, and one of them is closed. JFYI. #
  • This SHOULD be some kind of hoax. I can't believe it has really happened. http://t.co/Izu9Q3rX #
  • I liked a @YouTube video http://t.co/9diWyZQ4 Marlene Dietrich – Sag mir, wo die Blumen sind #
  • RT @sinosplice Translation fun: "thank you" = 10Q (English abbr.) = 3Q (Chinese abbr.) #
  • I favorited a @YouTube video http://t.co/61kxYGK3 ORIGINAL Elephant Painting #

Powered by Twitter Tools

This Week in Twitter

  • Taiwan is in ultrabook competition now: both Acer and Asus have released very sexy ubooks recently. #
  • Facebook is currently down. I wonder if twitter and Google+ were also down, where would we post about it? #
  • ???????????????????????????????? ???????????????? #
  • @XaocCPS ???. ?????? ?? ??????. #
  • simply epic strip, love it. http://t.co/KFB9yePY #
  • Edelste Rohstoffe, massiert noch lange nach dem Putzen, biologisch wirksam, wissenschaftliche Zusammensetzung: http://t.co/OHzemlXu #


This Week in Twitter

  • Amazing menu design and impressive typography with color coding. If it was a cookbook I'd ordered it right now. http://t.co/fO9bgLnN #
  • ????? ??????? ?????? ???? ?? ????. #shortiki http://t.co/xHn8XsnR #
  • ??????? – ??? ?? ?????????? ????, ????? ?????? ????? ?? ??????? ? ????????? ??? ???? ?? ?????. #shortiki http://t.co/JhOojME5 #
  • "It's really hard to design products by focus groups. A lot of times, people don't know what they want until you show it to them." #
  • "Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work." #
  • "And the only way to do great work is to love what you do. If you haven't found it yet, keep looking. Don't settle." #
  • RT @TechCrunch Upcoming Film Could Be Litmus Test For Theater And On-Demand Release Strategy http://t.co/vqjQ6Rv1 #


Security of Web 2.0

There are quite a lot of white papers about security at the software level. You know, all those situations where an attacker sends data in a format the software doesn't expect, and the software fails; or injects pieces of code into registration-form fields not intended for that, ending up with the code being executed; or similar issues.

There are far fewer works describing the security of specific existing and popular Web 2.0 services (Facebook, Flickr, Google+, Picasa, Xing, LinkedIn, etc.). But at least there are some.

What seems to be absolutely absent are white papers describing the security (and more specifically, the privacy issues) of the Web 2.0 ecosystem as a whole. Meanwhile, the situation there is quite remarkable. Fans of conspiracy theories would immediately assume that the intelligence services of many countries are currently holding their breath, observing the rapid and voluntary de-privatization of many netizens, gathering all the information, and preventing hackers from publishing their findings. Well, if that were true, you would NOT be reading this text right now, because it would never have been published. A more rational explanation is that lazy me didn't do any research before writing this blog post, and instead just bluntly asserted that there are no white papers on this topic to make the post more appealing.

Anyways.

To depict the current status quo, I'm going to show a couple of legal techniques for gathering private information about a person from public sources.


1. Profile Scouting. This is obtaining links to the public profiles of a target person in a given Web 2.0 service:
                a) By known real name. Many Web 2.0 services allow (and even motivate) their visitors to search for profiles by real name. This step can be performed manually for each Web 2.0 service using the corresponding search field, or automatically using pipl.com.
                b) By known username. Some Web 2.0 services display the username publicly, either on the web page itself or at least as part of the public profile URL. So the public profile URL can be constructed manually and checked to see whether a given Web 2.0 service returns a profile or a 404 page, or an automated service such as namechk can be used for this task.
                c) By known place of living, company, school, or interests. Many Web 2.0 services allow searching by these kinds of metadata; from the resulting list, the target person has to be picked out using some additional information, for example their known appearance (by looking at the profile photo). A variation of this method uses groups or forums: for example, if the target person is interested in some dance style and a Web 2.0 service hosts a corresponding group, it is possible to find them by looking through the group's members.
                d) By tagging. For example, a group photo on Facebook might be tagged with the corresponding profiles; knowing the appearance of the person of interest, it is possible to obtain their public profile. Another variation is the tagging of Flickr photos, where tags contain person names, cities, and event names.
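Technique (b) is easy to illustrate. The sketch below only builds the candidate URLs; the URL patterns are illustrative assumptions rather than a verified list, and the probing itself (one request per URL, checking 200 vs. 404) is omitted to keep the example offline:

```python
# Sketch of technique 1b: probing public profile URLs by username.
# The patterns below are illustrative assumptions only; real services
# may use different URL schemes or return soft 404s.

PROFILE_URL_PATTERNS = [
    "https://twitter.com/{username}",
    "https://www.flickr.com/people/{username}",
    "https://github.com/{username}",
]


def candidate_profile_urls(username: str) -> list:
    """Build the URLs to probe; an HTTP 200 suggests the username is
    taken on that service, a 404 that it is not."""
    return [pattern.format(username=username) for pattern in PROFILE_URL_PATTERNS]


# The probing loop (one HEAD request per URL) is exactly what services
# like namechk automate across hundreds of patterns.
```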


2. Profile Mapping. Given a profile in one Web 2.0 service, it is often easy to find profiles of the same person in other Web 2.0 services, for example by searching for the same known real name. Many folks out there use the same username (or the same couple of usernames) across several Web 2.0 services, so their profiles can be mapped that way. The easiest way to map a profile is simply a link: for example, it is possible to enter a link to a Flickr account in a Facebook profile and make it visible to everyone.


3. Social Graph Leveraging. This means analyzing the "friends" of a target profile. The technique comes in the following shapes:
                a) Leveraging a Faulty Security Concept. For example, the target person has closed their Facebook photos to public viewing but opened them to friends. A friend of the target person has a publicly available timeline and comments on one of the target person's photos. Faulty Facebook allows anybody to follow this comment and see the original photo, even though it ought to be visible only "to friends". I believe Facebook has had this bug at least since I joined it in 2009.
                b) Leveraging Different Privacy Settings. Let's say the target person has closed their photos to the public, but their friends haven't. Some friend publishes their own photo, showing themselves but also the target person (perhaps in the background or from behind, but not necessarily). Another variation of this technique is consuming the publicly available timeline of a friend of the target person, if it is known that they interact closely in real life (for example, study at the same university). By observing the events, lifestyle, and mood of the target person's friend, one can conclude that the target person should have a comparable mood and lifestyle, and perhaps participates in the same events.
                c) Second-Level Scouting. Let's say the target person A doesn't want to publicly befriend another person B (for whatever reason). But A's friends C, D, and E don't have this constraint and all have B among their friends. By analyzing the common friends of the friends, it is possible to find the missing link. This technique has quite limited usefulness: a typical Facebook profile has 100 to 200 friends, so the total number of friends of friends can approach 10,000 in the worst case, which is far too many to analyze manually, and I don't know of any ready-to-use software that would automate such "friend scouting".
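At its core, second-level scouting is just a set intersection over public friend lists. A toy sketch (the names and friend lists are made up):

```python
# Sketch of technique 3c: inferring a hidden link between A and B from
# their overlapping public friend lists. The friend lists are toy data;
# in practice they would come from publicly visible profiles.

def common_friends(friends_of: dict, a: str, b: str) -> set:
    """People who publicly befriend both a and b."""
    return friends_of.get(a, set()) & friends_of.get(b, set())

friends_of = {
    "A": {"C", "D", "E"},        # A's public friend list
    "B": {"C", "D", "E", "F"},   # B's public friend list
}

# Three mutual friends is a strong hint that A and B know each other,
# even though neither lists the other directly.
print(sorted(common_friends(friends_of, "A", "B")))  # ['C', 'D', 'E']
```

The manual bottleneck the text describes is exactly the size of `friends_of` once it covers all friends of friends; the intersection itself is trivial.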

Combining these three techniques sequentially, it is possible to achieve impressive results. For example, one might start looking up the target person A by searching for their real name and current city on Flickr. With a bit of luck, one finds only a couple of photos, most of them depicting the target person. One then goes to the Flickr profile of the photos' author, person R, and maps their profile to Facebook. On Facebook, with more luck, one can not only read the public timeline and obtain more photos, but also discover a couple of R's friends living in the same city, say persons H and D. By mapping H's profile to spaces.live.com it may be possible to obtain additional photos, and by mapping D's profile to a Web 2.0 travel-report service, one can obtain additional information about events that happened.

I do believe these techniques are quite legal, because they leverage only data made publicly available by the respective owners / copyright holders. If this were "problematic", then Google and the other spiders would deserve even more questioning and investigation.

On the other hand, depending on the exact situation and on what the researcher does with the information found, this can be anything from perfectly moral to absolutely cruel. In any case, the information often does not flow the way the target person intended, and that's why I think this is a security issue that has to be publicly discussed and addressed.

I don't know of any handy solution, besides trying to open my own social profiles to the greatest possible extent. If I cannot prevent this kind of information gathering, I at least want to lead and control it by providing most of the information myself, "first hand", thus minimizing possible misunderstandings or misinterpretations. But I do see that this approach is not suitable for every situation.

So what do you think about it? I'm kindly asking for your comments.