How to stop fearing and start using Kubernetes

The KISS principle ("keep it simple, stupid") is important for modern software development, and even more so in Data Engineering, where, due to big data and big costs, every additional system or layer without clear benefits can quickly generate waste and lose money.

Many data engineers are therefore wary about implementing and rolling out Kubernetes in their operational infrastructure. After all, 99.999% of the organizations out there are not Google, Meta, Netflix, or OpenAI, and for their tiny gigabytes of data and the two or three data-science microservices running internally as prototypes on a single hardware node, bare Docker (or at most docker-compose) is more than adequate.

So, why Kubernetes?

Before answering this question, let me show you how flat the learning curve of modern Kubernetes starts out.

First of all, we don’t need the original k8s; we can use the simple and reasonable k3s instead. To install a fully functional cluster, just log in to a Linux host and execute the following:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - --secrets-encryption

You can then execute

kubectl get node

to check if the cluster is running.

Now, if you have a Docker image with a web service inside (for example implemented with Python and Flask) listening on port 5000, you only need to create the following YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
  labels:
    app: my-microservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
        - name: my-microservice
          image: my-artifactory.repo/path-to-docker-image
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: my-microservice
spec:
  type: LoadBalancer
  selector:
    app: my-microservice
  ports:
    - port: 5000
      targetPort: 5000

Conceptually, Kubernetes manages the computing resources of the nodes belonging to the cluster in order to run Pods. A Pod is something akin to a Docker container. Usually, you don’t create Pods manually. Instead, you create a Deployment object, which then takes care of starting the defined number of Pods, watching their health, and restarting them if necessary. So in the first object defined above, of kind Deployment, we define a template that will be used whenever the Deployment needs to start yet another Pod. As you can see, inside the template you specify the path to the Docker image to run. There you can also specify everything else Docker needs to run it: environment variables, volumes, the command line, etc.

A Kubernetes cluster assigns IP addresses from its very own IP network to the nodes and Pods running there, and because your company network usually doesn’t know how to route to this network, the microservices are not accessible by default. You make them accessible by creating another Kubernetes object, of kind Service. There are different types of Services, but for now all you need to know is that if you set the type to LoadBalancer, k3s will expose your microservice to the rest of your corporate network by leasing a corporate network IP address and hosting a proxy service (Traefik) on it that forwards the communication to the corresponding Pod.

Now that we have our YAML file, we can roll out our tiny happy microservice to our Kubernetes cluster with

kubectl apply -f my-microservice.yaml

We can check whether it is running, watch its logs, or get shell access to the running Docker container with

kubectl get pod
kubectl logs -f pod/my-pod-name-here
kubectl exec -it pod/my-pod-name-here -- bash

And if we don’t need our service any more, we just delete it with

kubectl delete -f my-microservice.yaml

Why Kubernetes?

So far, we haven’t seen any advantages compared to plain Docker, have we?

Well, yes, we have:

  • We’ve got a watchdog that monitors Pods and can (re)start them, for example after a server reboot or if they crash for any reason.
  • If we have two hardware nodes, we can deploy our Pods with “replicas: 2”, and because we already have a load balancer in front of them, we get high availability almost for free.
  • If the microservice supports scaling by running several instances in parallel, we already have a built-in, industrial-grade load balancer for scaling out.

Besides, hosting your services in Kubernetes has the following advantages:

  • If at some point you need to hand your internal prototypes over to a separate DevOps team for professional operations, they will hug you to death when they learn your service is already kubernetized.
  • If you need to move your services from on-premises into the cloud, the effort to migrate, for example, to Amazon ECS is much, much higher than the changes needed to go from k3s to Amazon EKS.
  • You can execute time-scheduled batch workloads with a CronJob object, without needing to touch /etc/crontab on the hardware nodes.
  • You can define DAGs (directed acyclic graphs) for complicated workflows and pipelines using Airflow, Prefect, Flyte, Kubeflow, or other Python frameworks that will deploy and host your workflow steps on Kubernetes for you.
  • You can deploy HashiCorp Vault or another secret manager to Kubernetes and manage your secrets in a professional, safer way.
  • If your microservices need some standard, off-the-shelf software like Apache Kafka, RabbitMQ, Postgres, MongoDB, Redis, ClickHouse, etc., it can all be installed into Kubernetes with one command, and deploying additional cluster nodes will be just a matter of changing the number of replicas in the YAML file.
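To make the CronJob point concrete, here is a sketch of a nightly batch job; the schedule and the image path are placeholders you would adapt:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 3 * * *"   # standard cron syntax: every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: nightly-report
              image: my-artifactory.repo/path-to-batch-image  # placeholder
          restartPolicy: OnFailure
```

Apply it with `kubectl apply -f`, exactly like the Deployment above, and Kubernetes takes over the scheduling.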


If you only need to host a couple of prototypes and microservices, Kubernetes will immediately improve their availability and, more importantly, will be a future-proof, secure, scalable, and standardized foundation for the operational challenges to come.

Now that you’ve seen how easy the entry into the world of Kubernetes is, you no longer have the “steep learning curve” as an excuse for not using Kubernetes already today.

How to choose a database for data science

“I have five CSV files of one GB each, and loading them with Pandas is too slow. What database should I use instead?”

I often get questions like this from data scientists. Read on for a simple conceptual framework that will let you answer such questions yourself.

Conceptual framework

The time a data operation takes depends on the amount of data and the throughput of the hardware, as well as on the number of operations you can do in parallel:

Time ~ DataSize / (Throughput × Parallelization)

The throughput is the easiest part, because it is defined and limited by the electronics and their physical constraints. Here are the data sources, from slow to fast:

  • Internet (for example, S3)
  • Local Network (for example, NAS, also hard drives attached to your local network)
  • (mechanical) hard drives inside of your computer
  • SSD inside of your computer
  • RAM
  • CPU Cache memory / GPU memory if you use GPU for training
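A quick back-of-envelope calculation shows why the medium matters so much. The throughput numbers below are rough, typical orders of magnitude I am assuming for illustration, not benchmarks:

```python
# Rough read-time estimates for 5 GB of data on different media.
# Throughput figures are assumed, typical orders of magnitude.
DATA_SIZE_GB = 5

throughput_mb_s = {
    "internet (S3)": 50,
    "local network NAS": 120,
    "mechanical hard drive": 150,
    "SATA SSD": 500,
    "RAM": 20_000,
}

for medium, mb_s in throughput_mb_s.items():
    seconds = DATA_SIZE_GB * 1024 / mb_s
    print(f"{medium:25s} ~{seconds:7.1f} s")
```

The same 5 GB that takes minutes over the internet is read from RAM in a fraction of a second.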

You can reduce the data processing time by moving the data to a faster medium. Here is how:

  • Download the data from the internet to your hard drive, or to a NAS on your local network if the data doesn’t fit on your machine.
  • As soon as you read a file once, your OS keeps its data in the file cache in RAM for further reads. Note that the data will eventually be evicted from memory, because RAM is usually much smaller than the hard drive.
  • If you want to prevent eviction for particular files that fit in memory, you can create a RAM drive and copy the files there.

There are a couple of tricks to reduce the data size:

  • Compress the data with any lossless compression (zip, gzip, etc.). You will still need to decompress it to start working on it, but reading from a slow data source like a hard drive will be quicker because there is less data to read, and the decompression happens in fast RAM.
  • Partition the data, for example by month or by customer. If you only need to run a query related to one month, you skip reading the data for the other months. Even if you need the full data (for example, for ML training), you can spread your data partitions across different computers and process them simultaneously (see parallelization below).
  • Store the data not row by row, but column by column. That’s the same idea as partitioning, but column-wise: if some queries don’t need a particular column, a column-based file format lets you skip reading it, reducing the amount of data.
  • Store additional data (an index) that helps you find rows and columns inside your main file. For example, you can create a full-text index over the columns containing free English text; then, if you only need the rows containing the word “dog”, the index lets you read only the bytes of the rows that contain “dog”, so you read less data. Computer science is most excited about this area and keeps inventing data structures for especially fast indexes. For data science, this is rarely helpful, because we often need to read all the data anyway (e.g., for training).
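The compression trick above is easy to see on a repetitive “log file”: a slow disk or network then has far fewer bytes to deliver, and the decompression happens in fast RAM.

```python
import gzip

# A repetitive log shrinks dramatically under lossless compression,
# so the slow medium has far fewer bytes to deliver.
log = b"2024-01-01 order completed in 12.5 ms\n" * 10_000
compressed = gzip.compress(log)

ratio = len(log) / len(compressed)
print(f"raw: {len(log)} bytes, compressed: {len(compressed)} bytes, ratio ~{ratio:.0f}x")
```

Real data compresses less spectacularly than this toy example, but column-oriented formats routinely reach high ratios because values within one column are similar.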

As for parallelization, it is the solution when your existing hardware is under-used and you want to speed things up by fully using it, or when you can provision more hardware (and pay for it) to speed up your processes. At the same time, this is the most complicated factor to optimize, so don’t go there before you have tried the tricks above.

  • Parallelization on the macro level: split your training set across several servers, read the data and train the model simultaneously on each of them, compute weight updates with backpropagation, apply them to a central store of weights, and redistribute the weights back to all servers.
  • Parallelization on the micro level: put several examples from your training set onto the GPU at once. Similar ideas are used on a small scale by databases and by frameworks like numpy, where this is called vectorization.
  • Parallelization on the hardware level: attach several hard drives to your computer in a so-called RAID array; when you read from it, the hardware reads from all the drives in parallel, multiplying your throughput.
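The macro-level split/process/combine pattern can be sketched in a few lines. This is only an illustration of the pattern; for CPU-bound Python work you would use processes or separate machines rather than threads:

```python
from concurrent.futures import ThreadPoolExecutor

# Macro-level parallelization sketch: split the data into partitions,
# process each partition in its own worker, then combine partial results.
data = list(range(1_000_000))
partitions = [data[i::4] for i in range(4)]  # 4 roughly equal partitions

def process(partition):
    return sum(partition)  # stand-in for the real per-partition work

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_results = list(pool.map(process, partitions))

total = sum(partial_results)
print(total)  # identical to processing everything sequentially
```

The hard parts in real systems, such as redistributing model weights or handling a failed worker, are exactly why parallelization should be your last resort.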

Parallelization is sometimes useless and/or expensive and hard to get right, so this is your last resort.

Some practical solutions

A database is a combination of a smart compressed file format, an engine that can read and write it efficiently, and an API so that several people can work on the data simultaneously, data can be replicated between database nodes, and so on. When in doubt, use ClickHouse for data science, because it provides top performance for many different kinds of data and use-cases.

But databases are the top-notch full package, and often you don’t need all of it, especially if you work alone on some particular dataset. Luckily, separate parts of databases are also available on the market, under various names like embedded database, database engine, or in-memory analytics framework.

An embedded database like DuckDB, or even just a file format like Parquet (Pandas can read it using fastparquet), is in my opinion the most interesting option and might already be enough.
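To make “embedded database” concrete: Python itself ships one, SQLite, which runs in-process with no server. SQLite is row-based, unlike DuckDB or ClickHouse, so this only illustrates the embedding idea, not columnar performance:

```python
import sqlite3

# An embedded database runs inside your process: no server, no network,
# just a library call. Here an in-memory SQLite database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_nr INTEGER, duration_ms REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 12.5), (2, 7.0)])

(avg,) = con.execute("SELECT AVG(duration_ms) FROM orders").fetchone()
print(avg)  # 9.75
```

DuckDB offers the same in-process convenience with a columnar engine, which is why it fits analytical workloads so well.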

Here is a short overview:

Data compression:
  • ClickHouse: native compressed format with configurable codecs; also some basic support for Parquet.
  • DuckDB: native compressed format; also supports Parquet.
  • Parquet (fastparquet): configurable compression codecs.

Partitioning:
  • ClickHouse: native partitioning; also basic partitioning support for Parquet.
  • DuckDB: some partitioning support for Parquet.
  • Parquet (fastparquet): manual partitioning using different file names.

Column-based storage:
  • ClickHouse: native and Parquet, as well as other file formats.
  • DuckDB: native and Parquet.
  • Parquet (fastparquet): Parquet is a column-based format.

Indexes:
  • ClickHouse: primary and secondary index support for the native format; no index support for Parquet (?).
  • DuckDB: min-max index for the native format; no index support for Parquet (?).
  • Parquet (fastparquet): not supported by fastparquet to my knowledge, but Parquet as a file format has indexes.

Backup is working!

So guys, I have got some lessons learned about restoring from a backup this week!

Every computer geek is ashamed of his backup strategy. Most of us know we should back up our data, but we don’t do it. Those who do back up regularly have never tried restoring from a backup file to check that it actually works. And only geek gods like Umputun back up to several locations and probably also check their backups regularly.

Until this week, I belonged to the minority above: I was making my backups but never tried to restore from them. This is an interesting psychological trap. On the one hand, you believe your backup software works properly (because if it doesn’t, backing up makes no sense at all), but on the other hand, you are scared to restore from a backup file, because if that goes wrong, it will both destroy your original data and leave you with an unusable backup file.

And it also doesn’t build trust that my backup software was developed by AOMEI, a Chinese company (no, having a Chinese wife doesn’t make you trust just any Chinese software more than before).

People who work as IT admins don’t have this problem, because they can always restore a backup onto a spare drive and check that it works without risking the original data. Or else one of their server drives dies and they are forced to restore to a new drive anyway.

The latter scenario finally happened to me this week. My boot drive (a 128 GB SSD) died. But don’t worry: I had backed it up (sector-wise) with AOMEI onto my small Synology NAS.

So I ordered and received a new SSD (thanks, Amazon Prime). Now, how would I restore my backup file onto this new drive? Even if I could attach the drive to the Synology (I would probably need some adapters for that!), I don’t think Synology can read the AOMEI file format, or that AOMEI has developed an app for Synology. I couldn’t even log in to the Synology web UI, because it requires credentials, and I had stored the credentials in KeePass, which, luckily for me, lives on a separate drive, but is not replicated to my other devices. And Synology doesn’t let you recover your password by e-mail; you can only reset it by pressing a hidden reset button with a needle. My NAS is physically located in a place I can hardly reach, so that would have been yet another adventure.

Lesson 1: store your backup NAS IP, username, and password somewhere other than the PC you will need to restore.

So I attached a DVD drive to my PC (for all of you Gen-Z: this is a cool technology from the last century, where you would store data on disks; no, the disks are not black and made of vinyl, but rainbow-colored and made of plastic) and installed a fresh Windows 10 onto my new SSD system drive.

After booting, I was happy to see that all the data on my second drive was still there, including the KeePass database. I just didn’t have KeePass to read it. No problem, I’ll just download it! I went to the KeePass web site, but the download didn’t want to start, probably because I was using some old version of the Edge browser. Okay. So first I needed to download and install Chrome. Thank God the Windows 10 installation automatically recognized all my drivers, including the NIC (for all of you Gen-Z: in the past, Windows didn’t have internet access after installation; you would spend some time configuring your dial-up connection).

So, I downloaded and started KeePass, copied the NAS credentials from it, and could now see the backup file. Cool. The next step was to restore it. Okay, I needed to download and install AOMEI. Thank God, the company is still in business, they still have a web site, and their newest software is still compatible with my backup file.

Lesson 2: download a portable version or an installer of your backup software and store it somewhere other than the PC you will need to restore.

I installed and started AOMEI and pointed it at the backup file. It said I needed to reboot into a system restore mode. Okay. After a restart, the AOMEI app booted instead of Windows and, after some scary warnings, started to restore the system. At this point I was also happy I didn’t need any special drivers for the NIC or anything else. It could have forced me to copy the backup file onto a USB stick first, though, and I am grateful it didn’t.

The restoration process took several hours, but after it finished, the PC rebooted again into my old Windows, with all software installed and the old data on the system drive restored!

Summary: I am happy! And I am less afraid of restores now!

Question: is there any home NAS on the market that lets you restore sector-wise system backups directly onto a new drive (without attaching it to your PC; you just insert the new drive directly into the NAS)?

On Death of the Russian Culture

I was born in the USSR. When my country died, I didn’t feel anything special, maybe just a little hope: this was a new opportunity to do better, to learn from our mistakes, to become a member of a peaceful global world.

Besides, I clearly separated the country – a bureaucratic monster, infected with communist ideology, useless, that destroyed lakes and rivers and killed millions of its own citizens in the Gulag – from the Russian culture.

With over 1000 years of history, the Russian culture came up with novel philosophical ideas to answer major questions of life, discovered natural laws, invented useful technology and created art on the international level.

The Russian culture survived enslavement by the Mongols, Tsarist regimes, the Communist Revolution, and two World Wars, so I didn’t need to worry back then, when the USSR fell apart. I had my Russian culture. It would live with me, and I’d pass it on to the new generations.

Well, not after February 24th, 2022.

All the dear old fairy tales my parents have read to me in my childhood, all the lullaby songs they sang to me, all their explanations of what is good and what is bad, all the Russian ways I know how to live life and how to be successful and how to resolve conflicts and how to solve problems, all the religion and philosophical ideas, all my favorite, deeply meaningful songs, smart movies, tender cartoons, awesome books, and finally, the most important, my understanding of what is Love and how to love that I have learned from my mom – all of it became obsolete.

Yes, it still lives within me. But how am I supposed to pass it on to the new generations? The culture of the very nation that did Bucha, Mariupol, Kharkiv; that raped, beat, tortured, and murdered children, women, the elderly, and innocent civilians in hundreds of towns and villages in Ukraine. The culture of people who lied about their military involvement in Crimea and the Donbas, who poisoned their opponents worldwide, who threaten the whole world with their nuclear weapons, who let their leaders brainwash them, who shelled pregnant women, murdered little girls in front of their mothers, destroyed the homes of hundreds of thousands of people, and inflicted hunger and energy emergencies around the world?

How can I ask my nephew if he wants me to read a Russian fairy tale, if I am deeply ashamed of being Russian and I am calling myself “German” in public now?

Millions of brilliant Russians had ideas, discoveries, lessons learned, and art the whole world needed to hear and to benefit from. Now, all of it is gone. Now, a big part of me is dead.

Data Lakes are Technical Debt

I’ve been working on big data since 2014 and I’ve managed to avoid taking the technical debt of data lakes so far. Here is why.

Myth of reusing existing text logs

For the purpose of this post, let’s define: a data lake is a system that allows you a) to store data from various sources in its original format (including unstructured / semi-structured data), and b) to process this data.

Yes, you can copy your existing text log files into a data lake, and run any data processing on them in the second step.

This processing could be either a) converting them into a more appropriate storage format (more about that in just a minute) or b) working with the actual information – for example, exploring it, creating reports, extracting features, or executing business rules.

The latter is a bad, bad technical debt:

  • Text logs are not compressed, not stored by columns, and have no secondary indexes, so you waste storage space, RAM, CPU time, energy, carbon emissions, upload and processing time, and money whenever you casually work with the actual information contained in them.
  • Text logs don’t have a schema. Schemas play the same role in data pipelines as strict static typing in programming languages. If somebody just inserts one more column somewhere in the middle of your text log, your pipeline will, in the best case, fail weeks or months later (if you run it, say, only once a month), or, in the worst case, silently produce garbage, because it cannot detect the type change dynamically.
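A schema check can be this small. The `ingest` function below is hypothetical illustration code, not a real library, but it shows how a declared schema turns silent column drift into an immediate, loud failure:

```python
# Hypothetical schema-checked ingest: rows that do not match the declared
# column names and types are rejected immediately, instead of producing
# garbage weeks later in some downstream pipeline.
SCHEMA = {"order_nr": int, "duration_ms": float}

def ingest(row: dict) -> None:
    if set(row) != set(SCHEMA):
        raise ValueError(f"unexpected columns: {sorted(row)}")
    for column, expected_type in SCHEMA.items():
        if not isinstance(row[column], expected_type):
            raise TypeError(f"{column}: expected {expected_type.__name__}")
    # ... here the validated row would be written to storage ...

ingest({"order_nr": 42, "duration_ms": 12.5})         # fine
try:
    ingest({"order_nr": 42, "ms": 12.5, "extra": 1})  # schema drift
except ValueError as e:
    print("rejected:", e)
```

Real columnar databases enforce exactly this at insert time; a text log enforces nothing.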

Never work off text logs directly.

A more appropriate format for storage and data processing is a set of relational tables stored in a compressed, columnar format, with the possibility to add secondary indexes and projections, and with a fixed schema that checks at least column names and types.

And if we don’t work off the text logs directly, it makes no sense to copy them into a data lake – first, to avoid the temptation to use them “just for this one quick and dirty one-time report”, but also because you can read the logs from the system where they are originally stored, convert them into a proper format, and ingest them into your relational columnar database.

Yes, a data lake would provide a backup for the text logs. But YAGNI. The only use-case where you would need to re-import older logs is some nasty, hard-to-find bug in the import code. This happens rarely enough to justify a much cheaper backup solution than a data lake.

Another disadvantage of working with text logs in data lakes is that it encourages producing even more technical debt in the future.

Our data scientist needs a little more information? We “just add” one more column to our text log. But at some point the logs become so big and bloated that you can’t read them with the naked eye in any text editor, and you lose the primary goal of any text log: tracing the state of the system to enable offline debugging. And if you add the new column in the middle, some old data pipelines can silently break and burn on you.

Our data scientist needs information from our new software service? We will just write it to a new text log, because we’re already doing that in our old system and it “kinda works”. But in fact, logging some information:

log.info('Order %d has been completed in %f ms' % (order_nr, time))

takes roughly as much effort as inserting it into a proper, optimal, schema-checked data format:

db.insert(action='order_completed', order=order_nr, duration_ms=time)

but the latter saves time, energy, storage and processing costs, and avoids possible format mistakes.

Myth of decoupled usage

You can insert all the data you have now, store it in the data lake, and if somebody needs to use it later, they will know where to find it.

Busted. Unused data is not an asset, it is a liability:

  • you pay for storage,
  • you always have to jump over it when scrolling down a long list of data buckets in your lake,
  • it might contain personal data, so you have one more copy to check when fulfilling GDPR requirements,
  • it might contain passwords, secure tokens, company secrets, or other sensitive information that might be stolen or leak,
  • every time you change the technology or the cloud provider of your data lake, you have to spend time, effort, and money porting this unused data too.

Now, don’t get me wrong. Storage is cheap, and nothing makes me angrier at work than people who delete or refuse to store data just to save storage costs. Backup storage is not as expensive as data lake storage, and de-personalized parts of the data should be stored forever, just in case we might need them (but remember: YAGNI).

Storing unused data in a data lake is much worse than storing it in an unused backup.

Another real-world issue preventing decoupled usage of data is how quickly the world changes. Even if the data stored in the data lake is meticulously documented down to the smallest detail – which is rarely the case – time doesn’t stand still. Some order types and licensing conditions become obsolete, some features no longer exist, and the code that produced the data has already been removed, not only from the master branch but from the code repository altogether, because at some point the company switched from SVN to git and decided to drop the history older than three years, and so on.

You will find column names that nobody can understand, and column values that nobody can interpret. And that would be the best case. In the worst case, you will find an innocent-looking column named “is_customer” with values 0 and 1, mistake it for paying users, and use it in some report going up to the C-level, only to cringe painfully when somebody suddenly remembers that your company had an idea to build a business alliance 10 years ago, and this column was used to identify potential business partners for that cooperation.

I only trust the data I collect myself (or at least I can read and fully understand the source code collecting it).

The value of most data decays exponentially with time.

Myth of “you gonna need it anyway”

It goes like this: you collect data in small batches, say every minute, hour, or day. Having many small files makes your data processing slow, so you re-partition them, for example into monthly partitions. At this point you can also switch to a columnar, schema-checked store and remove unneeded data. These monthly files are still too slow for online, interactive user traffic (with expected latencies in milliseconds), so you run the next aggregation step and shove the pre-computed values into some quick key-value store.
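The re-partitioning step just described can be sketched in a few lines; the sample batches here are made-up placeholders:

```python
from collections import defaultdict

# Re-partitioning sketch: many small per-day batches are grouped into one
# partition per month, so later queries open a handful of large files
# instead of hundreds of tiny ones.
daily_batches = {
    "2024-01-03": [{"order": 1}],
    "2024-01-17": [{"order": 2}],
    "2024-02-05": [{"order": 3}],
}

monthly = defaultdict(list)
for day, rows in daily_batches.items():
    month = day[:7]  # "YYYY-MM"
    monthly[month].extend(rows)

print({m: len(rows) for m, rows in sorted(monthly.items())})
```

In a real pipeline each monthly partition would then be written out once, in a compressed columnar format.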

Storing the original data in its original format in the lake as the first step feels scientifically sound. It makes the pipeline uniform and is a prerequisite for reproducibility.

And at the very least, you will have three or more copies of that data (in different aggregation states and formats) somewhere anyway, so why not store one more, original copy?

I suppose this very widespread idea comes from some historically very popular big data systems such as Hadoop, Hive, Spark, and Presto (= AWS Athena), row-based stores like AWS Redshift (= PostgreSQL), or even document-based systems like MongoDB. Coincidentally, these systems are not only very popular but also have very high latency and/or waste a lot of hardware resources, given that some of them are written in Java (no system software should ever be written in Java) or use storage concepts unsuitable for big data (document or row stores). With these systems, there is no other way than to duplicate the data and store it pre-computed in different formats according to the consumption use-case.

But we don’t need to use popular software.

Modern column-based storage systems, based on the principles pioneered by Dremel and MonetDB, are so efficient that in most use-cases (say 80%) you can store your data exactly once, in a format suitable for a wide variety of queries and use-cases, and deliver sub-second responses for simple queries.

Some of these database systems (in alphabetical order):

  • Clickhouse
  • DuckDB
  • Exasol
  • MS SQL Server 2016 (COLUMNSTORE index)
  • Vertica

A direct comparison of ClickHouse running on an EC2 instance with data stored in S3 and queried by Athena (for a data mix and query types typical at my current employer, Paessler AG) showed that in this particular case ClickHouse is 3 to 30 times faster and at the same time cheaper than the naive Athena implementation.

Is it possible to speed up Athena? Yes: pre-aggregate some information, pre-compute some other information, and store it in DynamoDB. You’ll then get it cheaper than ClickHouse and “only” 50% to 100% slower. Is it worth having three copies of the data and employing a full-time DBA to monitor the health of all those pre-aggregation and pre-computation pipelines, as well as using three different APIs to access the data (Athena, DynamoDB, and PyArrow)? YMMV.


Data lakes facilitate technical debt:

  • Untyped data (that can lead to silent, epic fuck-ups)
  • Waste of time
  • Waste of money
  • Waste of hardware
  • Waste of energy and higher carbon footprint
  • Many copies of the same data (that can get out of sync)
  • Can go against the data minimization principle of the GDPR
  • Can create additional security risks
  • Can easily become a data graveyard if you don’t remove dead data regularly

Avoid data lakes if you can. If you still have to use them, mind the technical debt you are taking on and be explicit about it.

Flywheel Triangles

For a business to survive and become somewhat sustainable, it needs a self-sustaining process for earning money, such that it would be very hard to destroy the business through management errors or market changes.

I heard it called a “flywheel” at StayFriends.

Business flywheels are positive feedback loops leading to business growth, and they can be depicted as triangles. Here, for example, is what the StayFriends flywheel looked like:

Users generate content. The content can be used to generate ads or sold to other users, and the resulting revenue can be used to buy ads, bringing in new users.

Back then this effect was called a “viral loop”, but now I understand that flywheels of this kind exist in any successful business and are not limited to social networks.

Here, for example, is the flywheel of Axinom in its early years:

Any custom project developed on top of AxCMS resulted in more generalized features being added to the CMS and in more good-looking references for it, so it attracted more customers and thus generated new projects. Note that the quality of the produced software was not part of this triangle. Theoretically, you could run projects that left customers dissatisfied, but you had still added something to AxCMS and could use it to win other customers.

Winning new customers might be harder than winning new projects from existing customers, so Axinom worked on a second flywheel:

Having several flywheels supporting each other seems to be a feature of companies demonstrating sustainable growth. Here, for example, is the landscape of flywheels at Immowelt (including a potential new one):

It is interesting that even a damaged flywheel can support a business for decades. I can demonstrate this with the example of Metz, a TV manufacturer. Initially, when television was not yet ubiquitous, the company participated in the following flywheels:

Metz owned only the lower triangle, but the growth happened in the upper two: people discovered some cool new show they wanted to see, needed to buy their first TV set for it, and, having done that, became TV viewers, which motivated the creation of new TV shows, both directly and through more money from advertisers. This worked until virtually every family had a TV set. From then on, the link between TV sets and TV viewers was broken, and Metz remained with this:

Basically, they had only the dealer → TV sets pair, and a very weak third corner: by producing very high-quality TV sets, they could consistently win various tests (e.g., by Stiftung Warentest), and this helped to win a few more electronics dealers.

I guess the company survived for over 50 years in this state.

Every company is interested in having a healthy flywheel and in participating in several flywheels at the same time. I think the most realistic way to add a new flywheel is to reuse one or two existing nodes and add a new triangle.

For example, Metz could try to grow true Metz fans, adopting essentially the strategy of Apple and XiaoMi:

Well, in theory it looks good, but we all know that in practice there are all kinds of problems, ranging from a missing investment budget and a lack of innovation talent in the R&D department, to laws and regulations preventing some flywheels, strong market competition, etc.

The reason I’ve written this post is that the idea of depicting flywheels as triangles came to me in a dream, and for some reason I was very sure in that dream that a flywheel absolutely must have at least three corners. I cannot explain this logically, but anecdotally, if we look at Marx’s formula:

It is striking that it doesn’t provide any non-trivial insights into how to start growing a business.

To All Extroverts

Dear extroverts!

The lockdown restrictions haven’t changed my everyday life at all. I do the same things as before, meet (almost) the same people as before, and am outside just as often as before. And that despite fully complying with the restrictions.

I now hear that some of you can no longer stand being home alone, that you even feel lonely and miss your events and your networking.

I am very sorry for you. But.

Now you finally know how introverts feel in the world that you govern.

So I would be grateful if, next time:

– before you, as a customer, call me on the phone even though I clearly asked for communication by e-mail, because you don’t want to type your fingers sore and a phone call is simply easier and more pleasant for you,

– before you, as a client, invite me, your service provider, to an on-site workshop and read a PPT with the assignment out loud to me, instead of sending the PPT by e-mail and then answering two or three questions by e-mail, just because you want to meet me in person out of pure curiosity,

– before you invite me, your employee, to a team-building event where twenty well-paid adults spend the whole day throwing colored plastic balls at each other and later have to race backwards against each other in swim fins, because you think it is just so much fun,

– before you fly me across Germany to have me read my own diagram from a PDF file out loud to you, because you cannot understand it otherwise, since despite elementary school, secondary school, Abitur and university you never managed to learn how to read and understand German on your own,

– so before you do all these typically extroverted things next time, please think back to that one month when you had to sit at home alone and felt uncomfortable, and ask yourself whether it might have been more respectful towards the introverts around you in everyday life:

– to use the same communication channel they have chosen for themselves, even if it means typing your fingers a little sore, reading information on your own instead of having it read to you, and reining in your curiosity,

– to publish the program of your company parties and team-building events in advance, so that introverts can prepare themselves in time and, if necessary, raise their concerns.

Thank you for your attention.

Plumbing for IT People

At some point, water pipes stopped being manufactured to customer specification and started being produced in a set of fixed, standardized sizes.

The builder could no longer specify the size of his pipes to the millimeter. In return, he could combine water pipes from different manufacturers and benefit from the competition between them. The manufacturers benefited too, because they could a) produce pipes as mass-market goods and b) produce to stock rather than to order: if one customer doesn’t take a pipe, any other customer can buy it.

Now, apart from their length, round pipes have two more dimensions: the inner diameter and the outer diameter. Since back then the difference between the two was always the same, one of the two dimensions was considered redundant, and pipes were always labeled by their inner diameter. So a 1” pipe had an inner diameter of 1 inch. From a usability and customer-orientation point of view, this was exemplary at the time. First, the customer was mostly interested in the water throughput of his network, i.e. ultimately in the inner diameter; how much space the pipe took up in or on the wall was secondary to the builder (walls were thicker back then, too). Second, all documents like price lists, quotes, invoices, bookings and delivery notes were easier to write and read, because only one number instead of two had to be communicated.

So. What could possibly go wrong?

But then somebody had the idea that it is easier and faster to join water pipes with threads than to weld or solder them on site every time. Threaded connections, however, require fittings: small pieces of pipe in the desired shape (an elbow, a T-piece, a coupling and so on). When you screw fittings on or in, there is always a female and a male thread. If I want to extend a pipe, for example, I cut a male thread at its end and screw on a fitting, which must have a female thread of the matching size. If I have a 1” pipe, how big must the fitting’s female thread be? As big as the pipe’s outer diameter, which for a 1” pipe back then was 1 5/16”. So the matching fittings could have been labeled accordingly: a 1 5/16” fitting. But then the usability would have been poor. A customer with a 1-inch pipe would have had to remember that the matching fitting has the size 1 5/16”, and how should he know that, given that the pipe’s outer diameter had never been communicated to him anywhere (see above, for usability reasons)?

So it was actually decided back then to label fittings after the pipes they fit. When I buy a 1” coupling, i.e. a fitting with two female threads, not a single dimension of that coupling measures 1 inch. Instead, the fitting is exactly big enough to fit onto a pipe with an inner diameter of 1”, once a male thread has been cut onto that pipe.

So far, so good. But it was a close call. As an IT person, you can already smell something faintly here.

And then came the next idea! Namely, that cast iron need not be the only material for water pipes, and people started using steel, copper, brass and so on.

But now there was a little problem. You need less of the new materials to get pipes of equivalent strength. So one could, for example, have made the outer diameter smaller while keeping the same inner diameter. But that was not done. Why not? Because so many fittings had already been produced and installed that indirectly demanded a specific outer diameter. Remember, the 1” coupling has a female thread of 1 5/16” so that it fits a cast-iron pipe with a 1” inner diameter. A steel pipe with an inner diameter of 1 inch would have an outer diameter of only about 1 1/8”. The old fitting would then be 3/16” too big.

So it was actually decided to increase the inner diameter of the new pipes! And to keep the outer diameter! And to keep labeling the pipes by the inner diameter they no longer have!

This is so glorious that it deserves to be written down once more.

When I buy a 1” steel pipe today, neither its inner nor its outer diameter is one inch. Instead, its outer diameter equals the outer diameter a cast-iron pipe once had, and that cast-iron pipe, back then, had an inner diameter of one inch.

Well. The easiest way to summon a consultant in a hardware store is to march into the plumbing department with a caliper. “(Uh oh, the customer is measuring a fitting’s thread with a caliper, this can only go wrong…) Good day, can I help you?”


And then came the French. The ones with their metre. And they rightly complained that the whole world has agreed on the metric system, so what are all these inches doing here? That is why pipes in newer applications (not in household installations, but e.g. in rockets) tend to have metric sizes and metric threads. So it is not at all unlikely that you buy a pressure gauge whose connector has an M20 thread, i.e. a diameter of exactly 20 mm. The closest inch fitting would be the 1/2” fitting, which at 18.5 mm would be slightly too small. Apart from that, metric threads have a different pitch, i.e. a different number of threads per length.

So we can finally count ourselves lucky that threaded connections are no longer state of the art in household installations.

At last there are multilayer composite pipes, which are labeled with their outer diameter and wall thickness, in metric units. For example, 16×2.2 denotes a pipe with a 16 mm outer diameter and a 2.2 mm wall. These pipes can be bent by hand and joined in seconds by press-fitting. And a 100-meter roll can be carried by one person.

There is just one little problem. All manufacturers have read the Wikipedia entry on the walled garden and try to lock you into their system. Whoever uses “foreign” pipes or fittings gets no warranty. So when you run out of pipe or fittings, there is no “quickly driving to the hardware store to get new ones”; you have to stay in the system and get supplies where you bought them before. There are about 36 different press profiles, each of which can only be applied to the matching fittings. You could press all of them with one press tool (with interchangeable jaws). But that costs 25 euros per jaw set, and then you lose the warranty again. You are supposed to use the manufacturer’s original tool, which suddenly costs ten times as much for the same job (we are talking about more than 1000 euros for a small cordless device).

I would be in favor of introducing the offense of “design abuse” into copyright and patent law. I suspect that the exact dimensions of the fittings are kept secret not because these dimensions matter for making the fitting better than the competition’s (more stable, more convenient, cheaper), but solely to prevent competitors from producing cheaper and better compatible tools and accessories.


As if that were all. But it isn’t.

Question: I have a pipe labeled 1/2”. I want to route it through a wall. How big must the hole in the wall be? Answer: 40 mm. How do I get there? Well, 1/2” is famously 12.7 mm. But as we have learned, this number is irrelevant nowadays, so we throw it away immediately. Instead, we look it up in the table and find that the outer diameter of a 1/2” pipe is 20 mm. Now, according to the EnEV, we also have to insulate the pipe. The required insulation thickness is 50% of the pipe diameter. 50% of 20 mm is 10 mm. A 10 mm layer all around the 20 mm pipe makes a final size of 40 mm.
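For IT people, the whole hole-size calculation fits in a few lines. This is only a sketch of the arithmetic described above; the 20 mm entry is the table value for a 1/2” pipe mentioned in the text:

```python
# Nominal size -> actual outer diameter in mm, per the lookup table.
PIPE_OUTER_MM = {'1/2"': 20.0}

def hole_diameter_mm(nominal: str) -> float:
    """Hole size for an insulated pipe of the given nominal size."""
    outer = PIPE_OUTER_MM[nominal]
    insulation = 0.5 * outer       # EnEV: insulation thickness = 50% of pipe diameter
    return outer + 2 * insulation  # insulation layer on both sides of the pipe

print(hole_diameter_mm('1/2"'))  # 40.0
```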

I find this solution very elegant (not!). The legislators perhaps considered making manufacturers produce pre-insulated pipes. As a consumer, you would then buy a 40 pipe that you can measure with a caliper and get 40 mm, that you can join with a 40 fitting and push into a 40 mm hole, and so on. But it is not that simple: there are still many old installations that would have to be insulated retroactively anyway. And besides, where would that lead? We would lose this wonderful inch system that has proven itself for centuries! Can’t have that! That is roughly how it must have gone. So now installers always have to juggle two diameters: one for the pipes and fittings as they are, and one for the whole thing after it is insulated.

There are many IT people who are ashamed of their source code. Too many. One cure could be to renovate an old plumbing installation with your own hands.

Four Pieces of Advice for Product Managers of Machine Learning Products

The hype around Big Data and Machine Learning goes on and on, and more and more businesses seem to gain a competitive edge by developing data-based products. Product managers everywhere are considering how to use this new tech in their area.

Having made my first experiences designing data-based products, I want to share the lessons learned.

Artificial Intelligence is about having Less Control

In a traditional software product, we fully control its internal logic. By contrast, with a data-based product, we give up some part of the logic to the Machine Learning model. This is fully intended. In fact, this is why we want to use A.I. at all.

For example, if we want to recognize images of kittens, we could define exact rules for how to process and transform the colors of pixels ourselves. But this would be a tedious and complex, if not impossible, task, at least for human software developers. Instead, we train an ML model that accepts images as input and outputs the detection result, “somehow”.

And here is the dark side of the coin: this “somehow” means that we
1) cannot explain how exactly the model made its decision;
2) cannot find “just that one single screw” in the model to tune its behavior, because ML models can contain millions of “screws”, and as a human you cannot find the right one; and finally
3) have to accept that the ML model makes correct detections often, but not all the time.

In practice, this all leads to the following advice:

1. Design for a Mistake

Most ML models produce results that are only statistically correct, i.e. only some large fraction of the users will obtain correct or at least satisfactory results. However, a sizable share of users will not get satisfactory results, and this share can be uncomfortably large compared to traditional products, especially because an ML model can also make mistakes in situations that would appear unacceptable to us humans; for example, it could recognize a kitten in the photo of your CEO.

It is our task as product managers to proactively counteract possible user dissatisfaction. Here are some ideas for how to do so.

Human Moderation

We are working on a gallery of kitten images and we want only kittens to be published.
Wrong: if no kitten is detected in the uploaded image, prevent the user from posting it.
Right: inform the user that the image has been sent to a manual moderation process, and hire moderators.
Alternatively: post the image in an “unsorted” category, let users tag inappropriate images, and automatically hide any image detected as “no kittens” after even the very first user complaint.
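The “right” option above can be sketched as a small routing function. This is a minimal sketch; the 0.9 threshold and all callback names are illustrative assumptions, not a real API:

```python
def handle_upload(image, detect_kitten, publish, send_to_moderation):
    """Route an upload: publish confident detections, never hard-reject the rest."""
    confidence = detect_kitten(image)  # model's probability that a kitten is present
    if confidence >= 0.9:              # the threshold is a product decision
        publish(image)
        return "published"
    # Never a hard "no": a human moderator gets the final word.
    send_to_moderation(image)
    return "sent to moderation"
```

The key design choice: the model only decides the fast path; every low-confidence case falls through to a human instead of an error message.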

Sort Instead of Hide

We have a baby products shop. We want to recognize the likely age of the user’s baby and show them only items suitable for that age.
Wrong: hide the items for inappropriate ages.
Right: sort the items for inappropriate ages “below the fold”.
Alternatively: hide the inappropriate items, but provide clearly visible buttons “<<< Articles for younger babies” and “Articles for older babies >>>”.
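A minimal sketch of “sort instead of hide”, assuming a hypothetical per-item age-fit score produced by the model:

```python
def order_catalog(items, age_fit_score):
    """Rank items by the model's age-fit score; nothing is filtered out."""
    return sorted(items, key=age_fit_score, reverse=True)

items = ["rattle", "tricycle", "school bag"]
scores = {"rattle": 0.9, "tricycle": 0.4, "school bag": 0.1}  # assumed model output
print(order_catalog(items, scores.get))  # ['rattle', 'tricycle', 'school bag']
```

A wrong score only costs the user some scrolling, not access to the item.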

Suggest Instead of Fill

We have a marketplace for used products and we want to generate the headline for new listings automatically, to speed up and simplify the publishing process.
Wrong: make the headline non-editable and generate it automatically.
Right: fill in the generated headline as the predefined (default) value of the text box, and remove it as soon as the user starts typing something else.

Note that to implement this idea, the overall publishing workflow has to be changed: instead of starting with entering a headline, the user starts by uploading article images or filling in some structured data, which is needed to generate the headline for them.


Upsell

We want to help users estimate the value of their real estate.
Wrong: ask the user to fill out a form, then output the result of the model.
Right: output a wide range of valuations as soon as the user has typed the location, and let the user either enter more data to narrow the valuation range or, if measuring or obtaining the data is too complicated for them, suggest that they order a human valuation service.
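The narrowing behavior can be sketched as follows. This is not a real valuation model: the base estimate and the shrink factor are made-up illustrations of how uncertainty could decrease as the user supplies more fields:

```python
def valuation_range(base_estimate, n_fields_filled):
    """Return a (low, high) valuation range that narrows as more fields are filled."""
    # Illustrative assumption: uncertainty starts at +/-50% and shrinks
    # with every additional field the user provides.
    spread = base_estimate * 0.5 / (1 + n_fields_filled)
    return (base_estimate - spread, base_estimate + spread)

print(valuation_range(300_000, 0))  # (150000.0, 450000.0) -- location only
print(valuation_range(300_000, 4))  # (270000.0, 330000.0) -- four more fields filled
```

If the range is still too wide for the user's purposes, that is exactly the moment to offer the human valuation service.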

Feedback to Calm Down

We want to show ads that are as relevant to the users as possible.
Wrong: just show the ads produced by a recommender model.
Right: additionally, show a button allowing users to give us feedback (“Less of this topic”) or to turn off the ads (see the Upsell idea).

2. Test in Production

A traditional QA process includes comparing the documented intended logic of the product with the actual product behavior. But because we don’t know the exact logic of A.I. models, we cannot document it, so the testers cannot check it either. Besides, even if we could document the logic, the document would be so complicated that testing all the cases would be infeasible.

This leaves us with the following options:

Predefined Mockups

In your services, define some “magic” identifiers that bypass the A.I. pipeline and return predefined results. For example, we can add logic to our kitten recognition service that always returns “kitten” for images of 1×2 pixels and “no kitten” for images of 2×1 pixels, given that it is improbable that real users would upload images of these sizes in production. A tester can then at least test the non-A.I. parts of the product (uploading, publishing, searching, etc.).
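A sketch of such a magic-identifier bypass. The function names are hypothetical; the point is only that the mock check happens before the (possibly slow and nondeterministic) model is invoked:

```python
def classify(image_width, image_height, run_model):
    """Kitten classifier with magic test sizes that bypass the ML pipeline."""
    if (image_width, image_height) == (1, 2):
        return "kitten"      # predefined mock result for testers
    if (image_width, image_height) == (2, 1):
        return "no kitten"   # predefined mock result for testers
    return run_model(image_width, image_height)  # real pipeline otherwise

print(classify(1, 2, lambda w, h: "model was called"))  # kitten
```

With this in place, an end-to-end test of uploading and publishing never depends on what the model happens to predict that day.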

Exploratory Testing

Traditional exploratory testing is still possible with data-based products, but it is always more expensive. For example, the testers need to prepare test sets with images of kittens, dogs, horses, lions, dolphins, NSFW images, etc.

Exploratory testing is especially laborious for products that act on previous user behavior, such as recommenders, especially if the recommenders use out-of-band data like the current date and time, the weather outside, or interesting shows currently running on TV.

A/B Testing

Therefore, data-based products rely on A/B testing much more often than traditional products, because it compensates for the lack of traditional QA, which is often infeasible due to time or budget constraints.

3. Document for Reproducibility

Software documentation is often part of prescribed software development processes. In traditional products, the documentation of the business logic is often considered the main and most important part.

As a product manager of a data-based product, you will often be confronted with the question, “Where can I find the documentation of how the product decides to do this and that in such and such a situation?” And then you will need to explain that there is no such documentation, because nobody knows the logic, and nobody is able to know it either.

Here, we have to take one step back and remind ourselves that the main goal of documentation is to enable the maintenance and further development of the software. For data-based products, the key factors are being able to reproduce the model training, and being able to re-train the model with new or modified data. There are tools on the market to manage the datasets (the most popular being Pachyderm), and we have also created our very own framework for this:

4. Create Product Ideas with Data Mining, or Be Flexible

The usual process of product idea discovery includes user interviews and other UX research, filling out business canvases, performing SWOT analysis, scheduling brainstormings, and a lot of other important and effective tools.

What it does not include is checking whether the data in your data lake really has the quality level required to implement the Machine Learning model you need.

To give you some examples of what can diminish your data quality:

  • data that is not collected from the user and cannot be inferred from other data. For example, if we don’t ask how many rooms there are in the apartment, it is very hard or impossible to reliably infer this from text or images,
  • data that is not validated during collection; for example, users can enter literally anything as their zip code,
  • data that has been collected, but disappeared in further data processing steps. For example, the user id could be collected during the item search process, but then be removed in later steps due to the data minimization principle of the EU GDPR. In this case we cannot create recommendation models based on what other items this user was interested in in the past. This demonstrates that low data quality does not always mean bugs that can be quickly fixed; it can also be a well-intended state.

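A quick data-quality probe along these lines can be written in a few lines before any model work starts. The record layout and field names here are assumptions for illustration:

```python
import re

# Assumed shape of the records in the data lake.
records = [
    {"zip": "80331", "rooms": 3},
    {"zip": "banana", "rooms": None},  # free-text zip field, never validated
    {"zip": "10115", "rooms": None},   # room count was simply never asked for
]

# How many records have a plausible five-digit zip code and a room count at all?
valid_zip = sum(bool(re.fullmatch(r"\d{5}", r["zip"])) for r in records)
has_rooms = sum(r["rooms"] is not None for r in records)
print(f"valid zip: {valid_zip}/{len(records)}, rooms known: {has_rooms}/{len(records)}")
# valid zip: 2/3, rooms known: 1/3
```

Numbers like these, computed early, tell you whether the planned model is feasible before 80% of the data science effort is spent.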
Unexpectedly low data quality can hit you badly. In the worst-case scenario, you successfully pitch an idea, get it on your company roadmap, start implementing it, and only after investing 80% of the data science effort do you recognize that the resulting ML model cannot be used as expected, for example because it has very low accuracy, so too many users would be annoyed by it.

Be Flexible

This is why you need a plan B for these situations. One typical solution is to use some traditional business logic instead of the A.I. and accept a possibly lower-than-expected uplift of the product’s KPIs. Another solution is to understand exactly which part of the available data has good quality, and to quickly conceive, pitch, prioritize and implement a completely different product, possibly with another target group, product focus or KPIs, but at least technically feasible because it uses good-quality data.

Mine Data

Another, fundamentally different way to approach this problem is to generate product ideas with data mining.

Your data scientists start by looking at data about some topic (for example, the behavior of users who like kittens) and mangle it in different ways: creating clusters of users (lovers of white kitties versus black kitties), visualizing user retention, performing cohort analysis and so on. As a by-product, the data scientists identify and eliminate the bad data (all those bots and crawlers, and users posting NSFW pictures) and can generate insights that will hopefully be useful for generating an idea.

What if they discover that a sizable share of users “like” good kitten images within minutes after they appear online? You can make a new product feature out of it!

For example, for any new kitten image you could find the users who would most probably like it and send them a push notification, so that they can enjoy and like the image immediately, thereby increasing their retention on the web site.
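The targeting step could be sketched like this. The data layout and the 10-minute threshold are assumptions; a real recommender would be far more involved:

```python
from collections import defaultdict

# Assumed log format: (user, seconds between image going online and the like).
likes = [("alice", 120), ("alice", 300), ("bob", 86400), ("alice", 60)]

delays = defaultdict(list)
for user, delay in likes:
    delays[user].append(delay)

# Users whose likes arrive within 10 minutes on average are push candidates.
fast_likers = [u for u, d in delays.items() if sum(d) / len(d) < 600]
print(fast_likers)  # ['alice']
```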

The advantage of this process is that you are guaranteed to have data of at least reasonable quality before you start pitching and prioritizing your product idea.

The disadvantage is that you cannot guarantee that data mining will produce insights related to the current company goals and focus user groups, so you need to be flexible here too, and be able to make good products even though they do not always contribute to the current company focus.

Law of the Architecture Decomposition by User Interface


One of the key aspects of software architecture is the choice of the decomposition principle: how the software is divided into parts and via which interfaces these parts communicate with each other. Several factors influence the decomposition of software; some of them are technical or logical in nature, others are not related to computer science (such as politics, time and budget constraints, etc.). In this post, I’m discussing yet another factor influencing software architectures.

Prior Work

M. Conway stated in 1967:

Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.

Before reading this law, I didn’t know that software architecture could be influenced by factors unrelated to its purpose (stated in technical and business terms). Even after reading it, I was sure it was just some cynical rant and that in reality most architectures are not influenced by it. I was wrong and you were right, Mr. Conway.

Still, I think I have noticed yet another non-CS factor influencing architectures, one unrelated to politics and organizational charts.

The Law

Software architectures are decomposed using the same interfaces between modules that the software developers use to communicate with their computers.


The punch cards generation

Software developers who used punch cards as the primary user interface to their computers tend to think in terms of memory maps. You can in fact visualize the memory map of computer code by putting several stacks of punch cards on top of each other in the right order. So the typical interface between modules either uses some well-known RAM address to pass parameters and receive results, or uses concepts like stacks, decks or ring buffers, which can also easily be modeled with punch cards.

Typical programming language: Assembly.

Typical architecture decomposition tool: a big piece of paper hanging on the wall representing the memory map, with pencil marks on it defining the usage of each memory cell.

The command line generation

Software developers who worked with terminals on the command line see their computers as something that accepts a command with some parameters. So their typical programming language is C: a C function is more or less a command with some parameters. In the UNIX world it is indeed often the case that an API C function and the command-line executable bear the same name, so having learned how to use it in the shell, you can use it in exactly the same way in your C program.

Typical programming language: C

Typical architecture decomposition tools: structured programming, functions, and modules with several related functions, only some of which are publicly available to other modules and form the formal “interface” of the module that can be statically checked by the linker.

The GUI generation

Software developers who think of computers as something with a GUI tend to think in terms of objects and containers of objects. The desktop is a container of windows, each window is a container of other UI elements, and a UI element is a container of methods to be executed on mouse click.

Typical programming language: Smalltalk, C++, Java, C#, a lot of others

Typical architecture decomposition tools: OOP, polymorphic classes with virtual methods and inheritance. Modules can now contain several classes, only some of which are publicly available to other modules. There is granular control over which parts of the software are visible to whom and what can be overridden. One or several modules can be put together in a package, installed and managed by package managers, versioned, with explicit dependency tracking and semi-automatic documentation.

The web generation

Software developers who spend most of their time in the web browser or in mobile apps like chats and social networks tend to think in terms of hosts and the resources available on those hosts.

Typical programming languages: Node.js, PHP; but actually, the language doesn’t matter that much any more.

Typical architecture decomposition tools: RESTful web services (like web pages) and message buses (like chats). Just as the computer device doesn’t matter to the web generation (they interact not with their device, but with some web site somewhere in the cloud), their software can run on a 1-inch-by-1-inch device on their wrist, or have its parts running in 10 different countries around the globe, without changing anything in the architecture.

The generation Future

Observing the exponential growth of artificial intelligence and virtual reality, we can assume that the next generation of software developers will interact neither with a computer nor with some internet resource. Instead, their own cognitive abilities will be augmented by silicon-based intelligence and some combination of local and global information networks. We can imagine that this augmentation will be seamless, so that they won’t even recognize whether their order of a salad instead of a burger was their own decision or the advice of their health monitor, worried by their blood sugar levels. They will not interact with any hardware, software or resource; these things will just be parts of people’s personalities.

Typical programming language: actually, not a programming language, but a machine learning framework. Probably, something implementing the Neural Turing Machine.

Typical architecture decomposition tools: something from the areas related to AI psychology, AI parenting, AI training, education, self-control, self-esteem etc.