A three-minute guide to Ansible for data engineers

Ansible is a DevOps tool. Data engineers can be curious about the tech and tools used by DevOps, but rarely have more than three minutes to spend on a quick look over the fence.

Here is your three-minute read.

To deploy something onto an empty server, we first need to install Python there. So we open our terminal and use ssh to connect to the server:

localhost:~$ ssh myserver.myorg.com
maxim@myserver.myorg.com's password: **********

Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 6.5.0-26-generic x86_64)


Note that after the first connection, your DevOps colleague will probably configure an SSH client certificate as well as some sudoers magic, so that we don’t need to enter the password every time we use ssh, and therefore we can use ssh in scripts that do something automatically. For example, if we need to install pip on the remote server, we can do

ssh -t myserver.myorg.com sudo apt install python3-pip

Now, let’s say we have received ten brand-new empty servers to implement our compute cluster, and we need to configure all of them in the same way. You cannot pass a list of servers to the ssh command above, but you can to Ansible. First, we create a simple file hosts.yml storing our server names (our inventory):
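The file might look like this (the server names are illustrative; the group name my-compute-cluster is the one the commands refer to):

```yaml
my-compute-cluster:
  hosts:
    compute01.myorg.com:
    compute02.myorg.com:
    compute03.myorg.com:
```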



Now we can install pip on all compute servers (even in parallel!) with one command:

ansible -i hosts.yml my-compute-cluster -m shell -a "sudo apt install -y python3-pip"

This is already great, but executing commands like this has two drawbacks:

  • You need to know the state of each server in the my-compute-cluster group. For example, you cannot mount a disk partition before you format it, so you have to remember whether the partitions have already been formatted or not.
  • The state of all the servers has to be the same. If you have five old servers and one new, you want to format and mount the disk partition on the new one, but under no circumstances do you want to format the partitions of the old servers.

To solve this, Ansible provides modules that don’t always execute a command: they first check the current state and skip the execution if it is not necessary (so it is not “create”, it is “ensure”). For example, to ensure that the disk partition /dev/sdb is formatted with the file system ext4, you call

ansible -i hosts.yml my-compute-cluster -b -m filesystem -a "fstype=ext4 dev=/dev/sdb"

This command won’t touch the old servers and will only do something on the new one.

Usually, several configuration steps are required to prepare a server to host your data pipeline: OS patches need to be applied, software needs to be installed, security must be hardened, data partitions mounted, monitoring established. So instead of a bash script with commands like the ones above, Ansible provides a comfortable, readable role format in YAML. The following role prepare-compute-server.yml will, for example, update the OS, install pip, and format and mount the data filesystem:

- name: Upgrade OS
  apt:
    upgrade: yes

- name: Update apt cache and install python3 and pip
  apt:
    update_cache: yes
    name:
      - python3
      - python3-pip

- name: format data partition
  filesystem:
    fstype: ext4
    dev: /dev/sdb

- name: mount data partition
  mount:
    path: /opt/data
    src: /dev/sdb
    fstype: ext4
    state: mounted

Roles like this are meant to be reusable building blocks and shouldn’t really depend on the rollout project you are currently doing. To facilitate this, it is possible to use placeholders and pass parameters to the roles using Jinja2 syntax. You also have loops, conditional execution and error handling.
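As a sketch, a parameterized task inside such a role could look like this (the variable names data_partitions, item.path, item.device and data_fstype are made up for illustration):

```yaml
- name: Mount all data partitions
  mount:
    path: "{{ item.path }}"
    src: "{{ item.device }}"
    fstype: "{{ data_fstype | default('ext4') }}"
    state: mounted
  loop: "{{ data_partitions }}"
  when: data_partitions is defined
```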

To do some particular rollout, you would usually write a playbook, where you specify which roles have to be executed on which servers:

- hosts: my-compute-cluster
  become: true    # indication to become root on the target servers
  roles:
    - prepare-compute-server

You can then commit the playbook to your favorite version control system, to keep track of who did what and when, and then execute it like this:

ansible-playbook -i hosts.yml rollout-playbook.yml

Ansible has a huge ecosystem of modules that you can install from Ansible Galaxy (similar to PyPI), and many more features. The most notable is that instead of having a static inventory of your servers, you can write a script that fetches your machines using some API, for example the EC2 instances from your AWS account.

Alternatives to Ansible are Terraform, Puppet, Chef and Salt.

How to make yourself like YAML

In 2003, when I was employed at straightec GmbH, my boss was one of the most brilliant software engineers I’ve ever met. I’ve learnt a lot from him and I am still applying most of his philosophy and guiding principles in my daily work. To get an impression of him, it’s enough to know that we used Smalltalk to develop a real-world, successful commercial product.

One of his principles was: “if you need a special programming language to write your configuration, it means your main development language is crap”. He would usually proceed by demonstrating that a configuration file written in Smalltalk is at least no worse than a file written in INI format or in XML.

So, naturally, I was prejudiced against JSON and YAML, preferring to keep my configurations and my infrastructure scripts in my main programming language, in this case Python.

Alas, life forces you to abandon your principles from time to time, and the need to master and to use Ansible and Kubernetes has forced me to learn YAML.

Here is how you can learn YAML if you are in principle against it, but have to learn it anyway.

This is YAML for a string

Some string

and this is for a number

42
And this is a one-level dictionary with string as keys and strings or numbers as values

key1: value1
key2: 3.1415
key3: "500" # if you want to force the type to be string, use double quotes

Next is a one-dimensional array of strings

- item 1
- item 2
- third item
- strings can be utf-8

Nested dictionaries (the nesting key names here are illustrative):

server1:
  gpu:
    used: 1
    power_consumption: 150
  cpu:
    used: 1
    power_consumption: 2
server2:
  gpu:
    used: 0
You can glue nested levels together using flow style like this:

server1: { gpu: { used: 1, power_consumption: 150 } }

A dictionary having an array as some value:

scalar_value: 123
array_value:
  - first
  - second
  - third
another_way_defining_array_value: ["first", "second", "third"]

Now something that I was often doing wrong (and still do wrong from time to time): an array of dictionaries. Each dictionary has two keys: “name” and “price”.

- name: Chair
  price: 124€       # note that price is indented to the same level as name
                    # and stays directly under it; any other position is incorrect.
- name: Table
  price: 800€

# note that there is nothing special in the "name", you can use any key to be first:

- price: 300€
  name: Another table       

- price: 12€
  name: Plant pot

Just to be perfectly clear, the YAML above is equivalent to the following JSON:

[
  {
    "name": "Chair",
    "price": "124€"
  },
  {
    "name": "Table",
    "price": "800€"
  },
  {
    "price": "300€",
    "name": "Another table"
  },
  {
    "price": "12€",
    "name": "Plant pot"
  }
]

Finally, you can put several objects into one file by delimiting them with ---:

---
type: Container
path: some/url/here
replicas: 1
label: my_app
---
type: LoadBalancer
selector: my_app
port: 8080

This should cover about 80% of your needs when writing simple YAML.

If you want to continue learning YAML, I recommend reading about

  • how to write a string spanning over several lines
  • how to reference one object from another inside of the same YAML
  • the Jinja2 templating syntax that is very often used together with YAML

How to stop fearing and start using Kubernetes

The KISS principle (“keep it simple, stupid”) is important for modern software development, and even more so in Data Engineering, where due to big data and big costs every additional system or layer without clear benefits can quickly generate waste and lose money.

Many data engineers are therefore wary when it comes to implementing and rolling out Kubernetes in their operational infrastructure. After all, 99.999% of the organizations out there are not Google, Meta, Netflix or OpenAI, and for their tiny gigabytes of data and two or three data-science-related microservices running internally as prototypes on a single hardware node, bare Docker (or at most, docker-compose) is more than adequate.

So, why Kubernetes?

Before answering this question, let me show you how flat the learning curve of modern Kubernetes starts.

First of all, we don’t need the original k8s; we can use the simple and reasonable k3s instead. To install a fully functional cluster, just log in to a Linux host and execute the following:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - --secrets-encryption

You can then execute

kubectl get node

to check if the cluster is running.

Now, if you have a Docker image with a web service inside (for example implemented with Python and Flask) listening on port 5000, you only need to create the following YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
  labels:
    app: my-microservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
        - name: my-microservice
          image: my-artifactory.repo/path-to-docker-image
          ports:
            - containerPort: 5000

---
kind: Service
apiVersion: v1
metadata:
  name: my-microservice
spec:
  type: LoadBalancer
  selector:
    app: my-microservice
  ports:
    - port: 5000
      targetPort: 5000

Conceptually, Kubernetes manages the computing resources of the nodes belonging to the cluster to run Pods. A Pod is something akin to a Docker container. Usually, you don’t create Pods manually. Instead, you create a Deployment object, and it takes care of starting the defined number of Pods, watching their health, and restarting them if necessary. So, in the first object defined above, of kind Deployment, we define a template that will be used whenever the Deployment needs to run yet another Pod. As you can see, inside the template you specify the path to the Docker image to run. There, you can also specify everything else necessary for Docker to run it: environment variables, volumes, command line, etc.

A Kubernetes cluster assigns IP addresses from its very own IP network to the nodes and Pods running there, and because your company network usually doesn’t know how to route to this network, the microservices are not accessible by default. You make them accessible by creating another Kubernetes object of kind Service. There are different types of Services, but for now all you need to know is that if you set the type to LoadBalancer, k3s will expose your microservice to the rest of your corporate network by leasing a corporate network IP address and hosting a proxy service on it (Traefik) that forwards the communication to the corresponding Pod.

Now that we have our YAML file, we can roll out our tiny happy microservice to our Kubernetes cluster with

kubectl apply -f my-microservice.yaml

We can see if it is running, watch its logs, or get shell access to the running container with

kubectl get pod
kubectl logs -f pod/my-pod-name-here
kubectl exec -it pod/my-pod-name-here -- bash

And if we don’t need our service any more, we just delete it with

kubectl delete -f my-microservice.yaml

Why Kubernetes?

So far, we didn’t see any advantages compared to Docker, did we?

Well, yes, we did:

  • We’ve got a watchdog that monitors Pods and can (re)start them, for example after a server reboot or if they crash for any reason.
  • If we have two hardware nodes, we can deploy our Pods with “replicas: 2”, and because we already have a load balancer in front of them, we get high availability almost for free.
  • If the microservice supports scaling out by running several instances in parallel, we already have a built-in, industrial-grade load balancer for it.

Besides, hosting your services in Kubernetes has the following advantages:

  • If at some point you need to hand over your internal prototypes to a separate DevOps team for professional operations, they will hug you to death when they learn your service is already kubernetized.
  • If you need to move your services from on-premises into the cloud, the effort to migrate, for example, to Amazon ECS is much, much higher than the changes needed to go from k3s to Amazon EKS.
  • You can execute time-scheduled batch workflows with a CronJob object, without the need to touch /etc/crontab on the hardware nodes.
  • You can define DAGs (directed acyclic graphs) for complicated workflows and pipelines using Airflow, Prefect, Flyte, Kubeflow or other Python frameworks that will deploy and host your workflow steps on Kubernetes for you.
  • You can deploy HashiCorp Vault or another secret manager to Kubernetes and manage your secrets in a professional, safer way.
  • If your microservices need some standard, off-the-shelf software like Apache Kafka, RabbitMQ, Postgres, MongoDB, Redis, ClickHouse, etc., it can all be installed into Kubernetes with one command, and deploying additional cluster nodes will be just a matter of changing the number of replicas in the YAML file.
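For example, a CronJob for a hypothetical nightly batch job might look like this (the name, schedule and image path are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-etl
spec:
  schedule: "0 3 * * *"        # every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: nightly-etl
              image: my-artifactory.repo/path-to-etl-image
```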


If you only need to host a couple of prototypes and microservices, Kubernetes will immediately improve their availability and, more importantly, will be a future-proof, secure, scalable and standardized foundation for coming operational challenges.

Now that you’ve seen how easy the entry into the world of Kubernetes is, you no longer have the “steep learning curve” as an excuse for not using Kubernetes today.

How to choose a database for data science

“I have five CSV files, one GB each, and loading them with Pandas is too slow. What database should I use instead?”

I often get questions like this from data scientists. Continue reading to obtain a simple conceptual framework that lets you answer such questions yourself.

Conceptual framework

The time some data operation takes depends on the amount of data and the throughput of the hardware, as well as on the number of operations you can do in parallel:

Time ~ DataSize, Throughput, Parallelization

The throughput is the easiest part, because it is defined and limited by the electronics and its physical constraints. Here are the data sources from slow to fast:

  • Internet (for example, S3)
  • Local Network (for example, NAS, also hard drives attached to your local network)
  • (mechanical) hard drives inside of your computer
  • SSD inside of your computer
  • RAM
  • CPU Cache memory / GPU memory if you use GPU for training

You can reduce the data processing time by moving the data to the faster medium. Here is how to:

  • Download the data from the internet to your hard drive, or to a local network NAS if the data doesn’t fit into your machine.
  • As soon as you read a file once, your OS will keep its data in RAM in the file cache for further reads. Note that the data will eventually be evicted from memory, because RAM size is usually much smaller than hard drive size.
  • If you want to prevent eviction for some particular files that fit in memory, you can create a RAM drive and copy the files there.

There are a couple of tricks to reduce data size:

  • Compress the data with any lossless compression (zip, gzip, etc). You will still need to decompress it to start working on it, but reading from a slow data source like a hard drive will be quicker because there is less data to read, and decompression happens in fast RAM.
  • Partition the data, for example by month or by customer. If you only need to run a query related to one month, you will skip reading the data for the other months. Even if you need the full data (for example for ML training), you can spread your data partitions over different computers and process the data simultaneously (see parallelization below).
  • Store the data not row by row, but column by column. That’s the same idea as partitioning, but column-wise. If some particular query doesn’t need some column, a column-based file format lets you skip reading it, therefore reducing the amount of data.
  • Store additional data (an index) that helps you find rows and columns inside your main file. For example, you can create a full-text index over the columns of your data containing free English text; then, if you only need the rows containing the word “dog”, the index lets you read only the bytes of storage holding those rows, reducing the amount of data to be read. This is where computer scientists focus and get most excited, inventing additional data structures for especially fast indexes. For data science, this is rarely helpful, because we often need to read all the data anyway (e.g. for training).
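The compression trick is easy to demonstrate with the Python standard library; the exact numbers are illustrative and depend on the data:

```python
import gzip

# Log-like, repetitive data compresses very well with lossless compression.
raw = ("2024-01-01;customer_42;order_completed\n" * 10_000).encode()
compressed = gzip.compress(raw)

print(len(raw))          # bytes to read without compression
print(len(compressed))   # far fewer bytes to fetch from a slow hard drive

# Decompression (done in fast RAM) restores the data exactly.
assert gzip.decompress(compressed) == raw
```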

As for parallelization, it is a solution for the situation where your existing hardware is under-used and you want to speed things up by fully using it, or where you can provision more hardware (and pay for it) to speed up your processes. At the same time, this is also the most complicated factor to optimize, so don’t go there before you have tried the stuff above.

  • Parallelization on the macro level: you can split your training set over several servers, read the data and train the model simultaneously on each of them, calculate weight updates with backpropagation, apply them to a central storage of weights, and redistribute the weights back to all servers.
  • Parallelization on the micro level: put several examples from your training set onto the GPU at once. Similar ideas are utilized on a small scale by databases and by frameworks like numpy, where this is called vectorization.
  • Parallelization on the hardware level: you can attach several hard drives to your computer in a so-called RAID array, and when you read from them, the hardware reads from all of the drives in parallel, multiplying your throughput.

Parallelization is sometimes useless and/or expensive and hard to get right, so this is your last resort.

Some practical solutions

A database is a combination of a smart compressed file format, an engine that can read and write it efficiently, and an API, so that several people can work on it simultaneously, data can be replicated between database nodes, and so on. When in doubt, for data science use ClickHouse, because it provides top performance for many different kinds of data and use-cases.

But databases are the top-notch full package, and often you don’t need all of it, especially if you work alone on some particular dataset. Luckily, separate parts of databases are also available on the market. They have various names like embedded database, database engine, in-memory analytics framework, etc.

An embedded database like DuckDB, or even just a file format like Parquet (Pandas can read it using fastparquet), is in my opinion the most interesting option and might already be enough.

Here is a short overview of ClickHouse, DuckDB, and Pandas with fastparquet:

  • Data compression: ClickHouse has a native compressed format with configurable codecs, plus some basic support of Parquet. DuckDB has a native compressed format and also supports Parquet. fastparquet offers configurable compression codecs.
  • Partitioning: ClickHouse offers native partitioning, plus basic partitioning support for Parquet. DuckDB has some partitioning support for Parquet. With fastparquet, you partition manually using different file names.
  • Column-based storage: ClickHouse supports its native format and Parquet, as well as other file formats. DuckDB supports its native format and Parquet. Parquet itself is a column-based format.
  • Indexes: ClickHouse supports primary and secondary indexes for its native format; no index support for Parquet (?). DuckDB has a min-max index for its native format; no index support for Parquet (?). Indexes are not supported by fastparquet to my knowledge, but Parquet as a file format has them.

Backup is working!

So guys, I have learned some lessons about restoring from a backup this week!

Every computer geek is ashamed of his backup strategy. Most of us know we should back up our data, but we don’t do it. Those who do back up regularly have never tried to restore from a backup file to check if it really works. And only the geek gods like Umputun back up to several locations and probably also check their backups regularly.

Until this week, I belonged to the minority above: I was making my backups, but never tried to restore from them. This is a kind of interesting psychological trap. On the one hand, you think your backup software is working properly (because if it isn’t, it makes no sense to back up at all), but on the other hand, you are scared to restore from a backup file, because if that goes wrong, it will both destroy your original data and leave you with an unusable backup file.

And it also doesn’t help trust that my backup software has been developed by AOMEI, a Chinese company (no, having a Chinese wife doesn’t make you trust just any Chinese software more than before).

People who work as IT admins don’t have this problem, because they can always use a spare drive to restore a backup onto and check whether the backup works, without the risk of losing the original data. Or else one of their server drives dies and they are forced to restore to a new drive anyway.

The latter scenario finally happened to me this week. My boot drive (a 128 GB SSD) died. But don’t worry, I had backed it up (sector-wise) using the AOMEI backup to my small Synology NAS.

So I ordered and received a new SSD (thanks, Amazon Prime). Now, how would I restore my backup file onto this new drive? Even if I could attach it to the Synology (I would probably need some adapters for that!), I don’t think that Synology can read the AOMEI file format, or that AOMEI has developed an app for Synology. Also, I couldn’t even log in to the Synology Web UI, because it requires credentials, and I have stored the credentials in KeePass, which is, luckily for me, on a separate drive, but not replicated to my other devices. And Synology doesn’t allow resetting the password by sending it to me via e-mail. I can only reset it by pressing some hidden reset button with a needle. My NAS is physically located in a place I can hardly reach, so that would have been yet another adventure.

Lesson 1: store your backup NAS IP, username and password not on the same PC you would need to restore.

So I attached a DVD drive to my PC (for all of you Gen Z: this is a cool technology from the last century, where you would store data on disks; no, the disks are not black and made from vinyl, but rather rainbow-colored and made from plastic) and installed a fresh Windows 10 onto my new SSD system drive.

After booting, I was happy to see all the data on my second drive was still there, including the KeePass database. I just didn’t have KeePass to read it. No problem, I would just download it! I went to the KeePass web site, but the download didn’t want to start, probably because I was using some old version of the Edge browser. Okay. So I needed to download and install Chrome first. Thank God the Windows 10 installation automatically recognized all my drivers, including the NIC (for all of you Gen Z: in the past, Windows didn’t have access to the Internet after installation. You would need to spend some time configuring your dial-up connection).

So, I downloaded and started KeePass and copied the NAS credentials from there; now I could see the backup file. Cool. The next step was to restore it. Okay, I needed to download and install AOMEI. Thank God, the company is still in business, they still have a web site, and their newest software is still compatible with my backup file.

Lesson 2: download a portable version or an installer of your backup software and store it not on the same PC you would need to restore.

I installed and started AOMEI and pointed it to the backup file. It said I needed to reboot into a system restore mode. Okay. After a restart, the AOMEI app booted instead of Windows and, after some scary warnings, started to restore the system. Also, at this point I was happy I didn’t need some special drivers for the NIC or anything. It could have forced me to copy the backup file onto a USB stick first, though, and I am grateful it didn’t.

The restoration process took several hours, but after it finished, the PC rebooted again, into my old Windows, with all software installed and the old data on the system drive restored!

Summary: I am happy! And I am less afraid of restores now!

Question: is there any home NAS on the market that allows you to restore sector-wise system backups directly onto a new drive (you don’t need to attach the drive to your PC; you just insert it directly into the NAS)?

On the Death of the Russian Culture

I was born in the USSR. When my country died, I didn’t feel anything special, maybe just a little hope: this was a new opportunity to do better, to learn from our mistakes, to become a member of a peaceful global world.

Besides, I clearly separated the country (a bureaucratic, useless monster, infected with communist ideology, that destroyed lakes and rivers and killed millions of its own citizens in the Gulag) from the Russian culture.

With over 1000 years of history, the Russian culture came up with novel philosophical ideas to answer the major questions of life, discovered natural laws, invented useful technology and created art on an international level.

The Russian culture survived enslavement by the Mongols, the Tsarist regimes, the Communist revolution, and two World Wars, so I didn’t need to worry back then, when the USSR fell apart. I had my Russian culture. It would live with me and I’d pass it on to the new generations.

Well, not after February 24th, 2022.

All the dear old fairy tales my parents read to me in my childhood, all the lullabies they sang to me, all their explanations of what is good and what is bad, all the Russian ways I know of how to live life and how to be successful and how to resolve conflicts and how to solve problems, all the religious and philosophical ideas, all my favorite, deeply meaningful songs, smart movies, tender cartoons, awesome books, and finally, most important, my understanding of what Love is and how to love, which I learned from my mom: all of it became obsolete.

Yes, it still lives within me. But how am I supposed to pass it on to the new generations? The culture of the very nation that did Bucha, Mariupol, Kharkiv; that raped, beat, tortured, and murdered children, women, the elderly and innocent civilians in hundreds of towns and villages in Ukraine. The culture of people who lied about their military involvement in Crimea and Donbas, who poisoned their opponents worldwide, who threaten the whole world with their nuclear weapons, who let their leaders brainwash them, who shelled pregnant women, murdered little girls in front of their mothers, destroyed the homes of hundreds of thousands of people and inflicted hunger and energy emergencies around the world?

How can I ask my nephew if he wants me to read him a Russian fairy tale, if I am deeply ashamed of being Russian and am calling myself “German” in public now?

Millions of brilliant Russians had ideas, discoveries, lessons learned, and art the whole world needed to hear and to benefit from. Now, all of it is gone. Now, a big part of me is dead.

Data Lakes are Technical Debt

I’ve been working on big data since 2014, and I’ve managed to avoid taking on the technical debt of data lakes so far. Here is why.

Myth of reusing existing text logs

For the purpose of this post, let’s define: a data lake is a system allowing a) to store data from various sources in their original format (including unstructured / semi-structured data) and b) to process this data.

Yes, you can copy your existing text log files into a data lake, and run any data processing on them in the second step.

This processing could be either a) converting them into a more appropriate storage format (more about that in a minute) or b) working with the actual information, for example exploring it, creating reports, extracting features or executing business rules.

The latter is a bad, bad technical debt:

  • Text logs are not compressed, not stored by columns, and have no secondary indexes, so you waste more storage space, RAM, CPU time, energy, carbon emissions, upload and processing time, and money if you casually work with the actual information contained in there.
  • Text logs don’t have a schema. Schemas in data pipelines play the same role as strict static typing in programming languages. If somebody just inserts one more column into your text log somewhere in the middle, your pipeline will in the best case fail weeks or months later (if you execute it, for example, only once a month), or in the worst case produce garbage as a result, because it cannot detect the type change dynamically.

Never work off text logs directly.

A more appropriate format for storage and data processing is a set of relational tables stored in a compressed, columnar format, with the possibility to add secondary indexes and projections, and with a fixed schema checking at least column names and types.

And if we don’t work off the text logs directly, it makes no sense to copy them into a data lake: first, to avoid the temptation to use them “just for this one quick and dirty one-time report”, but also because you can read the logs from the system where they are originally stored, convert them into a proper format, and ingest them into your relational columnar database.

Yes, a data lake would provide a backup for the text logs. But YAGNI. The only use-case where you would need to re-import some older logs is some nasty, hard-to-find bug in the import code. This happens rarely enough to justify much cheaper backup solutions than a data lake.

Another disadvantage of working with text logs in data lakes is that it motivates producing even more technical debt in the future.

Our data scientist needs a little more information? We “just add” one more column to our text log. But at some point, the logs become so big and bloated that you won’t be able to read them with the naked eye in any text editor, so you lose the primary goal of any text log: tracing the state of the system to enable offline debugging. And if you add the new column in the middle, some old data pipelines can silently break and burn on you.

Our data scientist needs information from our new software service? We will just write it to a new text log, because we’re already doing that in our old system and it “kinda works”. But in fact, logging some information:

logging.info('Order %d has been completed in %f ms', order_nr, time)

takes roughly as much effort as inserting it into a proper, optimal, schema-checked data format:

db.insert(action='order_completed', order=order_nr, duration_ms=time)

but the latter saves time, energy, storage and processing costs, and avoids possible format mistakes.
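The db.insert call above is pseudocode. A minimal sketch of the same idea with the standard library’s sqlite3 module (the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (action TEXT NOT NULL, order_nr INTEGER NOT NULL, duration_ms REAL NOT NULL)"
)

# A structured, schema-checked insert: a wrong column name or a missing value
# fails immediately, instead of silently producing a malformed text log line.
conn.execute(
    "INSERT INTO events (action, order_nr, duration_ms) VALUES (?, ?, ?)",
    ("order_completed", 42, 12.5),
)

row = conn.execute("SELECT action, order_nr, duration_ms FROM events").fetchone()
print(row)  # ('order_completed', 42, 12.5)
```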

Myth of decoupled usage

You can insert all the data you have now, store it in the data lake, and if somebody needs to use it later, they will know where to find it.

Busted. Unused data is not an asset, it is a liability:

  • you pay for storage
  • you always need to jump over it when you scroll down a long list of data buckets in your lake,
  • you might have personal data there, so you have one more copy to check when you need to fulfill GDPR requirements,
  • the data might contain passwords, security tokens, company secrets or other sensitive information that might be stolen or could leak,
  • every time you change the technology or the cloud provider of your data lake, you have to spend time, effort and money to port this unused data too.

Now, don’t get me wrong. Storage is cheap, and nothing makes me angrier at work than people who delete or don’t store data just because they want to save storage costs. Backup storage is not as expensive as data lake storage, and de-personalized parts of the data should be stored forever, just in case we might need them (but remember: YAGNI).

Storing unused data in a data lake is much worse than storing it in an unused backup.

Another real-world issue preventing decoupled usage of data is how quickly the world changes. Even if the data stored in the data lake is meticulously documented down to the smallest detail – which is rarely the case – time doesn’t stand still. Some order types and licensing conditions become obsolete, some features no longer exist, and the code that produced the data has already been removed, not only from the master branch but from the code repository altogether, because at some point the company switched from SVN to git and decided to drop history older than 3 years, and so on.

You will find column names that nobody can understand, and column values that nobody can interpret. And that would be the best case. In the worst case, you find an innocent-looking column named “is_customer” with values 0 and 1, mistake it for a flag marking paying users, and use it in a report going up to C-level, only to cringe painfully when somebody suddenly remembers that your company toyed with the idea of a business alliance 10 years ago, and this column was used to flag potential partners for that cooperation.

I only trust the data I collect myself (or at least I can read and fully understand the source code collecting it).

The value of most data decays exponentially with time.

Myth of “you gonna need it anyway”

It goes like this: you collect data in small batches – every minute, every hour or every day. Having many small files makes your data processing slow, so you re-partition them, for example into monthly partitions. At this point you can also switch to a columnar, schema-checked store and drop unneeded data. These monthly files are still too slow for online, interactive user traffic (with expected latencies of milliseconds), so you run the next aggregation step and shove the pre-computed values into some quick key-value store.
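The re-partitioning step of such a pipeline can be sketched in a few lines; the file names are hypothetical, and a real pipeline would also convert each monthly partition to a columnar format (e.g. Parquet) rather than just grouping file names:

```python
from collections import defaultdict

# Hypothetical daily batch files, named 'events-YYYY-MM-DD.json'.
daily_files = [
    'events-2024-01-05.json',
    'events-2024-01-19.json',
    'events-2024-02-02.json',
]

def monthly_partitions(files):
    """Group daily batch files into monthly partitions keyed by 'YYYY-MM'."""
    partitions = defaultdict(list)
    for name in files:
        # 'events-2024-01-05.json' -> ['2024', '01'] -> key '2024-01'
        year, month = name.removeprefix('events-').split('-')[:2]
        partitions[f'{year}-{month}'].append(name)
    return dict(partitions)

print(monthly_partitions(daily_files))
# {'2024-01': ['events-2024-01-05.json', 'events-2024-01-19.json'],
#  '2024-02': ['events-2024-02-02.json']}
```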

Storing the original data, in its original format, in the lake as the first step feels scientifically sound. It makes the pipeline uniform and is a prerequisite for reproducibility.

And at the very least, you will have three or more copies of that data (in different aggregation states and formats) somewhere anyway, so why not store one more, original copy?

I suppose this very widespread idea comes from some historically very popular big data systems like Hadoop, Hive, Spark, Presto (= AWS Athena), row-based stores like AWS Redshift (= PostgreSQL) or even document-based systems like MongoDB. Coincidentally, these systems are not only very popular, but also have very high latency and/or waste a lot of hardware resources, given that some of them are written in Java (no system software should ever be written in Java) or use storage concepts unsuitable for big data (document or row stores). With these systems, there is no other way than to duplicate the data and store it pre-computed in different formats according to the consumption use case.

But we don’t need to use popular software.

Modern column-based storage systems, built on the principles pioneered by Dremel and MonetDB, are so efficient that in most use cases (say 80%) you can store your data exactly once, in a format suitable for a wide variety of queries and use cases, and deliver responses with sub-second latency for simple queries.

Some of these database systems (in alphabetical order):

  • ClickHouse
  • DuckDB
  • Exasol
  • MS SQL Server 2016 (COLUMNSTORE index)
  • Vertica

A direct comparison of ClickHouse running on an EC2 instance against the same data stored in S3 and queried by Athena (for the specific data mix and query types typical at my current employer, Paessler AG) has shown that in this particular case ClickHouse is 3 to 30 times faster and at the same time cheaper than the naive Athena implementation.

Is it possible to speed up Athena? Yes, if you pre-aggregate some information, pre-compute some other information, and store it in DynamoDB. You’ll then get it cheaper than ClickHouse, and “only” 50% to 100% slower. Is it worth having three copies of the data, employing a full-time DBA to monitor the health of all those pre-aggregation and pre-computation pipelines, and using three different APIs to access the data (Athena, DynamoDB and PyArrow)? YMMV.


Data lakes facilitate technical debt:

  • Untyped data (that can lead to silent, epic fuck-ups)
  • Waste of time
  • Waste of money
  • Waste of hardware
  • Waste of energy and higher carbon footprint
  • Many copies of the same data (that can get out of sync)
  • Can conflict with the data minimization principle of GDPR
  • Can create additional security risks
  • Can easily become a data graveyard if you don’t remove dead data regularly

Avoid data lakes if you can. If you still have to use them, mind the technical debt you are agreeing to and be explicit about it.

Flywheel Triangles

For a business to survive and become somewhat sustainable, it needs a self-sustaining process for earning money, such that it would be very hard to destroy by management errors or market changes.

I first heard it called a “flywheel” at StayFriends.

Business flywheels are positive feedback loops leading to business growth, and can be depicted as triangles. Here, for example, is what the StayFriends flywheel looked like:

Users generate content. Content could be used to show ads or be sold to other users, and the resulting revenue could be used to buy ads, bringing in new users.

Back then this effect was called a “viral loop”, but now I understand that flywheels of this kind exist in any successful business and are not limited to social networks.

This, for example, is the flywheel of Axinom in its early years:

Any custom project developed on top of AxCMS resulted in more generalized features being added to the CMS, and in more good-looking references for it, so it attracted more customers and thus generated new projects. Note how the quality of the produced software was not part of this triangle. Theoretically, you could run projects that left customers dissatisfied, but you had still added something to AxCMS and could use it to win other customers.

Winning new customers can be harder than winning new projects from existing customers, so Axinom worked on a second flywheel:

Having several flywheels supporting each other seems to be a feature of companies demonstrating sustainable growth. Here, for example, is the landscape of flywheels of Immowelt (including a potential new one):

It is interesting to see that even a damaged flywheel can support a business for decades. I can demonstrate this with the example of Metz, a TV manufacturer. Initially, when television was not yet ubiquitous, the company participated in the following flywheels:

Metz owned only the lower triangle, but the growth happened in the upper two: people discovered some cool new show they wanted to see, needed to buy their first TV set for that, and, having done so, became TV viewers, which motivated the creation of new TV shows, both directly and through more money from advertisers. This worked until virtually every family had a TV set. From that point on, the link between TV sets and TV viewers was broken, and Metz was left with this:

Basically, they had only the pair of dealers -> TV sets, and a very weak third corner: by producing very high-quality TV sets, they could consistently win various tests (e.g. by Stiftung Warentest), and this helped to win over a few more electronics dealers.

I guess the company survived over 50 years in this state.

Every company is interested in having a healthy flywheel, and in participating in several flywheels at the same time. I think the most realistic way of adding a new flywheel is to reuse one or two existing nodes and add a new triangle.

For example, Metz could try to grow true Metz fans, following essentially the strategy of Apple and Xiaomi:

Well, in theory it looks good, but we all know that in practice there are all kinds of problems, starting with missing investment budgets, a lack of innovative talent in the R&D department, laws and regulations preventing some flywheels, strong market competition, etc.

The reason I’ve written this post is that the idea of depicting flywheels as triangles came to me in a dream, and for some reason I was very sure in that dream that a flywheel absolutely must have at least three corners. I cannot explain this logically, but anecdotally, if we look at Marx’s formula:

It is striking that it doesn’t provide any non-trivial insight into how to start growing a business.

To All Extroverts

Dear extroverts!

The lockdown restrictions haven’t changed my everyday life at all. I do the same things as before, meet (almost) the same people as before, and am outside just as often as before. And that even though I fully comply with the restrictions.

I now hear that some of you can no longer stand being home alone, even feel lonely, and miss your events and networking.

I am very sorry for you. But.

Finally you know how introverts feel in the world you run.

So I would be grateful if, next time:

– before you call me as a customer, even though I clearly asked for communication by e-mail, because you don’t want to type your fingers sore and a phone call is simpler and more pleasant for you,

– before you invite me, as a contractor, to an on-site workshop and read me a PPT with the assignment, instead of sending the PPT by e-mail and then answering two or three questions by e-mail, just because you want to meet me in person out of pure curiosity,

– before you invite me, as your employee, to a team-building event where twenty well-paid adults spend the whole day throwing colorful plastic balls at each other and later have to race each other backwards in swim fins, because you think it’s just so much fun,

– before you fly me across Germany to have my diagram read to you out of a PDF file, because you cannot understand it any other way, since despite primary school, secondary school, Abitur and university you never managed to learn how to read and understand German on your own,

– so before you do all these typically extroverted things next time, please think back to that one month when you had to sit at home alone and felt uncomfortable, and ask yourselves whether, in your everyday life, it might have been more respectful towards your introverted fellow humans:

– to use the same communication channel they have chosen for themselves, even if that means typing your fingers a bit sore, reading information yourselves instead of having it read to you, and reining in your curiosity,

– to publish the program of your company parties and team-building events in advance, so that the introverts can prepare themselves in time and, if necessary, raise their concerns.

Thank you for your attention.

Plumbing for IT People

At some point, water pipes stopped being manufactured to customer specification and started being produced in a set of fixed standard sizes.

The builder could no longer specify the size of his pipes to the millimeter, but in return he could combine pipes from different manufacturers and benefit from the competition between them. The manufacturers profited too, because they could a) mass-produce the pipes and b) produce to stock rather than to order – if one customer doesn’t take a pipe, any other customer can buy it.

Now, apart from length, round pipes have two more dimensions: inner diameter and outer diameter. Since the difference between the two was constant back then, one of the dimensions was considered redundant, and pipes were always labeled by their inner diameter. So a 1” pipe had an inner diameter of 1 inch. From a usability and customer-orientation point of view, this was exemplary at the time. First, the customer mostly cared about the water throughput of his network, i.e. ultimately the inner diameter. How much room the pipe took up in or on the wall was secondary for the builder (walls were thicker back then anyway). Second, all the paperwork – price lists, offers, invoices, bookings and delivery notes – was easier to write and read, because only one number instead of two had to be communicated.

So. What could possibly go wrong?

But then somebody had the idea that it is simpler and faster to connect water pipes with threads than to weld or solder them on site every time. Threaded connections, however, require fittings – small pipe pieces in the desired shape (an elbow, a tee, a coupling and so on). When fittings are screwed on or in, there is always a male and a female thread. If I want to extend a pipe, for example, I cut a male thread onto its end and screw a fitting onto it, which must have a female thread of the matching size. If I have a 1” pipe, how big must the fitting’s female thread be? As big as the pipe’s outer diameter – which for a 1” pipe back then was 1 5/16”. So the matching fittings could have been labeled accordingly: a 1 5/16” fitting. But the usability of that would have been terrible. A customer with a 1-inch pipe would have had to remember that the matching fitting has the size 1 5/16” – and how would he even know that, when the pipe’s outer diameter had never been communicated to him anywhere (see above, for usability reasons)?

So back then it was actually decided to label the fittings like the pipes they fit. If I buy a 1” coupling, i.e. a fitting with two female threads, then not a single dimension of that coupling measures 1 inch. Instead, the fitting is sized to fit a pipe with an inner diameter of 1”, once a male thread has been cut onto it.

Sooo. That’s fine, then. But it was close. As an IT person, you can already smell a faint whiff here.

And then somebody had the next idea! Namely, that cast iron cannot be the only material for water pipes, and people started using steel, copper, brass and so on.

Now, there was one little problem. You need less of the new materials to get pipes of equivalent strength. So back then, for example, you could have made a smaller outer diameter with the same inner diameter. But they didn’t. Why not? Because so many fittings had already been produced and installed that indirectly demanded a specific outer diameter. Remember: the 1” coupling has a female thread of 1 5/16”, so that it fits a cast-iron pipe with a 1” inner diameter. A steel pipe with an inner diameter of 1 inch would have had an outer diameter of only about 1 1/8”. The old fitting would have been 3/16” too big.

So back then it was actually decided to increase the inner diameter of the new pipes! And to keep the outer diameter! And to keep labeling the pipes by the inner diameter that no longer exists!

This is so glorious that it deserves repeating, for the record.

If I buy a 1” steel pipe today, neither its inner nor its outer diameter is one inch. Instead, its outer diameter is as big as that of a cast-iron pipe once was – and that cast-iron pipe, back then, had an inner diameter of one inch.

Well. The easiest way to summon an advisor in a hardware store is to march into the plumbing department with a caliper. “(Oh, oh, oh, the customer is measuring the thread of a fitting with a caliper, this can only go wrong…) Good day, can I help you?”


And then the French came. The ones with their metre. And rightly complained that the whole world has agreed on the metric system, so what are all these inches doing here? That is why pipes in new applications (not in building services, but e.g. for rockets) tend to have metric sizes and metric threads. So it is not at all unlikely to buy a pressure gauge whose connector has an M20 thread – a diameter of exactly 20 mm. The nearest fitting would then be the 1/2” fitting, which at 18.5 mm would have been slightly too small. Apart from that, metric threads have a different pitch, i.e. a different number of threads per unit of length.

We can finally count ourselves lucky that threaded connections are no longer state of the art in building services.

Finally there are multilayer composite pipes, labeled with their outer diameter and wall thickness in metric units – e.g. 16×2.2 for a pipe with a 16 mm outer diameter and a 2.2 mm wall. These pipes can be bent by hand and joined within seconds by press fitting. And a 100-meter roll can be carried by one person.

There is just one little problem. Every manufacturer has read the Wikipedia entry on the walled garden and tries to lure you into their system. Whoever uses “foreign” pipes or fittings loses the warranty. So when you run out of pipe or fittings, there is no “quickly driving to the hardware store to get new ones” – you have to stay within the system and get supplies where you bought them before. There are around 36 different press contours, each of which can only be applied to its matching fittings. You could press them all with one press tool (with interchangeable jaws). But that costs 25 euros per jaw. And then you lose the warranty again. You are supposed to use the manufacturer’s original tool, which suddenly costs ten times as much for the same job (we are talking about more than 1000 euros for a small cordless tool).

I would be in favor of introducing an offense of “design abuse” into copyright and patent law. Because I suspect that manufacturers withhold the exact dimensions of their fittings not because those dimensions matter for making the fitting better than the competition’s (sturdier, more convenient, cheaper), but solely to prevent the competition from producing cheaper and better compatible tools and accessories.


As if that were all. But it ain’t.

Question: I have a pipe labeled 1/2”. I want to run it through a wall. How big must the hole in the wall be? Answer: 40 mm. How do I get there? Well, 1/2” is famously 12.7 mm. But as we have learned, this number is irrelevant nowadays, and we throw it away immediately. Instead we look it up in a table and find out that the outer diameter of a 1/2” pipe is 20 mm. Now, according to the EnEV, we also have to insulate the pipe. This requires an insulation thickness of 50% of the pipe diameter. 50% of 20 mm is 10 mm. A 10 mm layer all around the 20 mm pipe makes a final size of 40 mm.
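For fellow IT people, the wall-hole calculation can be sketched in a few lines of Python. Only the 20 mm outer diameter of a nominal 1/2” pipe and the 50% insulation rule come from the text above; the other table entries are illustrative assumptions:

```python
# Nominal label -> actual outer diameter in mm.
# Only the 1/2" entry is from the text; the others are illustrative.
OUTER_DIAMETER_MM = {'1/2"': 20, '3/4"': 25, '1"': 32}

def hole_diameter_mm(label: str, insulation_ratio: float = 0.5) -> float:
    """Wall-hole diameter: pipe plus an insulation layer on both sides."""
    outer = OUTER_DIAMETER_MM[label]
    # Insulation thickness is a fraction of the pipe diameter, applied all
    # around, i.e. added twice to the diameter.
    return outer + 2 * insulation_ratio * outer

print(hole_diameter_mm('1/2"'))  # 40.0
```

Note that the nominal 1/2” label never enters the arithmetic at all, which is exactly the absurdity the text describes.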

I find this solution very elegant too (not!). The legislators perhaps considered making the manufacturers produce pre-insulated pipes. As a consumer, you would then buy a 40 pipe, which you could measure with a caliper and get 40 mm, which you could also connect with a 40 fitting and push into a 40 mm hole, and so on. But it is not that simple – there are still plenty of old installations that would have to be insulated retroactively anyway. And besides, where would we end up – we would lose this wonderful inch system that has proven itself for centuries! Can’t have that! That is roughly how it must have gone. So now the installers always have to juggle two diameters: one for the pipes and fittings as they are, and one for the whole thing after it has been insulated.

There are many IT people who are ashamed of their source code. Too many. One cure might be to renovate an old plumbing installation with your own hands.