Die wunderbare Welt von Isotopp
Meditations on Quitting
I quit often. At least once a year, and in some years I have quit four times.
Every time before a performance evaluation, or before important meetings, I sit down and write myself a notice. I pull up a word processor, start the empty business letter template, fill in the details and the date, and then write the three or four sentences necessary to inform my employer that the time has come to part ways. I print it, collect the letter from the printer, fold it properly, and put it into an empty, unsealed envelope.
From Hadoop to HTAP?
For the last 15 years, Hadoop has been one popular way to persist large amounts of data in a form that can still be processed.
Of course, from a database point of view, brute-forcing a result by scanning compressed CSV files in parallel and then building giant distributed hash joins is not very elegant. But two facts that were true in 2006 influenced Hadoop’s design, and allowed it to process data at all at a scale where all other approaches failed:
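Stripped of the distribution, the "giant hash join" mentioned above is a simple two-phase algorithm: build a hash index over one relation, then probe it while streaming the other. A minimal single-machine sketch in Python (illustrative only; this is not Hadoop code, and the row format is invented for the example):

```python
from collections import defaultdict

def hash_join(left, right, key_left, key_right):
    """Join two lists of dict rows on the given key columns."""
    # Build phase: index one relation (ideally the smaller one) by join key.
    index = defaultdict(list)
    for row in left:
        index[row[key_left]].append(row)
    # Probe phase: stream the other relation and emit merged matches.
    out = []
    for row in right:
        for match in index.get(row[key_right], []):
            out.append({**match, **row})
    return out
```

In the distributed case, both inputs are first partitioned by join key so that each worker can run exactly this local join on its partition.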
Tying the BI pipeline together
In Of Stars and Snowflakes we looked at the “normal form” for Data Warehouse/BI structures, and how it differs from the normal forms used in transactional systems. In ETL from a Django Model we looked at one implementation of a classical offline DWH with a daily load.
The normal BI structure is a fact table, in which an object identifier (the object we collect facts about) is paired with a point in time, so that we can report facts about the object at that point in time. That is, we have a table with a compound primary key, in which one component (usually the latter) is a time dimension. Fact tables are also often very wide, since for each point in time we collect any number of facts about the object we report on.
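A minimal illustration of such a compound primary key in Python (the object and column names are invented for this example):

```python
from datetime import date

# A fact "table" keyed by a compound primary key: (object_id, as_of).
# Each row can be arbitrarily wide - one entry per collected fact.
facts = {
    ("customer-42", date(2023, 1, 1)): {"orders": 3, "revenue": 120.50, "tickets": 0},
    ("customer-42", date(2023, 1, 2)): {"orders": 4, "revenue": 160.00, "tickets": 1},
}

# Report on customer-42 at a certain point in time:
row = facts[("customer-42", date(2023, 1, 2))]
```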
Ansible: List Cross-Join
A friend asked in Discord:
I need a pointer to a solution in Jinja.
Given two lists,
x: [a, b, c] and y: [d, e, f], I need the cross-join ["a.d", "a.e", "a.f", "b.d", …, "c.e", "c.f"]. I know how to cross-join, but that yields a list of lists, and I want to join the inner lists.
After some experimentation the result was a set of nasty templating loops. There has to be a better way.
There are two:
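For reference, the desired result expressed in plain Python: itertools.product is the cross-join, and str.join flattens each inner pair. (This is not one of the Jinja solutions the article presents, just the same operation outside of templating.)

```python
from itertools import product

x = ["a", "b", "c"]
y = ["d", "e", "f"]

# Cross-join, then join each (left, right) pair with a dot.
crossed = [".".join(pair) for pair in product(x, y)]
```

In Ansible/Jinja, the same shape can be reached with the product filter combined with map('join', '.').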
ChatGPT and Limits
Like everybody else, I have been playing with ChatGPT from OpenAI. Specifically, I wanted to test how it can be used as a coding assistant, and what its limits are in terms of size and complexity.
Code Generation
I have been using the Labyrinths example as a base. My goal was to have ChatGPT write the Labyrinth class for me.
I did so interactively.
Kris:
Write an empty Python class named Labyrinth
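The expected answer to that first prompt is trivially small (ChatGPT's actual output is not reproduced here, but an empty Python class looks like this):

```python
class Labyrinth:
    """An empty class, to be filled in by subsequent prompts."""
    pass
```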
2FA for Mastodon
Multi-factor authentication (authentication with several factors), or 2FA (two-factor authentication), is a way to protect an account against takeover by third parties.
Instead of logging in with just a username and password, a changing pseudo-random number is additionally required. It is generated from a seed value, which is represented as a string of characters or a QR code.
Installing an authenticator application
The procedure is standardized and supported by many tools, among them Google Authenticator, Bitwarden, and many other password managers.
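The changing number is a TOTP code as standardized in RFC 6238: an HMAC-SHA1 over a time-step counter derived from the shared seed, truncated to six digits. A minimal sketch in Python using only the standard library (for understanding the mechanism, not as a replacement for an authenticator app):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code from a base32-encoded seed."""
    # The seed from the QR code / letter string is base32; pad to a multiple of 8.
    secret_b32 = secret_b32.upper().replace(" ", "")
    key = base64.b32decode(secret_b32 + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because both sides derive the code from the seed and the current time, the server can verify the number without it ever being transmitted ahead of time.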
Change Data Capture
Change Data Capture is a way to capture, well, events from a system that describe how the data in the system changed. For a system that does business transactions, these may be, at the lowest level, Create, Update, or Delete events for entities or relationships. Systems that emit this kind of event are called Entity Services, and they provide the lowest-level events you can have in such a system.
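What such a lowest-level change event might look like, sketched as a Python dict (the field names here are invented for illustration; real CDC tools such as Debezium define their own event envelopes):

```python
# A hypothetical entity-level CDC event: the operation, the entity key,
# and the before/after images of the changed row.
event = {
    "op": "update",                  # one of: create | update | delete
    "entity": "customer",
    "key": {"id": 42},
    "before": {"id": 42, "email": "old@example.com"},
    "after": {"id": 42, "email": "new@example.com"},
    "ts": "2023-01-02T03:04:05Z",    # when the change was committed
}
```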
USENET and the Tiernetze ("animal networks")
Almost exactly 30 years ago, the Internet was in its infancy in Germany, but there were also networks running on different, much older technology – the mailbox networks. These were decentralized networks in which local computers were equipped with modems; you could call in and read messages online, much like on Mastodon today. Or you had software at home that called the mailbox and downloaded the messages. Then you could read offline, write replies, and call a second time. The replies were then submitted in one batch and delivered.
Systemd and docker -H fd://
Based on what I learned in Systemd Service and Socket Activation and Systemd Service and stdio, we can now have a look at Docker.
The code for the -H fd:// handling is here.
The file descriptors are coming from activation.Listeners(), and are in the listeners slice.
In our case, the part after the fd:// is empty, so lines 83-85 are activated, and the incoming fds are passed on to Docker proper.
Summary
The question that started this yak-shaving session was: “How do I expose the Docker socket of a remote machine over the network?” And it appears that the answer to this question is:
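One possible shape of that answer, sketched as a systemd socket unit (the extra listener address is an assumption for this sketch; dockerd keeps running with -H fd:// and simply receives the additional TCP listener via socket activation):

```ini
# docker.socket (sketch) - add a TCP listener next to the local socket
[Socket]
ListenStream=/run/docker.sock
# Exposing this beyond localhost needs TLS and authorization in front of it
ListenStream=127.0.0.1:2375

[Install]
WantedBy=sockets.target
```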
Systemd Service and stdio
After yesterday’s article, Arne Blankerts pointed me at a note showing how to install a program using stdio with systemd.
Code and Unit files
The code:
#! /usr/bin/env python3
import sys

if __name__ == "__main__":
    while True:
        try:
            line = input().strip()
        except EOFError:  # client closed the connection without sending QUIT
            sys.exit(0)
        # stdout is a socket here, not a tty, so it is block-buffered: flush explicitly
        print(f"ECHO: {line}", flush=True)
        if line == "QUIT":
            sys.exit(0)
The Socket Unit:
$ systemctl --user cat kris2.socket
# /home/kris/.config/systemd/user/kris2.socket
[Unit]
Description=My second service
PartOf=kris2.service
[Socket]
ListenStream=127.0.0.1:12346
Accept=Yes
[Install]
WantedBy=sockets.target
And the Service Unit, which has to be a template:
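With Accept=Yes, systemd spawns one instance of a template unit per connection, with the connection as stdin/stdout. A sketch of what kris2@.service plausibly contains (the ExecStart path to the echo script above is an assumption):

```ini
# /home/kris/.config/systemd/user/kris2@.service (sketch)
[Unit]
Description=My second service

[Service]
# Path to the echo script is assumed for this sketch
ExecStart=/home/kris/bin/echo.py
StandardInput=socket
StandardOutput=socket
```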