
As an opportunity to rewire my brain from "docker" to "podman" and "buildah", I started to create an image build with an ECH-enabled curl at https://gitlab.com/hoexter-experiments/ech.

Not sure if it helps anyone, but the setup should look like this:

git clone https://gitlab.com/hoexter-experiments/ech
cd ech
buildah build --layers -f Dockerfile -t echtest
podman run -ti echtest /usr/local/bin/curl \
  --ech true --doh-url https://one.one.one.one/dns-query \
  https://crypto.cloudflare.com/cdn-cgi/trace.cgi
fl=48f121
h=crypto.cloudflare.com
ip=2.205.251.187
ts=1773410985.168
visit_scheme=https
uag=curl/8.19.0
colo=DUS
sliver=none
http=http/2
loc=DE
tls=TLSv1.3
sni=encrypted
warp=off
gateway=off
rbi=off
kex=X25519

It also builds nginx, which you can use for a local test within the image. More details are in the README.

Posted Fri Mar 13 15:16:02 2026

Now that ECH is standardized I started to look into it to understand what's coming. While it's generally desirable not to leak the SNI information, I'm not sure if it will ever make it to the masses of (web)servers outside of the big CDNs.

Besides the extension of the TLS protocol to have an inner and an outer ClientHello, you also need (frequent) updates to your HTTPS/SVCB DNS records. The idea is to rotate the key quickly; the OpenSSL API documentation talks about hourly rotation. That means you have to have encrypted DNS in place (I guess these days DNS over HTTPS is the most common case), and you need to be able to distribute the private key among all involved hosts and update the DNS records in time. In addition to that you can also use a "shared mode" where you handle the outer ClientHello (the one using the public key from DNS) centrally and the inner ClientHello on your backend servers. I'm not yet sure if that makes it easier or even harder to get right.
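For illustration, a zone-file sketch (RFC 9460 presentation format) of an HTTPS record carrying the ECH configuration; the name, TTL and the base64 ECHConfigList value are placeholders, not real data:

```
; the ech= blob is the base64 encoded ECHConfigList, which has to be
; rotated together with the matching private key on the server side
www.example.com. 300 IN HTTPS 1 . alpn="h2" ech="BASE64_ECHCONFIGLIST"
```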

That all makes sense, and is feasible for setups like those at Cloudflare, where the common case is that they provide the NS servers for your domain and terminate your HTTPS connections. But for the average webserver setup I guess we will not see a huge adoption rate. Or we will soon see something like a Caddy webserver on steroids, which integrates a DNS server for DoH with not only automatic certificate renewal built in, but also automatic ECHConfig updates.

If you want to read up yourself here are my starting points:

RFC 9849 TLS Encrypted Client Hello

RFC 9848 Bootstrapping TLS Encrypted ClientHello with DNS Service Bindings

RFC 9934 Privacy-Enhanced Mail (PEM) File Format for Encrypted ClientHello (ECH)

OpenSSL 4.0 ECH APIs

curl ECH Support

nginx ECH Support

Cloudflare Good-bye ESNI, hello ECH!

If you're looking for a test endpoint, I see one hosted by Cloudflare:

$ dig +short IN HTTPS cloudflare-ech.com
1 . alpn="h3,h2" ipv4hint=104.18.10.118,104.18.11.118 ech=AEX+DQBBFQAgACDBFqmr34YRf/8Ymf+N5ZJCtNkLm3qnjylCCLZc8rUZcwAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA= ipv6hint=2606:4700::6812:a76,2606:4700::6812:b76
Posted Wed Mar 11 16:42:53 2026

If you want the latest pflogsumm release from unstable on your Debian trixie/stable mailserver, you have to rely on pinning (hint for the future: starting with apt 3.1 there is a new Include and Exclude option for your sources.list).

For trixie you have to use e.g.:

$ cat /etc/apt/sources.list.d/unstable.sources
Types: deb
URIs: http://deb.debian.org/debian
Suites: unstable 
Components: main
#This will work with apt 3.1 or later:
#Include: pflogsumm
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp

$ cat /etc/apt/preferences.d/pflogsumm-unstable.pref 
Package: pflogsumm
Pin: release a=unstable
Pin-Priority: 950

Package: *
Pin: release a=unstable
Pin-Priority: 50

Should result in:

$ apt-cache policy pflogsumm
pflogsumm:
  Installed: (none)
  Candidate: 1.1.14-1
  Version table:
     1.1.14-1 950
        50 http://deb.debian.org/debian unstable/main amd64 Packages
     1.1.5-8 500
       500 http://deb.debian.org/debian trixie/main amd64 Packages

Why would you want to do that?

Besides some new features and improvements in the newer releases, the pflogsumm version in stable has an issue with parsing the timestamps generated by postfix itself when you write to a file via maillog_file. Since the Debian default setup uses logging to stdout and writing out to /var/log/mail.log via rsyslog, I never invested time to fix that case. But since Jim picked up pflogsumm development in 2025, it was fixed in pflogsumm 1.1.6. The bug is #1129958, originally reported in #1068425. Since it's an arch:all package, you can just pick it from unstable. I don't think it's a good candidate for backports, so fetching the fixed version from unstable is a compromise for those who run into that issue.
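For the apt 3.1 hint above: assuming the announced Include option works as documented, the pin files could be dropped and the whole setup reduced to a single sources entry. A sketch, untested since apt 3.1 is not available here yet:

```
Types: deb
URIs: http://deb.debian.org/debian
Suites: unstable
Components: main
Include: pflogsumm
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp
```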

Posted Mon Mar 9 10:09:44 2026

With TLS 1.3 more parts of the handshake got encrypted (e.g. the certificate), but sometimes it's still helpful to look at the complete handshake.

curl uses the somewhat standardized environment variable for the key log file, SSLKEYLOGFILE, which is also supported by Firefox and Chrome. Wireshark hides the setting in the UI behind Edit -> Preferences -> Protocols -> TLS -> (Pre)-Master-Secret log filename, which is uncomfortable to reach. Looking up the config setting in the Advanced settings one can learn that internally it's called tls.keylog_file. Thus we can set it up with:

sudo wireshark -o "tls.keylog_file:/home/sven/curl.keylog"

SSLKEYLOGFILE=/home/sven/curl.keylog curl -v https://www.cloudflare.com/cdn-cgi/trace

Depending on the setup, root might be unable to access the Wayland session. That can be worked around by letting sudo keep the relevant environment variables:

$ cat /etc/sudoers.d/wayland 
Defaults   env_keep += "XDG_RUNTIME_DIR"
Defaults   env_keep += "WAYLAND_DISPLAY"

Or set up Wireshark properly and use the wireshark group to be able to dump traffic. That might require a sudo dpkg-reconfigure wireshark-common.

Regarding curl: in some situations it can be desirable to force a specific older TLS version for testing, which requires setting both a minimum and a maximum version. E.g. to force TLS 1.2 only:

curl -v --tlsv1.2 --tls-max 1.2 https://www.cloudflare.com/cdn-cgi/trace
Posted Wed Jan 28 11:32:59 2026

I'm not hanging around on IRC a lot these days, but when I do I use hexchat (and used xchat before). Probably a bad habit of clinging to what I got used to over the past 25 years. But in light of the planned removal of GTK2, it felt like it was time to look for an alternative.

Halloy looked interesting, albeit not packaged for Debian. But upstream references a flatpak (another party I had not joined so far), good enough to give it a try.

$ sudo apt install flatpak
$ flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
$ flatpak install org.squidowl.halloy
$ flatpak run org.squidowl.halloy

Configuration ends up at ~/.var/app/org.squidowl.halloy/config/halloy/config.toml, which I linked for convenience to ~/.halloy.toml.

Since I connect via ZNC in an odd old setup without those virtual networks but with several accounts, and of course I never bothered to replace the self-signed certificate, it requires some additional configuration to be able to connect. Each account gets its own servers.<foo> block like this:

[servers.bnc-oftc]
nickname = "my-znc-user-for-this-network"
server = "sven.stormbind.net"
dangerously_accept_invalid_certs = true
password = "mypassword"
port = 4711
use_tls = true

Halloy also has a small ZNC guide.

I'm growing old, so a bigger font size is useful. Be aware that font changes require an application restart to take effect.

[font]
size = 16
family = "Noto Mono"

I also prefer the single-pane mode, which can be set up via copy & paste as documented.

Works well enough for now. hexchat was also the last non-Wayland application I was using (the xlsclients output is finally empty).

Posted Thu Jan 8 11:32:21 2026

exfatprogs 1.3.0 added a new defrag.exfat utility which turned out to be unreliable and can cause data loss. exfatprogs 1.3.1 disabled the utility, and I followed that decision with the upload to Debian/unstable yesterday. But as usual it will take some time until it migrates to testing. Thus if you use testing, do not try defrag.exfat! At least not without a vetted and current backup.

Besides that, there is a compatibility issue with the way mkfs.exfat, as shipped in trixie (exfatprogs 1.2.9), handles drives which have a physical sector size of 4096 bytes but emulate a logical size of 512 bytes. With exfatprogs 1.2.6 a change was implemented to prefer the physical sector size on those devices. That turned out to be incompatible with Windows, and was reverted in exfatprogs 1.3.0. Sadly John Ogness ran into the issue and spent some time debugging it. I have to admit that I missed the relevance of that change. Huge kudos to John for the bug report. Based on that I prepared an update for the next trixie point release.

If you hit that issue on trixie with exfatprogs 1.2.9-1, you can work around it by formatting with mkfs.exfat -s 512 /dev/sdX to get Windows compatibility. If you use exfatprogs 1.2.9-1+deb13u1 or later, want the performance gain back, and do not need Windows compatibility, you can format with mkfs.exfat -s 4096 /dev/sdX.

Posted Wed Dec 17 15:38:52 2025

Brief note to maybe spare someone else the trouble. If you want to hide e.g. a huge table in Backstage (techdocs/mkdocs) behind a collapsible element, you need the md_in_html extension and have to use the markdown attribute for it to kick in on the <details> HTML tag.

Add the extension to your mkdocs.yaml:

markdown_extensions:
  - md_in_html

Hide the table in your markdown document in a collapsible element like this:

<details markdown>
<summary>Long Table</summary>

| Foo | Bar |
|-|-|
| Fizz | Buzz |

</details>

It's also required to have an empty line between the HTML tag and the start of the markdown part. It rendered for me that way in VSCode, GitHub and Backstage.

Posted Wed Oct 8 17:17:59 2025

If you use HAProxy to e.g. terminate TLS on the frontend and connect via TLS to a backend, you have to take care of sending the SNI (server name indication) extension in the TLS handshake sort of manually.

Even if you use host names to address the backend server, e.g.

server foobar foobar.example:2342 ssl verify required ca-file /etc/haproxy/ca/foo.crt

HAProxy will try to establish the connection without SNI. You have to enforce SNI manually here, e.g.

server foobar foobar.example:2342 ssl verify required ca-file /etc/haproxy/ca/foo.crt sni str(foobar.example)

The surprising thing here was that it requires an expression, so you can not just write sni foobar.example; you have to wrap it in an expression. The simplest one is making sure it's a string with str().

Update: It might be noteworthy that you have to configure SNI for the health check separately, and in that case it's a plain string, not an expression. E.g.

server foobar foobar.example:2342 check check-ssl check-sni foobar.example ssl verify required ca-file /etc/haproxy/ca/foo.crt sni str(foobar.example)

The ca-file is shared between the ssl context and the check-ssl.
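Putting the pieces together, a minimal sketch of a complete backend section; the backend name is made up, the rest mirrors the server line above:

```
backend bk_foobar
    # outgoing TLS with SNI set via an expression (sni str(...)),
    # plus a TLS health check with its own plain-string SNI (check-sni)
    server foobar foobar.example:2342 check check-ssl check-sni foobar.example ssl verify required ca-file /etc/haproxy/ca/foo.crt sni str(foobar.example)
```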

Posted Mon Sep 15 14:44:31 2025

If someone hands you an IP:Port of a Google Cloud load balancer and tells you to connect there with TLS, but all you receive in return is an F (and a few other bytes with non-printable characters) when running openssl s_client -connect ..., you might be missing SNI (server name indication). Sadly the other side was not transparent enough to explain in detail which exact type of Google Cloud load balancer they used, but the conversation got more detailed, and we got up to a working TLS connection when the missing -servername foobar.host.name was added. I could not find any sort of official documentation on the responses of the GFE (the frontend part) when TLS parameters do not match the expectations. Also, you won't have anything in the logs, because logging at Google Cloud is a backend function, and as long as your requests do not reach the backend, there are no logs. That makes it rather unpleasant to debug such cases, when one end says "I do not see anything in the logs", and the other one says "you reject my connection and just reply F".

Posted Mon Sep 15 14:32:05 2025

Kudos to yadd@ (and whoever else was involved in making that happen) for the new watch file v5 format. Especially the templates for the big git hosters make it much nicer. I prepared two of my packages to switch on the next upload: exfatprogs, tracking GitHub releases, and pflogsumm, scraping a web page. That is much easier to read and less error prone.

Update 2026-03-10: Realized that the github template is not useful if you need to pull maintainer-created release tarballs from GitHub. The best I could come up with so far is:

Version: 5
Search-Mode: plain
Pgp-Mode: auto
Source: https://api.github.com/repos/<user or org>/@PACKAGE@/releases?per_page=50
Matching-Pattern: https://github.com/[^/]+/[^/]+/releases/download/(?:[\d\.]+)/@PACKAGE@@ANY_VERSION@@ARCHIVE_EXT@

That pulls the download information from the API endpoint and prefers .xz over .gz when available.

Posted Tue Sep 9 17:02:14 2025