
With TLS 1.3 more parts of the handshake are encrypted (e.g. the certificate), but sometimes it's still helpful to look at the complete handshake.

curl uses the somewhat standardized environment variable for the key log file, SSLKEYLOGFILE, which is also supported by Firefox and Chrome. wireshark hides the corresponding setting in the UI behind Edit -> Preferences -> Protocols -> TLS -> (Pre)-Master-Secret log filename, which is uncomfortable to reach. Looking it up in the Advanced settings reveals that it's internally called tls.keylog_file. Thus we can set it up with:

sudo wireshark -o "tls.keylog_file:/home/sven/curl.keylog"

SSLKEYLOGFILE=/home/sven/curl.keylog curl -v https://www.cloudflare.com/cdn-cgi/trace

Depending on the setup root might be unable to access the Wayland session. That can be worked around by letting sudo keep the relevant environment variables:

$ cat /etc/sudoers.d/wayland 
Defaults   env_keep += "XDG_RUNTIME_DIR"
Defaults   env_keep += "WAYLAND_DISPLAY"

Or set up wireshark properly and use the wireshark group to be able to dump traffic as a regular user. Might require a sudo dpkg-reconfigure wireshark-common.
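
If you go the group route, the usual Debian way is roughly this: dpkg-reconfigure asks whether non-superusers should be able to capture packets, answer yes, then add yourself to the group and re-login to pick up the new membership.

sudo dpkg-reconfigure wireshark-common
sudo usermod -aG wireshark $USER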

Regarding curl: In some situations it can be desirable to force a specific older TLS version for testing, which requires setting both a minimum and a maximum version. E.g. to force TLS 1.2 only:

curl -v --tlsv1.2 --tls-max 1.2 https://www.cloudflare.com/cdn-cgi/trace
Posted Wed Jan 28 11:32:59 2026

I'm not hanging around on IRC a lot these days, but when I do I use hexchat (and xchat before that). Probably a bad habit of clinging to what I got used to over the past 25 years. But in light of the planned removal of GTK2, it felt like it was time to look for an alternative.

Halloy looked interesting, albeit not packaged for Debian. But upstream references a flatpak (another party I haven't joined so far), good enough to give it a try.

$ sudo apt install flatpak
$ flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
$ flatpak install org.squidowl.halloy
$ flatpak run org.squidowl.halloy

Configuration ends up at ~/.var/app/org.squidowl.halloy/config/halloy/config.toml, which I linked for convenience to ~/.halloy.toml.
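
For reference, the link is nothing fancier than:

ln -s ~/.var/app/org.squidowl.halloy/config/halloy/config.toml ~/.halloy.toml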

Since I connect via ZNC in an odd old setup without those virtual networks but with several accounts instead, and of course never bothered to replace the self-signed certificate, it requires some additional configuration to be able to connect. Each account gets its own servers.<foo> block like this:

[servers.bnc-oftc]
nickname = "my-znc-user-for-this-network"
server = "sven.stormbind.net"
dangerously_accept_invalid_certs = true
password = "mypassword"
port = 4711
use_tls = true

Halloy also has a small ZNC guide.

I'm growing old, so a bigger font size is useful. Be aware that font changes require an application restart to take effect.

[font]
size = 16
family = "Noto Mono"

I also prefer the single-pane mode, for which the configuration can be copy & pasted as documented.

Works well enough for now. hexchat was also the last non-Wayland application I was still using (xlsclients output is finally empty).

Posted Thu Jan 8 11:32:21 2026

exfatprogs 1.3.0 added a new defrag.exfat utility which turned out to be unreliable and can cause data loss. exfatprogs 1.3.1 disabled the utility, and I followed that decision with the upload to Debian/unstable yesterday. But as usual it will take some time until it migrates to testing. Thus if you use testing do not try defrag.exfat! At least not without a vetted and current backup.

Besides that there is a compatibility issue with the way mkfs.exfat, as shipped in trixie (exfatprogs 1.2.9), handles drives which have a physical sector size of 4096 bytes but emulate a logical size of 512 bytes. With exfatprogs 1.2.6 a change was implemented to prefer the physical sector size on those devices. That turned out to be incompatible with Windows, and was reverted in exfatprogs 1.3.0. Sadly John Ogness ran into the issue and spent some time debugging it. I've to admit that I missed the relevance of that change. Huge kudos to John for the bug report. Based on that I prepared an update for the next trixie point release.

If you hit that issue on trixie with exfatprogs 1.2.9-1 you can work around it by formatting with mkfs.exfat -s 512 /dev/sdX to get Windows compatibility. If you use exfatprogs 1.2.9-1+deb13u1 or later, and want the performance gain back, and do not need Windows compatibility, you can format with mkfs.exfat -s 4096 /dev/sdX.
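
If you're unsure whether a drive is such a 512e device, lsblk can show both sector sizes; a 512e drive reports a logical size of 512 and a physical size of 4096:

lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sdX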

Posted Wed Dec 17 15:38:52 2025

Brief note to maybe spare someone else the trouble. If you want to hide e.g. a huge table in Backstage (techdocs/mkdocs) behind a collapsible element, you need the md_in_html extension and the markdown attribute for it to kick in on the <details> html tag.

Add the extension to your mkdocs.yaml:

markdown_extensions:
  - md_in_html

Hide the table in your markdown document in a collapsible element like this:

<details markdown>
<summary>Long Table</summary>

| Foo | Bar |
|-|-|
| Fizz | Buzz |

</details>

It's also required to have an empty line between the html tag and the start of the markdown part. Rendered correctly for me that way in VSCode, GitHub and Backstage.

Posted Wed Oct 8 17:17:59 2025

If you use HAProxy to e.g. terminate TLS on the frontend and connect via TLS to a backend, you have to take care of sending the SNI (server name indication) extension in the TLS handshake sort of manually.

Even if you use host names to address the backend server, e.g.

server foobar foobar.example:2342 ssl verify required ca-file /etc/haproxy/ca/foo.crt

HAProxy will try to establish the connection without SNI. You have to enforce SNI manually here, e.g.

server foobar foobar.example:2342 ssl verify required ca-file /etc/haproxy/ca/foo.crt sni str(foobar.example)

The surprising thing here is that it requires an expression, so you cannot just write sni foobar.example, you have to wrap it in an expression. The simplest one is making sure it's a string.
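
The upside of it being an expression is that you can also derive the value dynamically, e.g. forward whatever SNI the client sent on the frontend via the ssl_fc_sni sample fetch. A sketch, adjust to your setup:

server foobar foobar.example:2342 ssl verify required ca-file /etc/haproxy/ca/foo.crt sni ssl_fc_sni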

Update: It might be noteworthy that you have to configure SNI for the health check separately, and in that case it's a string, not an expression. E.g.

server foobar foobar.example:2342 check check-ssl check-sni foobar.example ssl verify required ca-file /etc/haproxy/ca/foo.crt sni str(foobar.example)

The ca-file is shared between the ssl context and the check-ssl.

Posted Mon Sep 15 14:44:31 2025

If someone hands you an IP:port of a Google Cloud load balancer and tells you to connect there with TLS, but all you receive in return is an F (and a few other bytes with non-printable characters) when running openssl s_client -connect ..., you might be missing SNI (server name indication). Sadly the other side was not transparent enough to explain in detail which exact type of Google Cloud load balancer they used, but the conversation got more detailed and eventually led to a working TLS connection once the missing -servername foobar.host.name was added. I could not find any sort of official documentation on the responses of the GFE (the frontend part) when TLS parameters do not match the expectations. Also you won't have anything in the logs, because logging at Google Cloud is a backend function, and as long as your requests do not reach the backend, there are no logs. That makes it rather unpleasant to debug such cases, when one end says "I do not see anything in the logs", and the other one says "you reject my connection and just reply F".
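
For the record, the working probe looked roughly like this (IP address and port are placeholders here):

openssl s_client -connect 198.51.100.42:443 -servername foobar.host.name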

Posted Mon Sep 15 14:32:05 2025

Kudos to yadd@ (and whoever else was involved in making that happen) for the new watch file v5 format. Especially the templates for the big git hosters make it much nicer. I prepared two of my packages to switch on the next upload: exfatprogs tracking GitHub releases and pflogsumm scraping a web page. That is much easier to read and less error prone.

Posted Tue Sep 9 17:02:14 2025

Rant - I've a theory about istio: It feels like software designed by people who hate the IT industry and wanted revenge. So they wrote software with so many odd points of traffic interception (e.g. SNI based traffic re-routing) that it's completely impossible to debug. If you roll that out into an average company you completely halt IT operations for something like a year.

On topic: I've two endpoints (IP addresses serving HTTPS on a non-standard port) outside of kubernetes, and I need some rudimentary balancing of traffic. Since istio is already here one can leverage that, combining the resource kinds ServiceEntry, DestinationRule and VirtualService to publish a service name within the istio mesh. Since we do not have host names and DNS for those endpoint IP addresses we need to rely on istio itself to intercept the DNS traffic and deliver a virtual IP address to access the service. The sample given here leverages the exportTo configuration to make the service name only available in the same namespace. If you need broader access remove or adjust that. As usual in kubernetes you can also resolve the name as a FQDN, e.g. acme-service.mynamespace.svc.cluster.local.

---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: acme-service
spec:
  hosts:
    - acme-service
  ports:
    - number: 12345
      name: acmeglue
      protocol: HTTPS
  resolution: STATIC
  location: MESH_EXTERNAL
  # limit the availability to the namespace this resource is applied to
  # if you need cross namespace access remove all the `exportTo`s in here
  exportTo:
    - "."
  # use `endpoints:` in this setup, `addresses:` did not work
  endpoints:
    # region1
    - address: 192.168.0.1
      ports:
        acmeglue: 12345
    # region2
    - address: 10.60.48.50
      ports:
        acmeglue: 12345
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: acme-service
spec:
  host: acme-service
  # limit the availability to the namespace this resource is applied to
  exportTo:
    - "."
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
    connectionPool:
      tcp:
        tcpKeepalive:
          # We have GCP service attachments involved with a 20m idle timeout
          # https://cloud.google.com/vpc/docs/about-vpc-hosted-services#nat-subnets-other
          time: 600s
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: acme-service
spec:
  hosts:
    - acme-service
  # limit the availability to the namespace this resource is applied to
  exportTo:
    - "."
  http:
  - route:
    - destination:
        host: acme-service
    retries:
      attempts: 2
      perTryTimeout: 2s
      retryOn: connect-failure,5xx
---
# Demo Deployment, istio configuration is the important part
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foobar
  labels:
    app: foobar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foobar
  template:
    metadata:
      labels:
        app: foobar
        # enable istio sidecar
        sidecar.istio.io/inject: "true"
      annotations:
        # Enable DNS capture and interception, resolved IPs will be in 240.240/16
        # If you use network policies you've to allow egress to this range.
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "true"
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Now we can exec into the deployed pod, do something like curl -vk https://acme-service:12345, and it will talk to one of the endpoints defined in the ServiceEntry via an IP address out of the 240.240/16 Class E network.
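
Assuming curl is available in the nginx container image, that test looks something like this:

kubectl exec -it deploy/foobar -c nginx -- curl -vk https://acme-service:12345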

Documentation
https://istio.io/latest/docs/reference/config/networking/virtual-service/
https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Resolution
https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings-SimpleLB
https://istio.io/latest/docs/ops/configuration/traffic-management/dns-proxy/#sidecar-mode

Posted Wed Aug 20 17:56:02 2025

Brief dump so I don't forget how that worked in August 2025. Requires npm, npx and nodejs. The shell parts are consolidated again below the list.

  1. Install Chrome
  2. Add the BrowserMCP extension
  3. Install gemini-cli: npm install -g @google/gemini-cli
  4. Retrieve a Gemini API key via AI Studio
  5. Export the API key for gemini-cli: export GEMINI_API_KEY=2342
  6. Start the BrowserMCP extension, see the manual; an info box will appear indicating that it's active, with a cancel button.
  7. Add the mcp server to gemini-cli: gemini mcp add browsermcp npx @browsermcp/mcp@latest
  8. Start gemini-cli, let it use the mcp server and task it to open a website.
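
The shell parts of that list in one place for copy & paste (the API key value is obviously a placeholder):

npm install -g @google/gemini-cli
export GEMINI_API_KEY=2342
gemini mcp add browsermcp npx @browsermcp/mcp@latest
gemini
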
Posted Wed Aug 13 14:21:57 2025

Even I managed to migrate my last setup to sway a few weeks ago. I was the last one you've been waiting for, dear X Strike Force, right?

Multi-display support just works, no more modeline hackery. Oh, and we can also remove those old clipboard managers.

One oddity with sway I could not yet solve is that I had to delete the default wallpaper /usr/share/backgrounds/sway/Sway_Wallpaper_Blue_1920x1080.png to allow it to load the Debian wallpaper via

output * bg /usr/share/desktop-base/active-theme/wallpaper/contents/images/1920x1080.svg fill

Update: Thanks to Birger and Sebastian who could easily explain that. The sway-backgrounds package ships a config snippet in /etc/sway/config.d, and if that's included, e.g. via include /etc/sway/config.d/* after setting the background in your ~/.config/sway/config, it does the obvious and overrides your own background configuration again. Didn't expect that, but it makes sense. So the right fix is to just remove the sway-backgrounds package.

I also had a bit of a fist fight with sway to make sure I've as much screen space available as possible. So I tried to shrink fonts and remove borders.

default_border none
default_floating_border none
titlebar_padding 1
titlebar_border_thickness 0
font pango: monospace 9

The rest is, I guess, otherwise well documented. I settled on wofi as menu tool, cliphist for clipboard access, of course waybar to be able to use the nm-applet, and swayidle and swaylock, which are probably also more or less standard, for screen locking.
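
For the screen locking part, a minimal swayidle setup in ~/.config/sway/config can look like this (timeout in seconds, the value is just an example):

exec swayidle -w timeout 300 'swaylock -f' before-sleep 'swaylock -f'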

Having

for_window [app_id="firefox"] inhibit_idle fullscreen

is also sensible for video streaming, to avoid the idle locking.

Posted Fri Jul 18 18:11:45 2025