
After having been a heavy LyX user from 2005 to 2010 I've continued to maintain LyX more or less till now. Finally I'm starting to leave that stage and removed myself from the Uploaders list. The upload with some other last packaging changes is currently sitting in the git repo, mainly because lintian on ftp-master currently rejects 'packagename@packages.d.o' maintainer addresses (the alternative to the lists.alioth.d.o maintainer mailing lists). For elyxer I filed a request for removal. It hasn't seen any upstream activity for a while and the built-in HTML export support in LyX has improved.

My hope is that if I step away far enough someone else might actually pick it up. I had a strange moment when I recently realized that xchat got reintroduced to Debian, after mapreri and I spent some time last year getting it removed before the stretch release.

Posted Fri Sep 29 12:39:07 2017

Primarily a note for my future self so I don't have to find out what I did in the past once more.

If you're running some smaller systems scattered around the internet, without connecting them via a VPN, you might want your munin master and nodes to communicate with TLS and validate certificates. If you remember what to do it's a rather simple and straightforward process. To manage the PKI I'll utilize the well known easy-rsa script collection. For this special purpose CA I'll go with a flat layout, so one root certificate issues all server and client certificates directly. Some very basic docs can also be found in the munin wiki.

master setup

For your '/etc/munin/munin.conf':

tls paranoid
tls_verify_certificate yes
tls_private_key /etc/munin/master.key
tls_certificate /etc/munin/master.crt
tls_ca_certificate /etc/munin/ca.crt
tls_verify_depth 1

A node entry with TLS will look like this:

[node1.stormbind.net]
    address [2001:db8::]
    use_node_name yes

Important points here:

  • "tls_certificate" is a Web Client Authentication certificate. The master connects to the nodes as a client.
  • "tls_ca_certificate" is the root CA certificate.
  • If you'd like to disable TLS connections, for example for localhost, set "tls disabled" in the node block.

For easy-rsa the following command invocations are relevant:

./easyrsa init-pki
./easyrsa build-ca
./easyrsa gen-req master
./easyrsa sign-req client master
./easyrsa set-rsa-pass master nopass
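With the default easy-rsa 3 layout the generated files end up below pki/; copying them to the paths used in munin.conf above might look like this sketch (the group ownership is an assumption, adjust to whatever user your munin master runs as):

```shell
# easy-rsa 3 default output locations below pki/
install -m 0644 pki/ca.crt /etc/munin/ca.crt
install -m 0644 pki/issued/master.crt /etc/munin/master.crt
# key readable by the munin group only (assumed group name)
install -m 0640 -g munin pki/private/master.key /etc/munin/master.key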

node setup

For your '/etc/munin/munin-node.conf':

tls paranoid
tls_verify_certificate yes
tls_private_key /etc/munin/node1.key
tls_certificate /etc/munin/node1.crt
tls_ca_certificate /etc/munin/ca.crt
tls_verify_depth 1

For easy-rsa the following command invocations are relevant:

./easyrsa gen-req node1
./easyrsa sign-req server node1
./easyrsa set-rsa-pass node1 nopass

Important points here:

  • "tls_certificate" on the node must be a server certificate.
  • You have to provide the CA here as well so the node can verify the client certificate provided by the munin master.
Posted Fri Sep 8 16:41:12 2017

Lately I experienced a new kind of spam, at least new to me. It seems that spammers abuse registration input fields which do not implement strong enough validation, and which echo back several values from the registration process in some kind of welcome mail, basically filling the spam message into the name and surname fields.

So far I found a bunch of those originating from the following AS: AS49453, AS50896, AS200557 and AS61440. The first three belong to something identifying itself as "QUALITYNETWORK". The last one, AS61440, seems to be involved only partially with some networks being delegated to "Atomohost".

To block them it's helpful to query the public radb service whois.radb.net for all networks belonging to the specific AS like this:

whois -h whois.radb.net -- '-i origin AS50896'
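To turn that whois output into a plain prefix list for a blocklist, you can filter the route:/route6: attributes. A sketch (the extract_routes helper name is mine, not a standard tool):

```shell
# Print the announced IPv4/IPv6 prefixes from radb whois output,
# one per line, duplicates removed.
extract_routes() {
  awk '/^route6?:/ { print $2 }' | sort -u
}

# Usage (requires network access):
#   whois -h whois.radb.net -- '-i origin AS50896' | extract_routes
```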

Another bunch of batch-capable whois services is provided by Team Cymru. They have some examples at the end of https://www.team-cymru.org/IP-ASN-mapping.html.

In this specific case the spam was for "www.robotas.ru", which is currently terminated at CloudFlare and redirects via JS document.location to "http://link31.net/b494d/ooo/", which in turn redirects via JS window.location to "http://revizor-online.ga/", which is again hosted at CloudFlare. The page at the end plays some strange YouTube video, currently at around 1900 plays, so not that widely spread. In the end that's an interesting indicator of the spam campaign's success.

Posted Tue Aug 22 16:59:54 2017

In case you're, for example, using Alpine Linux 3.6 based docker images, and you've been passing through environment variable names with dots, you might miss them now in your actual environment. It seems that with busybox 1.26.0 the busybox ash got a lot stricter regarding validation of environment variable names, and you can no longer pass through variable names with dots in them: they just won't be there. If you've been running ash interactively you could not add them in the past either, but until now you could do something like this in your Dockerfile

ENV foo.bar=baz

and later on access a variable "foo.bar".

bash still allows those invalid variable names and is way more tolerant. So to be nice to your devs, and still bump your docker image version, you can add bash and ensure you're starting your application with /bin/bash instead of /bin/sh inside of your container.
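The difference is easy to demonstrate without docker: env(1) happily exports a dotted name, and printenv can read it straight back from the environment, even though no POSIX shell lets you reference it as a parameter. A small sketch:

```shell
# env(1) sets the dotted variable for the child process; printenv reads
# the environment directly, bypassing shell parameter name validation.
env "foo.bar=baz" printenv foo.bar
# prints: baz
```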


Posted Wed Jul 12 17:38:09 2017

We bought a bunch of very cheap low end HPE DL120 servers. Enough to warrant a completely automated installation setup. Shouldn't be that much of a deal, right? Get dnsmasq up and running, feed it a preseed.cfg and be done with it. In practice it took us more hours than we expected.

Setting up the hardware

Our hosts are equipped with an additional 10G dual port NIC and we'd like to use this NIC for PXE booting. That's possible, but it requires you to switch to UEFI boot. Actually it enables you to boot from any available NIC.

Setting up dnsmasq

We decided to just use the packaged debian-installer from jessie and do some ugly things like overwriting files in /usr/lib via ansible later on. So first of all install debian-installer-8-netboot-amd64 and dnsmasq, then add our additional config for dnsmasq; ours looks like this:

domain=int.foobar.example
dhcp-range=192.168.0.240,192.168.0.242,255.255.255.0,1h
dhcp-boot=bootnetx64.efi
pxe-service=X86-64_EFI, "Boot UEFI PXE-64", bootnetx64.efi
enable-tftp
tftp-root=/usr/lib/debian-installer/images/8/amd64/text
dhcp-option=3,192.168.0.1
dhcp-host=00:c0:ff:ee:00:01,192.168.0.123,foobar-01

Now you have to link /usr/lib/debian-installer/images/8/amd64/text/bootnetx64.efi to /usr/lib/debian-installer/images/8/amd64/text/debian-installer/amd64/bootnetx64.efi. That got us off the ground and we had a working UEFI PXE boot that got us into debian-installer.
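As a command, with the path taken from the tftp-root set in the dnsmasq config above:

```shell
# bootnetx64.efi has to be reachable directly below the tftp root
d=/usr/lib/debian-installer/images/8/amd64/text
ln -s "$d/debian-installer/amd64/bootnetx64.efi" "$d/bootnetx64.efi"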

Feeding d-i the preseed file

Next we added some grub.cfg settings and parameterized some basic stuff to be handed over to d-i via the kernel command line. You'll find the correct grub.cfg in /usr/lib/debian-installer/images/8/amd64/text/debian-installer/amd64/grub/grub.cfg. We added the following two lines to automate the start of the installer:

set default="0"
set timeout=5

and our kernel command line looks like this:

 linux    /debian-installer/amd64/linux vga=788 --- auto=true interface=eth1 netcfg/dhcp_timeout=60 netcfg/choose_interface=eth1 priority=critical preseed/url=tftp://192.168.0.2/preseed.cfg quiet

Important points:

  • tftp host IP is our dnsmasq host.
  • Within d-i we see the NIC we booted from as eth1; eth0 is the shared on-board iLO interface. That differs e.g. within grml, where it's eth2.

preseed.cfg, GPT and ESP

One of the most painful points was the fight to find the correct preseed values to install with GPT, create an ESP (EFI system partition) and use LVM for /.

Relevant settings are:

# auto method must be lvm
d-i partman-auto/method string lvm
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-basicfilesystems/no_swap boolean false

# Keep that one set to true so we end up with a UEFI enabled
# system. If set to false, /var/lib/partman/uefi_ignore will be touched
d-i partman-efi/non_efi_system boolean true

# enforce usage of GPT - a must have to use EFI!
d-i partman-basicfilesystems/choose_label string gpt
d-i partman-basicfilesystems/default_label string gpt
d-i partman-partitioning/choose_label string gpt
d-i partman-partitioning/default_label string gpt
d-i partman/choose_label string gpt
d-i partman/default_label string gpt

d-i partman-auto/choose_recipe select boot-root-all
d-i partman-auto/expert_recipe string \
boot-root-all :: \
538 538 1075 free \
$iflabel{ gpt } \
$reusemethod{ } \
method{ efi } \
format{ } \
. \
128 512 256 ext2 \
$defaultignore{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext2 } \
mountpoint{ /boot } \
. \
1024 4096 15360 ext4 \
$lvmok{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
mountpoint{ / } \
. \
1024 4096 15360 ext4 \
$lvmok{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
mountpoint{ /var } \
. \
1024 1024 -1 ext4 \
$lvmok{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
mountpoint{ /var/lib } \
.
# This makes partman automatically partition without confirmation, provided
# that you told it what to do using one of the methods above.
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman-md/confirm boolean true
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true

# This is fairly safe to set, it makes grub install automatically to the MBR
# if no other operating system is detected on the machine.
d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i grub-installer/bootdev  string /dev/sda

I hope that helps to ease the process of setting up automated UEFI PXE installations for some other people out there still dealing with bare metal systems. Some settings took us some time to figure out, for example d-i partman-efi/non_efi_system boolean true required some searching on codesearch.d.n (an amazing resource if you're writing preseed files and need to find the correct templates) and reading scripts on git.d.o, where you'll find the source for partman-* and grub-installer.
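One more thing that saves iteration time: debconf-set-selections can syntax-check a preseed file before you serve it via tftp, so you catch typos without another full install run.

```shell
# --checkonly validates the preseed file without loading it into the
# debconf database; run it on the dnsmasq host after every edit.
debconf-set-selections --checkonly preseed.cfg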

Kudos

Thanks especially to P.P. and M.K. for figuring out all those details.

Posted Mon Jun 12 18:35:52 2017

People using Chrome might have already noticed that some internal certificates created without a SubjectAlternativeName extension fail to validate. The Google Chrome team finally stepped forward and, after only 17 years of SubjectAlternativeName being the place for the FQDNs a certificate is valid for, started to ignore the commonName. See also https://www.chromestatus.com/feature/4981025180483584.

Currently Debian/stretch still has Chromium 57 but Chromium 58 is already in unstable. So some more people might notice this change soon. I hope that everyone who maintains some broken internal scripting to maintain internal CAs now re-reads the OpenSSL Cookbook to finally fix this stuff. In general I recommend to base your internal CA scripting on easy-rsa to avoid making every mistake in certificate management on your own.
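For a quick one-off fix, a sufficiently recent OpenSSL (1.1.1 or later, which introduced -addext) can create a self-signed certificate with a proper SubjectAlternativeName in one go. A sketch with placeholder names:

```shell
# Self-signed cert that is valid for internal.example via SAN,
# not just via the (now ignored) CN.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=internal.example" \
  -addext "subjectAltName=DNS:internal.example" \
  -keyout internal.key -out internal.crt

# Verify the SAN made it in:
openssl x509 -in internal.crt -noout -text | grep DNS:
```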

Posted Wed Apr 26 12:08:38 2017

It must be the irony of life that I was about to give up the TclCurl Debian package some time ago, and now I'm using it again for some very old and horrible web scraping code.

The world moved on to https but the Tcl http package only supports unencrypted http. You can combine it with the tls package as explained in the Wiki, but that seems to be overly complicated compared to just loading the TclCurl binding and moving on with something like this:

package require TclCurl
# download to a variable
curl::transfer -url https://sven.stormbind.net -bodyvar page
# or store it in a file
curl::transfer -url https://sven.stormbind.net -file page.html

Now the remaining problem is that the code is unmaintained upstream and there is one codebase on bitbucket and one on github. While I fed patches to the bitbucket repo and thus based the Debian package on that repo, the github repo diverged in a different direction.

Posted Fri Feb 24 13:04:28 2017

After a few weeks of running Exodus on my moto g falcon, I've now done the full wipe again and moved on to the LineageOS nightly from 20170213, though that build is no longer online at the moment. It's running smoothly so far for me, but according to Reddit there was an issue with the Google Play edition of the phone. Since I don't use gapps anyway I don't care.

The only issue I see so far is that I cannot reach the flash menu in the camera app. It's hidden behind a grey bar. Not nice, but not a show stopper for me either.

Posted Tue Feb 14 10:23:07 2017

For CentOS 4 to CentOS 6 we used pam_ldap to restrict host access to machines, based on groupOfUniqueNames entries in an OpenLDAP directory. With RHEL/CentOS 6 Red Hat already deprecated pam_ldap and highly recommended to use sssd instead, and with RHEL/CentOS 7 they finally removed pam_ldap from the distribution.

Since pam_ldap supported groupOfUniqueNames to restrict logins, a bigger collection of groupOfUniqueNames entries was created to restrict access to all kinds of groups/projects and so on. But sssd is in general only able to filter based on an "ldap_access_filter" or use the host attribute via "ldap_user_authorized_host"; that does not allow the use of "groupOfUniqueNames". So to allow a smooth migration I had to configure sssd in some way to still support groupOfUniqueNames. The configuration I ended up with looks like this:

[domain/hostacl]
autofs_provider = none 
ldap_schema = rfc2307bis
# to work properly we've to keep the search_base at the highest level
ldap_search_base = ou=foo,ou=people,o=myorg
ldap_default_bind_dn = cn=ro,ou=ldapaccounts,ou=foo,ou=people,o=myorg
ldap_default_authtok = foobar
id_provider = ldap
auth_provider = ldap
chpass_provider = none
ldap_uri = ldaps://ldapserver:636
ldap_id_use_start_tls = false
cache_credentials = false
ldap_tls_cacertdir = /etc/pki/tls/certs
ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt
ldap_tls_reqcert = allow
ldap_group_object_class = groupOfUniqueNames
ldap_group_member = uniqueMember
access_provider = simple
simple_allow_groups = fraappmgmtt

[sssd]
domains = hostacl
services = nss, pam
config_file_version = 2

Important side note: With current sssd versions you're more or less forced to use ldaps with a validating CA chain, though hostnames are not required to match the CN/SAN so far.

Relevant are:

  • set the ldap_schema to rfc2307bis to use a schema that knows about groupOfUniqueNames at all
  • set the ldap_group_object_class to groupOfUniqueNames
  • set the ldap_group_member to uniqueMember
  • use the access_provider simple

In practice what we do is match the members of the groupOfUniqueNames to the sssd internal group representation.

The best explanation of the several possible object classes in LDAP for group representation I've found so far is unfortunately in a German blog post. Another explanation is in the LDAP wiki. In short: within a groupOfUniqueNames you'll find full DNs, while in a posixGroup you usually find login names. Different object classes require different handling.
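To illustrate the difference side by side (the DNs and names here are made up, except fraappmgmtt which matches the config above): a groupOfUniqueNames carries full DNs in uniqueMember, a posixGroup carries plain login names in memberUid.

```ldif
# groupOfUniqueNames: members are full DNs
dn: cn=fraappmgmtt,ou=foo,ou=people,o=myorg
objectClass: groupOfUniqueNames
cn: fraappmgmtt
uniqueMember: uid=jdoe,ou=foo,ou=people,o=myorg

# posixGroup: members are plain login names
dn: cn=examplegrp,ou=foo,ou=people,o=myorg
objectClass: posixGroup
cn: examplegrp
gidNumber: 10042
memberUid: jdoe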

Next step would be to move auth and nss functionality to sssd as well.

Posted Thu Feb 9 13:02:27 2017

Recently some of my coworkers and I experienced an issue with using the upper left touchpad button on our Dell Latitude E7470 and similar laptops (E5xxx from the current generation). Some time in January we could no longer hold down this button and select text with the touchpad. Using the left button below the touchpad still worked. This hit my coworker running Fedora and myself running Debian/stretch. So I first thought that it's likely a libinput issue (same version in Debian/stretch and Fedora, and I had recently pulled that in as an update), somehow blacklisting the upper left key because it's connected to the trackpoint. So I filed #99594 upstream. While this was not very helpful at first, and according to Peter very unlikely to be related to libinput, another coworker using Debian/jessie found this issue hit him when he upgraded the backports kernel in use from 4.8 to 4.9. That finally led to the conclusion that it's a bug in the Linux ALPS driver, which is already fixed in 4.10 and probably 4.9.6.

Until the Debian kernel team pulls in a fresh 4.9 point release I'm using 4.10-rc6 from experimental. For Debian/jessie + backports kernel users it might be more convenient to just stay at 4.8 in case this issue annoys you.

Kudos to Peter, Benjamin, TW and WW for the help in locating the origin of this issue!

Lessons learned:

  • I should've started with the painful downgrade of xorg and libinput via snapshot.d.o before opening the bug report.
  • A lot more of the touchpad related hardware support is nowadays in the kernel and not in the xorg layer. Either that was just my personal historic misunderstanding, or it was different 10 years ago.
  • There is an interesting set of slides from Benjamin related to debugging input device issues.
Posted Tue Feb 7 12:55:04 2017