Sometimes you have to look at the contents of x509 certificate chains. Usually one finds them PEM encoded and concatenated in a text file. Since the openssl x509 subcommand only decodes the first certificate it finds in a file, I did something like this:
csplit -z -f 'cert' fullchain.pem '/-----BEGIN CERTIFICATE-----/' '{*}'
for x in cert*; do openssl x509 -in "$x" -noout -text; done
Apparently that's the "wrong" way, and the more appropriate way is using the openssl crl2pkcs7 subcommand, even though we do not try to parse a revocation list here:
openssl crl2pkcs7 -nocrl -certfile fullchain.pem | \
openssl pkcs7 -print_certs -noout
Learned that one in a webinar presented by Victor Dukhovni. If you're new to the topic, it's worth watching.
Not entirely sure how people use fluxcd, but I guess most people have something like a flux-system flux kustomization as the root to add more flux kustomizations to their kubernetes cluster. Here all of that is living in a monorepo, and as we're all humans, people figure out different ways to break it, which brings the reconciliation of the flux controllers down. Thus we set out to do some pre-flight validations.
Note 1: We do not use flux variable substitutions for those root kustomizations, so if you use those, you have to put additional work into the validation and pipe things through flux envsubst.
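If you do, a minimal sketch of such a check could look like this (assuming a flux CLI recent enough to provide the envsubst subcommand, and that the variables to substitute are exported in the environment):

# hypothetical pre-flight build with flux variable substitution
# cluster_name is a placeholder for whatever variables you substitute
export cluster_name=test
kustomize build . | flux envsubst --strict > /dev/null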
First Iteration: Just Run kustomize Like Flux Would Do It
With a folder structure where we have a clusters folder with subfolders per cluster, we just run a for loop over all of them:
for CLUSTER in ${CLUSTERS}; do
    pushd "clusters/${CLUSTER}"
    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi
    popd
done
Second Iteration: Make Sure Our Workload Subfolders Have a kustomization.yaml
Next someone figured out that you can delete some yaml files from a workload subfolder, including the kustomization.yaml, but not all of them. That leaves behind a resource definition which lacks some other referenced objects, but is still happily included into the root kustomization by kustomize create and flux, which of course did not work. Thus we started to catch that as well in our growing for loop:
for CLUSTER in ${CLUSTERS}; do
    pushd "clusters/${CLUSTER}"
    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi
    # validate if we always have a kustomization file in folders with yaml files
    for CLFOLDER in $(find . -type d); do
        test -f "${CLFOLDER}/kustomization.yaml" && continue
        test -f "${CLFOLDER}/kustomization.yml" && continue
        if [[ $(find "${CLFOLDER}" -maxdepth 1 \( -name '*.yaml' -o -name '*.yml' \) -type f | wc -l) -gt 0 ]]; then
            echo "Error: Cluster ${CLUSTER} folder ${CLFOLDER} lacks a kustomization.yaml"
        fi
    done
    popd
done
Note 2: I shortened those snippets to the core parts. In our case some things are a bit specific to how we implemented the execution of those checks in GitHub action workflows. I hope that's enough to convey the idea of what to check for.
I naively provisioned an HTTPS record at Google CloudDNS like this via terraform:
resource "google_dns_record_set" "testv6" {
name = "testv6.some-domain.example."
managed_zone = "some-domain-example"
type = "HTTPS"
ttl = 3600
rrdatas = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:DB8::1\""]
}
This results in a permanent diff because the Google CloudDNS API seems to parse the record content and stores the ipv6hint expanded (removing the :: notation) and in all lowercase as 2001:db8:0:0:0:0:0:1. Thus, to fix the permanent diff, we have to use it like this:
resource "google_dns_record_set" "testv6" {
name = "testv6.some-domain.example."
managed_zone = "some-domain-example"
type = "HTTPS"
ttl = 3600
rrdatas = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:db8:0:0:0:0:0:1\""]
}
Guess I should be glad that they already support HTTPS records natively, and not bicker too much about the implementation details.
Just a "warn your brothers" for people foolish enough to use GKE and run on the Rapid release channel.
Update from version 1.31.1-gke.1146000 to 1.31.1-gke.1678000 is causing trouble whenever NetworkPolicy resources and a readinessProbe (or health check) are configured. As a workaround we started to remove the NetworkPolicy resources, e.g. when kustomize is involved, with a patch like this:
- patch: |-
    $patch: delete
    apiVersion: "networking.k8s.io/v1"
    kind: NetworkPolicy
    metadata:
      name: dummy
  target:
    kind: NetworkPolicy
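Where kustomize is not in the picture, the manual equivalent would be something along these lines (the namespace is a placeholder):

# remove all NetworkPolicy resources in a given namespace
kubectl delete networkpolicy --all -n some-namespace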
We tried to update to the latest version - right now 1.31.1-gke.2008000 - which did not change anything. Behaviour is pretty much erratic: sometimes it still works and sometimes the traffic is denied. It also seems that there is some relevant fix in 1.31.1-gke.1678000, because that is now the oldest release of 1.31.1 I can find in the regular and rapid release channels. The last known good version 1.31.1-gke.1146000 is not available to try a downgrade.
I'm in the unlucky position to have to deal with GitHub. Thus I have a terraform module in a project which deals with populating organization secrets in our GitHub organization, and with assigning repositories access to those secrets.
Since the GitHub terraform provider internally works mostly with repository IDs, not slugs (the human readable organization/repo format), we have to do some mapping in between. In my case it looks like this:
# tfvars input for the module
org_secrets = {
  "SECRET_A" = {
    repos = [
      "infra-foo",
      "infra-baz",
      "deployment-foobar",
    ]
  }
  "SECRET_B" = {
    repos = [
      "job-abc",
      "job-xyz",
    ]
  }
}
# Module Code
/*
Limitation: The GH search API which is queried returns at most 1000
results. Thus whenever we reach that limit this approach will no longer work.
The query is also intentionally limited to internal repositories right now.
*/
data "github_repositories" "repos" {
  query           = "org:myorg archived:false -is:public -is:private"
  include_repo_id = true
}

/*
The properties of the github_repositories.repos data source queried
above contain only lists. Thus we have to manually establish a mapping
between the repository names we need as a lookup key later on, and the
repository IDs we got in another list from the search query above.
*/
locals {
  # Assemble the set of repository names we need repo_ids for
  repos = toset(flatten([for v in var.org_secrets : v.repos]))

  # Walk through all names in the query result list and check
  # if they're also in our repo set. If yes, add the repo name -> id
  # mapping to our resulting map.
  repos_and_ids = {
    for i, v in data.github_repositories.repos.names : v => data.github_repositories.repos.repo_ids[i]
    if contains(local.repos, v)
  }
}
resource "github_actions_organization_secret" "org_secrets" {
for_each = var.org_secrets
secret_name = each.key
visibility = "selected"
# the logic how the secret value is sourced is omitted here
plaintext_value = data.xxx
selected_repository_ids = [
for r in each.value.repos : local.repos_and_ids[r]
if can(local.repos_and_ids[r])
]
}
Now if we do something bad, e.g. delete a repository and forget to remove it from the configuration for the module, we receive some error message that a (numeric) repository ID could not be found. That is pretty much useless for the average user, because you have to figure out which repository is still in the configuration list but got deleted recently.
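As a quick aid, such a numeric ID can be mapped back to a repository with a plain REST call, e.g. via the gh cli (the ID below is a placeholder):

# resolve a numeric repository ID to its slug; a deleted repo returns a 404
gh api repositories/123456789 --jq '.full_name'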
Luckily, since version 1.2, terraform supports precondition checks, which we can use in an output block to provide the information which repository is missing. What we need is the set of missing repositories and the validation condition:
locals {
  # Debug facility in combination with an output and precondition check.
  # There we can report which repository we still have in our configuration
  # but no longer get as a result from the data provider query.
  missing_repos = setsubtract(local.repos, data.github_repositories.repos.names)
}

# Debug facility - if we can not find every repository in our
# search query result, report those repos as an error
output "missing_repos" {
  value = local.missing_repos
  precondition {
    condition     = length(local.missing_repos) == 0
    error_message = format("Repos in config missing from resultset: %v", local.missing_repos)
  }
}
Now you only have to be aware that GitHub is GitHub, that the TF provider has open bugs but is not supported by GitHub, and that you will encounter inconsistent results. But it works, even if your terraform apply fails that way.
Learned a few things about xdg and mimetype registration in the last week that could be helpful to have condensed in a single place.
No Need to Ship a Mailcap Mime File
If you already ship a .desktop file (that is what ends up in /usr/share/applications/) which has a MimeType declared, there is no need to also ship a mailcap file (that is what ends up in /usr/lib/mime/packages/). Some triggers will do the conversion work for you. See also Debian Policy 4.9.
Reverse DNS Naming Convention for .desktop Files
Seems to be a closely guarded secret, maybe mainly known inside the Gnome world, but it's in the spec. It's also not very widely known inside Debian, if I take my local system as a not very representative sample.
Your hicolor Theme App Icon can be a Mime Type Icon as Well
In case you didn't know: the hicolor icon theme is the default fallback theme. Many of us already install application icons e.g. in /usr/share/icons/hicolor/48x48/apps/, which is used in conjunction with the Icon field in the .desktop file to locate the application icon. Now the next step, and there it seems quite a few of us miss out, is to create a symlink to also provide a mime type icon, so it's displayed in graphical file managers for the application data files. The schema here is simple: take the MimeType, e.g. application/x-vym, replace the / with a -, and use that as the file name in e.g. /usr/share/icons/hicolor/48x48/mimetypes/. In the vym case that is /usr/share/icons/hicolor/48x48/mimetypes/application-x-vym.png.
If you have one, use a scalable .svg file instead of a .png.
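For the vym example, a minimal sketch of that symlink, assuming the application icon is installed as vym.png (in a Debian package you would rather declare this in a debian/*.links file):

# reuse the 48x48 app icon as the mime type icon for application/x-vym
ln -s ../apps/vym.png /usr/share/icons/hicolor/48x48/mimetypes/application-x-vym.png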
This seems to be an area where Debian lacks a bit of tooling to automatically convert application icons to all the different sizes and install them in all the appropriate places. What is already there is a trigger to run gtk-update-icon-cache when you install new icons into one of the icon theme folders, so they're picked up.
No Priority or Order in .desktop Files
Likely something that happens on all my fresh installations: Libreoffice is installed and xdg-open starts to open pdf files with Libreoffice instead of evince. Now I have to figure out again how to run xdg-mime default org.gnome.Evince.desktop application/pdf to change that (at least for my user). Background here is that the desktop file spec explicitly mandates "Priority for applications is handled external to the .desktop files." That's why we got, in addition to all of that, mimeapps.list files.
And now, after running the xdg-mime command from above, we have a ~/.config/mimeapps.list defining
[Default Applications]
application/pdf=org.gnome.Evince.desktop
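The counterpart to verify the current association is the query subcommand:

# show the current default handler for pdf files
xdg-mime query default application/pdf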
Debian as a whole seems to be not very keen on shipping something like a sensible default mimeapps.list outside of desktop environment specific ones. A quick search gave me just:
$ apt-file search mimeapps.list
cinnamon-desktop-data: /usr/share/applications/x-cinnamon-mimeapps.list
gdm3: /usr/share/gdm/greeter/applications/mimeapps.list
gnome-session-common: /usr/share/applications/gnome-mimeapps.list
plasma-workspace: /usr/share/applications/kde-mimeapps.list
sxmo-utils: /usr/share/applications/mimeapps.list
sxmo-utils: /usr/share/sxmo/xdg/mimeapps.list
While it's a bit annoying to run into that pdf vs. Libreoffice thing every now and then, it's maybe better to not have long controversial threads about the default pdf viewer, like the ones we already had about the default MTA choices. And while we're at it: everyone using Libreoffice should give a virtual hug to rene@ for taming that beast since 2010, and OpenOffice.org before that.
Had a need for a mindmapping application and found view your mind in the archive. Works, but the version is a bit rusty. Sadly my Debian packaging skills are a bit rusty as well, especially when it comes to bigger GUI applications. Thus I spent a good chunk of yesterday afternoon ripping out cdbs and packaging the last source release on github, which is right now 2.9.22 (the release branch already has 2.9.27, still sorting that out).
Git repository and an amd64 build of the current state. It still deserves some additional love, e.g. creating a -common package for arch indep content.
Proposed a few changes upstream:
- spelling fixes
- .desktop file improvements
- small adjustments for the documentation installation via cmake
Also pinged pollux@, who uploaded vym up to 2019, to ask if he'd be fine with me picking it up. If someone else is interested, I'm also fine to put it up on salsa in the general "Debian" group for shared maintenance. I guess I will use it in the future, but time is still a scarce resource for all of us.
I recently came across an x509 P(rivate)KI Root Certificate which had a pathLen constraint set on the (self signed) Root Certificate. Since that is not commonly seen, I looked around a bit to get a better understanding of how the pathLen basic constraint should be used.
Primary source is RFC 5280, section 4.2.1.9:
The pathLenConstraint field is meaningful only if the cA boolean is asserted and the key usage extension, if present, asserts the keyCertSign bit (Section 4.2.1.3). In this case, it gives the maximum number of non-self-issued intermediate certificates that may follow this certificate in a valid certification path
Since the Root is always self-issued, it doesn't count towards the limit, and since it's the last certificate (or the first, depending on how you count) in a chain, it's pretty much pointless to configure a pathLen constraint directly on a Root Certificate.
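To check what a given certificate declares, one can print just the relevant extension (assuming an openssl recent enough to support the -ext option; otherwise grep the full -text output):

# print only the basic constraints, e.g. "CA:TRUE, pathlen:0"
openssl x509 -in root.pem -noout -ext basicConstraints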
Another relevant resource are the Baseline Requirements of the CA/Browser Forum (currently v2.0.2). Section 7.1.2.1.4 "Root CA Basic Constraints" describes it as NOT RECOMMENDED for a Root CA.
Last but not least there is the awesome x509 Limbo project, which has a section for validating pathLen constraints. Since the RFC 5280 based assumption is that self signed certs do not count, they do not check a case with such a constraint on the Root itself, and what the implementations do about it. So the assumption right now is that they properly ignore it.
Summary: It's pointless to set the pathLen constraint on the Root Certificate, so just don't do it.
Writing it down before I forget about it again:
for x in $(gh api graphql --paginate -f query='query($endCursor:String) {
    organization(login:"myorg") {
      repositories(first: 100, after: $endCursor, isArchived:false) {
        pageInfo {
          hasNextPage
          endCursor
        }
        nodes {
          name
        }
      }
    }
  }' --jq '.data.organization.repositories.nodes[].name'); do
  secrets=$(gh secret list --json name --jq '.[].name' -R "myorg/${x}" | tr '\n' ',')
  if ! [ -z "${secrets}" ]; then
    echo "${x},${secrets}"
  fi
done
This requests a list of all non-archived repositories in a GitHub org and queries the repository secrets for each of them. If we find some, we output the repo name and the secrets in a comma separated list. Not real CSV, but good enough for further processing. I have to admit it's kinda beautiful what you can do with the gh cli by now. Sadly it seems the secrets are not yet available via GraphQL (or I missed it in the docs), so I just use the gh cli to do the REST calls.
I stick to some very archaic workflows, e.g. to connect to some corp VPN I just run sudo vpnc-connect and later on sudo vpnc-disconnect. In the past that also managed to restore my resolv.conf; currently it doesn't.
According to a colleague that's also the case for Ubuntu.
Taking a step back, the sane way would be to use the NetworkManager vpnc plugin, but that does not work with this specific case, because we use uncool VPN tech which requires the Enable weak authentication setting for vpnc. There is a feature request open for that one at https://gitlab.gnome.org/GNOME/NetworkManager-vpnc/-/issues/11
Taking another step back, I thought that it shouldn't be that hard to add some checkbox, a boolean, and render out another config flag or line in a config file. This mix of XML and C is not as intuitive as I thought. So let's quickly look elsewhere.
What happens is that the backup files in /var/run/vpnc/ are created by the vpnc-scripts script called vpnc-script, but not moved back, because it adds some pid as a suffix, and the pid is not the final pid of the vpnc process. Basically it can not find the backup when it tries to restore it. So I decided to replace the pid guessing code with a suffix made up of the gateway IP and the tun interface name. No idea if that is stable in all circumstances (someone with a vpn name DNS RR?) or with several connections to different gateways. But good enough for myself, so here is my patch:
vpnc-scripts [master]$ cat debian/patches/replace-pid-detection
Index: vpnc-scripts/vpnc-script
===================================================================
--- vpnc-scripts.orig/vpnc-script
+++ vpnc-scripts/vpnc-script
@@ -91,21 +91,15 @@ OS="`uname -s`"
HOOKS_DIR=/etc/vpnc
-# Use the PID of the controlling process (vpnc or OpenConnect) to
-# uniquely identify this VPN connection. Normally, the parent process
-# is a shell, and the grandparent's PID is the relevant one.
-# OpenConnect v9.0+ provides VPNPID, so we don't need to determine it.
-if [ -z "$VPNPID" ]; then
- VPNPID=$PPID
- PCMD=`ps -c -o cmd= -p $PPID`
- case "$PCMD" in
- *sh) VPNPID=`ps -o ppid= -p $PPID` ;;
- esac
+# This whole script is called twice via vpnc-connect. On the first run
+# the variables are empty. Catch that and move on when they're there.
+if [ -n "$VPNGATEWAY" ]; then
+ BACKUPID="${VPNGATEWAY}_${TUNDEV}"
+ DEFAULT_ROUTE_FILE=/var/run/vpnc/defaultroute.${BACKUPID}
+ DEFAULT_ROUTE_FILE_IPV6=/var/run/vpnc/defaultroute_ipv6.${BACKUPID}
+ RESOLV_CONF_BACKUP=/var/run/vpnc/resolv.conf-backup.${BACKUPID}
fi
-DEFAULT_ROUTE_FILE=/var/run/vpnc/defaultroute.${VPNPID}
-DEFAULT_ROUTE_FILE_IPV6=/var/run/vpnc/defaultroute_ipv6.${VPNPID}
-RESOLV_CONF_BACKUP=/var/run/vpnc/resolv.conf-backup.${VPNPID}
SCRIPTNAME=`basename $0`
# some systems, eg. Darwin & FreeBSD, prune /var/run on boot
Or rolled into a debian package at https://sven.stormbind.net/debian/vpnc-scripts/
The colleague decided to stick to NetworkManager, moved the vpnc binary aside, and added a wrapper which invokes vpnc with --enable-weak-authentication. The beauty is: all of this will break on updates, so at some point someone has to understand GTK4 to fix the NetworkManager plugin for good.
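As a minimal sketch of that wrapper idea (the paths and the name of the moved binary are assumptions):

#!/bin/sh
# hypothetical wrapper installed as /usr/sbin/vpnc,
# with the real binary moved aside to /usr/sbin/vpnc.real
exec /usr/sbin/vpnc.real --enable-weak-authentication "$@"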