Phil Pennock
Semi-Recovering Grumpy Troll


Written just for +Yonatan Zunger because I read his tweet and just laughed.

#!/usr/bin/env python3

import sys
import time

def foo():
    print(time.strftime("%H:%M:%S"))  # line 7: the jump target
    time.sleep(1)                     # line 8
    return                            # line 9: never actually reached

def tracefunc(frame, event, arg, indent=[0]):
    # Whenever execution in a traced frame reaches line 9 or beyond,
    # rewind it to line 7: an infinite loop with no loop statement.
    if event == 'line' and frame.f_lineno >= 9:
        frame.f_lineno = 7
    return tracefunc

sys.settrace(tracefunc)
foo()  # prints the time once a second, forever

# (foo's body and the two driver lines above were lost in the export;
# reconstructed so that lines 7-9 line up with tracefunc's jump targets.)


I get notifications from G+, and it's "5 posts from X which you've missed", despite having read two of them on another device. So I try to click on the others, only to get taken to the main index, because G+'s own internal links to see posts don't work.

And thus I am reminded of why I gave up on G+ as a working system. (Which was actually "not all comments loading on mobile, so I repeatedly looked like an idiot for missing what someone else had already said", but falls into the same category of "basic functionality for a social network not working right").

PSA: before getting angry at government IT maintenance, there's something you should know. Most technology deployments suck.

If you're going to get angry at everything painted as a dastardly plot by the evil GOP, then you're playing into the hands of those who like to influence mobs and create fake enemies. When something isn't actually a plot, then you're propagating fake news and undermining your own legitimacy. Please stop. There are enough real problems being caused by the current POTUS without drowning them out of the news cycle with non-problems.

The IT systems were broken when they launched. With a lot of hard work by talented people, lipstick was put on the pig and the result was a system which looks like it runs well, and mostly does, and gets the job done. But it was not designed as a cloud "always on" utility service. It was designed as a traditional IT service, with weekly maintenances.

This is not a new change, this is not a plot by the GOP. This is just somewhat sucky technology needing weekly maintenance. This used to be normal. That it's not, for the big sites you care about, is part of how they were able to survive to become the big sites you care about. What we have is the gritty reality of a technology stack which needs weekly maintenance.

Best operational practice around scheduled maintenances is to let affected people know, so that they can plan around it. The government has let people know.

You can plan around it, or you can let yourself be manipulated into outrage. As food for thought: this is why some in government laugh bitterly at the idea of "open government". The ideal of oversight falls flat when someone with an axe to grind manages to twist scheduled maintenance windows into being signs of malicious plots or utter incompetence.

It's neither. Except perhaps that procurement practices were behind the curve on specifying availability requirements as cultural expectations shifted towards 24x7 availability of websites. Any incompetence which might have existed was around the original design of the system, not by the people maintaining it now.

[repost of something I put up on FB]

DNSSEC deployment status:

perl -lane < ROOT-ZONE 'next if $F[0] eq "."; $z{$F[0]} = 1 if $F[3] eq "NS"; $d{$F[0]} = 1 if $F[3] eq "DS"; END { print "Zones: @{[scalar keys %z]}\tDNSSEC: @{[scalar keys %d]}" }'

Zones: 1544 DNSSEC: 1396

148 root-child zones (TLDs) are missing DNSSEC delegation (DS) records in the root zone; 1396 zones have them. Fewer than 10% of top-level domains are still without DNSSEC support in the root. Whether the registrars are on the ball enough to allow registration is a different matter.
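For anyone who doesn't read one-line perl, the same count can be sketched in Python. A minimal sketch: the assumed field layout is the root zone's flat master-file format (name, TTL, class, type, rdata), and the sample records below are illustrative, not real data.

```python
def count_dnssec(lines):
    """Count delegated zones (unique NS owner names) and signed
    delegations (unique DS owner names) in root-zone records."""
    zones, signed = set(), set()
    for line in lines:
        fields = line.split()
        if len(fields) < 4 or fields[0] == ".":
            continue  # skip short lines and the root's own records
        if fields[3] == "NS":
            zones.add(fields[0])
        elif fields[3] == "DS":
            signed.add(fields[0])
    return len(zones), len(signed)

# Tiny illustrative sample, not the real root zone:
sample = [
    ".\t86400\tIN\tNS\ta.root-servers.net.",
    "com.\t172800\tIN\tNS\ta.gtld-servers.net.",
    "com.\t86400\tIN\tDS\t30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF",
    "ae.\t172800\tIN\tNS\tns1.aedns.ae.",
]
print(count_dnssec(sample))  # → (2, 1)
```

Run it over the full file with `count_dnssec(open("ROOT-ZONE"))` to reproduce the 1544/1396 split above.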

TLD Zones still missing DNSSEC support:

> ae. aero. ai. al. ao. aq. as. ba. bb. bd. bf. bh. bi. bj. bn. bo. bs. bt. bv. cd. cf. cg. ci. ck. cm. cu. cv. cw. cy. dj. dm. do. dz. ec. eg. er. et. fj. fk. fm. ga. gb. ge. gf. gg. gh. gm. gp. gq. gt. gu. gw. gy. hm. ht. im. iq. ir. it. je. jm. jo. kh. km. kn. kp. kw. kz. ls. ly. mc. md. mh. mk. ml. mo. mp. mq. mr. ms. mt. mu. mv. mw. mz. ne. ng. ni. np. nr. om. pa. pf. pg. ph. pk. pn. ps. py. qa. rs. rw. sd. sk. sl. sm. so. sr. st. sv. sz. tc. td. tg. tj. tk. to. tr. uz. va. vc. ve. vg. vi. xn--54b7fta0cc. xn--80ao21a. xn--90a3ac. xn--90ae. xn--90ais. xn--d1alf. xn--fzc2c9e2c. xn--j1amh. xn--lgbbat1ad8j. xn--mgb9awbf. xn--mgba3a4f16a. xn--mgbaam7a8h. xn--mgbayh7gpa. xn--mgbc0a9azcg. xn--mgbpl2fh. xn--mgbtx2b. xn--mix891f. xn--node. xn--qxam. xn--wgbl6a. xn--xkc2al3hye2a. xn--ygbi2ammx. ye. zw.

Irony: Gotip "Executable is not supported on [..] OpenBSD (unless procfs is mounted.)"

Wasn't OpenBSD the first to guarantee `__progname` in a process's symbol table, via crt0 linkage? Okay, Golang isn't using crt0, but the same mechanism crt0 uses should be available.

I don't care enough to file bugs and submit patches; I just shake my head at how things change. OpenBSD can be awkward. E.g., they're a security-focused OS and they rewrote the DNS stub client routines; the result, ASR, is very clean and beautiful, but doesn't support EDNS0 and so doesn't support DNSSEC/AD. It seems unmaintained and so stale.

I think even in its current state, ASR is probably the right choice though: most tools don't care about DNSSEC AD-or-not, and if you do care, just put a filtering validating resolver in the resolution path. No need to load down every DNS-using tool with extra logic for the very few apps which care; those apps can use another stub library. It's just annoying.

Latest version of Chrome seems to have removed the ability to see a site's HTTPS cert by clicking on the padlock. Not just "buried under developer tools, click a few more buttons" but no link to even that.


Funky dnsviz graph to a TLSA record via a DNAME into a different zone. The red exclamation marks are because of UDP timeouts for a couple of servers and not significant unless they persist.

The TLSA record of interest is for `` (underscore 25 dot underscore tcp dot ...) which DNAME delegates at the _tcp level into because that machine is the MX host for
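As a sketch of the shape being described (all names and the digest below are hypothetical placeholders, not the actual zones involved):

```
; In the mail domain's zone: the DNAME redirects everything below _tcp
; into the MX host's zone, so _25._tcp.example.org rewrites to
; _25._tcp.mx1.example.net.
_tcp.example.org.          IN DNAME _tcp.mx1.example.net.

; In the target zone: the TLSA record for SMTP (port 25), here
; DANE-EE(3) / SPKI(1) / SHA-256(1) with a placeholder digest.
_25._tcp.mx1.example.net.  IN TLSA  3 1 1 8cb0fc6c527506a053f4f14c8464bebbd6dede2738d11468dd953d7d6a3021f1
```

This keeps one copy of the TLSA data in the zone the MX host actually lives in, while every domain that uses it as an MX just points a DNAME at it.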

Tagging +Exim

xpost to G+ for obvious reasons; enjoy the hilarity and schadenfreude.

Google Maps no longer finds my street address, instead picking something in a different ZIP, half an hour away. Tonight's food order has gone astray as a result.

When I go to my "Home" location, it's showing up as a range, in descending order, and has moved the Star to be in the middle of this block. The street is right. But just searching for the street, in this ZIP, gives the street miles away in another ZIP with the different ZIP being buried. I absolutely do not blame the delivery driver for not catching that.

I just gave the driver my neighbor's address, on a different cross-street (I'm on a corner lot) because that works and the driver is now on the way.

Life when Google Maps can't find your home ... this is going to get interesting.

(and yes, I filed it as a bug in Maps)

Chrome / TLS / PKI / Certificate Transparency question, seeking informed comments/answers from the brain-trust here since I can't think of an appropriate forum where this is not going to get shouted out as off-topic.

Ryan Sleevi announced in October 2016 that certificates issued in October 2017 or later would be required to be listed in public Certificate Transparency logs to be considered valid in Chrome.

CA/Browser Forum is explicitly about public CAs, as is the IETF group, etc.; the OSes I'm most familiar with (*nix, macOS) do not readily distinguish in their trust stores between "public CA" and "private CA". Also, public CAs are not allowed to issue certs for hostnames resolving to RFC 1918 private address space (for which the latest CA/Browser guidance I can find explicitly lists private CAs as a good solution).

Thus: as someone running a private CA for securing local-network resources on non-public IPs, what is the correct approach to ensure that these continue to work from October 2017 onwards? Is there, or will there be, any way to mark a CA anchor as "trusted to not need CT"?

For now, I've gone with this, on macOS:
defaults write CertificateTransparencyEnforcementDisabledForUrls -array .lan

After clicking "Reload policies" in chrome://policy I see that array show up. I somewhat assume that it works in the obvious way. Is this the "correct" way, to enumerate all domains which are used in such a manner? Because this approach makes me distinctly uncomfortable.

If I have a domain, "", with sites like "" using public certs, those should be in CT. If the site "" exists and is used for a small group on less-routed IP space, I might use an internal CA instead (a couple of other scenarios come to mind too). And only that CA, not any other public CA which might be issuing certs. It should be "CT/public or this trusted other CA", not exposing those domains to unaudited certs from any other CA, public or private, at a time when the expectation of issuance integrity has shifted to stronger assumptions.

(It's not like most CAs are honoring CAA records in DNS at this time, after all, so even the public ones could be tricked).

So: is it "enumerate the domains to weaken trust for, better isolate public site domains from internal site domains"? Is there another approach?

Thanks for any constructive comments offered.