

New blog post: Post-OCSP certificate revocation in the Web PKI.

With OCSP in all forms going away, I decided to look at the history and possible futures of certificate revocation in the Web PKI. I also threw in some of my own proposals to work alongside existing ones.

I think this is the most comprehensive look at certificate revocation available right now.


#security #WebPKI #LetsEncrypt #TLS #OCSP



in reply to Seirdy

if anybody else has feedback, whether it’s confusion, suggested edits, or other areas I should cover: I welcome all feedback. :akemi_16_cheerful_with_hearts:
in reply to Seirdy

"I heard you like footnotes, so we put a footnote in your footnote so you can read a footnote while you read a footnote" /lh
in reply to kittens!

i try to keep footnotes to less than 1/7 of the total word-count, excluding backlinks. footnotes of footnotes are less than 1/7 of total footnotes
in reply to Seirdy

@cybertailor so nested footnotes must be less than 1/49 of my total word-count to be acceptable.
in reply to Seirdy

I see that certificate revocation is pretty much web-centric. What would e.g. XMPP servers do besides setting CAA records and hoping their keys aren't stolen?
in reply to kittens!

@cybertailor All of this also applies to XMPP. Nothing’s stopping an XMPP client from using CRLite. But most generally have cobbled-together crypto. I’d be amazed if most non-browser-based ones even handled OCSP Must-Staple correctly.
in reply to Seirdy

Regarding ACME clients that support notBefore/notAfter, Posh-ACME also supports this via the LifetimeDays parameter.
poshac.me/docs/latest/Function…

I also wasn’t aware ZeroSSL had added support on the server side. So thanks for that.

in reply to Ryan Bolger

@rmbolger Sorry for the delay; updated to mention Posh-ACME.

Aside: I usually associate the term “Posh” with “POSIX Shell”, so the name really threw me for a loop.

Unknown parent

Seirdy

my rationale for using basic security measures as a filter is that i have to efficiently narrow down millions of domains to something I can manually check, and I might as well pick something positive.

after the “good security” filter, I’ll isolate domains with a main and h1 tag with no trackers in a “good page content” filter. Then I’ll figure out how to narrow it down further before cursory accessibility reviews and reading what people post in the Tor Browser.

in reply to Seirdy

1.5 million domains checked so far, 682 domains passed the first filter. lets goooo
in reply to Seirdy

scraping the HSTS Preload List and Internet.nl Hall of Fame saw much higher success rates. A minority of the domains passing the first filters are from the Tranco top 2M.
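
A sketch of pulling that list from its canonical source in the Chromium tree; the gitiles URL and its base64 ?format=TEXT encoding are assumptions worth double-checking, and this isn't necessarily the scraper used here:

```python
import base64
import json
import re

import requests  # pip install requests

# Assumed canonical location of the preload list in the Chromium source tree.
URL = ("https://chromium.googlesource.com/chromium/src/+/main/"
       "net/http/transport_security_state_static.json?format=TEXT")

raw = base64.b64decode(requests.get(URL, timeout=30).text).decode()
raw = re.sub(r"^\s*//.*$", "", raw, flags=re.M)  # the file carries C++-style comments
domains = [entry["name"] for entry in json.loads(raw)["entries"]]
print(f"{len(domains)} preloaded domains")
```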
in reply to Seirdy

Partway through, I decided to start filtering out Nextcloud and SearX(NG) instances. I was already filtering out Masto instances and some others. I ran a second filter to check for the existence of hyperlinks on the page to avoid dead-ends, and to ensure they don’t block Tor.

I filtered a subset of duplicates and handled a subset of redirects. I’m down to around 1.1k domains, around 350 of which are the ones that qualified from Tranco’s top 2.6M domains. Many more are from the HSTS Preload list and Internet.nl Hall of Fame. Around a couple dozen more are uniquely from my browsing history, site outlinks, old chatrooms, web directories, and other more obscure locations.

I can manually pare this down over a couple weeks but that’s too much work. Need to figure out the right set of additional filters. Maybe a “points system” for privacy, security, and accessibility features and then taking the top 250 domains with the most points.

in reply to Seirdy

Might be useful to have a dump of your rejections and reasons, for those of us who think not being in the list is a really useful symptom to investigate.
in reply to Tim Bray

@timbray Right now the filter requires TLSv1.3, a strict Content-Security-Policy header (with the exception of allowing unsafe-inline styles), no common tracking third-parties in the CSP, and not blocking Tor. Then it needs a main, h1, a, and meta viewport element.

I’ll then add a points system to cut it down to a third and manually review a few domains per day.
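
A rough sketch of that first pass in Python; the names and tracker list are guesses rather than the real pipeline, the CSP check only tests presence (not strictness), and the "allows Tor" check is omitted since it needs a Tor exit to test from:

```python
import socket
import ssl

import requests                # pip install requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Hypothetical list of common tracking third-parties to reject in a CSP.
TRACKER_HOSTS = ("google-analytics.com", "googletagmanager.com", "connect.facebook.net")

def negotiates_tls13(host: str) -> bool:
    """True if the server negotiates TLSv1.3 with default client settings."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version() == "TLSv1.3"

def passes_filter(host: str) -> bool:
    if not negotiates_tls13(host):
        return False
    resp = requests.get(f"https://{host}/", timeout=10)
    csp = resp.headers.get("Content-Security-Policy", "")
    # Presence-only check; real strictness testing would need CSP parsing.
    if not csp or any(t in csp for t in TRACKER_HOSTS):
        return False
    soup = BeautifulSoup(resp.text, "html.parser")
    return all((
        soup.find("main"),
        soup.find("h1"),
        soup.find("a"),
        soup.find("meta", attrs={"name": "viewport"}),
    ))
```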

Unknown parent

Seirdy
@tanith I started from scratch and yes you are. Via the HSTS Preload list.
in reply to Seirdy

Or I could run a subset of Axe-Core on every page and let my fans spin up.

Axe-Core is one of the few page-content checkers out there that doesn’t have a ton of false positives. Even the Nu HTML Checker (often incorrectly referred to as the HTML5 Validator; HTML5 can’t be validated) has a ton of them. But some of Axe’s errors, like dupe link names, are way too trivial compared to easy-to-spot manual-only checks like “this h1 is used for the site name but it should be used for the page title”.
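
One way to batch-run Axe-Core from a script, assuming the axe-selenium-python wrapper (not necessarily the tooling used here):

```python
from selenium import webdriver       # pip install selenium (plus geckodriver)
from axe_selenium_python import Axe  # pip install axe-selenium-python

driver = webdriver.Firefox()
for url in ["https://example.com/"]:  # placeholder for the filtered domain list
    driver.get(url)
    axe = Axe(driver)
    axe.inject()         # load the axe-core script into the page
    results = axe.run()  # run the checks; a rule subset can be passed as options
    print(url, len(results["violations"]), "violations")
driver.quit()
```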

Unknown parent

Seirdy
@khm a main element can have many article elements or just one. every post in this thread is an article element. every reply i list to one of my blog posts is also an article element. when i include an xkcd comic in a blog post complete with title, caption, and transcript, i use an article in an article.
@khm
in reply to Seirdy

school me on this main element. I use article at the moment and this is the first I'm hearing of main. otherwise I think sciops.net meets these requirements... except not only do I not use hsts, I expose content over http for accessibility reasons
Unknown parent

Seirdy

@khm its existence hearkens back to the “standard” page layout most settled on early in the Web’s history: a header, a main, maybe a couple aside elements on the side, and a footer. A “skip to content” link, if it exists, should typically skip to the first non-decorative thing in main.

Viewing your post on the remote instance, I imagine that main may begin just before your profile banner.

@khm
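
A bare-bones sketch of that layout, purely illustrative:

```html
<!-- Illustrative only: the layout described above, with a skip-link
     pointing at the first non-decorative content inside main. -->
<body>
  <a href="#content">Skip to content</a>
  <header>site name, nav …</header>
  <main id="content">page content …</main>
  <aside>sidebar …</aside>
  <footer>…</footer>
</body>
```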
in reply to Seirdy

my activitypub software (snac2) does not use main. I'm willing to open a pull request to fix this if I can grasp the intent properly...

one main tag for the feed body, with each post wrapped in article tags?
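
That structure matches the description above; a minimal sketch with placeholder content:

```html
<!-- Placeholder sketch: one main for the feed, one article per post,
     and a nested article for an embedded work such as a captioned comic. -->
<main>
  <article>
    <p>First post …</p>
    <article>
      <img src="comic.png" alt="Transcript of the comic …">
      <p>Title and caption …</p>
    </article>
  </article>
  <article>
    <p>A reply …</p>
  </article>
</main>
```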

in reply to Seirdy

I ran an aggressive filter on the sites, but scrapped it because I had already seen too many of the personal sites that passed.

that filter mandated multiple of the following:

  • CAA record paired with DNSSEC
  • OCSP Stapling
  • COEP + COOP headers
  • No third party content in the CSP
  • An onion-location header.

and all of the following:

  • Not enabling the insecure XSS Auditor via the X-XSS-Protection header: either leaving the header out or explicitly disabling it.
  • Disabling MIME sniffing with X-Content-Type-Options.
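
A rough sketch of those checks; DNSSEC validation, OCSP stapling, and the CSP third-party test are omitted since they need more machinery, and none of these names come from the actual pipeline:

```python
import dns.resolver  # pip install dnspython
import requests      # pip install requests

def optional_points(host: str) -> int:
    """One point per checkable item from the 'multiple of the following' list."""
    points = 0
    try:
        dns.resolver.resolve(host, "CAA")
        points += 1  # CAA record exists (whether DNSSEC signs it isn't verified)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        pass
    headers = requests.get(f"https://{host}/", timeout=10).headers
    if "Cross-Origin-Embedder-Policy" in headers and "Cross-Origin-Opener-Policy" in headers:
        points += 1  # COEP + COOP
    if "Onion-Location" in headers:
        points += 1  # advertises an onion service
    return points

def passes_mandatory(host: str) -> bool:
    headers = requests.get(f"https://{host}/", timeout=10).headers
    # XSS Auditor: header must be absent or explicitly "0".
    xss = headers.get("X-XSS-Protection", "0").split(";")[0].strip()
    # MIME sniffing must be disabled.
    nosniff = headers.get("X-Content-Type-Options", "").strip().lower() == "nosniff"
    return xss == "0" and nosniff
```

Ranking domains by a score like optional_points and keeping the top few hundred would be one way to implement the points system floated earlier.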

Instead I’ll just manually comb through 100-200 domains a day in the Tor Browser to trim my way down to 500-600 sites or so, then figure out how to proceed. I’ll throw out dead ends, login pages, cryptocurrency, very corporate pages, pages for large organizations without much interesting reading material, LLM-related pages, and anything that doesn’t work in the Tor Browser’s “safest” mode (no media, JS, or a bunch of other features).

When I’m down to a few hundred I’ll probably run a mini version of Axe, decide on an actual points system, and spend more than a few seconds on each site looking for original writing, projects, and/or art and reviewing accessibility.

in reply to Seirdy

Last time I tried this, in October 2022, I sent accessibility feedback to a dozen themes and sites. I resumed this project now because some common ones finally implemented that feedback.