

New blog post: Post-OCSP certificate revocation in the Web PKI.

With OCSP in all forms going away, I decided to look at the history and possible futures of certificate revocation in the Web PKI. I also threw in some of my own proposals to work alongside existing ones.

I think this is the most comprehensive look at certificate revocation out there right now.


#security #WebPKI #LetsEncrypt #TLS #OCSP



in reply to Seirdy

With the first extension, an attacker who triggers a misissuance would compromise it for a few days or hours months.


typo (emphasis mine)

in reply to Seirdy

if anybody else has feedback, whether it’s confusion, suggested edits, or other areas I should cover: I welcome all feedback. :akemi_16_cheerful_with_hearts:
in reply to Seirdy

"I heard you like footnotes, so we put a footnote in your footnote so you can read a footnote while you read a footnote" /lh
in reply to kbity...

i try to keep footnotes to less than 1/7 the total word-count, excluding backlinks. footnotes of footnotes are less than 1/7 total footnotes
in reply to Seirdy

@cybertailor so nested footnotes must be less than 1/49 my total word-count to be acceptable.
in reply to Seirdy

I see that certificate revocation is pretty much web-centric. What would e.g. XMPP servers do besides setting CAA records and hoping their keys aren't stolen?
in reply to kbity...

@cybertailor All of this also applies to XMPP. Nothing’s stopping an XMPP client from using CRLite. But most XMPP clients generally have cobbled-together crypto; I’d be amazed if most non-browser-based ones even handled OCSP Must-Staple correctly.
in reply to Seirdy

Regarding ACME clients that support notBefore/notAfter, Posh-ACME also supports this via the LifetimeDays parameter.
poshac.me/docs/latest/Function…

I also wasn’t aware ZeroSSL had added support on the server side. So thanks for that.

in reply to Ryan Bolger

@rmbolger Sorry for the delay; updated to mention Posh-ACME.

Aside: I usually associate the term “Posh” with “POSIX Shell”, so the name really threw me for a loop.

Unknown parent

Seirdy

my rationale for using basic security measures as a filter is that i have to efficiently narrow down millions of domains to something I can manually check, and I might as well pick something positive.

after the “good security” filter, I’ll isolate domains with a main and h1 tag with no trackers in a “good page content” filter. Then I’ll figure out how to narrow it down further before cursory accessibility reviews and reading what people post in the Tor Browser.

in reply to Seirdy

1.5 million domains checked so far, 682 domains passed the first filter. lets goooo
in reply to Seirdy

scraping the HSTS Preload List and Internet.nl Hall of Fame saw much higher success rates. A minority of the domains passing the first filters are from the Tranco top 2M.
in reply to Seirdy

Partway through, I decided to start filtering out Nextcloud and Searx(Ng) instances. I was already filtering out Masto instances and some others. I ran a second filter to check for the existence of hyperlinks on the page to avoid dead-ends, and to ensure they don’t block Tor.

I filtered a subset of duplicates and handled a subset of redirects. I’m down to around 1.1k domains, around 350 of which are the ones that qualified from Tranco’s top 2.6M domains. Many more are from the HSTS Preload list and Internet.nl Hall of Fame. Around a couple dozen more are uniquely from my browsing history, site outlinks, old chatrooms, web directories, and other more obscure locations.

I can manually pare this down over a couple weeks but that’s too much work. Need to figure out the right set of additional filters. Maybe a “points system” for privacy, security, and accessibility features and then taking the top 250 domains with the most points.
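
For illustration, a minimal sketch of what such a points system could look like; the criterion names and weights below are hypothetical placeholders, not an actual rubric:

```python
# Hypothetical points system: score each surviving domain on privacy,
# security, and accessibility signals, then keep the highest scorers.
# The criterion names and weights are placeholders.
def score(checks: dict[str, bool]) -> int:
    weights = {
        "onion_location": 2,      # privacy signals
        "no_third_parties": 2,
        "coep_coop": 1,           # security signals
        "caa_with_dnssec": 1,
        "meta_viewport": 1,       # accessibility signals
        "has_main_and_h1": 2,
    }
    return sum(weight for name, weight in weights.items() if checks.get(name))


def shortlist(domains: dict[str, dict[str, bool]], keep: int = 250) -> list[str]:
    """`domains` maps each domain to its boolean check results."""
    ranked = sorted(domains, key=lambda d: score(domains[d]), reverse=True)
    return ranked[:keep]
```

Ranking once at the end keeps the manual-review pile at a fixed size regardless of how many domains survive the automatic filters.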

in reply to Seirdy

Might be useful to have a dump of your rejections and reasons, for those of us who think not being in the list is a really useful symptom to investigate.
in reply to Tim Bray

@timbray Right now the filter requires TLSv1.3, a strict content-security-policy header (with the exception of allowing unsafe-inline styles), no common tracking third parties in the CSP, and not blocking Tor. Then the page needs a main, h1, a, and meta viewport element.
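
A minimal sketch of that first-pass filter, assuming the third-party requests and beautifulsoup4 packages; the Tor-reachability check is omitted here, and the tracker list is illustrative rather than what the real crawler uses:

```python
# A minimal sketch (not the actual crawler) of the first-pass filter described
# above: TLSv1.3, a Content-Security-Policy header without common trackers,
# and main/h1/a/meta-viewport elements in the HTML response.
import socket
import ssl

import requests
from bs4 import BeautifulSoup

TRACKERS = ("googletagmanager.com", "google-analytics.com")  # illustrative


def negotiates_tls13(host: str, port: int = 443) -> bool:
    """Return True if the server negotiates TLSv1.3."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version() == "TLSv1.3"


def passes_first_filter(host: str) -> bool:
    if not negotiates_tls13(host):
        return False
    resp = requests.get(f"https://{host}/", timeout=10)
    # Presence of a CSP stands in for "strict" here; a real strictness check
    # would parse the directives.
    csp = resp.headers.get("content-security-policy", "")
    if not csp or any(tracker in csp for tracker in TRACKERS):
        return False
    soup = BeautifulSoup(resp.text, "html.parser")
    return all((
        soup.find("main"),
        soup.find("h1"),
        soup.find("a"),
        soup.find("meta", attrs={"name": "viewport"}),
    ))
```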

I’ll then add a points system to cut it down to a third and manually review a few domains per day.

Unknown parent

Seirdy
@tanith I started from scratch and yes you are. Via the HSTS Preload list.
in reply to Seirdy

Or I could run a subset of Axe-Core on every page and let my fans spin up.

Axe-Core is one of the only page-content checkers out there that doesn’t have a ton of false positives. Even the Nu HTML checker (often incorrectly referred to as the HTML5 Validator; HTML5 can’t be validated) has a ton of them. But some of Axe’s errors, like dupe link names, are way too trivial compared to easy-to-spot manual-only checks like “this h1 is used for the site name but it should be used for the page title”.

Unknown parent

Seirdy
@khm a main element can have many article elements or just one. every post in this thread is an article element. every reply i list to one of my blog posts is also an article element. when i include an xkcd comic in a blog post complete with title, caption, and transcript, i use an article in an article.
in reply to Seirdy

school me on this main element. I use article at the moment and this is the first I'm hearing of main. otherwise I think sciops.net meets these requirements... except not only do I not use hsts, I expose content over http for accessibility reasons
Unknown parent

Seirdy

@khm its existence hearkens back to the “standard” page layout most settled on early in the Web’s history: a header, a main, maybe a couple aside elements on the side, and a footer. A “skip to content” link, if it exists, should typically skip to the first non-decorative thing in main.

Viewing your post on the remote instance, I imagine that main may begin just before your profile banner.

in reply to Seirdy

my activitypub software (snac2) does not use main. I'm willing to open a pull request to fix this if I can grasp the intent properly...

one main tag for the feed body, with each post wrapped in article tags?

in reply to Seirdy

I ran an aggressive filter on the sites, but scrapped it because I had already seen too many of the personal sites that passed.

that filter mandated multiple of the following:

  • CAA record paired with DNSSEC
  • OCSP Stapling
  • COEP + COOP headers
  • No third party content in the CSP
  • An onion-location header.

and all of the following:

  • Not enabling the insecure XSS Auditor via the X-XSS-Protection header: either leave the header out or explicitly disable it.
  • Disabling MIME sniffing with X-Content-Type-Options (a rough sketch of these header checks follows this list).
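
For illustration, a rough sketch of the header-based parts of that filter, assuming the third-party requests package; the CAA/DNSSEC and OCSP-stapling checks would need DNS and TLS tooling instead, and the tracker list is a placeholder:

```python
# A rough sketch of the header-based checks from the two lists above.
import requests

TRACKERS = ("googletagmanager.com", "google-analytics.com")  # placeholder list


def optional_points(headers) -> int:
    """Count the header-based criteria from the 'multiple of the following' list."""
    points = 0
    if headers.get("cross-origin-embedder-policy") and \
            headers.get("cross-origin-opener-policy"):
        points += 1                                   # COEP + COOP
    csp = headers.get("content-security-policy", "")
    if csp and not any(t in csp for t in TRACKERS):
        points += 1                                   # no third parties in the CSP
    if headers.get("onion-location"):
        points += 1                                   # Onion-Location header
    return points


def hard_requirements(headers) -> bool:
    """The 'all of the following' criteria."""
    xss = headers.get("x-xss-protection", "0").strip()       # absent counts as disabled
    sniff = headers.get("x-content-type-options", "").lower()
    return xss.startswith("0") and sniff == "nosniff"


resp = requests.get("https://example.com/", timeout=10)
print(optional_points(resp.headers), hard_requirements(resp.headers))
```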

Instead I’ll just manually comb through 100-200 domains a day in the Tor Browser to trim my way down to 500-600 sites or so, then figure out how to proceed. I’ll throw out dead ends, login pages, cryptocurrency, very corporate pages, pages for large organizations without much interesting reading material, LLM-related pages, and anything that doesn’t work in the Tor Browser’s “safest” mode (no media, JS, or a bunch of other features).

When I’m down to a few hundred I’ll probably run a mini version of Axe, decide on an actual points system, and spend more than a few seconds on each site looking for original writing, projects, and/or art and reviewing accessibility.

in reply to Seirdy

Last time I tried this, in October 2022, I sent accessibility feedback to a dozen themes and sites. I resumed this project now because some common ones finally implemented that feedback.
in reply to Seirdy

200 websites left to do a cursory accessibility test on. i look at focus outlines, forced colors mode, proper use of heading level one (page title, not site title), semantic html (nav, avoids div soup), and a quick run of axe-core. about a minute per site. this will take several more days before i’m ready to build a directory of the survivors and give a proper look at each one.
in reply to Seirdy

I should document how I do these incomplete-but-helpful “lightning audits” more thoroughly. After looking at a hundred sites the process has become automatic.

biggest things I look for in an automated audit like Axe are skipped heading levels, missing landmarks (main is the big one), and missing alt attributes (mainly on non-decorative images, though decorative images should also have an empty alt).

with inspect element i also look for some semblance of page structure. is it all div soup or is there a header, nav, main, and footer when applicable?
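
A rough sketch of what the automated portion of such a lightning audit could look like (this is not Axe, and it only covers skipped heading levels, a missing main landmark, and missing alt attributes), assuming the third-party beautifulsoup4 package:

```python
# A rough sketch (not Axe) of the automated portion of a lightning audit.
from bs4 import BeautifulSoup


def lightning_audit(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    problems = []

    # Skipped heading levels, e.g. an h4 directly following an h2.
    levels = [int(h.name[1])
              for h in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:
            problems.append(f"skipped heading level: h{prev} -> h{cur}")

    # Missing landmarks; main is the big one.
    if soup.find("main") is None:
        problems.append("no main landmark")

    # Images with no alt attribute at all (decorative images should get alt="").
    for img in soup.find_all("img"):
        if not img.has_attr("alt"):
            problems.append(f"img missing alt: {img.get('src', '?')}")

    return problems
```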

I open the site in a regular browser profile and in my personal profile with an adblocker and forced colors mode, and make sure that tabbing around works in both with focus indicators.

Automated contrast checks are good but also not terribly nuanced. A more nuanced check like APCA with awareness of font size, the type of element (decoration? spot element like a superscript? fluent text?), font weight, etc. is what we should use, but that takes time. For a lightning audit i just eyeball it and flag it if the contrast seems very obviously bad.

in reply to Seirdy

I used to think that contrast was talked about so much only because violations were common and easy to spot, not because it was one of the most important issues.

Then I started using a shitty dim screen at night with screen gamma adjustment and extra-strong nighttime orange-tinted blue-blocking computer glasses and it got personal.

I don’t think everything should be perfect under such extreme conditions; it’s fine if, say, visited and unvisited links end up with the same hue on a low-contrast night-optimized display. But I should be able to read a paragraph of text and see the beginnings and ends of links.

in reply to Seirdy

I decided that checking 3m domains wasn’t enough so now I’m also checking the top 2 to 5 million DomCop domains (unlike Tranco, this includes subdomains). And a few million Marginalia.nu domains with a score of 6 or higher.
in reply to Seirdy

almost done checking the ten millionth domain lmao

i narrowed 5m domains to around 300. i’m hoping my quality filters will give me 500 sites to work with. then I can start being ✨subjective✨ and narrow it down to 200-300 interesting ones for a directory, plus a hall of fame containing maybe 25 sites.

in reply to Seirdy

Refined my automatic filters to require a main and h1 element in the raw HTML response. Content outside landmarks and misuse of headings are the most common non-color violations, and a missing h1 happens almost as often as using h1 as a site title instead of a page title.
in reply to Seirdy

as i drift off, knowing that three machines on three networks are busy doing a polite ten HEAD requests per second while i sleep (one per domain, millions of domains, slow enough to not get blocked by anyone) and anticipating the results when I wake gives me the same fuzzy feeling as I had years ago playing idle games but it’s so much more constructive. it’s going towards building something new that’s never (to my knowledge) been tried, mixing the new automatic-curation approach with the old manual-curation approach to website discovery. shining a light on the sites that will never see page-one on a major search engine.
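
For the curious, a minimal sketch of that kind of polite probe, assuming the third-party requests package; the single-threaded pacing and error handling are illustrative rather than the actual setup:

```python
# A minimal sketch of a polite probe: one HEAD request per domain,
# throttled to roughly ten per second.
import time

import requests


def probe(domains, per_second: float = 10.0) -> dict[str, int]:
    """Send one HEAD request per domain and record the HTTP status (0 = unreachable)."""
    results = {}
    interval = 1.0 / per_second
    for domain in domains:
        start = time.monotonic()
        try:
            resp = requests.head(f"https://{domain}/", timeout=10,
                                 allow_redirects=False)
            results[domain] = resp.status_code
        except requests.RequestException:
            results[domain] = 0
        # Sleep off the remainder of the interval to stay polite.
        time.sleep(max(0.0, interval - (time.monotonic() - start)))
    return results
```

HEAD requests keep the cost of each check down to response headers, which is what makes sweeping millions of domains overnight feasible.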
Unknown parent

Seirdy
@esoteric_programmer not a search engine. a directory with the goal of recognizing and incentivizing good practices on the less-commercial Web.
in reply to Seirdy

so, tldr is, you're building a search engine. How? where do you even start with that?
Unknown parent

Seirdy
@esoteric_programmer further upthread i describe how i filter. I use lists of domains like the Tranco top 6M, the DomCop top 10M, Marginalia.nu data dumps, my browsing traffic and bookmarks, and some others.
in reply to Seirdy

very interesting! however, if you can search in the directory, it's kinda a search engine. But yeah, what do you mean by good practices in this case? and how are you crawling for domains?
in reply to Seirdy

Some of the most common #accessibility issues I see in the shortlist of 300-400 sites (filtered from 10 million):

  • Links only distinguishable through a hue change. I consider links discernible through page structure (e.g. simple navbar links) exempt, but for in-text links we need to see the beginnings and ends of their interactive regions. Underlines work; outlines are another approach I’ve seen.
  • No landmarks, or stuff outside landmarks. Everything should be in a landmark: header, main, section, footer, and/or aside are what you typically want on the top-level, directly under body. main is the most important.
  • Misuse of or missing headings. Your page needs one (yes, one) h1 that titles the page, not your entire website. Don’t skip heading levels just to get smaller text. Don’t use headings for stylized text. A lower heading following a higher heading looks like a subtopic of the higher heading, not its own thing.
  • Contrast. I recommend the APCA contrast tool. Don’t lose sleep over the occasional superscript having barely sub-optimal contrast; focus on bigger issues if you’re 99% perceptible. Often, background images are the culprit; remember that users override foreground and background colors and may use a different viewport that results in text in front of a lighter/darker part of the background image. Contrast issues with images are nearly impossible to automatically detect with popular tools.
  • Focus indicators that are invisible or barely-visible, or rely solely on hue changes.
  • Fancy bespoke hidden/toggle-able menus whose hidden items are still accessible via keyboard navigation, making me tab through items I can’t see.
  • Interactive items using font icons whose accessible names are unicode code-points, even if the author clearly tried to give them readable names.
  • Really unhelpful alt text like “diagram of X” without actually explaining the things being diagrammed, whether in alt-text or a caption.
  • Animations that don’t respect prefers-reduced-motion.

Link imperceptibility, missing landmarks, and heading misuse are really common.

A common nit-pick: lists of links (e.g. in nav) would benefit from ul or ol parents.

A common issue that isn’t exactly an accessibility issue: toggles like hamburger menus that require JS don’t work in the Tor Browser’s “safest” mode. I’m looking at simple websites that have no need to exclude anonymous visitors.

in reply to Seirdy

Oh, and the document outline algorithm is dead. You can’t have h1 descendants of other headings, or h2 descendants of anything other than h1. Levels do not reset when you enter a child sectioning element, even article.
Unknown parent

Seirdy
@tanith You’re good, and will probably be in the Hall of Fame when I finish.
in reply to Seirdy

Oh gosh, now I hope we didn't do anything wrong, like I think it's alright but it's so much n nyaaa, what'f we missed sth.
Unknown parent

Seirdy

@toastal AT users are used to list navigation. Screen readers also do neat things like announce the number of items. “list with 136 items” may not be worth hearing all the way through, but “list with eight items” might be different.

If something semantically makes sense, it should receive the appropriate semantic markup even if the presentation is visually worse in a given browser. Presentation should not be a major concern of the markup.

in reply to Seirdy

you are going to have to explain why a list of links should be in list items for semantics/accessibility to me. When I load a site in a TUI browser which doesn’t have a stylesheet to do layout, nothing is more infuriating than a massive vertical list of links.
Unknown parent

Seirdy

@toastal A list of navbar links being marked up as a list is a very standard pattern that people and ATs have come to expect, just like how pagination links or table of contents links are list entries. :02shrug:

If you have a list of short non-landmark items or several consecutive standalone items of the same type (single standalone sentences, images in a gallery, links, entries in a post archive, etc.), they should be a list for consistent navigation.

If each paragraph is its own item and not part of the same work or part of the same article (e.g. untitled entries on a microblog) they should also be contained in list entries. See the list of h-entry microblogs in tantek.com/ for an example.

in reply to Seirdy

then at what point is an article not a list of sections with a list of paragraphs? Most of my navigations in the last 3 years have been like 7 items or less with role navigation & skip link to get around it if you did not want to consume it. I am not convinced adding a bunch of nodes is necessary here just because you *can* make a list of it. Why is this different?