When your #curl command line is rejected by the server but your browser still works, it might be because of TLS fingerprinting.

I blogged about this two years ago: daniel.haxx.se/blog/2022/09/02…

#curl
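
For context: TLS fingerprinting schemes such as JA3 hash a handful of ClientHello fields into a short signature, and curl's signature differs from a browser's, so a server can tell them apart before any HTTP is exchanged. A minimal Python sketch of the idea, with made-up example values for the ClientHello fields:

```python
import hashlib

def ja3_fingerprint(tls_version, ciphers, extensions, curves, point_formats):
    """Build a JA3-style fingerprint: five ClientHello field lists are joined
    into one string and MD5-hashed. A server compares the hash against known
    client signatures, so curl and a browser hash to different values."""
    fields = [
        str(tls_version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(p) for p in point_formats),
    ]
    ja3_string = ",".join(fields)
    return hashlib.md5(ja3_string.encode()).hexdigest()

# Hypothetical field values; a real fingerprint would be taken from the
# actual ClientHello bytes seen on the wire.
print(ja3_fingerprint(771, [4865, 4866, 4867], [0, 11, 10, 35, 16], [29, 23, 24], [0]))
```
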
in reply to daniel:// stenberg://

This is really annoying. I've also seen examples where #curl requests are working but requests via #nushell 'http get' are not.
in reply to Felix πŸ‡¨πŸ‡¦ πŸ‡©πŸ‡ͺ πŸ‡ΊπŸ‡¦

yeah, servers can of course do many other checks, but the ones done purely at the HTTP level are so much easier to work around...
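
To illustrate that contrast, a rough sketch of an HTTP-level check being sidestepped: if a server only looks at the User-Agent header, sending a browser-like value is enough (the URL and UA string below are placeholders):

```python
import urllib.request

# A server that only inspects the HTTP User-Agent header is satisfied by a
# browser-like value; the TLS handshake underneath is left unchanged.
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/128.0"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```

With curl itself the same trick is just passing -A / --user-agent with a browser string, while the TLS handshake stays curl's own, which is exactly why fingerprinting at the TLS layer is harder to dodge.
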
in reply to daniel:// stenberg://

In a perfect world, bots and scrapers would follow robots.txt and/or ai.txt and give the webmaster a link so they could check whether the bot has a legitimate reason to grab all the content. But, alas...
in reply to daniel:// stenberg://

β€œLet us come back to this topic in a few years and see where it went.”

It is exactly a few years πŸ˜€

So where has the topic gone?
