_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
HTML Visit Hacker News on the Web
COMMENT PAGE FOR:
HTML Upcoming Changes to Let's Encrypt Certificates
pabs3 wrote 5 hours 28 min ago:
When are we going to get certificates signed by multiple vendors?
rswail wrote 12 hours 19 min ago:
If I want a client certificate, it sounds like that won't be available
any longer from LetsEncrypt
> These new intermediates do not contain the "TLS Client
Authentication" Extended Key Usage due to an upcoming root program
requirement. We have previously announced our plans to end TLS Client
Authentication starting in February 2026, which will coincide with the
switch to the Generation Y hierarchy.
So we use this to authenticate based on our fixed-IP/PTR/DNS to connect
server to server to a 3rd party.
If we don't have the Client Authentication bit set, then the cert will
be invalid for outgoing connections.
What do we use instead?
maltris wrote 15 hours 38 min ago:
Wondering: Is there a good tool for centralized ACME cert management
when one runs a large, highly available, multi-location infrastructure
where it makes little sense to run the ACME client directly on each
instance or location?
azov wrote 18 hours 43 min ago:
We wanted TLS everywhere for privacy. What we ended up with is every
site needing a constant blessing from some semi-centralized authority to
remain accessible. Every site is "dead by default".
This feels in many respects worse than what we had with plain HTTP, and
we can't even go back now.
jmb99 wrote 18 hours 10 min ago:
> What we ended up with is every site needs a constant blessing from
some semi-centralized authority to remain accessible.
Do you have any examples of sites that have been blocked by the free
ACME providers?
azov wrote 15 hours 29 min ago:
If you mean that sites with expired certificates may technically be
accessible if one jumps through enough hoops and ignores scary
warnings - yes, of course you're right.
Maybe this will just teach everyone to click through SSL warnings
the same way they click through GDPR popups - for better or worse.
amtamt wrote 18 hours 51 min ago:
Don't miss the DNS-PERSIST-01 challenge introduction, a related change: [1]
It's not final yet, but it's an interesting development.
HTML [1]: https://letsencrypt.org/2025/12/02/from-90-to-45#making-automa...
account42 wrote 3 hours 54 min ago:
So they're reinventing DANE with extra steps?
toddgardner wrote 19 hours 6 min ago:
For all the folks worried about how hard automation is going to be,
this is what my team and I have been working on for the past year: [1]
You CNAME the acme challenge DNS to us, we manage all your certificates
for you. We expose an API and agents to push certificates everywhere
you need them, and then do real-time monitoring that the correct
certificate is running on the webserver. End-to-end auditability.
HTML [1]: https://www.certkit.io/certificate-management
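For readers unfamiliar with the delegation pattern described above: a CNAME sends ACME DNS-01 challenge lookups for your domain to a zone the managing service controls, so it can answer challenges without holding credentials for your zone. A minimal sketch of the record shape (the provider zone name is made up for illustration, not CertKit's actual hostname):

```python
# Sketch of DNS-01 challenge delegation via CNAME.
# The provider zone below is hypothetical.
def delegation_record(domain: str,
                      provider_zone: str = "acme-delegate.example.net") -> str:
    """Return a zone-file style CNAME line for ACME DNS-01 delegation."""
    return f"_acme-challenge.{domain}. CNAME {domain}.{provider_zone}."

print(delegation_record("www.example.com"))
# _acme-challenge.www.example.com. CNAME www.example.com.acme-delegate.example.net.
```

The CA looks up `_acme-challenge.<domain>` TXT records, follows the CNAME, and the delegated service publishes the challenge token there.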
PunchyHamster wrote 19 hours 25 min ago:
The whole thing is very silly security-wise anyway.
Okay, so your cert leaked. Will having it leaked for 1.5 months be
substantially less dangerous than 90 days? Nope, you're fucked from
day one; it's still massively worse than "a browser asynchronously
checks whether the site's cert has been revoked".
PunchyHamster wrote 19 hours 28 min ago:
Not sure why they are kicking out TLS client certs. I understand
kicking them off the default profile (they REALLY had no place there;
not sure why they were there in the first place), but providing no way
to get one is a bit silly.
londons_explore wrote 19 hours 36 min ago:
I miss the days when I could set up some random Apache server and leave
it running with zero attention for a decade or two.
These days it seems like even the tiniest of projects have random
sysadmin work like a compulsory change to https certs with little
notice.
It's frustrating and I think has contributed to the death of the
noncommercial corners of the internet.
jmb99 wrote 17 hours 34 min ago:
I just checked, and one of my web servers has an uptime of 845 days,
last login May 2 of last year. Based on the shell history I don't
believe I've touched the letsencrypt config on it since I set it up
in 2020 ish.
gonzo41 wrote 18 hours 11 min ago:
I agree, however the simplest place for a static bit of the web seems
to be an s3 bucket with cloudfront or something similar.
londons_explore wrote 16 hours 7 min ago:
Please accept the new T&C's within 90 days or your account will be
terminated...
2 factor Auth now compulsory.
Please validate your identity with our third party identity
provider so we can confirm you are not on the sanctions list. If
you do not, your account will be blocked.
Etc etc. Every third party service requires at least a little
work and brainspace.
jacquesm wrote 19 hours 48 min ago:
I'm halfway tempted to go back to HTTP. You don't do breaking changes
like this without giving your 'customers' a chance to stick to their
ways. I have more than enough on my plate already and don't need the
likes of letsencrypt to give me more work.
lousken wrote 19 hours 57 min ago:
I am not sure how I feel about this solution. It is already painful to
deal with certs on every single piece of IT equipment. Unless you
create and manage your own CA, which is an extra burden,
what is the point of this? This will only create more janky scripts and
annoyances for very little benefit.
What's next? Enforcing email signing with SMIME or PGP?
parhamn wrote 20 hours 42 min ago:
Given certificate issuance basically ended up being "do you control the
DNS for this domain", I feel like all of it could've been so much
simpler if it was designed like that from day one.
While I love Let's Encrypt it feels so silly to use a third party to
verify I can generate a Cloudflare API key (even .well-known is
effectively "can you run a webserver on said dns entry").
Edit: TIL about
HTML [1]: https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Name...
h43z wrote 20 hours 45 min ago:
Did I understand that correctly that I will be able to get a
certificate for an IP?
phasmantistes wrote 12 hours 31 min ago:
Yep! Should be available to the general public (as long as you're
using an ACME client that can be configured to request a specific
profile) later this week.
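For context on "configured to request a specific profile": profile selection happens in the ACME newOrder request. A hedged sketch of what the order body might look like; the "profile" field follows the ACME profiles extension draft, "ip" identifiers are from RFC 8738, and the profile name here is an assumption, so check your CA's documentation:

```python
import json

# Hypothetical ACME newOrder payload requesting an IP-address
# certificate under a specific profile. Field names follow the ACME
# profiles extension draft and RFC 8738; treat both as assumptions.
new_order = {
    "identifiers": [
        {"type": "ip", "value": "203.0.113.7"},  # IP identifier (RFC 8738)
    ],
    "profile": "shortlived",  # assumed profile name; verify with your CA
}
payload = json.dumps(new_order)
```

In a real client this JSON would be JWS-signed and POSTed to the CA's newOrder endpoint; the signing is omitted here.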
notherhack wrote 20 hours 55 min ago:
The other CAs with a free tier that I'm aware of (zerossl, ssl.com,
actalis, google trust, cloudflare) require you to have an account
(which means you're at their mercy), and most of them limit the number
of free certs you can get to a very small number and don't offer free
wildcard certs at all.
There really is no alternative to LE.
zaik wrote 19 hours 41 min ago:
> which means you're at their mercy
Let's Encrypt could easily refuse to issue a certificate for a
certain domain, even if you don't have a registered account. I don't
see much difference.
cheeze wrote 20 hours 8 min ago:
AWS Certificate Manager manages this all for you via DNS validation.
Granted, you're locked into their ecosystem, can't export PK, etc. so
it's FAR from a perfect solution here but I've actually been pretty
impressed with the product from a "I need to run my personal website
and don't want to have to care about certificates" perspective.
Granted, you're paying for the cert, just not directly.
I agree with your statement completely though.
eterm wrote 19 hours 57 min ago:
How much does that end up costing? I'm interested to know for my
own personal domain.
notepad0x90 wrote 20 hours 55 min ago:
I know this is a good thing, but I've struggled a lot on systems that
don't have good/reliable NTP time updates.
Also, at some point in the lifetime graph, you start getting
diminishing returns. There aren't many scenarios where you get your
private keys stolen, but the bad guys couldn't maintain access for more
than a couple of weeks.
In my humble opinion, if this is the direction the CA/B and other
self-appointed leaders want to go, it is time to rethink the way PKI
works. We should maybe stop thinking of LetsEncrypt as a CA; it (and
similar services) can function more as real-time trust
facilitators. If all they're checking for is server control, then maybe
a near-real-time protocol to validate that, issue a cert, and have the
webserver use it immediately is ideal? Lots of things need to change
for this to work, of course, but it is practical.
Not so long ago, very short DNS TTLs were met with similar
apprehension. Perhaps the "cert expiry" should be tied to the DNS TTL,
with the server renewing much more frequently (e.g. if the TTL is 1
hour, the server renews every 15 minutes).
Point being, the current system of doing things might not be the best
place to experiment with low expiry lifetimes, but new ways of doing
things that can make this work could be engineered.
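The TTL-coupled renewal idea above reduces to simple scheduling math. A sketch under the commenter's assumption of renewing several times per TTL window (the renewal hook itself is hypothetical; only the interval calculation is shown):

```python
# Sketch: couple the renewal cadence to the DNS TTL, renewing several
# times per TTL window. The actual renewal call is a hypothetical hook.
def renewal_interval(dns_ttl_seconds: int, renewals_per_ttl: int = 4) -> int:
    """Seconds between renewals: e.g. a 1-hour TTL -> renew every 15 min."""
    return dns_ttl_seconds // renewals_per_ttl

assert renewal_interval(3600) == 900   # 1 h TTL -> renew every 15 min
assert renewal_interval(300) == 75     # 5 min TTL -> renew every 75 s
```

A real implementation would loop: sleep for the interval, re-validate control, obtain a fresh cert, and swap it in atomically.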
grayhatter wrote 1 hour 24 min ago:
> I know this is a good thing,
strongly disagree.
Increasing complexity in a system rarely makes it more robust. It
primarily makes it more expensive.
tptacek wrote 20 hours 50 min ago:
Why do you need precise timing to make this work?
notepad0x90 wrote 20 hours 42 min ago:
Not precise, but for example if it's been over a day since the last
time update, I start getting errors on various sites, including
virtually every site behind Cloudflare (assuming you're referring
to the initial issue I mentioned).
One of the setups that gives me issues is machines that are resumed
from a historical snapshot and start doing things immediately; if
the NTP date hasn't been updated since the last snapshot you start
getting issues (despite snapshots being updated after every daily
run). Most sites won't break (especially with a 24h window,
although longer gaps always have issues), but enough sites change their
certs so frequently now that it's a constant issue.
Even with a 10-year cert, if you access at the wrong time you'll
have issues; the difference now is it isn't a once-in-10-years
event, but once every few days sometimes.
Perhaps if TLS clients requesting a time update from the OS were a
standardized thing and NTP client daemons supported that method,
it would be a lot less painful?
akerl_ wrote 18 hours 57 min ago:
If you're drifting enough in 24 hours to affect cert trust,
something wonky is up with the system.
notepad0x90 wrote 14 hours 53 min ago:
In my case, it's more that the system still thinks
it's yesterday until the NTP daemon updates the time a minute
or five after resuming. Being behind by a day wasn't a huge
deal before these really short cert lifespans.
akerl_ wrote 13 hours 21 min ago:
This isn't something I've seen; are you running systems w/o
an onboard RTC, or with ntpdate doing periodic update, etc
etc?
The closest I've gotten to this would be something like a
Raspberry Pi, but even then NTP is pretty snappy as soon as
there's network access, and until there's network access I'm
not hitting any TLS certs.
notepad0x90 wrote 12 hours 33 min ago:
Windows is the fastest from my testing; even then there is
about a minute or so immediately after restoration where I
get TLS errors on some sites.
Honestly, I just wish that browsers used NTP directly and
used that instead of the system time. If the CA/B wants to
go this direction, maybe this will be a good enhancement to
make it more tenable?
Dylan16807 wrote 12 hours 18 min ago:
A system being an entire day off after suspend is
hopelessly broken and we shouldn't expect browsers to fix
it with their own time sources.
Let's Encrypt issues certificates with "notBefore" set an
hour in the past to compensate for incorrect clocks. An
hour is plenty of compensation.
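The backdated notBefore works because a client whose clock runs behind real time still sees the certificate as already valid, as long as the skew is no larger than the backdate. A small illustrative model (all numbers are examples, not Let's Encrypt internals):

```python
from datetime import datetime, timedelta, timezone

# Model of clock-skew tolerance from a backdated notBefore: a client
# whose clock is `client_skew` behind accepts the cert iff
# client_skew <= backdate. Values are illustrative only.
def cert_looks_valid(real_now, client_skew,
                     backdate=timedelta(hours=1),
                     lifetime=timedelta(days=90)):
    not_before = real_now - backdate      # issued "an hour in the past"
    not_after = not_before + lifetime
    client_now = real_now - client_skew   # the slow client's clock
    return not_before <= client_now <= not_after

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
assert cert_looks_valid(now, timedelta(minutes=30))   # 30 min slow: fine
assert not cert_looks_valid(now, timedelta(days=1))   # a day slow: rejected
```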
notepad0x90 wrote 2 hours 22 min ago:
I think perhaps I'm doing a poor job of explaining the
specific issue I encounter regularly, but if a system
has been offline for a day, it will be askew by a day,
unless you're assuming a hardware clock on battery
power is always available and that the time/NTP daemon
checks it and updates the clock fast enough.
My use case isn't unique, if you have an embedded
device, I'm sure there are even more stringent
limitations. Is there really that big of a difference
if the notBefore is a day instead of an hour, or even a
week? Perhaps when shortening notAfter, notBefore
should be increased.
Dylan16807 wrote 1 hour 2 min ago:
> Unless you're assuming a hardware clock that's on
battery power is always available, and that the
time/ntp daemon checks that and updates the clock
fast enough.
Not "and", just "or". A hardware clock is assumed,
but in absence of that it's the job of the OS to fix
the clock before it breaks anything.
And an hour is already generous. Extending it to a
day or a week gets weird and helps almost nobody.
akerl_ wrote 2 hours 4 min ago:
The overwhelming majority of consumer
laptops/desktops/etc today have an RTC, so yes,
there's a battery keeping an RTC chip awake that keeps
the clock reasonably correct after it's been powered
off / hibernating / etc.
simonsarris wrote 21 hours 0 min ago:
after some failures with Let's Encrypt (almost certainly my fault wrt
auto renewal) I switched to free 15 year Cloudflare certs instead and
I'm very happy not worrying about it any more.
SchemaLoad wrote 20 hours 41 min ago:
I wouldn't be surprised if eventually clients just start rejecting
certificates that are too long. Imagine if someone bought a domain,
but a previous owner is holding a certificate for it that lasts 15
years.
At least under the new scheme if you let the domain sit for 45 days
you'll know only you hold valid certificates for it.
notherhack wrote 20 hours 42 min ago:
That was a smart move but those days are over. Your existing 15 year
certs will continue to be accepted until they expire but then you'll
have to get a new cert and be in the same 45-day-churn boat the rest
of us are.
NoahZuniga wrote 19 hours 53 min ago:
The cloudflare 15 year cert is one they issue privately and that
they only use to authenticate your origin. Cloudflare manages the
certificates for connections coming from the web.
jcims wrote 21 hours 18 min ago:
I used to be knee deep in PKI stuff, now I hardly pay attention.
Two quick questions:
1 - Are there any TLS libraries that enable warnings when certs are
nearing expiration?
2 - Are there any extensions in the works (or previous failed attempts)
for TLS to have the client validate the next planned certificate and
signal both ends when that fails?
ekr____ wrote 21 hours 2 min ago:
To the best of my knowledge the answer to "2" is no.
jcims wrote 8 hours 46 min ago:
I did a bunch of work with Verisign as a contractor back in the
early 2000s and got to see some of the systems and infrastructure
issuing a good portion of the world's certificates at that time.
15 years later I was at Google when they let an intermediate
certificate in their SMTP certs expire and had a major GMail
outage. At work last week we had a major outage related to
certificate issues. Of course there are thousands upon thousands
of stories like that in between.
The chains of trust you can build with PKI have been incredibly
useful and instrumental to securing code, data and traffic, but the
fact that it's still subject to such brittle failure modes is
bemusing.
gethly wrote 21 hours 28 min ago:
What is the point of shortening the life time to 45 days?
Why not 60 seconds? That's way more secure. Everything is automated
after all, so why risk a compromised certificate for such a long
period as 45 days?
Or how about we issue a new certificate per request? That should remove
most of the risks.
benjiro wrote 20 hours 29 min ago:
You know that the original idea was to drop it to 17 days?! And I
think that is still on the books.
To be honest, the issue is not the time frame; you can literally have
certs being made every day, and there are plenty of ways to automate
this. The issue is that the CT log providers are going to go ape!
Right now, we are at 24B certificates going back to around 2017, when
they really started to grow. There are about 4.5B unique domains; if
we reduce this to the number of certs per domain, it's about 2.3B
domain certs we currently need.
Now do 2.3B x 8 renewals... that's about 18.4B new certs in
CT logs per year. Given how popular LE is, we can assume that the
actual growth is maybe 10B per year (those that still use 1-year or
multi-year certs + the tons of LE-generated ones).
Remember, I said the total going back to 2017 is currently only
24B... Yeah, we are going to almost double the amount of certs in CT
logs every two years.
And that assumes LE does not move to 17 days, because then I am sure
we are doubling the current amount each year.
Good luck as a CT log provider... FYI, a typical certificate takes
about 4.5 KB to store, so we are talking 45 TB of space needed per
year, and 100 TB+ if they really drop it down to 17 days. And we have
not even talked about databases, traffic to the CT logs, etc...
It's broken, Jim... Now imagine for fun a daily cert... 1700 TB per
year in CT log storage?
A new system will come from Google etc. because it's going to become
unaffordable, even for those companies.
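A quick sanity check of the storage arithmetic above, using the commenter's own estimates (roughly 4.5 KB per logged certificate, ~10B new certs per year); these are estimates, not measured values:

```python
# Back-of-envelope check of the CT log storage figures above.
# Inputs are the commenter's estimates, not measured values.
CERT_SIZE_BYTES = 4_500        # ~4.5 KB per logged certificate
TB = 10**12                    # decimal terabyte

yearly_certs = 10 * 10**9      # ~10B new certs/year at ~45-day renewal
storage_tb = yearly_certs * CERT_SIZE_BYTES / TB
print(storage_tb)              # 45.0 (TB per year)

# The 18.4B figure: ~2.3B domains renewing ~8 times per year.
renewals_per_year = 2.3 * 10**9 * 8
print(renewals_per_year / 10**9)   # 18.4 (billion certs/year)
```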
FiloSottile wrote 19 hours 4 min ago:
I am a CT log operator and I hands down support short-lived
certificates. Automation and short lifetimes solve a lot of the
pain points of the WebPKI.
We can solve the storage requirements, it's fine.
NoahZuniga wrote 19 hours 58 min ago:
Have you heard the good news of Merkle Tree Certificates[1,2]? They
include the transparency cryptography directly in the certificate
issuance pipeline. This has multiple benefits, one of them being
way smaller transparency logs.
1: [1] A great explainer of how they work and why they're better.
2: [2] The current working draft
HTML [1]: https://www.youtube.com/watch?v=uSP9uT_wBDw
HTML [2]: https://davidben.github.io/merkle-tree-certs/draft-davidbe...
benjiro wrote 19 hours 20 min ago:
Yep... It saves about 40%; I've seen one of the implementations. One
of the guys posting here from time to time has a working version.
Dylan16807 wrote 20 hours 8 min ago:
Is there a big use case for permanent CT logs of long-expired
certificates?
benjiro wrote 19 hours 23 min ago:
Tons of information for research, hackers, you name it... It
shows a history of domains; you can find hidden subdomains, still
active, revoked, etc...
Do not forget that we had insanely long certificates not that long
ago.
The main issue is that currently you cannot easily revoke certs,
so you're almost forced to keep a history of certs and of when one
has been revoked in the CT logs.
In theory, if everybody is forced to change certs every 47 days,
sure, you can invalidate them and permanently remove them. But it
requires a ton of automation on the user side. There is still way
too much software that relies on a single-year or multi-year
certificate that is manually added to it. It's also why the
phase-down to 47 days is spread over a four-year period.
And it still does not change the massive increase in requests to
check validation that hits CT log providers.
Dylan16807 wrote 17 hours 41 min ago:
> Tons of information for research, hackers, you name it ... It
shows a history of domains, you can find hidden subdomains,
still active, revoked etc ...
You can store that kind of information in a lot less space. It
doesn't need to be duplicated with each renewal.
> The main issue is that currently you can not easily revoke
certs, so you're almost forced to keep a history of certs, and
when one has been revoked in the CT logs.
This is based on the number of active certificates, which has
almost no connection with how long they last.
> There is still way too much software that relies on a single
year or multi year certificate that is manually added to it.
Hopefully less and less going forward.
> And it still does not change the massive increase in
requests to check validation that hits CT log providers.
I'm not really sure how that works but yeah someone needs to
pay for that.
nickf wrote 20 hours 10 min ago:
Where did you get 17 days from?
benjiro wrote 19 hours 38 min ago:
There was talk in one of the CA/Browser Forum discussions
regarding certificate expirations, and how they looked at
potentially 17 days. The 45 days was a compromise, but the whole
17 days was never removed from the table, and was still
considered as a future option.
nickf wrote 10 hours 9 min ago:
Honestly don't recall discussing 17 days, but I could be wrong.
47 days was a 'compromise' in that it's a step-down over a few
years rather than a single big-bang event dropping from
397->90/47/less.
jfindper wrote 21 hours 11 min ago:
While I believe you're posting questions out of frustration rather
than genuine curiosity, I think it's worth pointing out two things.
One: most of the reasoning is available for reading. Lots of
discussion was had. If you're actually curious, I would suggest
starting with the CA/B mailing group. Some conversation is in the
MDSP (Mozilla's dev-security-policy) archives as well.
Two: it's good to remember that the competing interest in almost
every security-related conversation is the balance between security
and usability. Obviously, a 60-second lifetime is unusable. The goal
is to get the overlap between secure and usable to be as big as
possible.
PunchyHamster wrote 19 hours 21 min ago:
An asynchronous revocation check would be a far superior option; I'm
sad the industry abandoned trying to make it work.
asah wrote 21 hours 5 min ago:
serious q: maybe not 60 sec, but why 45 days instead of ~1 day or
even hours? at 45 days, it pretty much has to be automated.
alwillis wrote 2 hours 35 min ago:
The 7-day certificate will be here before you know it [1]:
HTML [1]: https://letsencrypt.org/docs/profiles/#shortlived
phasmantistes wrote 12 hours 14 min ago:
Honest reply: because the infrastructure isn't ready to support
1-day certificates yet. If your cert is only valid for one day,
and renewal fails on a Saturday, then your site is unusable until
you get back to work on Monday and do something to fix it. There
are things that can be done to mitigate this risk, like using an
ACME client which supports fallback between multiple CAs, but the
vast majority of sites out there today simply aren't set up to
handle that yet.
The point of the CA/BF settling on 47-day certs is yes, to
strongly push automation, but also to still allow time for manual
intervention when automation fails.
tetha wrote 19 hours 41 min ago:
Our internal CAs run 72 hour TTLs, because we figured "why not"
5-6 years ago, and now everyone is too stubborn to stop. You'd be
surprised how much software is bad at handling certificates.
It ranges from old systems like libpq which just loads certs on
connection creation to my knowledge, so it works, down to some JS
or Java libraries that just read certs into main memory on
startup and never deal with them again. Or other software folding
a feature request like "reload certs on SIGHUP" with "oh,
transparently do listen socket transfer between listener threads
on SIGHUP", and the latter is hard and thus both never happen.
45 days is going to be a huge pain for legacy systems. Less than
2 weeks is a huge pain even with modern frameworks. Even Spring
didn't do it right until a year or two ago and we had to keep
in-house hacks around.
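The "reload certs on SIGHUP" feature the comment mentions can be sketched in a few lines: rebuild the SSLContext on demand so new connections pick up the renewed certificate without a restart. The paths and wiring here are illustrative, not any particular framework's API:

```python
import signal
import ssl

class ReloadingTLSContext:
    """Rebuild the SSLContext on demand so new connections pick up a
    renewed certificate without restarting the process. (Sketch only;
    cert/key paths are hypothetical.)"""
    def __init__(self, certfile, keyfile, factory=None):
        self.certfile, self.keyfile = certfile, keyfile
        # factory is injectable for testing; defaults to a real SSLContext
        self._factory = factory or self._default_factory
        self.reload()

    def _default_factory(self):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(self.certfile, self.keyfile)
        return ctx

    def reload(self, *_):
        # Build a fresh context; in-flight connections keep the old one.
        self.context = self._factory()

# Wiring (illustrative): have the ACME client send SIGHUP after renewal.
# tls = ReloadingTLSContext("/etc/ssl/site.crt", "/etc/ssl/site.key")
# signal.signal(signal.SIGHUP, tls.reload)
```

The key design point matching the comment: only the context object is swapped, so no listen-socket handover is needed; existing connections finish on the old context.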
ahoka wrote 20 hours 31 min ago:
They are not sadists, contrary to what others say in the
comments.
jsheard wrote 20 hours 28 min ago:
Although for the benefit of masochists, they are going to offer
6 day certs as an option soon.
phasmantistes wrote 12 hours 12 min ago:
Yep, the "shortlived" (6-day) profile will be available to
the general public later this week. But at this time we
explicitly encourage only mature organizations with stable
infrastructure and an oncall rotation to adopt that profile,
as the risks associated with a renewal failing at the
beginning of a holiday long weekend are just too high for
many sites.
shim__ wrote 21 hours 22 min ago:
Could have just kept using OCSP stapling
alwillis wrote 2 hours 30 min ago:
OCSP enables every website someone visits to be tracked; see
HTML [1]: https://news.ycombinator.com/item?id=46290033
ekr____ wrote 21 hours 10 min ago:
Basically OCSP stapling (more specifically must-staple) is
isomorphic to short-lived certificates.
kmeisthax wrote 21 hours 40 min ago:
Let's Encrypt, you're not even a for-profit business; there's nobody
you need to shield the blow from. Just say "we're reducing certificate
lifetimes to comply with CA/Browser Forum rules". You don't need to do
the cowardly "replace lower with change" in the headline thing.
danparsonson wrote 19 hours 52 min ago:
The announcement is about several changes they're making, not just
about cert lifetimes.
victorbjorklund wrote 21 hours 33 min ago:
That does not make any sense. Plenty of things on the internet are
Open Source / non-profit, yet they affect us a lot. Of course it's
good to give people relying on your stuff a heads-up etc.
wizzwizz4 wrote 21 hours 20 min ago:
GP is criticising the use of language in the headline, not the fact
there's an announcement.
b112 wrote 20 hours 52 min ago:
Plus, the announcer is standing in front of a hedge.
I'm not sure why, but every corporate picture I've seen of
someone, in this context, is standing in front of a hedge. Seems
to be a California thing?
(Where I live, we only have leaves on hedges 6 months of the
year)
asadotzler wrote 20 hours 1 min ago:
It's "let's not take your picture inside of the office because
everyone hates the inside of offices. let's take your picture
outside instead, near the office, but not featuring the office.
oh, that tree over there is nice, but darn, the lighting
underneath its branches isn't great. hey, that hedge over there
reads great in light test and it works with what you're
wearing, so, yeah, that'll do just fine."
mcpherrinm wrote 19 hours 19 min ago:
It was actually outside of my small apartment with bad
lighting
tptacek wrote 20 hours 47 min ago:
The announcer, Matthew McPherrin, is a frequent commenter here
(and a stand-up person deeply involved with information
security).
ottah wrote 21 hours 44 min ago:
I've kind of had enough of unnecessary policy ratcheting; it's a
problem in every industry where a solution is not possible or
practical, so the knob that can be tweaked is always turned. Same
issue with corporate compliance: I'm still rotating passwords, with
2FA, sometimes three or four factors for an environment, and no one
can really justify it, except the fear that not doing more will
create liability.
jofla_net wrote 21 hours 27 min ago:
Yeah, the best/worst part of this is that nobody was stopping the
'enlightened' CA/Browser Forum from issuing shorter certificates for
THEIR fleets, but no, we couldn't be allowed to make our own
decisions about how we best saw the security of the communications
channel between ourselves and our users. We just weren't allowed to
be 'adult' enough.
The ignorance about browser lock-in too, is rad.
I guess we could always, as they say, create a whole browser, from
scratch to obviate the issue, one with sane limitations on
certificate lifetimes.
ekr____ wrote 21 hours 11 min ago:
I fear this reflects two misunderstandings.
First, one of the purposes of shorter certificates is to make
revocation easier in the case of misissuance. Just having
certificates issued to you be shorter-lived doesn't address this,
because the attacker can ask for a longer-lived certificate.
Second, creating a new browser wouldn't address the issue because
sites need to have their certificates be acceptable to basically
every browser, and so as long as a big fraction of the browser
market (e.g., Chrome) insists on certificates being shorter-lived
and will reject certificates with longer lifetimes, sites will need
to get short-lived certificates, even if some other browser would
accept longer lifetimes.
jofla_net wrote 18 hours 15 min ago:
I don't feel the tradeoff of shorter lifetimes addresses the idea of
a rogue CA misissuing either; the tradeoff isn't worth it.
The best assessment of the whole CA problem is summed up
by Moxie: [1] And, well, the create-a-browser thing was a joke; it's
what I've seen suggested for those who don't like the new rules.
HTML [1]: https://moxie.org/2011/04/11/ssl-and-the-future-of-authe...
zamadatix wrote 20 hours 45 min ago:
I always felt like #1 would have better been served by something
like RPKI in the BGP world. I.e. rather than say "some people
have a need to handle ${CASE} so that is the baseline security
requirement for everyone" you say "here is a common
infrastructure for specifying exactly how you want your internet
resources to be able to be used". In the case of BGP that turned
into things like "AS 42 can originate 1.0.0.0/22 with maxlength
of /23" and now if you get hijacked/spoofed/your BGP peering
password leaks/etc it can result in nothing bad happening because
of your RPKI config.
The same in web certs that could have been something like
"domain.xyz can request non-wildcard certs for up to 10 days
validity". Where I think certs fell apart with it is they placed
all the eggs in client side revocation lists and then that
failure fell to the admins to deal with collectively while the
issuers sat back.
For the second note, I think that friction is part of their
point. Technically you can, practically that doesn't really do
much.
ekr____ wrote 20 hours 39 min ago:
> "domain.xyz can request non-wildcard certs for up to 10 days
validity"?
You could be proposing two things here:
(1) Something like CAA that told CAs how to behave.
(2) Some set of constraints that would be enforced at the
client.
CAA does help some, but if you're concerned about misissuance
you need to be concerned about compromise of the CA (this is
also an issue for certificates issued by the CA the site
actually uses, btw). The problem with constraints at the
browser is that they need to be delivered to the browser in
some trustworthy fashion, but the root of trust in this case is
the CA. The situation with RPKI is different because it's a
more centralized trust infrastructure.
> For the second note, I think that friction is part of their
point. Technically you can, practically that doesn't really do
much.
I'm not following. Say you managed to start a new browser and
had 30% market share (I agree, a huge lift). It still wouldn't
matter because the standard is set by the strictest major
browser.
zamadatix wrote 20 hours 16 min ago:
The RPKI-alike is more akin to #1, but avoids the step of
trying to bother trusting compromised CAs. I.e., if a CA is
compromised you revoke and regenerate the CA's root keys and
that's what gets distributed rather than rely on individual
revocation checks for each known questionable key or just
sitting back for 45 days (or whatever period) to wait for
anything bad to expire.
> I'm not following. Say you managed to start a new browser
and had 30% market share (I agree, a huge lift). It still
wouldn't matter because the standard is set by the strictest
major browser.
Same reasoning between us I think, just a difference in
interpreting what it was saying. Kind of like sarcasm - a
"yes, you can do it just as they say" which in reality
highlights "no, you can't actually do _it_ though" type
point. You read it as solely the former, I read it as
highlighting the latter. Maybe GP meant something else
entirely :).
That said, I'm not sure I 100% agree it's really related to
the strictest major browser does alone though. E.g. if
Firefox set the limit to 7 days, then I'd bet people would start
using other browsers rather than all sites rotating certs every
7 days. If some browsers did and some didn't, it'd depend on who
and how much share etc. That's one of the (many) reasons the
browser makers are all involved - to make sure they don't get
stuck as the odd one out about a policy change.
.
Thanks for Let's Encrypt btw. Irks about the renewal squeeze
aside, I still think it was a net positive move for the web.
alwillis wrote 2 hours 52 min ago:
Some users will be able to opt-in to automatically getting
a new cert every 7 days at some point [1]:
HTML [1]: https://letsencrypt.org/docs/profiles/#shortlived
jfindper wrote 21 hours 28 min ago:
>I'm still rotating password
A bit off-topic, but I find this crazy. In basically every ecosystem
now, you have to specifically go out of your way to turn on mandatory
rotation.
It's been almost a decade since it's been explicitly advised against
in every cybersec standard. Almost two since we've done the research
to show how ill-advised mandatory rotations are.
mboerwink wrote 21 hours 7 min ago:
PCI still recommends 90 day password changes. Luckily they've
softened their stance to allow zero-trust to be used instead.
They're not really equivalent controls, but clearly laid out as
'OR' in 8.3.9 regardless.
jfindper wrote 21 hours 1 min ago:
I think it's only a requirement if passwords are the sole factor,
correct? Any other factor or zero-trust or risk-based
authentication exempts you from the rotation. It's been awhile
since I've looked at anything PCI.
In any case, all my homies hate PCI.
mwigdahl wrote 21 hours 9 min ago:
But that would mean doing less, and that's by default bad. We must
take action! Think of the children!
I tried at my workplace to get them to stop mandatory rotation when
that research came out. My request was shot down without any
attempt at justification. I don't know if it's fear of liability
or if the cyber insurers are requiring it, but by gum we're going
to rotate passwords until the sun burns out.
vladostman wrote 21 hours 33 min ago:
I just post the password semi-publicly on some scratchpad (like maybe
a secret gist that's always open in browser or for 2fa a custom web
page with generator built in) if any of those policies get too
annoying. Bringing the number of factors back to one and bypassing
the 'can't reuse previous 300000 passwords' BS. Works every time.
tgsovlerkhgsel wrote 21 hours 35 min ago:
This was stated as a long-term goal long ago. The idea is that you
should automate away certificate issuance and stop caring, and to
eventually get lifetimes short enough that revocation is not
necessary, because that's easier than trying to fix how broken
revocation is.
account42 wrote 4 hours 33 min ago:
We could have just fixed OCSP stapling instead. Or better yet scrap
the CA nonsense entirely and just use DANE.
tsimionescu wrote 18 hours 8 min ago:
Except this isn't really viable for any kind of internal certs,
where random internal teams don't have access to modify the
corporate DNS. TLS is already a horrible system to deal with for
internal software, and browsers keep making it worse and worse.
Not to mention that the WEBPKI has made it completely unviable to
deliver any kind of consumer software as an offline personal web
server, since people are not going to be buying their own DNS
domains just to get their browser to stop complaining that
accessing local software is insecure. So, you either teach your
users to ignore insecure browser warnings, or you tie the server to
some kind of online subscription that you manage and generate fake
certificates for your customer's private IPs just to get the
browsers to shut up.
noAnswer wrote 16 hours 32 min ago:
Private CAs and CERTs will still be allowed to have longer lives.
tsimionescu wrote 9 hours 35 min ago:
This doesn't help that much, since you still have to fiddle
with installing the private CA on all devices. Not much of a
problem in corporate environments, perhaps, but a pretty big
annoyance for any personal network (especially if you want
friends to join).
ottah wrote 18 hours 19 min ago:
Enforcing an arbitrary mitigation to a problem the industry does
not know how to solve doesn't make it a good solution. It's just a
solution the corporate world prefers.
bigbuppo wrote 20 hours 55 min ago:
It also ignores the real world as the CA/Browser forum admits they
don't understand how certificates are actually used in the real
world. They're just breaking shit to make the world a worse place.
the8472 wrote 20 hours 17 min ago:
> They're just breaking shit to make the world a worse place.
Well, it's the people who want to MITM that started it, a lot of
effort has been spent on a red queen's race ever since. If you
humans would coordinate to stay in high-trust equilibria instead
of slipping into lower ones you could avoid spending a lot on
security.
johncolanduoni wrote 20 hours 47 min ago:
They are calibrated for organizations/users that have higher
consequences for mis-issuance and revocation delay than
someone's holiday blog, but I don't think they're behaving
selfishly or irrationally in this instance. There are meaningful
security benefits to users if certificate lifetimes are short and
revocation lists are short, and for the most part public PKI is
only as strong as the weakest CA.
OCSP (with stapling) was an attempt to get these benefits with
less disruption, but it failed for the same reason this change is
painful: server operators don't want to have to configure
anything for any reason ever.
alwillis wrote 3 hours 10 min ago:
> OCSP failed for the same reason this change is painful:
server operators don't want to have to configure anything for
any reason ever.
OCSP is going end-of-life because it makes it too easy to track
users.
From Lets Encrypt[1]:
We ended support for OCSP primarily because it represents a
considerable risk to privacy on the Internet. When someone
visits a website using a browser or other software that checks
for certificate revocation via OCSP, the Certificate Authority
(CA) operating the OCSP responder immediately becomes aware of
which website is being visited from that visitor's particular
IP address. Even when a CA intentionally does not retain this
information, as is the case with Let's Encrypt, it could
accidentally be retained or CAs could be legally compelled to
collect it. CRLs do not have this issue.
[1]
HTML [1]: https://letsencrypt.org/2025/08/06/ocsp-service-has-re...
dijit wrote 20 hours 50 min ago:
everything that uses TLS is publicly routable and runs a web
service, don't you know.
Well, you could also give every random server you happen to
configure an API key with the power to change any DNS record it
wishes... what could go wrong?
#security
nickf wrote 20 hours 10 min ago:
If it doesn't run a web service, or isn't publicly routable
- why do you need it to work on billions of users' browsers and
devices around the world?
bigfatkitten wrote 11 hours 3 min ago:
Because the service needs to be usable from non-managed
devices, whether that be on the internet or on an isolated
wifi network.
Very common in mobile command centres for emergency
management, inflight entertainment systems and other systems
of that nature.
I personally have a media server on my home LAN that I let my
relatives use when they're staying at our place. It has a
publicly trusted certificate I manually renew every year,
because I am not going to make visitors to my home install my
PKI root CA. That box has absolutely no reason to be
reachable from the Internet, and even less reason to be
allowed to modify my public DNS zones.
nickf wrote 10 hours 21 min ago:
Sure, but in those examples - automation and short-lifetime
certs are totally possible.
bigfatkitten wrote 9 hours 26 min ago:
Except when it's not, because the system rarely (or
never) touches the Internet.
nickf wrote 7 hours 24 min ago:
It might never 'touch' the internet, but the
certificates can be easily automated. They don't have
to be reachable on the internet, they don't have to
have access to modify DNS - but if you want any machine
in the world to trust it by default, then yes -
there'll need to be some effort to get a certificate
there (which is an attestation that you control that
FQDN at a point-in-time).
dijit wrote 6 hours 18 min ago:
and we're back to: How do I create an API token that
only enables a single record to be changed on any
major cloud provider?
Or.. any registrar for that matter (Namecheap, Gandi,
Godaddy)?
The answer seems to be: "Bro, you want security so
the way you do that is to give every device that
needs TLS entire access to modify any DNS record, or
put it on the public internet; that's the secure
way".
(PS: the way this was answered before was: "Well then
don't use LE and just buy a certificate from a major
provider", but, well, now that's over).
nickf wrote 2 hours 13 min ago:
There are ways to do this as pointed out below -
CNAME all your domains to one target domain and
make the changes there.
There's also a new DCV method that only needs a
single, static record. Expect CA support widely in
the coming weeks and months. That might help?
dpkirchner wrote 3 hours 38 min ago:
One answer I've seen to this (very legitimate)
concern is using CNAME delegation to point
_acme-challenge.$domain to another domain (or a
subdomain) that has its own NS records and
dedicated API credentials.
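As a concrete sketch of that delegation (all domain names here are hypothetical placeholders, not anything from the thread), the zone fragments might look like:

```
; In the zone for example.com (the domain needing certificates):
; point the ACME challenge name at a dedicated validation zone.
_acme-challenge.example.com.  IN  CNAME  example.com.acme.example.net.

; The acme.example.net zone has its own NS records and its own
; API credentials, so servers renewing certs can only write TXT
; records there -- never records in example.com itself.
```

The CA follows the CNAME when validating, so the credential handed to each server is scoped to the throwaway validation zone.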
dijit wrote 19 hours 50 min ago:
Are we pretending browsers aren't a universal app delivery
platform, fueling internal corporate tools and hobby projects
alike?
Or that TLS and HTTPS are unrelated, when HTTPS is just HTTP
over TLS; and TLS secures far more, from APIs and email to
VPNs, IoT, and non-browser endpoints? Both are bunk; take
your pick.
Or opt for door three: Ignore how CA/B Forum's relentless
ratcheting burdens ops into forking browsers, hacking root
stores, or splintering ecosystems with exploitable kludges
(they won't: they'll go back to "this cert is invalid,
proceed anyway?" for all internal users).
Nothing screams "sound security" like 45-day cert churn
for systems outside the public browser fray.
And hey, remember back in the day when all the SMTP
submission servers just blindly accepted any certificate they
were handed because doing domain validation broke email...
yeah
Inspired.
johncolanduoni wrote 18 hours 25 min ago:
> Or opt for door three: Ignore how CA/B Forum's
relentless ratcheting burdens ops into forking browsers,
hacking root stores, or splintering ecosystems with
exploitable kludges (they won't: they'll go back to
"this cert is invalid, proceed anyway?" for all
internal users).
It does none of these. Putting more elbow grease into your
ACME setup with existing, open source tools solves this for
basically any use case where you control the server. If
you're operating something from a vendor you may be
screwed, but if I had a vote I'd vote that we shouldn't
ossify public PKI forever to support the business models of
vendors that don't like to update things (and refuse to
provide an API to set the server certificate
programmatically, which also solves this problem).
> Nothing screams "sound security" like 45-day cert
churn for systems outside the public browser fray.
Yes, but unironically. If rotating certs is a once a year
process and the guy who knew how to do it has since quit,
how quickly is your org going to rotate those certs in the
event of a compromise? Most likely some random service
everyone forgot about will still be using the compromised
certificate until it expires.
> And hey, remember back in the day when all the SMTP
submission servers just blindly accepted any certificate
they were handed because doing domain validation broke
email... yeah
Everyone likes to meme on this, but TLS without
verification is actually substantially stronger than
nothing for server-to-server SMTP (though verification is
even better). It's much easier to snoop on a TCP connection
than it is to MITM it when you're communicating between two
different datacenters (unlike a coffeeshop). And most mail
is between major providers in practice, so they were able
to negotiate how to establish trust amongst themselves and
protect the vast majority of email from MITM too.
dijit wrote 6 hours 20 min ago:
> Everyone likes to meme on this, but TLS without
verification is actually substantially stronger than
nothing for server-to-server SMTP (though verification is
even better). It's much easier to snoop on a TCP
connection than it is to MITM it when you're
communicating between two different datacenters (unlike a
coffeeshop). And most mail is between major providers in
practice, so they were able to negotiate how to establish
trust amongst themselves and protect the vast majority of
email from MITM too.
No, it's literally nothing, since you can just create
whatever TLS cert you want and just MITM anyway.
What do you think you're protecting from? Passive
snooping via port-mirroring?
Taps are generally more sophisticated than that.
How do I establish trust with Google? How do they
establish trust with me: I mean, we're not using the
system designed for it, so clearly it's not possible-
otherwise they would have enabled this option at the
minimum.
johncolanduoni wrote 20 hours 40 min ago:
That's why the HTTP-01 challenge exists - it's perfect for
public single-server deployments. If you're doing something
substantial enough to need a load balancer, arranging the DNS
updates (or centralizing HTTP-01 handling) is going to be the
least of your worries.
Holding public PKI advancements hostage so that businesses can
be lazy about their intranet services is a bad tradeoff for the
vast majority of people that rely on public TLS.
dijit wrote 20 hours 33 min ago:
and my IRC servers that don't have any HTTP daemon (and
thus have the port blocked) while being balanced by anycast
geo-fenced DNS?
There are more things on the internet than web servers.
You might say "use DNS-01"; but that's reductive - I'm
letting any node control my entire domain (and many of my
registrars don't even allow API access to records, let
alone an API key that's limited to a single record; even cloud
providers don't have that).
I don't even think mail servers work well with the
letsencrypt model unless it's a single server for everything
without redundancies.
I guess nobody runs those anymore though, and, I can see why.
bruce511 wrote 12 hours 12 min ago:
Putting DNS Api keys on every remote install is indeed
problematic.
The solution however is pretty trivial. For our setup I
just made a very small server with a couple of REST
endpoints.
Each customer gets their own login to our REST server. All
they do is ask "get a new cert".
The DNS-01 challenge is handled by the REST server, and the
cert then supplied to the client install.
So the actual customer install never sees our DNS API keys.
johncolanduoni wrote 18 hours 47 min ago:
I've operated things on the web that didn't use HTTP but
used public PKI (most recently, WebTransport). But those
services are ultimately guests in the house of public PKI,
which is mostly attacked by people trying to skim financial
information going over public HTTP. Nobody made IRC use
public PKI for server verification, and I don't know why
we'd except what is now an effectively free CA service to
hold itself back for any edge case that piggybacks on it.
> and my IRC servers that don't have any HTTP daemon (and
thus have the port blocked) while being balanced by anycast
geo-fenced DNS?
The certificate you get for the domain can be used for
whatever the client accepts it for - the HTTP part only
matters for the ACME provider. So you could point port 80
to an ACME daemon and serve only the challenge from there.
But this is not necessarily a great solution, depending on
what your routing looks like, because you need to serve the
same challenge response for any request to that port.
> You might say "use DNS-01"; but that's reductive -
I'm letting any node control my entire domain (and many
of my registrars don't even allow API access to records,
let alone an API key that's limited to a single record; even
cloud providers don't have that).
The server using the certificate doesn't have to be the one
going through the ACME flow, and once you have multiple
nodes it's often better that it isn't. It's very rare for
even highly sophisticated users of ACME to actually
provision one certificate per server.
cpach wrote 19 hours 22 min ago:
FWIW, there are ways to use DNS-01 without an API key that
can control your entire domain.
HTML [1]: https://hsm.tunnel53.net/article/dns-for-acme-chal...
Spooky23 wrote 20 hours 57 min ago:
It's a stupid policy. To solve the non-existent problem with
certificates, we are pushing the problem to demonstrating that we
have access to a DNS registrar's service portal.
toddgardner wrote 19 hours 12 min ago:
It's not really a stupid problem, it's the BygoneSSL problem:
HTML [1]: https://www.certkit.io/blog/bygonessl-and-the-certificat...
LtWorf wrote 21 hours 3 min ago:
It costs more to Let's Encrypt.
ryandrake wrote 21 hours 23 min ago:
The problem is when the automation fails, you're back to manual.
And decreasing the period between updates means more chances for
failure. I've been flamed by HN for admitting this, but I've never
gotten automated L.E. certificate renewal to work reliably.
Something always fails. Fortunately I just host a handful of hobby
and club domains and personal E-mail, and don't rely on my domains
for income. Now, I know it's been 90 days because one of my web
sites fails or E-mail starts to complain about the certificate
being bad, and I have to ssh into my VPS to muck around. This news
seems to indicate that I get to babysit certbot even more
frequently in the future.
vrighter wrote 6 hours 8 min ago:
I set it up last year and haven't had to interact with it in the
slightest. It just works all the time for me.
eek2121 wrote 20 hours 16 min ago:
Really? I've never had it fail. I simply ran the script provided
by LE, it set everything up, and it renewed every time until I
took the site down for unrelated (financial) reasons. Out of
curiosity, when did you last use LE? Did you use the script they
provided you or a third party package?
ryandrake wrote 19 hours 48 min ago:
I set it up ages ago, maybe before they even had a script. My
setup is dead simple: A crontab that runs monthly:
0 2 1 * * /usr/local/bin/letsencrypt-renew
And the script:
#!/bin/sh
certbot renew
service lighttpd restart
service exim4 restart
service dovecot restart
... and so on for all my services
That's it. It should be bulletproof, but every few renewals I
find that one of my processes never picked up the new
certificates and manually re-running the script fixes it.
Shrug-emoji.
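A common fix for exactly that failure mode (a configuration sketch, assuming certbot's --deploy-hook option, which runs its command only after a certificate was actually renewed) is to run the check daily and reload services from the hook instead of unconditionally:

```
# /etc/cron.d/certbot-renew -- run daily, not monthly: certbot only
# renews certs inside their renewal window, and a daily run gets
# roughly 30 retries before anything actually expires.
0 2 * * * root certbot renew --deploy-hook /usr/local/bin/reload-services

# /usr/local/bin/reload-services -- invoked by certbot only when a
# certificate was really renewed, so services can't miss new files.
#!/bin/sh
service lighttpd restart
service exim4 restart
service dovecot restart
```

With the hook, a service restart is tied to a successful renewal rather than to the calendar.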
noAnswer wrote 15 hours 42 min ago:
I don't know how old "letsencrypt-renew" is and what it does.
But you run "modern" acme clients daily. The actual renewal
process starts with 30 days left. So if something doesn't
work it retries at least 29 times.
I haven't touched my OpenBSD (HTTP-01) acme-client in five
years:
acme-client -v website && rcctl reload httpd
My (DNS-01) LEGO client sometimes has DNS problems. But as I
said, it will retry daily and work eventually.
Dylan16807 wrote 12 hours 40 min ago:
> I don't know how old "letsencrypt-renew" is and what it
does.
It's the five lines below "the script:"
jacquesm wrote 19 hours 46 min ago:
Yes, same for me. Every few months some kind internet denizen
points out to me that my certificate has lapsed, running it
manually usually fixes it. LE software is pretty low quality,
I've had multiple issues over the years some of which
culminated in entire systems being overwritten by LE's broken
python environment code.
account42 wrote 4 hours 26 min ago:
If it's happening regularly wouldn't it make sense to add
monitoring for it? E.g. my daily SSL renew check
sanity-checks the validity of the certificates actually
used by the affected services using openssl s_client after
each run.
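That kind of sanity check can be sketched with openssl's -checkend flag (the 14-day threshold, function name, and file paths here are arbitrary illustrative choices):

```shell
#!/bin/sh
# Warn if a certificate file expires within the next 14 days.
# Usage: check_cert_expiry /path/to/cert.pem
check_cert_expiry() {
    # -checkend N exits non-zero if the cert expires within N seconds
    if openssl x509 -in "$1" -noout -checkend $((14 * 24 * 3600)); then
        echo "OK: $1 is valid for at least 14 more days"
    else
        echo "WARNING: $1 expires within 14 days"
        return 1
    fi
}

# To check what a live service is actually presenting (rather than
# the file on disk), fetch the cert first, e.g.:
#   openssl s_client -connect example.com:443 -servername example.com \
#       </dev/null 2>/dev/null | openssl x509 >live.pem
```

Run from cron, a non-zero exit can feed whatever alerting you already have.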
LtWorf wrote 21 hours 2 min ago:
I did manage to set it up and it has been working ok but it has
been a PITA. Also for some reason they contact my server over
HTTP, so I must open port 80 just to do the renewal.
patmorgan23 wrote 19 hours 33 min ago:
That would be because you set up the HTTP-01 challenge as your
domain verification method.
HTML [1]: https://letsencrypt.org/docs/challenge-types/
LtWorf wrote 10 hours 39 min ago:
Since there is no equivalent HTTPS way of doing the same
thing?
wrs wrote 21 hours 47 min ago:
Just curious... I've seen this X1/X2 convention before for CA roots.
Does anyone know the origin or rationale for this?
Now we have a "Y" generation showing up, but it seems like whoever
thought of "X" didn't anticipate more than three generations, or
they would have used A1/A2.
phasmantistes wrote 12 hours 33 min ago:
It's a good question! I know that our first root (ISRG Root X1) used
that naming scheme simply because it was cross-signed by IdenTrust's
root (DST Root CA X3) which used that same scheme. But where they got
the scheme from, I don't know.
Using Y to denote the "next generation" of roots is a scheme I came
up with in the past year while planning our YE/YR ceremony, so it's
certainly not something that people were thinking about when they
named the first roots.
losvedir wrote 21 hours 55 min ago:
The decrease in lifetimes has had a fair bit of discussion, but I
haven't seen a lot of discussion about the mTLS changes. Is anyone else
running into issues there? We'll be hit by it, as we use mTLS as one of
several methods for our customers to authenticate the webhooks we
deliver them, but haven't determined what we'll be doing yet.
nickf wrote 10 hours 11 min ago:
Can I ask - if you're using publicly-trusted TLS server certificates
for client authentication...what are you actually authenticating?
Just that someone has a certificate that can be chained back to a
trust-anchor in a common trust-store? (ie your authentication is that
they have an internet connection and perhaps the ability to read).
jeroenhd wrote 21 hours 34 min ago:
The certificate offered from server to client and the certificate the
server expects from the client do not need to share a CA.
This only affects you if you have a server set up to verify mTLS
clients against the Let's Encrypt root certificate(s), or maybe every
trusted CA on the system. You might do that if you're using the host
HTTPS certificates handed out by certbot or other CAs as mTLS client
certificates.
You can still generate your own mTLS key pairs and use them to
authenticate over a connection whose hostname is verified with Let's
Encrypt, which is what most people will be doing.
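Generating such a private client CA is only a few openssl commands. A minimal sketch follows; all names and lifetimes are hypothetical, and a real deployment would want proper extensions and revocation handling:

```shell
#!/bin/sh
# Work in a scratch directory so nothing on disk is touched.
cd "$(mktemp -d)" || exit 1

# 1. Create a private CA used only for mTLS client certificates.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout client-ca.key -out client-ca.pem -days 3650 \
    -subj "/CN=Example Internal Client CA"

# 2. Create a client key pair and a certificate signing request.
openssl req -newkey rsa:2048 -nodes \
    -keyout client.key -out client.csr -subj "/CN=webhook-consumer-1"

# 3. Sign the client certificate with the private CA.
openssl x509 -req -in client.csr -CA client-ca.pem -CAkey client-ca.key \
    -CAcreateserial -out client.pem -days 90

# 4. The server verifies presented client certs against client-ca.pem
#    only; its own (server) cert can still come from Let's Encrypt.
openssl verify -CAfile client-ca.pem client.pem
```

The key point is that the trust store used for client verification is just this one CA file, not the system's public roots.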
losvedir wrote 19 hours 43 min ago:
> The certificate offered from server to client and the certificate
the server expects from the client do not need to share a CA.
Sure, but it seems like all the CAs are stopping issuing
certificates with the client EKU. At least LetsEncrypt and
DigiCert, since under the Google requirement they can't issue both
those and normal certs from the same hierarchy, and I guess there's
not enough market to run one just for that.
> You might do that if you're using the host HTTPS certificates
handed out by certbot or other CAs as mTLS client certificate
Sure, what's wrong with that?
> You can still generate your own mTLS key pairs and use them to
authenticate over a connection whose hostname is verified with
Let's Encrypt, which is what most people will be doing.
That lets the client verify the host, but the server doesn't know
where the connection is coming from. Generating mTLS pairs means
pinning and coordinated rotation and all that. Currently servers
can simply keep an up to date CA store (which is common and easy),
and check the subject name, freeing the client to easily rotate
their cert.
jeroenhd wrote 8 hours 39 min ago:
> Sure, what's wrong with that?
Nothing, in principle. I suppose you can use that to validate
domain ownership, or use Let's Encrypt as a weird authentication
service for your cluster. However, it's not exactly common to do
so as far as I can tell.
> Currently servers can simply keep an up to date CA store (which
is common and easy), and check the subject name, freeing the
client to easily rotate their cert.
I understand the ease of use in that approach, but it leaves your
authentication wide open to rogue certificates, i.e. through old
DNS entries on a subdomain, or accidentally letting someone read
email destined to hostmaster@domain.tld, or maybe by a rogue CA
if you want to go full conspiracy mode.
As for pinning: you're required to pick a key store anyway, you
can just point it at whatever CA file you want.
As for automated rotation: you can host your own ACME server for
your own CA (it's like 10 lines of config in Caddy) and have
other servers point an account on their certbot/acme.sh/etc. at
it. This gives you even more control and lets you decide how long
you want certificates to last.
It's not as easy as relying on CAs to do that validation for you,
but also much better than the old-fashioned manual key
configuration of yore.
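The self-hosted ACME server mentioned above might look roughly like this in a Caddyfile (the hostname is a hypothetical placeholder, and the directives come from Caddy's acme_server feature, so check the Caddy docs before relying on the exact syntax):

```
# Hypothetical internal CA endpoint: "tls internal" uses Caddy's
# built-in local CA, and "acme_server" exposes an ACME directory
# that certbot/acme.sh/lego on other hosts can be pointed at.
acme.internal.example {
    tls internal
    acme_server
}
```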
zzo38computer wrote 11 hours 53 min ago:
I would expect that using the same certificate authorities for
servers is probably not useful for client authentication,
although it might be if the only thing you care about is
the domain name (which it shouldn't be; anyway, many clients
might not even have a domain name).
But, if you really need to use certificates from the CAs anyways,
you might ignore some of the fields of the certificate.
jeroenhd wrote 8 hours 37 min ago:
> you might ignore some of the fields of the certificate
A lot of software really doesn't like ignoring the constraints.
You can make it work, but there's a good chance it'll require
messing with the validation logic of your TLS library, or
worse, having to write your own validation code.
arccy wrote 21 hours 40 min ago:
it'd be funny if someone decided that DANE would be a good way to
distribute your own roots...
llama052 wrote 21 hours 53 min ago:
We use locally generated certs for Mtls with different lifetimes.
Relying on public CAs for chains of trust like that makes me nervous,
especially if something gets revoked.
noirscape wrote 21 hours 57 min ago:
Insane that they're dropping client certificates for authentication.
Reading the linked post, it's because Google wants them to be separate
PKIs and forced the change in their root program.
They aren't used much, but they are a neat solution. Google forcing
this change just means there's even more overhead when updating certs
in a larger project.
zzo38computer wrote 11 hours 50 min ago:
I think client certificates are a good idea, although it is usually
more useful to use different certificates than those for the domain
names, I think. (I still think CA/Browser Forum is not very good,
despite that; however, I still want to mention my point.)
bigbuppo wrote 20 hours 50 min ago:
Google doesn't understand how the real world works. Big shock.
grayhatter wrote 1 hour 24 min ago:
They do understand, this is moat digging.
throwaway81523 wrote 21 hours 35 min ago:
Is that a temporary situation? Is it that big a deal to implement a
separate set of roots for client certs? Or do you mean that the
entire infrastructure is supposed to be duplicated?
cyberax wrote 21 hours 48 min ago:
It's a good change. I've seen at least one company that had
misconfigured mTLS to accept any client certificate signed by a
trusted CA, rather than just by the internal corporate CA.
LtWorf wrote 19 hours 53 min ago:
Should we remove anything that was at some point misconfigured
somewhere?
cyberax wrote 19 hours 11 min ago:
I won't mind?
But in this case, the upsides are definitely greater than in the
usual case.
LtWorf wrote 10 hours 44 min ago:
We can get rid of computers altogether then but I'm not sure
that would improve anything.
ggm wrote 21 hours 51 min ago:
The certification serves different purposes. It might feel like a
symmetric arrangement but it isn't. On the whole i think implementing
this split is sensible.
I might add I've changed my mind a bit on this.
gruez wrote 22 hours 11 min ago:
>If you're requesting certificates from our tlsserver or shortlived
profiles, you'll begin to see certificates which come from the
Generation Y hierarchy this week. This switch will also mark the opt-in
general availability of short-lived certificates from Let's Encrypt,
including support for IP Addresses on certificates.
Does that mean IP certificates will be generally available some time
this week?
infogulch wrote 21 hours 14 min ago:
Now all servers can participate in Encrypted Client Hello for
enhanced user privacy: if clients open TLS connections with ECH where
the server IP is used in the ClientHelloOuter and the target SNI
domain is in the encrypted ClientHelloInner, then eavesdroppers won't
be able to read which domain the user is connecting to.
This vision still needs several more developments to land before it
actually results in an increment in user privacy, but they are
possible:
1. User agents can somehow know they can connect to a host with
IP SNI and ECH (a DNS record?)
2. User agents are modified to actually do this
3. User agents use encrypted DNS to look up the domain
4. Server does not combine its IP cert with its other domain
certs (SAN)
btown wrote 22 hours 28 min ago:
The certificate lifetime decrease, to 45 days, was discussed in: [1]
This isn't LE's decision: a 47 day max was voted on by the CA/Browser
Forum. [2] [3] [4] - public votes of all members, which were
unanimously Yes or Abstain.
IMO this is a policy change that can Break the Internet, as many
archived/legacy sites on old-school certificates may not be able to
afford the upfront tech or ongoing labor to transition from annual to
effectively-monthly renewals, and will simply be shut down.
And, per other comments, this will make LE the only viable option to
modernize, and thus much more of a central point of failure than
before.
But Let's Encrypt is not responsible for this move, and did not vote on
the ballot.
HTML [1]: https://news.ycombinator.com/item?id=46117126
HTML [2]: https://www.digicert.com/blog/tls-certificate-lifetimes-will-o...
HTML [3]: https://cabforum.org/2025/04/11/ballot-sc081v3-introduce-sched...
HTML [4]: https://groups.google.com/a/groups.cabforum.org/g/servercert-w...
sunaookami wrote 1 hour 24 min ago:
>IMO this is a policy change that can Break the Internet, as many
archived/legacy sites on old-school certificates may not be able to
afford the upfront tech or ongoing labor to transition from annual to
effectively-monthly renewals, and will simply be shut down.
This is just fear-mongering, legacy sites still need to update their
tech and going fully automatic for cert renewals leads to less
maintenance burden than renewing them manually.
grayhatter wrote 1 hour 26 min ago:
> But Let's Encrypt is not responsible for this move, and did not
vote on the ballot.
"Did not vote", and "not responsible", is definitely a take...
They could call their bluff. I would. The CA/browser forum made a
mistake here. And they can only get away with it if the players
involved comply.
Browsers have an incentive to increase the complexity; good engineers
would resist that.
toddgardner wrote 19 hours 15 min ago:
It's more complicated than that. Apple (along with Google and
Mozilla) basically held the CA's hostage. They started unilaterally
reducing lifetimes. It was happening whether the CAB approved it or
not.
The vote was more about whether the CAB would continue to be
relevant. "Accept the reality, or browsers aren't even going to show
up anymore".
I wrote a bunch about this recently:
HTML [1]: https://www.certkit.io/blog/47-day-certificate-ultimatum
nickf wrote 10 hours 3 min ago:
Not quite true - some CAs were not 'held hostage' - some agree with
the changes and supported them. See the endorsers for SC-081.
cprecioso wrote 11 hours 21 min ago:
That was an interesting read, thanks! Two questions:
- What is the problem with stale certificates if a domain changes
hands? It seems to me that whether they renew the certificate or
not, the security situation for the user is still the same, no?
- Is CertKit a similar solution to Anchor Relay? ( [1] )
HTML [1]: https://anchor.dev/relay
toddgardner wrote 2 hours 23 min ago:
> What is the problem with stale certificates if a domain changes
hands?
The previous owners have valid certificates for up to 398 days.
If they are a malicious party capable of doing a man-in-the-middle
attack, they can present a valid certificate and fully
impersonate the owner. For example, when Stripe started, they
purchased the domain from another party, who had a valid
stripe.com payment certificate for nearly a year. ( [1] )
> Is CertKit a similar solution to Anchor Relay?
I hadn't heard about anchor relay before, thanks for the link!
CertKit is similar, but broader. Anchor says it sits between your
ACME clients and the CA and simplifies the validation steps,
which is super useful. But you still have to run ACME clients and
have a bunch of automation logic running on your end.
CertKit IS the ACME client. You CNAME the challenge record to us
and we do all the communication with the CAs and
store/renew/revoke your certificates centrally. Your systems can
pull (or be pushed) the certs they need via our API, then we
monitor the HTTPS endpoints to make sure the correct cert is
running. It's a fully-audited, centralized certificate management service.
HTML [1]: https://www.certkit.io/blog/bygonessl-and-the-certificat...
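The CNAME delegation described above can be sketched as a DNS zone fragment; the hostnames below are hypothetical, not CertKit's actual endpoints:

```zone
; Hypothetical zone fragment for example.com illustrating DNS-01 delegation.
; The _acme-challenge name is CNAMEd to a zone the managed service controls,
; so it can publish the TXT validation records on your behalf without any
; other access to your DNS.
_acme-challenge.example.com.  300  IN  CNAME  example-com.validation.managed-acme.example.net.
```

Once that CNAME exists, DNS-01 validation follows the delegation, so the managed side can renew certificates indefinitely without further zone changes.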
ItsHarper wrote 11 hours 15 min ago:
The problem is that the old owner still has a valid certificate
for some period of time.
account42 wrote 5 hours 0 min ago:
Except this is going the wrong way. We should be discouraging
frequent domain ownership changes not making them easier. New
owners getting visibility into traffic meant for the old owners
is as much if not a bigger problem.
paradite wrote 13 hours 45 min ago:
Thanks. This is an informative piece.
Which AI did you use for writing it? It's pretty good.
Analemma_ wrote 18 hours 43 min ago:
It's interesting that this is pretty much identical to the
WHATWG/W3C situation: there is theoretically a standards body, but
in practice it's defunct; the browsers announce what they will
ship, and the "standards body" can do nothing but meekly comply.
The difference being that there's at least a little bit of popular
dissatisfaction with the status quo of browsers unilaterally
dictating web standards, whereas no one came to the defense of CAs,
since everybody hated them. A useful lesson that you need to do
reputation management even if you're running a successful racket,
since if people hate you enough they might not stick up for you
even if someone comes for you "illegally".
bigfatkitten wrote 11 hours 20 min ago:
Uber is a morally bankrupt company that built its market position
through criminal conduct, but everyone looked the other way
because they hated the taxi industry even more.
The CA industry is the new taxi industry.
btown wrote 18 hours 52 min ago:
Thanks for this history, I wasn't aware. It's an interesting point
that if this is happening anyways by Apple's fiat, it's in the
legacy CAs' interest to even further accelerate the mandatory
timeline, so they can pivot to consulting services for their
existing customers.
I do still feel that "that blog/publication that had immense
cultural impact years ago, that was acquired/put on life support
with annual certificate updates, will now be taken offline rather
than migrated to a system that can support ACME automations,
because the consultants charge more than the ad revenue" will be an
unfortunate class of casualty. But that's progress, I suppose.
tptacek wrote 18 hours 15 min ago:
I think it's more broadly "browsers vs. CAs", I think the balance
of power shifted sharply after the Symantec distrusting, and I
think very few people on HN would prefer the status quo ante of
that power shift if we laid out what it meant.
Today, people are complaining that automated certificate renewals
are annoying (I'm sure they are). Before that, the
complaint was that random US companies were simply buying and
deploying their own root certificates, issuing certs for
arbitrary strangers' domains, so their IT teams wouldn't have to
update their desktop configurations.
Things are better now.
ycombinatrix wrote 19 hours 20 min ago:
This is Google's doing, just like the Client Authentication EKU
debacle.
patmorgan23 wrote 19 hours 47 min ago:
Nginx and Apache are free and both can be trivially automated with
ACME bot. Both can be used to set up a reverse proxy in front of
legacy sites or applications.
This is not centralizing everything to Let's Encrypt. it's forcing
everyone to use ACME, and many CAs support ACME (and those that don't
probably will soon due to this change).
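As a sketch of that setup, a minimal nginx reverse proxy in front of a legacy app might look like this (the hostname and backend port are hypothetical; certbot's nginx plugin would obtain the certificate and add the TLS directives itself):

```nginx
# Hypothetical reverse proxy in front of a legacy app on port 8080.
# Running `certbot --nginx -d legacy.example.com` would then obtain a
# certificate and insert the ssl_certificate directives automatically.
server {
    listen 80;
    server_name legacy.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The legacy application itself never has to know TLS exists; the proxy terminates it and renewals happen at the edge.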
evandena wrote 19 hours 52 min ago:
Will the browsers only be forcing the 47 day limit for
publicly-trusted CAs?
cpach wrote 19 hours 32 min ago:
Yes.
RedShift1 wrote 20 hours 30 min ago:
I have been saying it since the beginning that we are centralizing
all the power of the internet to one organization and that this a bad
thing, yet I get downvoted every time. One organization is going to
have a say on whether or not you can have a website on the internet,
how is this objectively a good thing?
bruce511 wrote 12 hours 25 min ago:
I'll upvote you for at least asking the question. But you get
downvoted because your premise is wrong.
There are lots of organizations that support the ACME protocol. LE
is the most well known, but there are others, and more on the way.
Existing CAs don't necessarily vanish with this change. They are
free to implement ACME (or some proprietary protocol) and they are
completely free to keep charging for certificates if they like.
The real result of this change is that processes will change (where
they haven't already) improving both customer experience and
security.
But to be clear there's no "one organization" in the loop here. You
can rest easy on that front.
tssva wrote 20 hours 14 min ago:
Maybe you get downvoted because this isn't centralizing all the
power of the internet into one organization, rather than because
people don't have an issue with that.
Dylan16807 wrote 20 hours 21 min ago:
The CA/Browser forum has massive power over the web whether you
like it or not, because they make the browsers. And make no
mistake, it's the browser representatives that are the most
aggressive about tighter security and shorter certificate lives.
RedShift1 wrote 20 hours 13 min ago:
I have the feeling that this is much more about control than it
is about security.
bigstrat2003 wrote 20 hours 44 min ago:
> IMO this is a policy change that can Break the Internet
Unfortunately, the people making these decisions simply do not care
how they impact actual real world users. It happens over and over
that browser makers dictate something that makes sense in a nice,
pure theoretical context, but sucks ass for everyone stuck in the
real world with all its complexities and shortcomings.
bigfatkitten wrote 11 hours 15 min ago:
"Browser makers" in this context generally just means Google,
though this particular proposal came from Clint Wilson from Apple.
Google calls the shots and the others fall in line.
reactordev wrote 20 hours 55 min ago:
Certificate rotation/renewal has been the biggest headache of my IT
career. It's always after the fact. It's always a pain point.
It's always an argument with accounting over costs. It sucks. I'm
glad ACME exists but man this whole thing is a cluster fuck.
Whole IT teams are just going to wash their hands of this and punt to
a provider or their cloud IaaS.
toddgardner wrote 19 hours 13 min ago:
Man, I agree. The whole thing sucks so much. We started building a
centralized way to do this internally last year to get better
visibility into renewals and expirations:
We're doing a beta of it for some other groups now.
HTML [1]: https://www.certkit.io/
reactordev wrote 2 hours 41 min ago:
Cool but for us, this kind of thing is better solved closer to
the edge with automation like Caddy server that does this for us
while also being our ingress proxy for all those domains.
I want Apache to do this natively.
I want nginx to do this natively.
I want tomcat to do this natively.
I want express to do this natively.
Every single HTTP server punts on TLS as an afterthought: "supply
me your private and public key and I'll do it." Sure, there are
modules for those servers for ACME now, but this process is still
old-school Web 1.0 deployment logic.
echelon wrote 20 hours 18 min ago:
It's fine for fintechs and social accounts to require SSL, but do
blogs really need certs? You know what blogs I'm reading from my
DNS requests anyway. I doubt anyone is going to MITM my access to
an art historian's personal website. There is zero need for
security theater here.
All of these required, complex, constantly moving components mean
we're beholden to larger tech companies and can't do things by
ourselves anymore without increasing effort. It also makes it
easier for central government to start revoking access once they
put a thumb on cert issuance. Parties that the powers don't like
can be pruned through cert revocation. In the future issuance may
be limited to parties with state IDs.
And because mainstream tech is now incompatible with any other
distribution method, you suddenly lose the ability to broadcast
ideas if you fall out of compliance. The levers of 1984.
reactordev wrote 3 hours 1 min ago:
If you have HTTP between you and the internet, yes, yes you absolutely
need SSL. It's like asking "If it's hot out, do I really need to
wear shorts or pants at all?" Yes. The answer is always yes.
bruce511 wrote 12 hours 33 min ago:
This is a popular refrain from folks hosting read-only sites.
The problem is not "your part", it's the "between you and the
client" part.
It becomes trivial to inject extra content, malicious JavaScript,
adverts, etc. into the flow. And this isn't "targeted" at your
site; it's simply applied to all insecure sites.
TLS is not about restricting your ability to broadcast
information. It's about preserving your ability to guarantee that
your reader reads what you wrote.
TLS is free and easy to implement. The only reason not to do it
is laziness. You may see TLS as a violation of your principles,
but I see it as an attitude of "I don't care about my readers'
safety - let someone inject malicious JavaScript (or worse) on my
page, their security is not my problem".
(If the govt wants to censor you, it can do that via DNS.)
account42 wrote 4 hours 53 min ago:
How do you protect your physical letters from being opened by
unauthorized parties along the delivery chain? You don't, for
the most part, because we have made it a very serious crime to do
that. We could have done the same to badly behaving ISPs.
Instead, browsers have chosen to make planned obsolescence a
requirement for the web.
bruce511 wrote 3 hours 54 min ago:
Making it a crime is very regional, and enforcement is
basically non-existent. But opening it is not the problem
here.
The analogous problem would be letters opened and anthrax
inserted. That doesn't (often) happen because mail is
physical and the attack is hard to do at scale. (And the
anthrax can't mine bitcoin.)
Given the ineffectiveness of current laws around ransomware,
botnets, phishing, identity theft, online scams, etc., I don't
think a law saying "don't do that" would be a solution.
And ISPs are (by far) not the only offenders here. Every
public wifi would be an equally attractive attack point.
SyneRyder wrote 14 hours 16 min ago:
I agree. Fortunately for blogs, we still have an option - make
sure your website is accessible via HTTP / port 80. This has the
extra advantage that your website will continue to work on older
tech that doesn't support these SSL certs. It will even be
accessible to retro hardware that couldn't attempt decoding SSL
in the first place.
Of course I have modern laptops, but I still fire up my old Pismo
PowerMac G3 occasionally to make sure my sites are still working
on HTTP, accessible to old hardware, and rendering tolerably on
old browsers.
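A minimal sketch of that dual-protocol setup in nginx, assuming a Let's Encrypt certificate is already in place (the hostname and paths are illustrative):

```nginx
# Hypothetical server block serving the same content over both plain
# HTTP and HTTPS, with no forced redirect, so retro clients that can't
# speak modern TLS can still connect on port 80.
server {
    listen 80;
    listen 443 ssl;
    server_name blog.example.com;

    ssl_certificate     /etc/letsencrypt/live/blog.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;

    root /var/www/blog;
}
```

The key design choice is omitting the usual `return 301 https://...` redirect, which is what normally locks old hardware out.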
account42 wrote 4 hours 50 min ago:
Unfortunately this means that now your pages are available at
multiple URLs, one which won't work everywhere - and you have
no control over what URL people will share.
noAnswer wrote 16 hours 58 min ago:
> I doubt anyone is going to MITM my access to an art historian's
personal website.
But that is what ISPs did! They injected (more) ads, replaced ads
with their own, and injected JavaScript for all sorts of things,
like loading a more compressed version of a JPEG so you had to
click an extra button to load the full thing. Some even stripped
the STARTTLS string from SMTP connections. Early Vodafone UMTS/3G
was especially horrendous.
I also remember "art" projects where you could change the DNS of
a public/school PC and it would change news stories on spiegel.de
and the likes.
immibis wrote 18 hours 58 min ago:
It started being enforced only after major USA ISPs started
injecting malware into every HTTP page. If it was a theoretical
concern I might agree with you, but in reality, it actually
happened which overrules any theoretical arguments. Also, PRISM.
mpyne wrote 17 hours 45 min ago:
> Also, PRISM.
PRISM works fine to recover HTTPS-protected communications. If
anything NSA would be happier if every site they could use
PRISM on used HTTPS, that's simply in keeping with NOBUS
principles.
majorchord wrote 17 hours 42 min ago:
> PRISM works fine to recover HTTPS-protected communications
Source:
mpyne wrote 17 hours 38 min ago:
[1] They collect it straight from the company after it's
already been transmitted. It's not a wiretap, it's more
akin to an automated subpoena enforcement.
HTML [1]: https://en.wikipedia.org/wiki/PRISM
immibis wrote 7 hours 53 min ago:
They also do wiretaps, and the exact name of the program
is not relevant.
reactordev wrote 5 hours 49 min ago:
Oh itâs way more pervasive.
Thatâs just what the public knows. It goes far beyond
that.
Most of the good projects in DoD space have Star Wars
names. BESPIN, JEDI, etc. PRISM is a mechanism, one
aspect of a much larger machine.
omcnoe wrote 20 hours 8 min ago:
Without TLS on your blog anyone in the middle can trivially
inject malware to all your readers.
lousken wrote 20 hours 6 min ago:
It still can: just add some 3rd-party JavaScript or an unpatched
backend app.
vntok wrote 19 hours 59 min ago:
How do you inject anything into a TLS served webpage as an
equipment-in-between without the cert's key?
bigbuppo wrote 21 hours 0 min ago:
The good news is that the CAs signed their own death warrant with
this change. If switching to ACME is more or less mandatory, what
purpose do paid certificates serve? Your options are to use LE,
switch to non-CA-issued encryption, or drop encryption entirely.
briHass wrote 18 hours 18 min ago:
Paid certs are valid for 1-year from the $$ CAs. LE certs are only
good for ~3-4 months before they have to be reissued. If there's no
easy way to do an automated ACME setup to handle the renewal, being
able to defer that for a year is worth the $20 or $70 for a
wildcard.
If paid certs drop in max validity period, then yeah, there's zero
reason to burn money.
piperswe wrote 13 hours 46 min ago:
The $$ CAs will soon only issue 45-day certs, because that's all
that browsers will accept.
nickf wrote 20 hours 14 min ago:
You assume itâs just the certs being purchased - and not support,
SLAs, other related products, management platforms, private PKI and
more. If all you do is public TLS, sure, that might be an issue.
xg15 wrote 20 hours 34 min ago:
New web features are https-only by default, since a few years ago.
So if your site uses any recent APIs, dropping encryption is not an
option.
zozbot234 wrote 20 hours 17 min ago:
Secure context is only required for features that are somehow
privacy- or security-sensitive. Some notable features are on the
list, but you can absolutely have a modern site that doesn't rely
on any of these.
manindahat wrote 19 hours 8 min ago:
Securing your communications is required to mitigate
man-in-the-middle attacks.
gcr wrote 21 hours 28 min ago:
Note the timeline. Nothing's changing soon.
> Next year, you'll be able to opt-in to 45 day certificates for
early adopters and testing via the tlsserver profile. In 2027,
we'll lower the default certificate lifetime to 64 days, and then
to 45 in 2028.
1718627440 wrote 16 hours 25 min ago:
You have a different definition of soon than me.
tgsovlerkhgsel wrote 21 hours 38 min ago:
It will make ACME the only viable option. I believe there is a second
free ACME CA and other CAs will likely adopt ACME if they want to
stay relevant.
Ideally, this will take less ongoing labor than annual manual
rotations, and I'd argue sites that can't handle this would have been
likely to break at the next annual rotation anyways.
If they have certificates managed by hosters, the hosters will deal
with it. If they don't, then someone was already paying for the
renewal and handling the replacement on the server side, making it
much more likely that it will be fixed.
aaomidi wrote 19 hours 36 min ago:
There are two, and GTS is also technically free, just hard to use.
ozim wrote 21 hours 10 min ago:
I hope everyone will adopt ACME. I still have to send a CSR to the
customers, and they send me the cert back. It is OK-ish once a year.
gregoryl wrote 21 hours 21 min ago:
They won't adopt Acme, as once a customer adopts it, the effort to
transition to a new (free) provider is almost zero.
I expect they will introduce new, "more secure", proprietary
methods, and ride the vendor lock-in until the paid certificate
industry's death.
ozim wrote 20 hours 9 min ago:
Free providers have limits and this new time limitation will
also play into that as there will be many more certificates to
renew.
Large companies will keep on using paid providers, also for
business continuity in case a free provider fails. Also, I
don't know what kind of SLA you have on Let's Encrypt.
It is more complicated than "oh, it is free, let's move on".
oooyay wrote 12 hours 47 min ago:
Most every modern "big company" I have worked for is leveraging
LetsEncrypt in some capacity where appropriate; some definitely
more than others. I don't think you're completely wrong but I
also think you're being a bit dismissive.
michaelt wrote 21 hours 23 min ago:
I'm quite surprised the CA/Browser Forum went for this.
Nobody's paying for EV certificates now that browsers don't display
the EV details. The only reason to pay for a certificate is if
you're rotating certificates manually and the 90-day expiry of
Let's Encrypt certificates is a hassle.
If the CA/Browser Forum is forcing everyone to run ACME clients (or
outsource to a managed provider like AWS or Cloudflare) doesn't
that eliminate the last substantial reason to give money to a CA?
beaugunderson wrote 17 hours 37 min ago:
The CA/BF has a history of terrible decisions, for example 2020's
"Baseline Requirements for the Issuance and Management of
Publicly-Trusted Code Signing Certificates".
Microsoft voted for it, and now they are basically the only game
in town for cloud signing that is affordable for individuals. The
Forum needs voting representatives for software developers and
end users or else the members will just keep enriching themselves
at our expense.
account42 wrote 6 hours 7 min ago:
How is the CA/B forum relevant for code signing certificates?
beaugunderson wrote 1 hour 58 min ago:
How are they not? :)
They set the baseline standard for code signing certificates.
In 2020 they added the requirement to use hardware modules
which resulted in much higher prices and fewer small
developers opting to sign their code.
immibis wrote 18 hours 59 min ago:
Yes. Mozilla presumably wants this rent-seeking industry of
useless middlemen to disappear.
throw0101a wrote 20 hours 5 min ago:
> I'm quite surprised the CA/Browser Forum went for this.
The CA folks and the Browser folks may have had differences of
opinions.
tptacek wrote 18 hours 18 min ago:
You think? :)
joking wrote 21 hours 10 min ago:
My case: I have to manage a portal for old TVs, and those don't
accept the LE root certificate since they changed it a couple of
years ago. Unfortunately, the vendor is unable to update the
firmware with new certificates, and we are stuck.
deepsun wrote 19 hours 34 min ago:
Well, how was the vendor going to apply other security updates
if they cannot update their basic security trust store?
If the vendor is really unable to update, then it's at best
negligence when designing the product, and at worst -- planned
obsolescence.
michaelt wrote 10 hours 40 min ago:
1. Ship the product with automatic updates delivered over
https
2. Product is a smart fridge or whatever, reasonable users
might keep it offline for 5+ years.
3. New homeowner connects it to the internet.
4. Security update fails because the security update server's
SSL cert isn't signed by a trusted root.
array_key_first wrote 13 min ago:
The real solution is making your shit modifiable by the
client.
We do car recalls all the time. Just send out an email or
something with instructions of what to put on a USB, it's
basically the same thing.
Yes it's inconvenient for consumers and annoying but the
alternative is worse. Essentially hard coding certificates
was always a bad idea.
lokar wrote 13 hours 19 min ago:
Yeah, participation in web tls requires the ability to
regularly update your server and client code.
Nothing stays the same forever, software is never done.
It's absurd to pretend otherwise.
aftbit wrote 21 hours 3 min ago:
Yeah that LE root certificate change broke our PROD for about
25% of traffic when it happened. Everyone acts like we control
our client's cert chains. Clients don't look at the failure and
think "our system is broken - we should upgrade". They look at
the connection failure and think "this vendor is busted - might
as well switch to someone who works". I switched away from LE
to the other free ACME provider for our public-facing certs
after that.
account42 wrote 6 hours 6 min ago:
And your clients are right. The "security" community's wanton
disregard for backwards compatibility is abhorrent.
nickf wrote 20 hours 16 min ago:
Roots for all CAs are going to be rotating much more
frequently now. Looking to be every 5 years.
silverwind wrote 18 hours 30 min ago:
Sounds like planned obsolescence if devices stop working
after 5 years or less.
majewsky wrote 8 hours 57 min ago:
Only for devices that do not allow you to patch the CA
bundle as an aftermarket repair. Call your representative
and demand Right to Repair legislation.
michaelt wrote 20 hours 7 min ago:
I'd be interested in hearing more - do you have a source
for this?
Seems to me CAs have intermediate certificates and can
rotate those, not much upside to rotating the root
certificates, and lots of downsides.
aaomidi wrote 19 hours 34 min ago:
The upside to rotating roots is:
1. These might need to happen as emergencies if something
bad happens
2. If roots rotate often then we build the muscle of
making sure trust bundles can be updated
I think the weird amount they are being rotated today is
the real root cause of broken devices, and we need to stop
the bleed at some point.
michaelt wrote 10 hours 45 min ago:
> 1. These might need to happen as emergencies if
something bad happens
Isn't this the whole point of intermediate
certificates, though?
You know, all the CA's online systems only having an
intermediate certificate (and even then, keeping it in
a HSM) and the CA's root only being used for 20 seconds
or so every year to update the intermediate
certificates? And the rest of the time being locked up
safer than Fort Knox?
aaomidi wrote 4 hours 48 min ago:
The thing is even the most secure facilities need
ingress and egress points.
Those are weaknesses. It's also that a root
rotation might be needed for completely stupid
vulnerabilities, like years later finding that a
specific key was generated incorrectly.
selcuka wrote 14 hours 58 min ago:
> If roots rotate often then we build the muscle of
making sure trust bundles can be updated
Five years is not enough incentive to push this change.
A TV manufacturer can simply shrug and claim that the
device is not under warranty anymore. We'll only end up
with more bricked devices.
aaomidi wrote 11 hours 11 min ago:
5 years is also a step, not a destination.
account42 wrote 6 hours 3 min ago:
Sounds more like a detour across hot coals that
doesn't get us anywhere closer to the destination.
nickf wrote 20 hours 5 min ago:
Chrome root policy, and likely other root policies, are
moving toward 5-year rotation of the roots and annual
rotation of issuing CAs.
Cross-signing works fine for root rotation in most cases,
unless you use IIS; then it becomes a fun problem.
mikestorrent wrote 17 hours 33 min ago:
What an absolute pain in the ass for a mediocre
increase in security.
jsheard wrote 22 hours 15 min ago:
> And, per other comments, this will make LE the only viable option
to modernize, and thus much more of a central point of failure than
before.
Let's Encrypt isn't the only free ACME provider, you can take your
pick from them, ZeroSSL, SSL.com, Google and Actalis, or several of
them for redundancy. If you use Caddy that's even the default
behavior - it tries ZeroSSL first and automatically falls back to
Let's Encrypt if that fails for whatever reason.
idoubtit wrote 20 hours 20 min ago:
> If you use Caddy that's even the default behavior - it tries
ZeroSSL first and automatically falls back to Let's Encrypt if that
fails for whatever reason.
No, that's false. It's the other way around.
"If Caddy cannot get a certificate from Let's Encrypt, it will
try with ZeroSSL". Source: [1] Which makes sense, since the ACME
access to ZeroSSL must go through an account created by a manual
registration step. Unless the landscape changed very recently, LE
is still the only free ACME CA that does not require registration.
Source:
HTML [1]: https://caddyserver.com/docs/automatic-https#issuer-fallba...
HTML [2]: https://poshac.me/docs/v4/Guides/ACME-CA-Comparison/#acme-...
jsheard wrote 20 hours 8 min ago:
My bad, I misremembered the order. You're right that ZeroSSL
requires credentials to get free certificates, but Caddy has
special-case support for generating those credentials
automatically provided you specify an email address in the
config, so it's almost transparent to the user. [1] Correction:
the default behavior is to use Let's Encrypt alone, but if you
provide an email then it's Let's Encrypt with fallback to
ZeroSSL.
HTML [1]: https://caddy.community/t/using-zerossls-acme-endpoint/9...
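The fallback order can also be pinned explicitly in the Caddyfile global options; a rough sketch, assuming current Caddy 2 syntax (check the Caddy docs before relying on it):

```caddyfile
{
    # Providing an email lets Caddy auto-register the ZeroSSL account.
    email admin@example.com
    # Issuers are tried in the order listed: Let's Encrypt first,
    # then ZeroSSL as the fallback.
    cert_issuer acme
    cert_issuer zerossl
}

blog.example.com {
    respond "hello"
}
```

With no explicit `cert_issuer` lines, Caddy applies its built-in default order, which is what the comments above are describing.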
xg15 wrote 20 hours 37 min ago:
How are all those free CAs financed?
jsheard wrote 20 hours 33 min ago:
Let's Encrypt is a non-profit funded by donations and corporate
sponsorships.
ZeroSSL, SSL.com and Actalis offer paid services on top of their
basic free certificates.
Google is Google.
bigiain wrote 19 hours 10 min ago:
> Google is Google.
So your "free" ssl certs are provided by surveillance
capitalism, and paid for with your privacy (and probably your
website user's privacy too)?
nmjohn wrote 18 hours 44 min ago:
> and paid for with your privacy (and probably your website
user's privacy too)?
That's not really how ssl certs work - google isn't getting
any information they wouldn't have otherwise had by issuing
the ssl cert.
jsheard wrote 19 hours 4 min ago:
The ethical side is up to you, but in a strictly technical
sense I don't think there's much that Google could do to
intrude on your users privacy as a result of them issuing
your SSL certificate, even if they wanted to. AIUI the ACME
protocol never lets the CA see the private key, only the
public key, which is public by definition anyway.
A more realistic concern with using Googles public CA is they
may eventually get bored and shut it down, as they tend to
do. It would be prudent to have a backup CA lined up.
oooyay wrote 12 hours 39 min ago:
> The ethical side is up to you, but in a strictly
technical sense I don't think there's much that Google
could do to intrude on your users privacy as a result of
them issuing your SSL certificate, even if they wanted to.
I'm not sure that's technically true. As a CA they
definitely have the power to facilitate a MitM attack. They
can also issue fraudulent certificates.
> AIUI the ACME protocol never lets the CA see the private
key, only the public key, which is public by definition
anyway.
That has more to do with HTTPS end to end encryption, not
the protocol of issuance.
majewsky wrote 8 hours 54 min ago:
It absolutely has to do with ACME. There used to be CAs
that would generate a service certificate including
private key for you. This is obviously a terrible idea,
but it is made impossible by ACME only allowing
exchanging CSRs for certs.
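This CSR-for-cert exchange is easy to demonstrate with plain openssl; the filenames and domain below are illustrative:

```shell
# A private key generated locally never leaves the machine.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem

# The CSR is the only artifact an ACME client submits to the CA: it
# carries the domain and the public key, signed with the private key.
openssl req -new -key key.pem -subj "/CN=example.com" -out csr.pem

# Inspect the CSR: it names the domain but contains no private key.
openssl req -in csr.pem -noout -subject
```

The CA only ever sees csr.pem; key.pem stays on the server, which is why an ACME CA cannot hand out your key even in principle.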
oooyay wrote 2 hours 42 min ago:
Ah, I see
xg15 wrote 20 hours 1 min ago:
Ah, that makes sense. Thanks!
autotune wrote 21 hours 50 min ago:
LE has some gnarly rate limiting rules, so I use ZeroSSL. Works out
great for my purposes.
OptionOfT wrote 17 hours 52 min ago:
That's the reason I switched. I had an issue where a path was not
mounted correctly and we blew through our limits.
Oh, and on LE's Rate Limit Adjustment Request form, the contractual
things (if that's what they are?) do not load:
HTML [1]: https://isrg.formstack.com/forms/rate_limit_adjustment_r...
mcpherrinm wrote 17 hours 49 min ago:
Hm, it's supposed to be [1] - but it looks like the link is
broken. I'll fix it.
HTML [1]: https://letsencrypt.org/docs/integration-guide/
ChrisArchitect wrote 22 hours 31 min ago:
Related:
Decreasing Certificate Lifetimes to 45 Days
HTML [1]: https://news.ycombinator.com/item?id=46117126
Animats wrote 22 hours 47 min ago:
These short certificate lifetimes make Let's Encrypt a central point of
failure for much of the Internet. That's a concern. Failure may be
technical or political, too.
tecleandor wrote 19 hours 51 min ago:
This is not a Let's Encrypt policy. This is a global policy:
HTML [1]: https://www.digicert.com/blog/tls-certificate-lifetimes-will...
SchemaLoad wrote 20 hours 42 min ago:
For this to be a problem LE would have to be broken for an entire
month. And even then you'd have ample time to move to something else.
stusmall wrote 20 hours 45 min ago:
I'm seeing this hot take a lot, but it doesn't make sense. Are people
worried that LE is going to have a 45-day outage or something? ACME
is an open standard with other implementations, so I'm having trouble
seeing the political central point of failure too.
It's okay for something to be a good thing and to celebrate it. We
don't have to frown about everything.
patmorgan23 wrote 19 hours 25 min ago:
Yeah, don't the ACME bot defaults have it trying to renew the
cert when it has like 30% of its lifetime left? Which means the CA
would have to be down for days or weeks for it to impact production.
Oh, and you would definitely know about this outage, because you
would hear about it in your news, and the monitoring you already
have set up to yell at you when your cert is about to expire (you
already have that, right? Right?). And you can STILL trivially
switch to another CA that supports ACME.
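The renew-early behavior described above can be sketched in a few lines of Python; the 30% threshold and the function name are illustrative, not any particular client's real implementation:

```python
from datetime import datetime, timedelta

def should_renew(not_before, not_after, now, remaining_fraction=0.3):
    """Renew once less than `remaining_fraction` of the lifetime remains."""
    lifetime = not_after - not_before
    return (not_after - now) < remaining_fraction * lifetime

# A 45-day certificate with a 30% threshold renews after ~31.5 days,
# leaving ~13.5 days of slack to ride out a CA outage.
issued = datetime(2026, 1, 1)
expires = issued + timedelta(days=45)
print(should_renew(issued, expires, issued + timedelta(days=30)))  # False
print(should_renew(issued, expires, issued + timedelta(days=32)))  # True
```

Because the client retries daily once the threshold is crossed, the CA has to stay down for the entire slack window before anything actually expires.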
JackSlateur wrote 21 hours 22 min ago:
No.
There are other CAs with ACME support.
Including paid CAs, if you really want to pay: Sectigo.
bigbuppo wrote 20 hours 51 min ago:
Sectigo is also going to be forced into issuing short-lived
certificates. This is a CA/Browser Forum decision that is binding
on all member CAs.
JackSlateur wrote 9 hours 55 min ago:
But Sectigo does not care; they are just like Let's Encrypt
(except you pay).
You can renew your Sectigo certificates with ACME, so from a
technical point of view, just trigger your cron more often.
Dylan16807 wrote 12 hours 33 min ago:
They will, but Sectigo forcing faster renewal doesn't make Let's
Encrypt into a central failure point. Central failure point was
the worry above.
woodruffw wrote 21 hours 37 min ago:
Can you explain how shorter certificate lifetimes make LE more of a
single point of failure? I can squint and see an argument for CA
diversity; I struggle to see how reducing certificate lifetimes
increases CA centralization.
Spooky23 wrote 20 hours 55 min ago:
Increasing the number of touchpoints dramatically increases the
probability that the service will be unavailable and cause a
service impact.
woodruffw wrote 20 hours 33 min ago:
Okay, but that isn't about being a single point of failure. That
happens with this policy regardless of whether HTTPS is
centralized around LE or not.
Spooky23 wrote 20 hours 18 min ago:
Oh for sure. This is stupid policy by an organization with no
accountability to anyone, that represents the interests of
parties with their own agendas.
woodruffw wrote 19 hours 40 min ago:
I don't think it's that venal: the CABF holds CAs
accountable, largely through the incentives of browsers
(which in turn are the incentives of users, mediated by what
Google, Microsoft, Apple, and Mozilla think is worth their
time). That last mediation is perhaps not ideal, but it's
also not a black hole of accountability.
Spooky23 wrote 17 hours 11 min ago:
I don't think it's venal, but the browser makers
don't represent the different constituencies that operate
servers or end users in many capacities.
ferngodfather wrote 20 hours 56 min ago:
Because when they eventually get their wet dream of 7-day renewals,
everyone relies upon them once a week. LE being down for 48 hours
could take out a big chunk of the Internet.
Certificates have historically been a "fire and forget" but
constant re-issuance will make LE as important as DNS and web
hosting.
phasmantistes wrote 12 hours 19 min ago:
FWIW, we're acutely aware of the operational risks of super short
lifetimes and frequent renewals. That's why our `shortlived`
profile is clearly documented as only being appropriate for orgs
that have high operational maturity and an oncall rotation. We
carry pagers too, and if LE goes down for 48 hours, we'll be
desperately trying not to take out a huge chunk of the Internet.
cedilla wrote 20 hours 18 min ago:
More forget than fire.
The longer certificates were valid the more often we'd have
breakage due to admins forgetting renewal, or how to install the
new certificates. It was a daily occurrence, often with hours or
days of downtime.
Today, it's so rare I don't even remember when I last encountered
an expired certificate. And I'm pretty sure it's not because of
better observability...
bigbuppo wrote 20 hours 55 min ago:
The solution is to get rid of CAs entirely.
ferngodfather wrote 20 hours 50 min ago:
Yeah, I completely agree. I'm not sure what the solution is,
but this ain't it.
sethhochberg wrote 21 hours 0 min ago:
Shorter lifetimes means more renewal events, which means more
individual occasions in which LE (or whatever other cert authority)
simply must be available before sites start falling off the
internet for lack of ability to renew in time.
We're not quite there yet, but the logical progression of shorter
and shorter certificate lifetimes to obviate the problems related
to revocation lists would suggest that we eventually end up in a
place where the major ACME CAs join the list of heavily-centralized
companies which are dependencies of "the internet", alongside AWS,
Cloudflare, and friends. With cert lifetimes measured in years or
months, the CA can have a bad day and as long as you didn't wait
until the last possible minute to renew, you're unimpacted. With
cert lifetimes trending towards days or less, now your CA really
does need institutionally important levels of high availability.
It's less that LE becomes more of a single point of failure than it
is that the concept of ACME CAs in general join the list of
critically available things required to keep a site online.
PunchyHamster wrote 19 hours 18 min ago:
If you renew 7 days before the cert ends, it would need to be down
for an entire week in the worst case, so it's far less bad in general.
Hell, you can still set it to renew when the cert still has a month
left.
I'm more worried that the clowns at the helm will push into
something stupid like week or 3 days, "coz it improves security
in some theoretical case"
woodruffw wrote 20 hours 30 min ago:
> would suggest that we eventually end up in a place where the
major ACME CAs join the list of heavily-centralized companies
which are dependencies of "the internet"
I think that particular ship sailed a decade ago!
> It's less that LE becomes more of a single point of failure than
it is that the concept of ACME CAs in general join the list of
critically available things required to keep a site online.
Okay, this is what I wanted clarified. I don't disagree that CAs
are critical infrastructure, and that there's latent risk
whenever infrastructure becomes critical. I just think that risk
is justified, and that LE in particular is no more or less of a
SPOF with these policy changes.
Starlevel004 wrote 21 hours 38 min ago:
Here are the key takeaways as to why that's an issue
ocdtrekkie wrote 21 hours 41 min ago:
I am at the point of looking forward to it. The CA/B is so unhinged
and so unaccountable and the appetite to fix it is so small, a broad
scale collapse of the Internet caused by the CA/B's incompetence is
looking like the only way to finally end their regime.
lysace wrote 21 hours 46 min ago:
Agreed. We need a second source, preferably located in the EU. They
could share operational code/protocols/etc. I.e. during peace time
they could collaborate.
The EU started building the Galileo GNSS ("GPS") in 2008 as a backup
in case the US turned hostile. And now look where we are in 2025 with
the US president openly talking about taking Greenland. Wise move. It
seemed like a gigantic waste back then. It was really, really
expensive.
Then lots of European countries ordered F35s from Lockheed Martin.
What an own goal. This includes Denmark/Greenland.
But I digress...
alwillis wrote 2 hours 39 min ago:
> We need a second source, preferably located in the EU.
Absolutely. It feels like a matter of time before the current US
administration will attempt to implement some authoritarian policy
regarding certificates.
darkwater wrote 21 hours 29 min ago:
Actalis is European, Italian to be more precise, owned by Aruba
(whether that last part is a good thing is debatable, though).
Cu3PO42 wrote 11 hours 3 min ago:
ZeroSSL is Austrian, however since I last looked at them it
appears they were acquired by a US corporation.
Havoc wrote 22 hours 19 min ago:
>Let's Encrypt a central point of failure
That's been true for a while, regardless of cert length.
Everyone leans on them and unlike CF and other choke points of the
internet...Let's Encrypt is a non-profit
loloquwowndueo wrote 21 hours 14 min ago:
You say it like it's a bad thing. Do you donate to them? I do!
Havoc wrote 17 hours 6 min ago:
>Do you donate to them?
yes
Sesse__ wrote 22 hours 44 min ago:
There are other free ACME-based providers, so switching should be
fairly painless if needed. (I guess if you've issued CAA records or
similar, you may need some manual intervention.)
bigbuppo wrote 20 hours 52 min ago:
Doesn't matter. This is a push by the CA/Browser Forum. Google,
Mozilla, and all the CAs got together and said, "hey, what if we
just made certificates shorter because we're too stupid to figure
out a revocation mechanism that actually works other than
expiration." They've tried this shit before, but saner heads
prevailed. This time they did not.
ekr____ wrote 20 hours 38 min ago:
Mozilla does have a revocation mechanism that actually works.
HTML [1]: https://hacks.mozilla.org/2025/08/crlite-fast-private-an...
tptacek wrote 20 hours 49 min ago:
Shorter lifetimes strongly push customers towards ACME and thus
away from commercial CAs, so it's odd to suggest that CAs
subverted this process.
gethly wrote 21 hours 26 min ago:
Name one, besides the NSA-run Cloudflare.
madeofpalk wrote 21 hours 21 min ago:
Actalis [1], Google [2], SSL.com [3], ZeroSSL [4].
I don't actually think Cloudflare runs an ACME Certificate
Authority. They just partner with LetsEncrypt? Edit: Looks like
they don't run any CA, they just delegate out to a bunch of others
[5]
HTML [1]: https://guide.actalis.com/ssl/activation/acme
HTML [2]: https://pki.goog/
HTML [3]: https://www.ssl.com/blogs/sslcom-supports-acme-protocol-...
HTML [4]: https://zerossl.com/documentation/acme/
HTML [5]: https://developers.cloudflare.com/ssl/reference/certific...
gethly wrote 20 hours 40 min ago:
Those are just providers that support the ACME protocol, not
free certificate providers.
teraflop wrote 20 hours 33 min ago:
Google and ZeroSSL both provide free certificates via ACME.
The links posted above have more details.
Animats wrote 22 hours 18 min ago:
You can have more than one CAA record, so it should be possible to
configure backup certificate authorities. It's probably a good idea
to do that for important sites.
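For example, a zone authorizing both Let's Encrypt and a backup CA could publish CAA records along these lines (illustrative domain; `letsencrypt.org` and `sectigo.com` are the issuer domains those CAs document for CAA, but check your chosen CA's own docs for the exact value):

```
example.com.  3600  IN  CAA  0 issue "letsencrypt.org"
example.com.  3600  IN  CAA  0 issue "sectigo.com"
example.com.  3600  IN  CAA  0 iodef "mailto:security@example.com"
```

With multiple `issue` records in place, either CA may issue for the domain, so an ACME client can fail over without a DNS change.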