I have had this setting enabled for quite some time. It’s never caused any issues. I have only seen it pop up once, for a cert that I had just issued a second prior. The cert was logged properly and the page loaded another second later. Very quick.
I am on Debian Firefox 135.0.1 and https://no-sct.badssl.com/ doesn't error out as expected. Is Debian doing something different?
I do get the warning using the same Firefox version on Windows.
Note that the link in the article is mangled. The link text is <https://no-sct.badssl.com/>, but the actual href points to <https://certificate.transparency.dev/useragents/>. Your comment has the correct link which gives the warning.
135.0.1 on Ubuntu is warning me. Maybe Mozilla is doing a delayed rollout?
For context, in about:config my security.pki.certificate_transparency.mode is set to 2. According to https://wiki.mozilla.org/SecurityEngineering/Certificate_Tra... if it's set to 0 (disabled) or 1 (not enforcing, collecting telemetry only), you can set it to 2 to enable enforcement.
I can imagine Mozilla setting that pref to 1 by default (collecting telemetry on sites you visit) and Debian overriding it for privacy purposes.
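If you want to flip it yourself, here's a minimal sketch of what that looks like in user.js form, with the value meanings as described on the Mozilla wiki page linked above:

    // user.js sketch: 0 = disabled, 1 = telemetry only (no enforcement), 2 = enforce CT
    user_pref("security.pki.certificate_transparency.mode", 2);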
Does the browser actually communicate with any external service for enforcing CT?
I was under the impression it just checked the certificate for an inclusion proof, and actual monitoring of consistency between these proofs and logs is done by non-browser entities.
No, I assume, but Mozilla was presumably first collecting telemetry to see whether enabling CT would cause user-visible errors or not.
Ah, good point – presumably 2 also sends telemetry to Mozilla?
Debian 12 with Firefox 135.0.1 and I do get a warning.
I'm on Debian sid / Firefox 135.0.1-1 and do get the warning.
135.0.1 on macOS is warning me as well.
Librewolf 135.0.1 is working as expected (FreeBSD, Windows and Linux)
Hmm. I wonder how this will work with certificates generated by enterprise or private certificate authorities. Specifically, I use Caddy for local web development, and it generates a snake-oil CA for anything on *.localhost using code from step-ca.
I also use step-ca and BIND to run a homelab top-level domain and generate certs using RFC 2136. I have to install that root CA cert everywhere, but it’s worth it.
As of now, such stricter certificate requirements only apply to publicly trusted CAs that ship with the browser. Custom-added CAs are not subject to these requirements—this applies to all major browsers.
I haven't tested Firefox's implementation yet, but I expect your private CA to continue working as expected since it is manually added.
Private CAs can:
* Issue longer certificates, even 500 years if you want. (Public CAs are limited to 1 year I think, or 2? I think it was 1.)
* Use weaker algorithms or older standards if they want.
* Skip browser revocation policies: no need for OCSP/CRL etc.
* More things that I do not know?
Public CAs are currently limited to 398 days (effectively 13 months).
Shameless plug: Check out my Certificate Transparency monitor at https://www.merklemap.com
The scale is massive, I just crossed 100B rows in the main database! :)
That's interesting, can you share more information about your tech stack?
I'm clearly not the target audience for this, so excuse me if this is a dumb question: what is this tool used for? Who would usually use it and for what purpose?
Anyone setting up infrastructure, security researchers, security teams and IT teams.
It’s also very useful in the brand management field, especially for detecting phishing websites.
Are you continuously monitoring consistency proofs? Or in other words, would someone (you or someone else) actually notice if a log changed its contents retroactively?
Not yet, but that’s definitely the short term plan!
Why does it only show a few subdomains for .statuspage.io? I would have expected at least 10K or so. https://www.merklemap.com/search?query=*.statuspage.io&page=...
Is my query wrong, or are you just intentionally showing fewer results if you’re not paying?
> Why does it only show a few subdomains for .statuspage.io? I would have expected at least 10K or so. https://www.merklemap.com/search?query=*.statuspage.io&page=...
Because they have a wildcard for *.statuspage.io, which they are probably hosting their pages on.
> Is my query wrong, or are you just intentionally showing fewer results if you’re not paying?
No, results are the same but not sorted.
This is really cool! It discovered even subdomains that lived for a few days on my site. If it’s not a secret, how do you discover those? Is it by listening to DNS record changes?
By using... certificate transparency logs.
https://www.merklemap.com/documentation/how-it-works
I tried to do something like this one time and had a problem just finding the logs. All information on the internet points to the fact that certain logs exist, but not how to access them. Are they not public access? Do you have a B2B relationship with companies like Cloudflare that run logs?
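For what it's worth, the logs are public: they speak the RFC 6962 HTTP API, and the list of logs browsers recognize is published, e.g. Google's list at https://www.gstatic.com/ct/log_list/v3/log_list.json. A minimal Python sketch; the specific log URL is just an example taken from that list and may have rotated out by the time you read this:

    import requests

    # Any log from the public log list works; they all expose the RFC 6962 endpoints.
    log_url = "https://ct.googleapis.com/logs/us1/argon2025h1/"

    # Signed tree head: the log's current size and root hash.
    sth = requests.get(log_url + "ct/v1/get-sth").json()
    print("tree size:", sth["tree_size"])

    # Raw leaf entries (base64-encoded); logs cap how many you get per request.
    batch = requests.get(log_url + "ct/v1/get-entries",
                         params={"start": 0, "end": 31}).json()
    print("fetched", len(batch["entries"]), "entries")

No B2B relationship needed; monitors just crawl these endpoints.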
Can someone explain in a nutshell what CT is, and how it helps security for the average user?
It's a public, tamper-proof log of all certificates issued.
When a CA issues a certificate, it sends a copy to at least two different logs, gets a signed "receipt", and the receipt needs to be included in the certificate or browsers won't accept it.
The log then publishes the certificate. This means that a CA cannot issue a certificate (that browsers would accept) without including it in the log. Even if a government compels a CA, someone compromises it, or someone even steals the CA's key, they'd have to either also do the same to two CT logs, or publish the misissued certificate.
Operators of large websites then can and should monitor the CT logs to make sure that nobody has issued a certificate for their domains, and they can and will raise hell if they see that happen. If e.g. a government compels a CA to issue a MitM certificate, or a CA screws up and issues a fake cert, and this cert is only used to attack a single user, it would have been unlikely to be detected in the past (unless that user caught it, nobody else would know about the bad certificate). Now, this is no longer possible without letting the world know about the existence of the bad cert.
There are also some interesting properties of the logs that make it harder for a government to compel the log to hide a certificate or to modify the log later. Essentially, you can store a hash representing the content of the log at any time, and then for any future state, the log can prove that the new state contains all the old contents. The "receipts" mentioned above (SCTs) are also a promise to include a certificate by a certain time, so if a log issues an SCT then publishes a new state more than a day later that doesn't include the certificate, that state + the SCT are proof that the log is bad.
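To make the append-only property concrete, here's a minimal sketch of the RFC 6962 Merkle Tree Hash that these proofs are built on (illustration only, not a production consistency-proof verifier):

    import hashlib

    def mth(leaves):
        """RFC 6962 tree hash: leaf = SHA-256(0x00 || entry),
        interior node = SHA-256(0x01 || left || right)."""
        if not leaves:
            return hashlib.sha256(b"").digest()
        if len(leaves) == 1:
            return hashlib.sha256(b"\x00" + leaves[0]).digest()
        k = 1
        while k * 2 < len(leaves):  # largest power of two below len(leaves)
            k *= 2
        return hashlib.sha256(b"\x01" + mth(leaves[:k]) + mth(leaves[k:])).digest()

    old = [b"cert-1", b"cert-2"]
    new = old + [b"cert-3"]  # append-only: old entries keep their positions
    print(mth(old).hex())
    print(mth(new).hex())

A consistency proof is then a handful of subtree hashes that let anyone check the old root is embedded in the new one, without redownloading the whole log.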
Thanks for the great explanation of both tech design and real world benefits!
There's a site for it here (linked 2 levels deep from the original article):
https://certificate.transparency.dev/
CT is an append-only distributed log for certificate issuances. People and client software can use it to check if a certificate is being provided by a trusted CA, if it has been revoked, or is being provided by multiple CAs (the latter possibly indicating CA compromise). CA meaning Certificate Authority, the organizations that issue certificates.
This provides a further layer of technological defense that helps mitigate the risk of your web browser traffic being intercepted and potentially tampered with.
In practice a regular person is unlikely to run into this, because web PKI is mostly working as expected, so there's no reason for the edge cases to happen en masse. This change is covering one such edge case.
No idea how the typical corporate interception solutions (e.g. Zscaler) circumvent it in other browsers where this check has long been implemented.
Will Mitmproxy stop working?
Chrome treats certificates added by the user as not requiring CT: https://github.com/mitmproxy/mitmproxy/discussions/5720
And likewise, Firefox:
> Setting this preference to 2 causes Firefox to enforce CT for certificates issued by roots in Mozilla's Root CA Program.
I believe so. You'll need to disable CT enforcement or add your SPKI hash to the ignore list in the browser settings temporarily to get it working. [0] I guess this is also how corporations get around this issue? Still unsure.
[0] https://wiki.mozilla.org/SecurityEngineering/Certificate_Tra...
No. CT is only required for public CAs. You only need those browser policy settings if you’re using a public CA without CT.
I'd imagine this is why certs that chain to root certificates manually added to the trust store will work fine, then [as stated by other comments]?
Congratulations! That's terrific news.
Doesn't this effectively render corporate CAs useless?
Another comment mentioned [0]. Enterprises and people running a private CA can set "security.pki.certificate_transparency.disable_for_hosts" to disable CT for certain domains (plus all their subdomains).
I just hope they automatically disable it for non-public TLDs, both from IANA and RFC 6762.
[0] https://wiki.mozilla.org/SecurityEngineering/Certificate_Tra...
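For illustration, a user.js sketch of that pref; note that the exact value format here is an assumption on my part, so check the wiki page above:

    // user.js sketch; host-list format is an assumption, see the Mozilla wiki
    user_pref("security.pki.certificate_transparency.disable_for_hosts",
              "internal.corp.example,homelab.example");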
> Doesn't this effectively render corporate CAs useless?
All of the browsers ignore transparency for enterprise roots. To determine which is which, the list of actual public roots is stored separately in the CA database: it's shown in chrome://certificate-manager/crscerts for Chrome, and as a "Builtin Object Token" in Firefox's Certificate Manager.
No, it just makes any CA accountable for all the certs they issue.
Wonder why they say you should monitor transparency logs instead of setting up CAA records - malicious actors will most likely disregard CAA anyway.
They're doing different things, and you should do both.
Setting CAA records primarily serves to reduce your attack surface against vulnerable domain validation processes. If an attacker wants to specifically attack your domain and you use CAA, the attacker now needs to find a vulnerability in your CA's domain validation process instead of any CA's validation process. If it works, it prevents an attacker from getting a valid cert.
Monitoring CT logs only detects attacks after the fact, but it will catch cases where CAs wrongly issued certificates despite CAA records. And if you monitor against a whitelist of your own known certificates, it will also catch cases where someone got your CA to issue them a certificate, either by tricking the CA or by compromising your infrastructure. (Most alerts you will actually see will be someone at your company just trying to get their job done without going through what you consider the proper channels, although I think you can now restrict CAA to a specific account for Let's Encrypt.)
Since CT is now required by browsers, an attacker that compromises (or compels!) a CA in any way would still have to log the cert, or else also compromise or compel at least two logs into issuing SCTs (signed promises to include the cert in the log) without actually publishing the cert (this is unlikely to get caught, but if it were, there would be signed proof that the log did wrong).
Let's not let the best be the enemy of the good. Malicious actors who disregard CAA would first have to have gone through the process of accreditation to be added to public trust stores, and then would quickly get removed from those trust stores as soon as the imposture was detected. So while creating a malicious CA and then ignoring CAA records is entirely possible for few-shot high-value attacks, it's not a scalable approach, and it means CAA offers at least partial protection against malicious actors forging certificates as a day-to-day activity.
Transparency logs are of course better because they make it much easier for rogue CAs to be caught rapidly, but it's not a reason to abandon CAA until transparency log checking is universal, not just in browsers, but across the whole PKI ecosystem.
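For reference, CAA is just a DNS record; a zone-file sketch (the per-account restriction uses the RFC 8657 accounturi parameter, and the account URL below is a made-up placeholder):

    example.com.  IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/1234567"
    example.com.  IN CAA 0 issuewild ";"                        ; forbid wildcard issuance
    example.com.  IN CAA 0 iodef "mailto:security@example.com"  ; where CAs report violations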
In any security setting, it’s usually good to have both controls and detection.
CAA records help prevent unexpected issuance, but what if your DNS server is compromised? DNSSEC might help.
Certificate Transparency provides a detection mechanism.
Also, unlike CAA records which are enforced only by policy that CAs must respect them, CT is technically enforced by browsers.
So they are complementary. A security-sensitive organization should have both.
Would be cool if DANE/TLSA record checks were also implemented. Not sure why browsers are not adopting it.
DNSSEC has aged very poorly. I also believe it operates at the wrong layer. When you surf to chase.com, you want to be sure that the website you see is actually JPMorgan Chase and not Mallory’s fake bank site. That’s why we have HTTPS and the WebPKI. If your local DNS server is poisoned somehow, that’s obviously not good, but it cannot easily send you to a fake version of chase.com.
Part of why it’s so hard for Mallory to create a fake version of a bank site is Certificate Transparency. It makes it much much harder to issue a forged certificate that a browser such as Chrome, Safari or Firefox will accept.
For further info about the flaws of DNSSEC I can recommend this article: https://sockpuppet.org/blog/2015/01/15/against-dnssec/ It’s from 2015, but I don’t think anything has really changed since then.
A lot of things have certainly changed since that article's release: the crypto, for example, but also the existence of DoH/DoT, and the fact that DNSSEC is leaps and bounds more widely deployed. They also talk about key pinning, but key pinning has been dead for a while, replaced by exactly this: CT.
I'm also not sure how much to trust the author. The writing is very odd language-wise, and they seem to have quite the axe to grind even with just public CA-based PKI, let alone their combination. The FAQ they link to also makes no sense to me:
> Under DNSSEC/DANE, CA certificates still get validated. How could a SIGINT agency forge a TLS certificate solely using DNSSEC? By corrupting one of the hundreds of CAs trusted by browsers.
That's literally the thing I'd want TLSA enforcement to combat.
The reason browsers didn't implement DANE is because most people's DNS servers are garbage, so if you do this the browser doesn't work and "if you changed last you own the problem".
At the time, if you asked a typical DNS server (e.g. at an ISP, or built into a cheap home router) any type of question except "A? some.web.site.example", you got either no answer or a confused error. What do you mean, records other than A exist? RFC what? These days most of them can also answer AAAA?, but good luck with the records needed by DNSSEC.
Today we could do better where people have any sort of DNS privacy, whether that's over HTTPS, TLS, or QUIC; so long as it's encrypted, it's probably not garbage, and it isn't being intercepted by garbage at your ISP.
Once the non-adoption due to rusted-in-place infrastructure happened, you get (as you will probably see here on HN) people who have some imagined principled reasons not to do DNSSEC. Remember always to ask them how their solution fixed the problem that they say DNSSEC hasn't fixed. The fact that the problem still isn't fixed tells you everything you need to know.
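This is easy to check against your own resolver; a quick sketch with dig, where the resolver address is just whatever your network hands out:

    # A well-behaved resolver returns NOERROR (possibly with an empty answer);
    # the garbage ones time out or return malformed responses.
    dig @192.168.1.1 example.com A +dnssec
    dig @192.168.1.1 _443._tcp.example.com TLSA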
I guess I did forget that me using Cloudflare and Google as my DNS is not a normal setup to have...
But surely it doesn't have to be so black and white? TLSA enforcement is not even a hidden feature flag in mainstream web clients; to my knowledge, it's just completely non-existent.
> if you asked a typical DNS server e.g. at an ISP or built into a cheap home router - any type of question except "A? some.web.site.example" you get either no answer or a confused error.
Really? Because that would mean that anything using SRV records wouldn’t work on home routers, yet it’s an integral part of many protocols at this point.
There’s some room between “my DNS resolver doesn’t do DNSSEC” and “I can only resolve A records”.
Yes, really. Like I said, even AAAA, though better than it was, isn't as reliable as A; the "Happy Eyeballs" tactic makes that tolerable. Maybe 90% of your customers have IPv6, get the AAAA answer quickly, and reach the IPv6 endpoint: awesome. 9% only have IPv4 anyway and get the IPv4 endpoint: also fine. But for 1% the AAAA query never returns, the IPv4 connection succeeds a few milliseconds later, and the AAAA query is abandoned, so who cares.
I'd guess that if you build something which needs SRV? to "just work" in 2025, not as a "nice to have" but as a requirement, you probably lose 1-2% of your potential users. It might be worth it. But if you need 100%, you'll want a fallback. I suggest built-in DoH to, say, Cloudflare.
DNSSEC is not a good PKI, that's why.
There are basically no rules on how to properly operate it, and even if there were, there'd be no way to enforce them. There's also almost zero chance a leaked key would ever be detected.
I'm not sure I follow, could you please elaborate a bit more? I'm not really suggesting DNS to be exclusively used for PKI over the current Web PKI system of public CAs either.
That is kind of the value proposition for DANE though.
What prevents me from putting the hash of the public key of my public CA certificate into the TLSA record? Nothing. What prevents clients from checking both that the public CA-based certificate I'm showing is valid and present on CT, and that it hashes to the same value I have placed into the TLSA record? Also nothing.
Am I grossly misunderstanding something here? Feels like I missed a memo.
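For the record, that's roughly what the TLSA usage field already encodes; a zone-file sketch with a placeholder hash (usage 0 pins a CA within the normal PKIX-validated chain, usage 3 pins the end-entity key itself):

    _443._tcp.www.example.com. IN TLSA 0 1 1 0123456789abcdef...  ; selector 1 = SPKI, matching 1 = SHA-256

The SPKI hash for the record can be computed with standard openssl tooling:

    openssl x509 -in ca.pem -pubkey -noout \
      | openssl pkey -pubin -outform DER \
      | openssl dgst -sha256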
Impractical in the sense that there are still TLDs (ccTLDs mind you, ICANN can't force anything for those countries) which do not have any form of DNSSEC, which makes DANE and TLSA useless for those TLDs.
Kind of disappointing if that is the actual stated reason from the various browser vendors; all-or-nothing doesn't sound like a good policy for this. Surely there is a middle ground possible.
Supporting DANE means you need to maintain both traditional CA validation and DANE simultaneously.
This may be controversial, but I believe that with CT logs already in place, DANE could potentially reduce security by leaving you without an audit trail of certificates issued to your hosts. If you actively monitor certificate issuance to your hosts using CT, you are in a much better security posture than what DANE would provide you with.
People praising DANE seem to be doing so as a political statement ("I don't want a 3rd party") rather than making a technical point.
Why not do both at the same time? I understand that a TLSA record in and of itself would suffice technically, but combined with the regular CA-based PKI, I figured the robustness would increase.
> Why not do both at the same time? I understand that a TLSA record in and of itself would suffice technically, but combined with the regular CA-based PKI, I figured the robustness would increase.
That seems quite complicated while not increasing security by much, or at all?
I don't necessarily see the complication. The benefit would be that I, the domain owner, would be able to communicate to clients what certificate they should be expecting, and in turn, clients would be able to tell if there's a mismatch. Sounds like a simple win to me.
According to my understanding, multiple CAs can issue certificates covering the same domain just fine, so a cert showing up in the CT logs is not on its own a sign of CA compromise, just a clue. You could then check CAA, but that is optional, and according to the standard clients are never supposed to check it, only CAs (and in this scenario the idea is that one or more of those are compromised). So there's a gap there. To my knowledge this gap is currently bridged by people auditing CT manually, and it is the gap that DANE would fill in this setup, in my thinking, automating it away (or just straight up providing it, because I can imagine that a lot of domain owners do not monitor CT for their domains whatsoever).
Key point: they are just using Chrome's transparency logs. Yet again, Firefox chooses to be subservient to Google's view of the world.
Great move! Curious to see how this will impact lesser-known CAs. Will this make it easier to detect misissued certs, or will enforcement still depend on browser policies?
Firefox is just catching up with what Chrome implemented years ago. Unless you have a site visited only by Firefox users, the ecosystem effect is likely to be minimal... though it does protect Firefox users in the time between detection and remediation.
In theory it is good, but somehow it is also a big threat to the privacy and security of your infrastructure.
No need anymore to scan your network to map the complete endpoints of your infrastructure!
And it's a new single point of control and failure!
You're essentially advocating for security through obscurity.
The fact that public infrastructure is mappable is actually beneficial. It helps enforce best practices rather than relying on the flawed 'no one will discover this endpoint' approach.
> And it's a new single point of control and failure!
This reasoning is flawed. X.509 certificates themselves embed SCTs.
While log unavailability may temporarily affect new certificate issuance, there are numerous logs operated by diverse organizations precisely to prevent single points of failure.
Certificate validation doesn't require active log servers once SCTs are embedded.
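You can inspect the embedded SCTs in any public site's certificate yourself; a quick sketch with openssl (the host here is arbitrary):

    # Fetch the leaf certificate and print its embedded SCT list
    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
      | openssl x509 -noout -text \
      | grep -A 15 'CT Precertificate SCTs'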
There are tradeoffs, and it’s not all-or-nothing. There’s a reason security clearances exist, and that’s basically “security through obscurity”.
The argument here is that the loss of privacy and the incentives that will increase centralization might not be worth the gain in security for some folks, but good luck opting out. It basically requires telling bigco about your domains. How convenient for crawlers…
> You're essentially advocating for security through obscurity
So? The problem with security through obscurity is when it is the only security you are using. I didn't see anything in his comment that implied his only protection was the secrecy of his endpoints.
Security through obscurity can be fine when used in addition to other security measures, and has tangible benefits in a significant fraction of real world situations.
> So? The problem with security through obscurity is when it is the only security you are using. I didn't see anything in his comment that implied his only protection was the secrecy of his endpoints.
Directly or unintentionally implied, it doesn't matter. That's an implication you're allowed to infer when obscurity is the only thing listed, because it's *very* common for it to be the only defense mechanism. Also, when given the choice between mentioning something that works (literally any other security measure) and mentioning something well known to fail more often than it works (obscurity), you're supposed to mention the functioning one and omit the non-functioning one. https://xkcd.com/463/
> Security through obscurity can be fine when used in addition to other security measures,
No, it has subtle downsides as well. It changes the behavior of everything that interacts with the system. Humans constantly overvalue the actual value of security through obscurity, and will make decisions based on that misconceived notion. I once heard an engineer tell me, "I didn't know you could hit this endpoint with curl." The mental model of permitting secrets to be used as part of security is actively harmful to security, much more than it has ever been shown to benefit it. Thus, the cure here is to affirmatively reject security through obscurity.
We should treat it the same way we treat goto. Is goto useful? Absolutely. Are there places where it improves code? Again, absolutely. Did code quality as a whole improve once SWEs collectively shunned goto? Yes! Security through obscurity is causing the exact same class of issues. And until the whole industry adopts the understanding that it's actually more harmful than useful, we'll keep letting subtle bugs like "I thought no one knew about this" sneak in.
We're not going to escape this valley while people are still advocating for security theatre. We all collectively need to enforce the idea that secrets are dangerous to software security.
> and has tangible benefits in a significant fraction of real world situations.
So does racial profiling, but humans have proven over and over and over again that we're incapable of not misusing it in a way that's also actively harmful. And again, when there are options that are better in every way, it's malpractice to use the error-prone methods.
Thank you for putting this up so clearly!
"Passive DNS" is a thing people sell. If people connect to your systems and use public DNS servers, chances are I can "map the complete endpoints of your infrastructure" without touching that infrastructure for a small cost.
If client X doesn't want client-x.your-firm.example to show up by all means obtain a *.your-firm.example wildcard and let them use any codename they like - but know that they're going to name their site client-x.your-firm.example, because in fact they don't properly protect such "secrets".
"Blue Harvest" is what secrets look like. Or "Argo" (the actual event not the movie, although real life is much less dramatic, they didn't get chased by people with guns, they got waved through by sleepy airport guards, it's scary anyway but an audience can't tell that).
What you're talking about is like my employer's data centre being "unlisted" and having no logo on the signs. Google finds it with an obvious search, it's not actually secret.
> In theory it is good, but somehow it is also a big threat to privacy and security of your infrastructure.
This is silly. Certificates have to be in CT logs regardless of whether Firefox validates them or not.
Additionally, this doesn't apply to private CAs, so internal infrastructure is probably not affected unless you are using public web PKI on it.
It does leak domain name info, but then you do still have the option to use a wildcard certificate or set up a private CA instead of relying on public ones, which likely makes more sense when dealing with a private resource anyways.
I guess there might be a scenario where you need "secret" domains to be publicly resolvable and use distinct certs, but an example escapes me.
> And it's a new single point of control and failure!
That’s why there is a mandatory minimum of several unaffiliated logs that each certificate has to be submitted to.
If all of these were to catastrophically fail, it would still be possible for browsers or central monitors to fall back to trusting certificates logged by exactly those logs, without inclusion verification.
I may be (legitimately) flagged for asking a question that may sound antagonizing ... but asked with sincerity: is it at all smart to mention Firefox and transparency in the same sentence, at least at this particular moment in time?
While this no doubt is an overall win, at least for most and in most cases, afaik this isn't completely without problems of its own. I just hope it won't lead to a systemd-like situation, where a cadre of (opinionated) people with power get to decide what's right from wrong, based on their beliefs about what might only be a subset of reality (albeit their only/full one at that).
Not trying to be dismissive here. I just have genuine concerns and reservations, even if mostly intuitive for now; no concrete ones yet. Maybe it's just a Pavlovian reaction after reading the name Firefox. I honestly can't tell.
You’re spot on: You are reacting seemingly without understanding the fundamentals of what you are reacting to.
Certificate Transparency [1] is an important technology that improves TLS/HTTPS security, and the name was not invented by Mozilla to my knowledge.
If Firefox were to implement a hypothetical IETF standard called “private caching”, would you also be cynical about Firefox “doing something private at this point in time” without even reading up what the technology in question does?
[1] https://en.wikipedia.org/wiki/Certificate_Transparency
> You’re spot on: You are reacting seemingly without understanding the fundamentals of what you are reacting to.
What if I did (understand)? What if I knew a thing or two about it, even some lesser known details and side-effects? Maybe including a controversy or two, or at least an odd limitation and potential hazard. But, as you correctly point out, Firefox isn't to blame for implementing somebody else's "standard". Is it nonetheless responsible for any and all consequences? Certainly yes.
Aside from now probably not being the best of times for Firefox, my main (potential) concern still stands. However, it is hardly a Firefox-only one, I'll give it that.
> What if I did (understand)?
I think it's pretty clear you don't.
> What if I knew a thing or two about it, even some lesser known details and side-effects?
Then you would explicitly mention them instead of alluding to them.
People who know what they are talking about actually bring up the things that they are concerned about. They don't just say, "I know an issue but I'm not going to tell you what it is."
> is it at all smart to mention Firefox and transparency in the same sentence, at least at this particular moment in time?
What are you expecting them to do? Rename the technology 1984 style?
> I just hope it won't lead to a systemd-like situation, where a cadre of (opinionated) people with power get to decide what's right from wrong, based on their beliefs about what might only be a subset of reality (albeit their only/full one at that).
This is a nonsensical statement. I mean that literally. It does not make sense.
> Just have genuine concerns and reservations
Do you? Because it doesn't sound like it.
I guess this is a good lesson in what happens when the reasoning one would typically (and unfortunately) bring to a mainstream political thread is met with a topic from another area of life, particularly a technical one.
Especially this:
> where a cadre of (opinionated) people with power get to decide what's right from wrong, based on their beliefs about what might only be a subset of reality (albeit their only/full one at that).
This is always true. There's no arrangement where you can outsource reasoning and decision-making (by choice or by coercion) and yet somehow not outsource it. That's a contradiction.
> This is always true. There's no arrangement where you entrust someone else with decision-making (by choice or not, notwithstanding) but then they're somehow not the ones performing the decision-making afterwards.
I'm well aware of that. In itself there isn't a problem with it, in principle at least. Right up until it leads to bad decisions being pushed through, more often in ignorance than malice. I personally only have a real problem with it when people or tech end up harmed or even destroyed because of ignorance rather than deliberate choices (made after consideration, hopefully).
To be clear, I'm not saying that any of that is the case here. But let's just say that browser vendors in general, and Mozilla as of late in particular, aren't on my "I trust you blindly to make the right decisions" list.
I do see pretty massive problems with it, such as those you list off, but the unfortunate truth is that one cannot know or do everything themselves. So usually it's not even a choice but a coercive scenario.
For example, say I want to ensure my food is safe to eat. That would require farmland where I can grow my own food. Say I buy some, but then do I have the knowledge and the means to figure out whether the food I grew is actually safe to eat? After all, I just bought some random plot of farmland, how would I know what was in it? Maybe it wasn't even the land that's contaminated but instead the wind brought over some chance contamination? And so on.
> browser vendors in general, and Mozilla as of late in particular, aren't on my "I trust you blindly to make the right decisions" list.
That's entirely fair. But what does this have to do with Mozilla's decision to enforce Certificate Transparency in Firefox?
If you have a concrete concern, voicing it could lead to a much more productive discussion than exuding a general aura of distrust, even if warranted.