https://youtube.com/watch?v=gnB76DQI1GE&t=19517s
https://research.mozilla.org/files/2025/04/clubcards_for_the...
Couldn't it just be responsible for its own key and signing incremental advances to a log that all publishers are responsible for storing up to their latest submission to it?
If it needed to restart and the last publisher couldn't give it back its latest entries, well, they would deserve that rollback to the last publish from a good publisher.
Maybe there is an acceptable way to shift long-term storage to CAs while using CT verifiers only for short term storage? E.g. they keep track of their last 30 days of signatures for a CA, which can then get cross-verified by other verifiers in that timeframe.
The storage requirements don't seem that bad though and it might not be worth any reduced redundancy and increased complexity for a different storage scheme. E.g. what keeps me from doing this is the >1Gbps and >1 pager requirements.
This is not true. A rollback is instantly noticeable (because the consistency of Signed Tree Heads can not be demonstrated) and is a very large failure of the log. What could happen is that a log issues a Signed Certificate Timestamp that can be used to show browsers that the cert is in the log, but never incorporates said cert into the log. This is less obvious, but doing this maliciously isn't really going to achieve much, because all certs have to be logged in at least 2 logs to be accepted by browsers.
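Here's a toy sketch of the RFC 6962 Merkle tree head (my own illustration, not code from any real log) showing why a rollback is instantly noticeable: every later Signed Tree Head commits to the earlier entries, so the old head must stay reproducible from a prefix of the log.

    import hashlib

    def leaf_hash(entry: bytes) -> bytes:
        return hashlib.sha256(b"\x00" + entry).digest()

    def tree_head(entries: list) -> bytes:
        # RFC 6962 Merkle Tree Hash: split at the largest power of two < n.
        if not entries:
            return hashlib.sha256(b"").digest()
        if len(entries) == 1:
            return leaf_hash(entries[0])
        k = 1
        while k * 2 < len(entries):
            k *= 2
        return hashlib.sha256(b"\x01" + tree_head(entries[:k]) + tree_head(entries[k:])).digest()

    log = [f"cert-{i}".encode() for i in range(5)]
    old_head = tree_head(log)              # the head the log signed at size 5
    log += [b"cert-5", b"cert-6"]          # later entries get appended
    assert tree_head(log[:5]) == old_head  # consistency: old entries are still a prefix
    # If the log dropped or reordered entries (a rollback), no new tree head
    # could be shown consistent with old_head, so auditors notice immediately.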
> Maybe there is an acceptable way to shift long-term storage to CAs while using CT verifiers only for short term storage? E.g. they keep track of their last 30 days of signatures for a CA, which can then get cross-verified by other verifiers in that timeframe.
An important source of stress in the PKI community is that there are many CAs, and a significant portion of them don't really want the system to be secure. (Their processes are of course perfect, so all this certificate logging is just them being pestered). Browser operators (and other cert users) do want the system to be secure.
An important design goal for CT was that it would require very little extra effort from CAs (and this drove many compromises). Google and other members of the CA/Browser Forum would rather spend their goodwill on things that make the system more secure (i.e. shorter certificate lifetimes) than on getting CAs to pay for operating costs of CT logs. The cost for Google to host a CT log is very little.
(I.e. your log ends abruptly, but polling any other CA that published to the same CT shows there is more, including reasons to shut you down.)
I don't see how a scheme where the CT signer has this responsibility makes any sense. If they stop operating because they are sick of it, all the CAs involved are left with a somewhat suspicious-looking CT history on things already issued that has to be explained, instead of having always had the responsibility to provide the history up to anything they have signed, whether or not some CT log goes away.
This requires the logs be held by independent parties, and retained forever.
If 12 CAs send to the same log and all have to save up to their latest entry not to be declared incompetent to be CAs, how would all 12 possibly do a worse job of providing that log on demand than a random 3rd party who has no particular investment at risk?
(Every other CA in a log is a 3rd party with respect to any other, but they are one who can actually be told to keep something indefinitely because they would also need to return it for legitimizing their own issuance.)
The info they get back from the CT log may be a Merkle Hash that partly depends on the other entries in the log - but they don't have to store the entire log, just a short checksum.
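To make the "short checksum" point concrete, here's a simplified sketch of checking an inclusion proof against a 32-byte tree head. Real CT proofs derive the left/right ordering from the leaf index and tree size per RFC 6962; the explicit side markers here are just to keep the illustration short.

    import hashlib

    def leaf_hash(entry: bytes) -> bytes:
        return hashlib.sha256(b"\x00" + entry).digest()

    def node_hash(left: bytes, right: bytes) -> bytes:
        return hashlib.sha256(b"\x01" + left + right).digest()

    def verify_inclusion(entry: bytes, proof, root: bytes) -> bool:
        # proof: list of (side, sibling_hash) pairs walking from the leaf up to the root.
        h = leaf_hash(entry)
        for side, sibling in proof:
            h = node_hash(sibling, h) if side == "L" else node_hash(h, sibling)
        return h == root

    # Tiny 4-leaf tree built by hand: root = H(H(a,b), H(c,d))
    a, b, c, d = (leaf_hash(x) for x in (b"a", b"b", b"c", b"d"))
    root = node_hash(node_hash(a, b), node_hash(c, d))
    # Proving "c" is included needs only two 32-byte siblings, not the whole log:
    assert verify_inclusion(b"c", [("R", d), ("L", node_hash(a, b))], root)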
Consumers and publishers take certificates for granted. I see many broken certs, or brands using the wrong certs and domains for their services.
SSL/TLS has done well to prevent eavesdropping, but it hasn't done well to establish trust and identity.
At the same time, it sounds like the issues you describe aren’t CA/issuance issues, but rather simple misconfigurations. Those aren’t incidents for the ecosystem, although they can definitely be disruptive to the site, but I also wouldn’t expect them to call trust or identity into disrepute. That’d be like arguing my driver’s license is invalid if I handed you my passport; giving you the wrong doc doesn’t invalidate the claims of either, it just doesn’t address your need.
There is systematic checking - e.g. crt.sh continuously runs linters on certificates found in CT logs, I continuously monitor domains which are likely to be used in test certificates (e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=1496088), and it appears the Chrome root program has started doing some continuous compliance monitoring based on CT as well.
But there is certainly a lot of ad-hoc checking by community members and academics, which as Sleevi said is one of the great things that CT enables.
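If anyone wants to try this kind of ad-hoc checking themselves, here's a rough sketch that polls crt.sh (which indexes the CT logs) for certificates covering a domain. The query parameters and JSON field names are my assumptions about crt.sh's unofficial interface, not a documented API.

    import requests

    def certs_for(domain: str) -> list:
        # crt.sh returns a JSON array of matching certificates found in CT logs.
        resp = requests.get(
            "https://crt.sh/",
            params={"q": f"%.{domain}", "output": "json"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    for cert in certs_for("example.com"):
        print(cert.get("id"), cert.get("issuer_name"), cert.get("not_before"))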
Happened to see it in the CT logs, and when that CA next came up for discussion on the Mozilla dev security policy list, their failure to address and disclose the misissuance in a timely manner was enough to stop the process to approve their request for EV recognition, and it ended in a denial from Mozilla.
See https://www.mozilla.org/en-US/about/governance/policies/secu... and https://www.ccadb.org/auditors and https://www.ccadb.org/policy#51-audit-statement-content
Apple and Microsoft mainly have power because they control Safari and Edge. Firefox is of course dying, but they still wield significant power because their trusted CA list is copied by all the major Linux distributions that run on servers.
I can't even imagine how much of a pain it would be to try to moderate certs based on some consistent international notion of trustworthiness. I think the best you could hope to do is have 3rd parties like the BBB sign your cert as a way of them "vouching" for you.
And:
> Bandwidth: 2 – 3 Gbps outbound.
I am not sure if this is correct. Is 2-3 Gbps really required for CT?
Do you have a reason to think his number is off?
If all certs are sent to just one CT log server, and each cert generates ~10 KB of outbound traffic, that's ~200 GB/day, or ~20 Mbps (at full, even traffic), which is not in the same ballpark as 2-3 Gbps.
So I guess there is something I don't understand?
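A quick script with the same assumptions (roughly 20 million certs/day, which is what the ~200 GB/day figure implies at ~10 KB per cert) reproduces the ~20 Mbps average:

    # Assumed inputs, taken from the back-of-envelope figures above.
    certs_per_day = 20_000_000
    bytes_per_cert = 10_000

    bytes_per_day = certs_per_day * bytes_per_cert    # ~200 GB/day
    avg_bits_per_sec = bytes_per_day * 8 / 86_400     # averaged over the day

    print(f"{bytes_per_day / 1e9:.0f} GB/day, ~{avg_bits_per_sec / 1e6:.0f} Mbps average")
    # -> 200 GB/day, ~19 Mbps average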
It’s unfortunately an estimate, because right now we see 300 Mbps peaks, but as Tuscolo moves to Usable and more monitors implement Static CT, 5-10x is plausible.
It might turn out that 1 Gbps is enough and the P95 is 500 Mbps. Hard to tell right now, so I didn’t want to get people in trouble down the line.
Happy to discuss this further with anyone interested in running a log via email or Slack!
In Germany 2 – 3 Gbps outbound is a milestone, even for enterprises. As an individual I am privileged to have 250 Mbps down / 50 Mbps up.
So it's at least far beyond what any individual in this country could imagine.
But 2-3 Gbps of bandwidth makes this pretty inaccessible unless you're just offloading the bulk of it onto CloudFront/Cloudflare, at which point... it seems to me we don't really have more people running logs in a very meaningful sense, just somebody paying Amazon a _lot_ of money. If I'm doing my math right this is something like 960 TB/mo, which is like a $7.2m/yr CloudFront bill. Even with some lesser-known CDN providers we're still talking like $60k/yr.
Seems to me the bandwidth requirement means this is only going to work if you already have some unmetered connections lying around.
If anyone wants to pay the build out costs to put an unmetered 10Gbps line out to my house I'll happily donate some massively overprovisioned hardware, redundant power, etc!
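For anyone checking the volume: sustained 2-3 Gbps over a 30-day month does land around that figure. (CDN pricing varies wildly by provider and commit tier, so I'm leaving the dollar amounts out of this sketch.)

    def tb_per_month(gbps: float, days: int = 30) -> float:
        # bits transferred per month, converted to terabytes
        return gbps * 1e9 * days * 86_400 / 8 / 1e12

    for gbps in (2.0, 3.0):
        print(f"{gbps} Gbps sustained ≈ {tb_per_month(gbps):.0f} TB/month")
    # 2 Gbps ≈ 648 TB/month, 3 Gbps ≈ 972 TB/month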
Still, even in Germany, with its particularly lacking internet infrastructure for the wealth the country possesses, M-net is slowly rolling out 5 Gbps internet.
According to the readme, it seems like the bulk of the traffic is highly cacheable, so presumably you could park a CDN in front of it and substantially reduce the bandwidth requirements.
That is one of the primary motivations of its design over the previous CT API, which allowed relatively flexible requests that made caching much less effective.
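A toy illustration of that design point (the tile width and path shape here are simplified, not the exact Static CT spec): fixed-boundary tiles mean clients asking for overlapping ranges end up requesting the same small set of URLs, which a CDN can cache, whereas the old get-entries style let each client pick an arbitrary start/end and made most requests unique.

    TILE_SIZE = 256  # illustrative; the real spec fixes a tile width

    def tile_urls_for_range(start: int, end: int) -> list:
        # Map an arbitrary entry range onto fixed, cache-friendly tile URLs.
        first, last = start // TILE_SIZE, (end - 1) // TILE_SIZE
        return [f"/tile/data/{i}" for i in range(first, last + 1)]

    # Two monitors asking for overlapping ranges hit the same cacheable URLs:
    print(tile_urls_for_range(1000, 1300))  # ['/tile/data/3', '/tile/data/4', '/tile/data/5']
    print(tile_urls_for_range(1200, 1500))  # ['/tile/data/4', '/tile/data/5']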
I’ll mail that one towards the end of the week.
Is this actually a good use case for (gasp) blockchains? Or would it be too much data?
People love to say it, but when we had GSuite issues at my previous workplace we spoke to GSuite support and had a resolution quickly. When we had GCP queries we spoke to our account manager who gave us a technical contact who escalated internally and got us the advice we needed. When we asked about a particular feature we were added to the alpha stage of an in-development product and spoke with the team directly about that. I've got friends who have had various issues with Pixel phones over the years and they just contact support and get a replacement or fix or whatever.
Meanwhile I've seen colleagues go down the rabbit hole of AWS support and have a terrible time. For us it was fine but nothing special, I've never experienced the amazing support that I've heard some people talk about.
We were a <100 person company with a spend quite a bit less than many companies of our size. From what I've heard from YouTubers with a million followers, they have account managers and they always seem to encourage talking to account managers.
source: I used to do vendor relations for a large public org where contractors (medium tech companies) would routinely try to skirt the line on what they had to deliver. I would rather deal with them than GoogleFi, because in that situation there was a certain point where I could give up and hand it off to our lawyers.
That certainly wasn't my experience. Unless 'we're not going to help you' counts as a resolution. We did get a response quickly, but there was no path to resolving the issues I had other than just ignoring them.
I should've qualified what I wrote, but what I mean is that no matter who you are, if you don't know someone there and aren't paying them money, there's no way to communicate with humans there.
It's like companies that won't let you sign up unless you give them a cell phone number, but not only do they not have a number themselves, they don't even have email. Or, for companies like Verizon, they don't have email, but they have phone numbers with countless layers of "voice assistants" you can't skip. It's a new way of "communicating" that's just crazymaking.
In this case, you point to the hypocrisy of being uncontactable but demanding your contact details, except that Google does provide support to customers, and in this relationship they are essentially a customer of your CT log, and given the criticality of that service they rightly expect the service provider to be held to a high standard. I don't think they're holding you to a standard that they themselves wouldn't agree to be held to for a service that critical. I've got to make it clear that this is my personal opinion though.
http://www.aaronsw.com/weblog/squarezooko
Ben Laurie read this post by Aaron Swartz while thinking about how a certificate transparency mechanism could work. (I think Peter Eckersley may have told him about it!) The existence proof confirmed that we sort of knew how to make useful append-only data structures with anonymous writers.
CT dropped the incentive mechanism and the distributed log updates in favor of more centralized log operation, federated logging, and out-of-band audits of identified log operators' behavior. This mostly means that CT lacks the censorship resistance of a blockchain. It also means that someone has to directly pay to operate it, without recouping the expenses of maintaining the log via block rewards. And browser developers have to manually confirm logs' availability properties in order to decide which logs to trust (with -- returning to the censorship resistance property -- no theoretical guarantee that there will always be suitable logs available in the future).
This has worked really well so far, but everyone is clear on the trade-offs, I think.
If you figure out a good way to involve an incentive structure like that, let us know!
In all seriousness, the incentive is primarily in having the data imo
WAL replication, rsync, BitTorrent, etc. are all things that don't quite work as needed.