The most important point is buried at the bottom of the page:
> all the post-quantum algorithms implemented by OpenSSH are "hybrids" that combine a post-quantum algorithm with a classical algorithm. For example mlkem768x25519-sha256 combines ML-KEM, a post-quantum key agreement scheme, with ECDH/x25519, a classical key agreement algorithm that was formerly OpenSSH's preferred default. This ensures that the combined, hybrid algorithm is no worse than the previous best classical algorithm, even if the post-quantum algorithm turns out to be completely broken by future cryptanalysis.
Using a hybrid scheme ensures that you're not actually losing any security compared to the pre-quantum implementation.
Hybrid schemes give you improved security against algorithmic flaws. If either algorithm being used is broken, the other gives you resilience. But hybrid schemes also double (or more) your exposure to ordinary implementation bugs and side-channels.
Since Quantum Computers at scale aren't real yet, and those kinds of issues very much are, you'd think that'd be quite a trade-off. But so much work has gone into security research and formal verification over the last 10 years that the trade-off really does make sense.
Unless the implementation bug is severe enough to give RCE, memory dumping, or similar, I don't see how a bug in the MLKEM implementation (for example) would be able to leak the x25519 secret, even with side-channels. A memory-safe implementation would almost guarantee you don't have any bugs of the relevant classes (I know memory-safe != side-channel-safe, but I don't see how side-channels would be relevant). You still need to break both to break the whole scheme.
I've rewritten some PQ implementations that had RCEs and memory disclosure vulnerabilities in them. No shade, but those implementations were from scientists who don't typically build production systems. As an industry, we're past this phase. Side-channels more commonly reveal plaintext than key material, but that shouldn't be fatal in the case of hybrid key agreement.
Based on what we've seen so far in industry research, I'd guess that enabling Denial of Service is the most common kind of issue.
If I have a secret, A, and I encrypt it with classical algorithm X such that it becomes A', then encrypt the result again with non-classical algorithm Y such that it becomes A'', doesn't any claim that applying the second algorithm could make it weaker imply that any X-encrypted string could later be made easier to crack by applying Y?
Or is it that by doing them sequentially you could potentially reveal some information about when the encryption took place?
This is true, but there is a subtle point: the key K1 used for the classical algorithm must be statistically independent of the key K2 used for the other.
If they're not, you could end up where the second algorithm is correlated with the first in some way and they cancel each other out. (Toy example: suppose K1 == K2 and the algorithms are OneTimePad and InvOneTimePad; they'd just cancel out to give the null encryption algorithm. More realistically, if I cryptographically break K2 from the outer encryption and K1 came from the same seed, it might be easier to find.)
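To make that toy example concrete, here is a small illustrative script (XOR pads only; purely a demonstration of the cancellation risk, not a real cipher):

```python
import os

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """XOR each byte of `data` with the corresponding byte of `pad`."""
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"

# Independent keys: layering two pads still hides the message.
k1 = os.urandom(len(message))
k2 = os.urandom(len(message))
layered = xor_bytes(xor_bytes(message, k1), k2)
assert layered != message  # astronomically unlikely to match with random pads

# Correlated keys (here: identical): the layers cancel out entirely,
# because XOR-ing with the same pad twice is the identity.
k2_bad = k1
cancelled = xor_bytes(xor_bytes(message, k1), k2_bad)
assert cancelled == message  # the "double encryption" did nothing
print("correlated keys cancelled the encryption:", cancelled)
```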
Here we're talking about hybrid key agreement. It's more like: you agree secret A with a peer using the magic of Diffie-Hellman; separately you make up secret B and encapsulate it (encapsulation is basically a form of asymmetric encryption) using a PQ algorithm and send that along; then you derive C by mixing A and B. You're not actually encrypting something twice.
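A minimal sketch of that "mix A and B" step (the two secrets below are random placeholders standing in for the real X25519 and ML-KEM outputs; this is not OpenSSH's exact KDF, which also hashes in handshake transcript data):

```python
import hashlib
import os

# Stand-ins for the two shared secrets. In a real handshake,
# `classical_secret` would come from an X25519 exchange and
# `pq_secret` from ML-KEM-768 encapsulation/decapsulation.
classical_secret = os.urandom(32)   # hypothetical ECDH output
pq_secret = os.urandom(32)          # hypothetical KEM output

# Hybrid combiner: hash both secrets together. An attacker has to
# recover *both* inputs to predict the resulting session key.
session_key = hashlib.sha256(pq_secret + classical_secret).digest()
print(session_key.hex())
```

The point of hashing the concatenation is that breaking only one of the two inputs leaves the output indistinguishable from random, which is why the hybrid is no weaker than its strongest component.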
Some government and military standards do call for multiple layers of encryption when handling data, but it's just that multiple layers. You can't ever really make that kind of encryption weaker by adding a new "outer" layer. But you can make encryption weaker if you add a new "inner" layer that handles the plaintext. Side-channels in that inner layer can persist even through multiple layers of encryption.
I think the answer is either very simple, or impossible to give without details.
If I recall my crypto classes and definitions correctly, if you have a perfect encryption X, a ciphertext C = X(K, P) carries zero information about P unless you know K. Thus, once X is applied, Y is not relevant anymore.
Once you have non-perfect encryptions, it depends on X and Y. Why shouldn't some structure in a post-quantum algorithm give you information about, say, the cycle length of the underlying modular arithmetic in something like RSA? That information could in turn shave fractions of bits off the effective key length of the underlying algorithm. These could be the bits that make it feasible to brute-force. Or they could be just another step.
On the other hand, proving that this is impossible is ... well, would you have thought that a silly sequence about rabbits would be related to a ratio well-known in art? There are such crazy connections in math. Proving that two things cannot possibly be connected is about the craziest thing there is.
But that's the thing about crypto: It has to last 50 - 100 years. RSA is on a trajectory out. It had a good run. Now we have new algorithms with new drawbacks.
What kinds of side channels are you thinking of? Given that the key exchanges have a straightforward sha256/sha512 combiner, wouldn't it be surprising if a flaw in one of the schemes gave a real vulnerability?
I could see it being more of a problem for signing.
It's a trade-off, yes, but that doesn't make it useless.
Aside from the marketing bluff, quantum computing is nowhere near close.
The industry definitely seems to be going in this hybrid PQC-classical direction for the most part. At least until we know there's a real quantum computer somewhere that renders the likes of RSA, ECC, and DH no longer useful, it seems this conservative approach of using two different types of locks in parallel might be the safest bet for now.
However, what's notable is that the published CNSA 2.0 algorithms in this context are exclusively of the post-quantum variety, and even though there is no explicit disallowing of hybrid constructions, NSA publicly deems them unnecessary (from their FAQ [0]):
> NSA has confidence in CNSA 2.0 algorithms and will not require NSS developers to use hybrid certified products for security purposes.
[0] https://www.nsa.gov/Press-Room/News-Highlights/Article/Artic...
In light of the recent hilarious paper around the current state of quantum cryptography[1], how big is the need for the current pace of post quantum crypto adoption?
As far as I understand, the key material for any post quantum algorithm is much, much larger compared to non-quantum algorithms which leads to huge overheads in network traffic and of course CPU time.
[1]: https://eprint.iacr.org/2025/1237
The page only talks about adopting PQC for key agreement for SSH connections, not encryption in general so the overhead would be rather minimal here. Also from the FAQ:
"Quantum computers don't exist yet, why go to all this trouble?"
Because of the "store now, decrypt later" attack mentioned above. Traffic sent today is at risk of decryption unless post-quantum key agreement is used.
"I don't believe we'll ever get quantum computers. This is a waste of time"
Some people consider the task of scaling existing quantum computers up to the point where they can tackle cryptographic problems to be practically insurmountable. This is a possibility. However, it appears that most of the barriers to a cryptographically-relevant quantum computer are engineering challenges rather than underlying physics.
If we're right about quantum computers being practical, then we will have protected vast quantities of user data. If we're wrong about it, then all we'll have done is moved to cryptographic algorithms with stronger mathematical underpinnings.
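One way to make the "store now, decrypt later" point from the FAQ above concrete is to compare how long your data must stay confidential, plus how long a migration takes, against your own guess for when a relevant quantum computer might exist. All three numbers in this sketch are made-up inputs, not predictions:

```python
# All values in years; purely illustrative inputs.
secrecy_lifetime = 20      # how long captured traffic must remain confidential
migration_time = 5         # how long rolling out PQ key agreement takes you
years_until_crqc = 15      # your own estimate, however sceptical

# If the data outlives the estimated arrival of a quantum computer,
# traffic recorded today is at risk even though no such machine exists yet.
at_risk = secrecy_lifetime + migration_time > years_until_crqc
print("recorded traffic at risk:", at_risk)
```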
Not sure I'd take the cited paper (while fun to read) too seriously as a way to inform my opinion on the risks of using quantum-insecure encryption, rather than as a cynical take on hype and window dressing in QC research.
>it appears that most of the barriers to a cryptographically-relevant quantum computer are engineering challenges rather than underlying physics
I heard this 15 years ago when I started university. People claimed all the basics were done, that we "only" needed to scale. That we would see practical quantum computers in 5-10 years. Today I still see the same estimates. Maybe 5 years by extreme optimists, 10-20 years by more reserved people. It's the same story as nuclear fusion. But who's prepping for unlimited energy today? Even though it would make sense to build future industrial environments around that if they want to be competitive.
> People claimed all the basics were done, that we "only" needed to scale.
This claim is fundamentally different from what you quoted.
> But who's prepping for unlimited energy today?
It's about trade-offs: it costs almost nothing to switch to PQC methods, but I can't see a way to "prep for unlimited energy" that doesn't come with a huge cost/time waste in the case where it doesn't happen.
It costs:
- development time to switch things over
- more computation, and thus more energy, because PQC algorithms aren't as efficient as classical ones
- more bandwidth, because PQC algorithms require larger keys
Not wrong, but given these algorithms are mostly used at setup, how much cost is actually being incurred compared to the entire session? Certainly if your sessions are short-lived then the 'overhead' of PQC/hybrid is higher, but I'd be curious to know the actual byte and energy costs over and above non-PQC/hybrid, i.e., how many bytes/joules for a non-PQC exchange and how many more by adding PQC. E.g.
> Unfortunately, many of the proposed post-quantum cryptographic primitives have significant drawbacks compared to existing mechanisms, in particular producing outputs that are much larger. For signatures, a state of the art classical signature scheme is Ed25519, which produces 64-byte signatures and 32-byte public keys, while for widely-used RSA-2048 the values are around 256 bytes for both. Compare this to the lowest security strength ML-DSA post-quantum signature scheme, which has signatures of 2,420 bytes (i.e., over 2kB!) and public keys that are also over a kB in size (1,312 bytes). For encryption, the equivalent would be comparing X25519 as a KEM (32-byte public keys and ciphertexts) with ML-KEM-512 (800-byte PK, 768-byte ciphertext).
* https://neilmadden.blog/2025/06/20/are-we-overthinking-post-...
"The impact of data-heavy, post-quantum TLS 1.3 on the Time-To-Last-Byte of real-world connections" (PDF):
* https://csrc.nist.gov/csrc/media/Events/2024/fifth-pqc-stand...
(And development time is also generally one-time.)
For an individual session, the cost is certainly small. But in aggregate it adds up.
I don't think the cost is large, and I agree that given the tradeoff, the cost is probably worth it, but there is a cost, and I'm not sure it can be categorized as "almost nothing".
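For a rough sense of the per-handshake byte cost, here is the arithmetic using the sizes quoted above (ML-KEM-512 vs. X25519 as a KEM; real deployments use different parameter sets and add protocol framing, and the 10 MiB session size is an arbitrary assumption, so treat this as order-of-magnitude only):

```python
# Public key + ciphertext bytes exchanged in one key establishment,
# taken from the figures quoted above.
x25519 = 32 + 32            # public key + "ciphertext" (peer's public key)
mlkem512 = 800 + 768        # encapsulation key + ciphertext
hybrid = x25519 + mlkem512  # the hybrid sends both

extra = hybrid - x25519
print(f"classical-only: {x25519} B, hybrid: {hybrid} B, extra: {extra} B")

# Amortised over a session that moves, say, 10 MiB, the extra bytes are noise.
session_bytes = 10 * 1024 * 1024
print(f"overhead fraction: {extra / session_bytes:.6%}")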
Anyway, what does prepping for unlimited energy look like? I guess, favoring electrical over fossil fuels. But for normal people and the vast majority of companies, that looks like preparing for mass renewable electricity anyway, which is already a good thing to do.
It could also just be massively scaling up energy consumption with little concern for efficiency (since limitless energy would imply very low cost), which would probably be a bad idea for renewables, and in the case of not-so-cheap energy also very expensive.
I would just take this to mean that most people are bad at estimating timelines for complex engineering tasks. 15 years isn't a ton of time, and the progress that has been made was done with pretty limited resources (compared to, say, traditional microprocessors).
The costs to migrate to PQC continue to drop as they become mainstream algorithms. Second, the threat exists /now/ of organizations capturing encrypted data to decrypt later. There is no comparable current threat of "not preparing for fusion", whatever that entails.
It's been "engineering challenges" for 30 years. At some point, "engineering challenges" stops being a good excuse, and that point was about 20 years ago.
At some point, someone may discover some new physics that shows that all of these "engineering challenges" were actually a physics problem, but quantum physics hasn't really advanced in the last 30 years so it's understandable that the physicists are confused about what's wrong.
You might be right that we'll never have quantum computers capable of cracking conventional cryptographic methods, but I'd rather err on the side of caution in this regard considering how easy it is to switch, and how disastrous it could be otherwise.
As others pointed out, it's not so easy to switch, as the PQC versions require much more data to be sent to establish a connection, and consequently way more CPU time. So the connections per second (CPS) you can achieve with this type of cryptography will be MUCH worse than with classical algorithms.
It doesn't get much easier than that, and the downsides are much, much less of an inconvenience than having your data breached, depending on what it is.
Yeah, except when your "2048-bit" numbers are guaranteed to have factors that differ by exactly two bits, you can factor them with any computer you want.
The D-wave also isn't capable of Shor's algorithm or any other quantum-accelerated version of this problem.
I was at a lecture by a professor who's working in the field; his main argument was that quantum computers are physically impossible to scale.
He presented us with a picture of him and a number of other very important scientists in this field, none of them sharing his attitude. We then joked that there is a quantum entanglement of Nobel prize winners in the picture.
D-Wave themselves do not emphasize this use case and have said many times that they don't expect annealing quantum computers to be used for this kind of decryption attack. Annealers are used for optimization problems where you're trying to find the lowest energy solution to a constraint problem, not Shor's Algorithm.
In that sense, they're more useful for normal folks today, and don't pose as many potential problems.
Those are two odd questions to even ask/answer: first, quantum computers exist, and second, we have them at a certain scale. I assume what they mean is a scale where the calculations surpass what existing classical computers can do.
That paper is hilarious, and is correct that there's plenty of shit to make fun of... but there's also progress. I recommend watching Sam Jacques' talk from PQCrypto 2025 [0]. It would be silly to delay PQC adoption because of focusing on the irrelevant bad papers.
In the past ten years, on the theory side, the expected cost of cryptographically relevant quantum factoring has dropped by 1000x [1][2]. On the hardware side, fault tolerance demonstrations have gone from repetition code error rates of 1% error per round [3] to 0.00000001% error per round [fig3a of 4], with full quantum codes being demonstrated with an error rate of 0.2% [fig1d of 4] via a 2x reduction in error each time distance is increased by 2.
If you want to track progress in quantum computing, follow the gradual spinup of fault tolerance. Noise is the main thing blocking factoring of larger and larger numbers. Once the quality problem is turned into a quantity problem, then those benchmarks can start moving.
[0]: https://www.youtube.com/watch?v=nJxENYdsB6c
[1]: https://arxiv.org/abs/1208.0928
[2]: https://arxiv.org/abs/2505.15917
[3]: https://arxiv.org/abs/1411.7403
[4]: https://arxiv.org/abs/2408.13687
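A back-of-envelope for the error-suppression scaling described above, assuming the quoted 2x reduction in error for every +2 of code distance, starting from the 0.2% figure; the starting distance is a made-up placeholder and real devices won't follow this cleanly:

```python
# Starting point and scaling taken from the figures quoted above.
base_distance = 5        # hypothetical distance for the 0.2% demonstration
base_error = 2e-3        # 0.2% logical error per round
suppression = 2          # 2x lower error for every +2 of distance

def logical_error(distance: int) -> float:
    """Projected logical error per round at the given code distance."""
    steps = (distance - base_distance) / 2
    return base_error / (suppression ** steps)

for d in range(base_distance, base_distance + 20, 4):
    print(f"distance {d:2d}: ~{logical_error(d):.2e} error/round")
```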
As a number of people have observed, what's happening now is mostly about key establishment, which tends to happen relatively infrequently, and so the overhead is mostly not excessive. With that said, a little more detail:
- Current PQ algorithms, for both signature and key establishment, have much larger key sizes than traditional algorithms. In terms of compute, they are comparably fast if not faster.
- Most protocols (e.g., TLS, SSH, etc.) do key establishment relatively infrequently (e.g., at the start of the connection) and so the key establishment size isn't a big deal, modulo some interoperability issues because the keys are big enough to push you over the TCP MTU, so you end up with the keys spanning two packets. One important exception here is double ratchet protocols like Signal or MLS which do very frequent key changes. What you sometimes see here is to rekey with PQ only occasionally (https://security.apple.com/blog/imessage-pq3/).
- In the particular case of TLS, message size for signatures is a much bigger deal, to a great extent because your typical TLS handshake involves a lot of signatures in the certificate chain. For this reason, there is a lot more concern about the viability of PQ signatures in TLS (https://dadrian.io/blog/posts/pqc-signatures-2024/). Possibly in other protocols too but I don't know them as well
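To see why handshake size is the sticking point for TLS signatures, a quick tally using the sizes quoted earlier in the thread (the number of signatures and public keys per handshake below is a made-up hypothetical; real chains vary):

```python
# Per-object sizes in bytes, from the figures quoted earlier in the thread.
ED25519 = {"sig": 64, "pub": 32}
ML_DSA_44 = {"sig": 2420, "pub": 1312}

# A hypothetical TLS handshake: leaf + intermediate + root signatures,
# plus a couple of embedded SCT/OCSP-style signatures. Counts are invented.
signatures = 5
public_keys = 3

def handshake_bytes(alg: dict) -> int:
    """Total signature + public key bytes sent for one handshake."""
    return signatures * alg["sig"] + public_keys * alg["pub"]

print("Ed25519 chain bytes:", handshake_bytes(ED25519))     # ~0.4 kB
print("ML-DSA-44 chain bytes:", handshake_bytes(ML_DSA_44))  # ~16 kB
```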
Besides what's public knowledge, I tend to put a bit of stock in our intelligence agency calling for PQ adoption for systems that need to remain confidential for 20 years or more.
edit: adding in some sources
2014: "between 2030 and 2040" according to https://www.aivd.nl/publicaties/publicaties/2014/11/20/infor... (404) via https://tweakers.net/reviews/5885/de-dreiging-van-quantumcom... (Dutch)
2021: "small chance it arrives by 2030" https://www.aivd.nl/documenten/publicaties/2021/09/23/bereid... (Dutch)
2025: "protect against ‘store now, decrypt later’ attacks by 2030", joint paper from 18 countries https://www.aivd.nl/binaries/aivd_nl/documenten/brochures/20... (English)
I don't want my government to keep secrets for 20 years. There is nothing I am OK with them doing that they can't be generally open about in time. Ex. the MLK files. No justification for the courts saying that the FBI files regarding MLK have to be kept under lock and key for 50 years.
I think that's a different discussion. Some people would like their chat messages to simply be secure until they die. So long as that's a valid desire, or one can think of another purpose for this, I think we can agree that it's worth considering whether PQC is worth implementing today
Also, 2030 isn't 20 years away anymore and that's the recommendation I ended up finding in sources, even if they think it's only a small chance
Yes but if they're ever sent over an HTTPS connection that was established using ECDHE key exchange, anyone who recorded that can make it public in the future if quantum computers exist.
On the other hand - we already give our passport information to every single airline and hotel we use. There must be hundreds if not thousands of random entities across the globe that already have mine. As long as certain key information is rotated occasionally (e.g. by making passports expire), maybe it doesn't really matter
That's just a fun joke paper deflating some of the more aggressive hype around QC. You shouldn't use it for making security and algorithm adoption decisions.
> After our successful factorisation using a dog, we were delighted to learn that scientists have now discovered evidence of quantum entanglement in other species of mammals such as sheep [32]. This would open up an entirely new research field of mammal-based quantum factorisation. We hypothesise that the production of fully entangled sheep is easy, given how hard it can be to disentangle their coats in the first place. The logistics of assembling the tens of thousands of sheep necessary to factorise RSA-2048 numbers is left as an open problem.
The paper is a joke, but Gutmann does make some useful, non-joke suggestions in section 7. There's probably room for a serious, full-length paper on quantum factorization evaluation criteria.
> As far as I understand, the key material for any post quantum algorithm is much, much larger compared to non-quantum algorithms
This is somewhat correct, but needs some nuance.
First, the problem is bigger with signatures, which is why nobody is happy with the current post quantum signature schemes and people are working on better pq signature schemes for the future. But signatures aren't an urgent issue, as there is no "decrypt later" scenario for signatures.
For encryption, the overhead exists, but it isn't too bad. We are already deploying pqcrypto, and nobody seems to have an issue with it. Use a current OpenSSH and you use mlkem. Use a current browser with a server using modern libraries and you also use mlkem. I haven't heard anyone complaining that the Internet got so much slower in recent years due to pqcrypto key exchanges.
Compared to the overall traffic we use commonly these days, the few extra kb during the handshake (everything else is not affected) doesn't matter much.
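If you want to check whether your own OpenSSH client already offers the hybrid exchanges, `ssh -Q kex` lists the supported key-exchange algorithms. A small wrapper around that (assumes an `ssh` binary is on PATH; the algorithm names come from the OpenSSH release notes quoted in this thread):

```python
import shutil
import subprocess

# Names taken from the OpenSSH release notes quoted elsewhere in this thread.
HYBRID_KEX = ("mlkem768x25519-sha256", "sntrup761x25519-sha512")

def supported_kex() -> list[str]:
    """Ask the local OpenSSH client which key-exchange algorithms it offers."""
    if shutil.which("ssh") is None:
        raise RuntimeError("no ssh client on PATH")
    out = subprocess.run(["ssh", "-Q", "kex"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

kex = supported_kex()
for name in HYBRID_KEX:
    present = any(name in k for k in kex)
    print(f"{name}: {'available' if present else 'not available'}")
```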
I imagine the key exchange is just once per connection, right? So the overhead seems not too bad.
Especially since I think a pretty large number of computers/hostnames that are ssh'able today will probably have the same root password if they're still connected to the internet 10-20 years from now.
Not that this is a bad thing, but first start using keys, then start rotating them regularly and then worry about theoretical future attacks.
In TinySSH, which also implements the ntru exchange, root is always allowed.
I don't know what the behavior is in Dropbear, but the point is that OpenSSH is not the only implementation.
TinySSH would also enable you to quiet the warning on RHEL 7 or other legacy platforms.
Fwiw some distros ask if you want root access enabled on install; I assume there's always some chance of it being enabled for install stuff and forgotten, or the user misreading and thinking it means any root access.
>... which leads to huge overheads in network traffic and of course CPU time.
This is just the key exchange. You're exchanging keys for the symmetric cipher you'll be using for traffic in the session. There's really no overhead to talk about.
Indeed, I'll expand a bit: Asymmetrical crypto has always been incredibly slow compared to symmetrical crypto which is either HW accelerated (AES) or fast on the CPU (ChaCha20).
But since the symmetrical key is the same for both sides you must either share it ahead of time or use asymmetrical crypto to exchange the symmetrical keys to go brrrrr
This still greatly affects connections/second, which is an important metric. Especially since servers don't always like very long lived connections, so you may get plenty of connections during an HTTP interaction.
It doesn't "greatly" affect it at all. The extra traffic and time required between curve25519 and ML-KEM768+X25519 is actually less than the jump from RSA2048 to RSA4096. Imagine how silly a person would appear if they had been this alarmist about RSA4096. When building for scales where it may eventually add up you should already be taking such scale into consideration.
>As far as I understand, the key material for any post quantum algorithm is much, much larger compared to non-quantum algorithms which leads to huge overheads in network traffic and of course CPU time.
Eh? Public-key (asymmetric) cryptography is already very expensive compared to symmetric even in the classical setting; that's normal, and what it's used for is the vital but limited operation of key exchange for AES or whatever fast symmetric algorithm runs afterwards. My understanding (and serious people in the field please correct me if I'm wrong!) is that the potential cryptographically relevant quantum computer threatens almost exclusively key exchange, not symmetric encryption. The best theoretical search algorithm against symmetric ciphers is Grover's, which offers a square-root speed-up and is thus trivially countered, if necessary, by doubling the key size (i.e., 256 bits vs Grover would offer a 128-bit classical equivalent and 512 bits would offer 256 bits, which is already more than enough). The vast majority of a given SSH session's traffic isn't handshakes unless something is quite odd, and you're likely going to have a pretty miserable experience in that case regardless. So even if the initial handshake gets significantly more expensive it should be pretty irrelevant to network overhead; it still only happens during the initiation of a given session, right?
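The Grover point in numbers (a simple halving of effective brute-force strength, ignoring the enormous practical overheads of actually running Grover's algorithm at scale):

```python
# Effective post-Grover strength of a symmetric key is roughly half its bits,
# since Grover's search gives at most a square-root speed-up over brute force.
for key_bits in (128, 192, 256, 512):
    print(f"{key_bits}-bit key -> ~{key_bits // 2}-bit effective security vs. Grover")
```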
The macOS app Secretive [1] stores SSH keys in the Secure Enclave. To make it work, they’ve selected an algorithm supported by the SE, namely ecdsa-sha2-nistp256.
I don’t think SE supports PQ algorithms, but would it be possible to use a “hybrid key” with a combined algorithm like mlkem768×ecdsa-sha2-nistp256, in a way that the ECDSA part is performed by the SE?
[1]: https://github.com/maxgoedjen/secretive
If you look at http://mdoc.su/o/ssh_config.5#KexAlgorithms and http://bxr.su/o/usr.bin/ssh/kex-names.c#kexalgs, `ecdsa-sha2-nistp256` is not a valid option for the setting (although `ecdh-sha2-nistp256` is).
ssh-audit [1] should be updated to test for this theoretical algo. I still get an "A" despite fixating on a specific algo and not including the quantus. I'm doing the cha-cha.
[1] - https://www.ssh-audit.com/
Which of the two options given is stronger? Presumably the 512 one?
They're not the same, they're completely different:
> Additionally, all the post-quantum algorithms implemented by OpenSSH are "hybrids" that combine a post-quantum algorithm with a classical algorithm. For example mlkem768x25519-sha256 combines ML-KEM, a post-quantum key agreement scheme, with ECDH/x25519, a classical key agreement algorithm that was formerly OpenSSH's preferred default. This ensures that the combined, hybrid algorithm is no worse than the previous best classical algorithm, even if the post-quantum algorithm turns out to be completely broken by future cryptanalysis.
The 256 one is actually newer than the 512 one, too:
> OpenSSH versions 9.0 and greater support sntrup761x25519-sha512 and versions 9.9 and greater support mlkem768x25519-sha256.
I was thinking about whether to move the Terminal-based microblogging / chat app I'm building into this direction.
(Especially after watching several interviews with Pavel Durov and listening to what he went through...)
https://news.ycombinator.com/item?id=37520065
https://www.metzdowd.com/pipermail/cryptography/2016-March/0...
We're nowhere near the point where there's any general concern regarding the sizes of 256 bits or 512 bits for hashes, block sizes, key sizes etc. Currently we don't need to consider the problem as a question of what time is required, because we don't have the electrical energy required to explore even a fraction of an unfathomably smaller 128 bit space. We don't have computers that can ingest such power either. "Relax, guy."
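Rough arithmetic behind that claim (the guessing rate below is an absurdly generous made-up figure; the point is only the order of magnitude):

```python
# Purely illustrative: even at an assumed 10^18 key trials per second,
# enumerating a 128-bit keyspace takes on the order of 10^13 years.
guesses_per_second = 1e18
seconds_per_year = 3.15e7

keyspace = 2 ** 128
years = keyspace / guesses_per_second / seconds_per_year
print(f"~{years:.2e} years to enumerate 2^128 keys")  # roughly 1.1e13 years
```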
MLKEM768 offers better performance and smaller keys, while SNTRUP761 has stronger security assumptions and better resilience against potential cryptanalysis.
NTRU Prime (sntrup) is there mostly as a quirk of history (mlkem wasn't available when SSH went down the road of doing PQ). You can use either, but my guess is using sntrup is going to be a little like how GPG used to default to CAST as its cipher.
NTRU Prime was written by Dan Bernstein, who also had a strong hand in the creation of ed25519 elliptic curve keys, and the chacha20-poly1305 AEAD cipher.
The first version of NTRU Prime in an SSH server was implemented in TinySSH and later adopted by OpenSSH. Bernstein provided new guidance, and OpenSSH developed an updated algorithm that TinySSH implemented in return.
The NIST approval process was fraught, and Bernstein ended up filing a lawsuit over treatment that he received. I don't know how that has progressed.
https://news.ycombinator.com/item?id=32360533
While Kyber may have been the winning algorithm, there will be great preference in the community for Bernstein's NTRU Prime.
There are IETF WG drafts for use of Kyber / ML-KEM, but none for NTRU, so I'm not sure about that:
* https://datatracker.ietf.org/doc/draft-ietf-tls-mlkem/
* https://datatracker.ietf.org/doc/draft-ietf-tls-ecdhe-mlkem/
* https://datatracker.ietf.org/doc/draft-ietf-tls-hybrid-desig...
* https://datatracker.ietf.org/doc/draft-ietf-ipsecme-ikev2-ml...
And given that NTRU made it to the third round, and NTRU Prime is labelled as an alternative, I'm not sure how strong a claim Bernstein can make to being ill-treated by NIST.
No, there won't. The world will standardize on MLKEM, at least until some important new piece of knowledge is uncovered. The process wasn't at all fraught. Who's the highest-profile cryptographer or cryptography engineer you can think of who took Bernstein's claims about the process seriously?
> NTRU Prime (sntrup) is there mostly as a quirk of history (mlkem wasn't available when SSH went down the road of doing PQ).
ML-KEM (originally "CRYSTALS-Kyber") was available, it's just the Tiny/OpenSSH folks decided not to choose that particular algorithm (for reasons beyond my pay grade).
NIST announced their competition in 2016 with the submission deadline being in 2017:
* https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
TinySSH added SNTRUP in 2018, with OpenSSH following in 2019/2020:
* https://blog.josefsson.org/2023/05/12/streamlined-ntru-prime...
SSH just happened to pick one of the candidates that NIST decided not to go with.
https://news.ycombinator.com/item?id=32366614
I'm curious where you got the idea that they had mlkem available to them? They disagree with you.
> We (OpenSSH) haven't "disregarded" the winning variants, we added NTRU before the standardisation process was finished and we'll almost certainly add the NIST finalists fairly soon.
Nothing in his statements talks about 'availability', just a particular choice (from the ideas floating around at the time).
CRYSTALS-Kyber (now ML-KEM) was available at the same time as SNTRUP because they were both candidates in the NIST competition. NTRU (Prime) is listed as round three finalist / alternate (along with CRYSTALS-Kyber):
* https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
Given that they were both candidates in the same competition, they would have been available at the same time. Tiny/OpenSSH simply chose a candidate that ended up not winning (I'm not criticizing / judging their choice: they made a call, and it happened to be a different call than NIST).
I’m happy to see they’re thinking ahead. There's no value in disparaging efforts like this as long as the alternatives that provide better security in the future don't make things worse.
If you need to access a server across a network you don't 100% control, you have to assume your traffic is captured, and a future quantum computer would mean it can be decrypted. Whether that's a concern or not is another matter.
This is an extremely important topic and one I'm glad is being brought up.
I come from the physical ID and anti-counterfeiting space (think passports, banknotes, etc.). There is A LOT of buzz around this and how it relates to one's digital footprint and identity. We need to think differently about how to approach encryption... math-based cryptography is becoming very vulnerable.
We're building something that even the smartest ai or the fastest quantum computer can't bypass and we need some BADASS hackers...to help us finish it and to pressure test it.
Any takers?? Reach out: cryptiqapp.com (sorry for link but this is legit collaborative and not promotional)
Can you explain this a bit more?