Just re-visited this thread and saw there were some open questions still ...
I'm interested to hear, in a way an ordinary user can understand, how much this would speed up some aspects of Bitcoin, like synchronization time. Is there any practical data on it yet, or is it still more theoretical?
Synchronization would not be accelerated much by Utreexo, because Utreexo only affects the UTXO set.
But a zero-knowledge-proof-based synchronization like ZeroSync could be a massive speed boost. On Bitcoin these techniques are still more of a theoretical concept, afaik, but on altcoins like Ethereum they already seem to be working. The whitepaper describes the basic mechanism:
Naively, users can sync in three simple steps: Verify the current chain state using a proof, then download the corresponding UTXO set (≈ 5 GB of data), copy it into the “chainstate” folder, and run Bitcoin Core as usual. This procedure allows users to bootstrap a (pruned) full node without having to download and verify 500 GB of historical blockchain data. It reduces the initial sync time from many hours (or even days) to minutes.
While they don't give exact numbers, a sync time of "minutes" for the whole Bitcoin (!) chain looks like a dramatic speedup. They also mention Utreexo and claim that with this technique syncing would be even faster.
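To make the quoted bootstrap procedure concrete, here is a minimal Python sketch of its verification step. Everything here is an assumption for illustration: the real ZeroSync design commits to the chain state inside a STARK proof, and Bitcoin Core commits to the UTXO set with its own serialized format, not a flat SHA-256 as used below.

```python
import hashlib

def utxo_set_commitment(utxo_set_bytes: bytes) -> str:
    # Simplified stand-in: real designs commit to the UTXO set via a
    # Merkle root or accumulator, not a flat hash of the whole snapshot.
    return hashlib.sha256(utxo_set_bytes).hexdigest()

def bootstrap_check(proof_commitment: str, downloaded_set: bytes) -> bool:
    # After verifying the chain-state proof (assumed done), the node
    # checks that the UTXO snapshot it downloaded from an untrusted
    # source matches the commitment the proof attests to.
    return utxo_set_commitment(downloaded_set) == proof_commitment

snapshot = b"...serialized UTXO set..."          # placeholder payload
commitment = utxo_set_commitment(snapshot)       # taken from the proof
print(bootstrap_check(commitment, snapshot))             # prints True
print(bootstrap_check(commitment, snapshot + b"tamper")) # prints False
```

The point of the sketch: the heavy lifting (validating 500 GB of history) is replaced by one proof verification plus one hash comparison over the ~5 GB snapshot, which is why "minutes" instead of hours or days is plausible.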
How much data would this save?
(Here we were talking about nodes being able to ignore OP_RETURN outputs and only storing the merkle tree and a proof hash.)
This would depend entirely on how many OP_RETURN transactions are in the chain. Currently they make up roughly 5 to 30%, but during the Runes wave in 2024 it was 50% or more in some weeks. See also
this thread. So for the whole blockchain I think the syncing-time improvement would not be that important (perhaps 10%). But the essence here is that nodes could be sure they don't store any unwanted data.
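A rough sketch of why a node can drop OP_RETURN payloads and still verify blocks: the block's Merkle root commits to transaction hashes (txids), so keeping only the 32-byte txid of an unwanted transaction is enough to recompute the root. This is a simplified Merkle tree for illustration, not Bitcoin's exact rules (no witness data, no byte-order details):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    # Bitcoin-style double SHA-256.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes: list) -> bytes:
    # Simplified Merkle tree: duplicate the last element on odd levels.
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

normal_tx   = b"tx paying alice"                # placeholder raw txs
opreturn_tx = b"tx carrying OP_RETURN payload"
txids = [dsha256(normal_tx), dsha256(opreturn_tx)]
root = merkle_root(txids)

# The node may now discard the raw opreturn_tx bytes entirely; the
# 32-byte txid alone suffices to re-verify the committed root.
print(merkle_root(txids) == root)  # prints True
```

So the savings are in long-term storage and in what a node has to hold, not so much in validation work, which fits the "perhaps 10%" estimate above.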
While I tend to agree with you, I'm a bit more conservative about making statements of certainty here; that's why I like to explore different scenarios regardless of their probability. As long as they are realistically possible one day, I'm interested in analyzing what could happen.
The thing is that quantum computer progress will not come out of thin air. What you are assuming in this scenario is that somebody makes a leap from, say, 1,000-2,000 qubits (the current maximum, not counting "adiabatic" QCs, which can't run Shor's algorithm) to millions of qubits without anybody noticing, and that this criminal could then crack wallets "trivially", "like current VanityGen". This is more or less impossible. (A recent Google research paper claimed that about a million qubits may be sufficient to run Shor's algorithm and crack RSA-2048 [which is not the same as Bitcoin's ECDSA, but in the same order of magnitude when it comes to security] in a week. Before that, 20 million were thought necessary.)
Instead, with more than 99.9999% likelihood there will be gradual improvements. Even an exponential increase of 20% or even 50% more qubits per year would give the Bitcoin community a lot of time to react. It would mean that we would know when the biggest quantum computer reaches 100,000 qubits and can operate continuously for a day (which is another challenge besides the raw number of qubits),
and then the "period of danger" begins: in the years after that happens there should be progress on adopting quantum-resistant cryptography, because it "may" be that someone achieves a 1000% leap in quantum computing, making "wallet stealing" with Shor's algorithm feasible.
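The "lot of time to react" claim is easy to check with back-of-the-envelope arithmetic, using the figures from the discussion above (2,000 qubits today, ~1,000,000 needed per the Google estimate); the growth rates are the hypothetical 20% and 50% per year:

```python
import math

def years_to_reach(start: int, target: int, annual_growth: float) -> int:
    # Smallest n with start * (1 + g)^n >= target.
    return math.ceil(math.log(target / start) / math.log(1 + annual_growth))

start, target = 2_000, 1_000_000
print(years_to_reach(start, target, 0.20))  # 35 years at +20%/year
print(years_to_reach(start, target, 0.50))  # 16 years at +50%/year
```

Even under the aggressive 50%-per-year assumption, reaching a million qubits takes on the order of 15 years of publicly visible progress, which is the window the community would have to migrate to quantum-resistant signatures.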