The problem is you’d need to split it into chunks small enough that people would be happy hosting them, and then host each chunk multiple times in case any node goes offline.
Another comment in the thread says it’s likely over 100PB today (100,000 terabytes). I’d say 4 copies (spread over different time zones) is a relatively minimal level of redundancy, since people may host on machines that aren’t powered on all the time, and I reckon you’d get the network with the most participants, while still getting enough storage per node, at around the 150GB-per-node mark.
That comes to nearly 3 million participants needed just to cover today’s archive, and new people would obviously need to join every day as it grows. Also, given that I imagine it would need to be open to all, the redundancy level could do with increasing: otherwise malicious actors with a lot of resources could take on a large share of the network and force it all offline at once in an effort to cause data loss.
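The node count above can be checked with a quick back-of-envelope calculation. All the inputs (100 PB archive, 4 replicas, 150 GB per node) are the estimates from this comment, not measured figures:

```python
# Back-of-envelope sizing, using this comment's estimates.
ARCHIVE_PB = 100       # estimated archive size today, in petabytes
REPLICAS = 4           # copies spread across different time zones
PER_NODE_GB = 150      # storage each volunteer node contributes

total_gb = ARCHIVE_PB * 1_000_000 * REPLICAS  # 1 PB = 1,000,000 GB
nodes_needed = total_gb / PER_NODE_GB

print(f"{nodes_needed:,.0f} nodes")  # roughly 2.7 million
```

Hence "nearly 3 million participants" just for a snapshot of today's archive, before accounting for growth or churn.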
Nothing here is insurmountable, but it’s also not remotely easy.