Depends on what you need to store it for. Probably the cheapest option would be tape cartridges, though you'd still be setting yourself back quite a bit. A quick search shows 1.5 TB tape cartridges on Amazon sold in 10-packs at $210 each, so 2334 cartridges would be $490,140 altogether. Though I'm going to assume that if you're buying 2334 tape cartridges you'd get a better deal and some kind of bulk discount, and I know they make higher-capacity cartridges too. You'd also have to account for some of them failing.
I'd estimate it'd still set you back at least 200-300 grand though.
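As a sanity check on the arithmetic, here's a tiny sketch reproducing the numbers above; the 5% spare allowance for failed cartridges is my own illustrative assumption, not something from the listing.

```
# Reproduces the back-of-the-envelope tape cost arithmetic from the post above.
# The 5% spare allowance for failed cartridges is an assumption for illustration.

import math

cartridges = 2334            # cartridge count from the post
price_per_cartridge = 210    # USD, from the post
spare_fraction = 0.05        # assumed extra cartridges to cover failures

base_cost = cartridges * price_per_cartridge
total_with_spares = math.ceil(cartridges * (1 + spare_fraction)) * price_per_cartridge

print(f"base cost:      ${base_cost:,}")        # $490,140
print(f"with 5% spares: ${total_with_spares:,}")
```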
>>45446709 You'd think that. But this has the smell of typical shit like miracle free energy devices. Someone, through their own retardation, fools themselves into believing that their magic technology actually works, and seeks out investors. Using the enthusiasm they gain from being stupid, they manage to kindle some interest. What happens from here is where it turns from retard to scam. You can't stop here and go 'whoops it was a mistake' to the investors, and get back the money you wasted. You keep going. You either fool yourself even harder, or you realise it doesn't work, have that 'oh god' moment and keep up the illusion that it's real out of panic. Fleischmann and Pons is the classic story of fools turned scammers. One repeated over and over again.
>>45446765 They're rarer than you might expect. People don't usually go into this kind of thing trying to scam people right away. Well okay a lot do, but the difference between a scammer and a fool is that the fool actually believes his own nonsense.
I recommend reading Voodoo Science by Robert Park. Honestly it should be mandatory reading in schools. Or at least something with the same kind of ideas.
>>45446709 I made a case earlier this year that 'compression' on this scale actually breaks the laws of thermodynamics. You can treat information itself as part of a thermodynamically compatible process: flipping a bit from a 0 to a 1 requires energy, and the states of the bits are entropy states. Basically, since even a 100% efficient processor would still spend a minute amount of energy flipping bits around, compression at this level is akin to saying you get more out of a computation than the energy you put in, i.e. more bits get flipped out than you put in energy to flip them.
It's a fairly flimsy argument, but I think it's more or less on the right lines.
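For reference, the closest standard result to that bit-flip argument is Landauer's principle: erasing one bit of information dissipates at least k_B·T·ln 2 of energy. The room-temperature number below is just a worked example; connecting this cleanly to compression claims is a loose framing, not a rigorous proof.

```
% Landauer's principle: minimum energy dissipated per erased bit at temperature T.
% Worked number assumes room temperature, T = 300 K.
E_{\min} = k_B T \ln 2
         \approx (1.38\times10^{-23}\,\mathrm{J/K})\,(300\,\mathrm{K})\,(0.693)
         \approx 2.9\times10^{-21}\,\mathrm{J\ per\ bit}
```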
>>45446811 But lossless compression is possible and common. It's not that rare to see certain types of data compressed to 10% or less of their original size. I'm not sure what your cutoff would be where a certain compression rate starts to violate the laws of physics.
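To put some numbers on that, here's a quick sketch comparing how far a generic codec gets on redundant versus random input. zlib is used purely as a convenient example; any general-purpose compressor shows the same pattern.

```
# Lossless compression ratio depends heavily on the data.
# zlib is used here only as a convenient, widely available example codec.

import os
import zlib

repetitive = b"the quick brown fox " * 50_000   # highly redundant text
random_ish = os.urandom(len(repetitive))        # incompressible random bytes

for name, data in [("repetitive", repetitive), ("random", random_ish)]:
    compressed = zlib.compress(data, level=9)
    ratio = len(compressed) / len(data)
    print(f"{name}: {len(data)} -> {len(compressed)} bytes ({ratio:.1%})")
```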
>>45446912 The cutoff is at meaningful data for lossless compression. Lossless compression is pretty simple: you just cut out the redundancy. Since it's a lossless compression claim we're talking about, imagine an entire book of words and meaning compressed into just a single word. That's nonsense. You can see how a single word can never carry as much meaning as the whole book without a book's worth of algorithms to 'decode' it.
The cutoff point isn't hard and fast I'm afraid. As I said, the argument is flimsy.
>>45446936 In what way is 10PB into 1.2kB 'compression'? It's nonsense. You could compress a 10PB-long chain of 0s into 1.2kB. Hell, I just did it myself: "10PB long chain of 0s" clearly takes less space than 10PB. But what we're talking about is 10PB of books, papers, words, letters, language. Or the other claim, compressing a 40GB Blu-ray movie into 20 bytes. Claiming that this is possible is more absurd than claiming it is not.
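For what it's worth, the standard counting (pigeonhole) argument makes the same point without any thermodynamics: there are vastly more distinct 10 PB files than there are descriptions of 1.2 kB or less, so no lossless scheme can shrink them all; only specially structured inputs like that chain of zeros have short descriptions. Roughly, with 10 PB taken loosely as 8×10^16 bits and 1.2 kB as 9600 bits:

```
% Counting argument: possible inputs vastly outnumber possible short outputs.
% 10 PB taken loosely as 10^16 bytes = 8 x 10^16 bits; 1.2 kB = 9600 bits.
\underbrace{2^{\,8\times 10^{16}}}_{\text{distinct 10 PB inputs}}
\;\gg\;
\underbrace{2^{\,9601}-1}_{\text{distinct outputs of at most 1.2 kB}}
```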
>>45446931 But it's not always intuitively obvious what is "nonsense" or what kinds of patterns can be constructed to compress the data further. Of course there is a lower limit of bits below which some data can't possibly be reduced any further, but I don't think we have any way to really know what that limit is. Only that we cannot reduce it further with known methods.
>>45446996 And my argument revolves around the idea that you could use thermodynamics to find these lower limits, and that this kid's claim probably falls under it. Like I said, it's not a strong argument.
>>45447044 I just compressed the ebook (after stripping out the cover jpeg), split it into chunks small enough to be encoded into DataMatrix codes, then converted them and laid them out in LibreOffice.
Before that I also tried printing codes at various DPI to see how small I could print them before they became too poor quality to read back.
It was just for fun, I didn't put effort into making an automated way of producing or scanning the pages.
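For anyone who wants to replicate it, a minimal sketch of that pipeline might look like the following. This assumes the pylibdmtx and Pillow Python packages; the chunk size, file names, and output format are placeholder choices, not necessarily what the poster actually used.

```
# Sketch of the ebook -> DataMatrix pipeline described above.
# Assumes the pylibdmtx and Pillow packages are installed; chunk size,
# file names, and output layout are placeholder choices, not the OP's.

import zlib
from pathlib import Path

from PIL import Image
from pylibdmtx.pylibdmtx import encode

CHUNK_SIZE = 500  # bytes per symbol; kept well under DataMatrix capacity

raw = Path("book.txt").read_bytes()        # ebook text, cover image stripped
compressed = zlib.compress(raw, level=9)   # shrink before encoding

chunks = [compressed[i:i + CHUNK_SIZE] for i in range(0, len(compressed), CHUNK_SIZE)]

for n, chunk in enumerate(chunks):
    symbol = encode(chunk)                 # render one DataMatrix symbol
    img = Image.frombytes("RGB", (symbol.width, symbol.height), symbol.pixels)
    img.save(f"chunk_{n:04d}.png")         # lay these out for printing

print(f"{len(chunks)} symbols written")
```

Reading the pages back would be the reverse: run pylibdmtx's decode() over each scanned symbol, concatenate the chunks in order, and zlib.decompress() the result.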
What if we stored every possible bit pattern of a certain length in a big hash table and then replaced each bit string in the data with its index in the table? It would require a shitload of storage space, but you could just store one copy of the table on a server somewhere and then all the clients make use of the server for compressing/decompressing. It would probably be slow as fuck too, since you basically have to round-trip everything through the cloud server, but you should be able to get a constant high rate of compression for anything, so it might be good for archival purposes.
I guess you'd be fucked if the server went down but you could just recompute the table on your end if you really desperately needed to decompress something.
>>45447181 Yeah, it wouldn't work the way I imagined it though, since to address a table with 2^n entries you'd need an n-bit index, which would be as large as the data you wanted to compress in the first place.
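A two-line check of why the scheme can't save anything; the 64-bit block size is an arbitrary example.

```
# Why the lookup-table scheme can't save space:
# a table of every possible n-bit pattern has 2**n entries,
# so an index into it needs exactly n bits, the same size as the block.

import math

n = 64                       # example block size in bits (arbitrary)
table_entries = 2 ** n
index_bits = math.ceil(math.log2(table_entries))

print(index_bits == n)       # True: the "compressed" index is as big as the data
```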
>>45447453 At some point before 2055 we should have memristor based drives that are not only petabytes big, but also so fast as to make RAM unnecessary. We might even have that by 2035, but that's probably a little optimistic.
>>45446996
>But it's not always intuitively obvious what is "nonsense" or what kinds of patterns can be constructed to compress the data further. Of course there is a lower limit of bits below which some data can't possibly be reduced any further, but I don't think we have any way to really know what that limit is. Only that we cannot reduce it further with known methods.
It's called information theory. It's been researched for years. The relevant topic is information entropy. A VERY simplified summary is: the more random your data appears, the less you can compress it. And compressed (reduced-size) data always has more entropy (is more random) than before compression. Since the claimed numbers are way off the scale of what is currently documented, tested and verified, you may understand why people here are very suspicious of those claims and, unless they're proven (in the scientific sense), assume them to be uninformed at best, malicious at worst.
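A crude way to see both halves of that summary is to estimate the empirical per-byte Shannon entropy, H = -Σ p_i log2(p_i), of some text before and after compression. Byte frequencies are used here as a stand-in for the true source distribution, which is a simplification.

```
# Empirical per-byte Shannon entropy: H = -sum(p_i * log2(p_i)).
# Byte frequencies stand in for the true source distribution (a simplification).

import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

text = ("the more random your data appears, "
        "the less you can compress it. " * 2_000).encode()
packed = zlib.compress(text, level=9)

print(f"original:   {byte_entropy(text):.2f} bits/byte, {len(text)} bytes")
print(f"compressed: {byte_entropy(packed):.2f} bits/byte, {len(packed)} bytes")
# The compressed output is both much smaller and much closer to the
# 8 bits/byte ceiling, i.e. it looks more random.
```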