
There is a big difference between https://pt.wikipedia.org/wiki/Informa%C3%A7%C3%A3o_teoricamente_segura (information-theoretic security, or perfect secrecy) and https://pt.wikipedia.org/wiki/Seguran%C3%A7a_sem%C3%A2ntica (semantic security). The first is mostly of theoretical interest, and in that context one cannot "manage" randomness: either you have truly random numbers or you do not (and if you do, you can only use them once and must then discard them). The second, of practical interest, concerns only what one can reasonably expect from a computational process running in polynomial time. The concept of entropy is the same in both cases, but the way entropy is used is quite different, and in the second case, yes, one can obtain enough security from a small amount of entropy.

(Note: replace "security" with "unpredictability" if your focus is something other than encryption, for example ensuring randomness in a scientific simulation.)

In theory

A good way to illustrate what entropy is about is through an example. Consider the following sequence of bits:

01001101010011010100110101001101010011010100110101001101010011010100110101001101
I used 80 characters to describe it, but I could "compress" it, for example, as follows:

01001101 repeated 10 times
That takes only 26 characters. I could keep looking for more succinct ways of describing this sequence, until reaching a point where no further compression is possible, because the description would already be in the most compact form that still describes only this same sequence (i.e. without ambiguity: a form that cannot also describe a different sequence). If this form uses, say, 10 characters, then I can say the sequence has 10 characters of entropy.

(You can convert this measure to bits if you want: log2(37^10) ≈ 52 bits of entropy, assuming that a "character" is a letter, a digit or a space.)

What does that mean? Why is this the entropy of this sequence? It's simple: if someone wants to arrive at that same sequence from scratch, all they need to do is generate all possible 10-character arrangements, and one of them will describe the sequence.

Intuitively, one can see why entropy is linked to the concept of "unpredictability". Imagine I showed you just a piece of it:

010011010100110101001101...
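The character-to-bit conversion above can be checked with a few lines of Python (the 37-symbol alphabet is the one assumed in the text):

```python
import math

# The 80-bit sequence from the example.
sequence = "01001101" * 10

# A shorter, unambiguous description of the same sequence:
description = "01001101 repeated 10 times"

# The description uses fewer symbols than the sequence itself.
assert len(sequence) == 80
assert len(description) == 26

# Converting a 10-character description to bits, assuming an
# alphabet of 37 symbols (26 letters + 10 digits + space):
bits = 10 * math.log2(37)
print(f"{bits:.0f} bits of entropy")  # ≈ 52
```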
Observing carefully, you can see that a pattern emerges, and even though there is no guarantee that the sequence keeps following this pattern (the next bit could be a 1), it is still a good guess: it is preferable to test this hypothesis first rather than to try all possible 80-bit sequences by brute force.

In practice

A pseudo-random number generator (PRNG) usually starts from a random seed and then "produces" new numbers through a well-defined process, in the hope that these numbers will be unpredictable. But are they really unpredictable? In theory, if you know that the seed is a 32-bit integer, and that the sequence is generated by encrypting the natural numbers in order using this seed as a key (e.g. a stream cipher), then after observing the first generated number it is already possible to predict the next one (and, similarly, all the others):

1. Build a list of all 2^32 possible seeds;
2. Encrypt the number 0 using each of these seeds as the key;
3. Compare the results with the observed number, thus discovering the right seed;
4. Encrypt the number 1 using the correct seed; you have just predicted, with 100% certainty, the next number in the sequence.

That is, from the point of view of perfect secrecy, after you have generated the first number you have already "spent" all the entropy of the seed, and therefore you should not use it again (in whole or in part) to generate new numbers - otherwise they will not really be unpredictable. In other words, the entropy of the entire sequence is only as large as the entropy of the seed - perhaps smaller, but never greater - and you cannot increase it by combining the seed with white noise or any other source of randomness (you would have to replace the seed with that white noise; the original one is no longer good for anything).

What about semantic security? Well, in practice testing 2^32 possibilities is quite costly, especially because each test involves a large number of operations.
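The four steps above can be sketched in Python. Here `toy_encrypt` is a made-up keyed mixing function standing in for a real cipher, and the seed space is shrunk to 16 bits just so the exhaustive search finishes instantly:

```python
# Toy sketch of the seed-recovery attack: toy_encrypt is NOT a real
# cipher, just a keyed bijective mixer used for illustration.

def toy_encrypt(n: int, seed: int) -> int:
    x = ((n ^ seed) * 2654435761) % 2**32  # multiply by an odd constant
    return x ^ (x >> 16)                   # fold high bits into low bits

secret_seed = 40503                        # unknown to the attacker
observed = toy_encrypt(0, secret_seed)     # attacker sees the first output

# Steps 1-3: try every possible 16-bit seed against the observed number.
candidates = [s for s in range(2**16) if toy_encrypt(0, s) == observed]

# Step 4: use the recovered seed to predict the next output.
recovered = candidates[0]
prediction = toy_encrypt(1, recovered)
assert prediction == toy_encrypt(1, secret_seed)
print("next output predicted:", prediction)
```

Since the mixer is a bijection in the seed, exactly one candidate survives, and from then on every "random" number is predictable with certainty.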
Therefore, even if an adversary observes a number generated from a 32-bit seed, the next number is still considered to have approximately 32 bits of entropy. Only after observing a very long sequence (see https://pt.stackoverflow.com/q/31526/215 ) does one consider a reduction in the entropy of the process, on the assumption that fewer and fewer operations are needed to predict the rest of the sequence.

Repetition cycles

I mentioned that, using a 32-bit seed, the entropy of the sequence would be at most 32 bits, but that it could be less. This is related to the quality of the PRNG itself. If a 32-bit seed is used to generate numbers that are also 32 bits, then by the pigeonhole principle the longest possible sequence that can be generated without repetition has length 2^32. However, if the generation procedure is not perfect, the numbers can start repeating long before a cycle of that size is reached. The example of PHP's rand() function on Windows shows a premature repetition of the generated numbers (or at least a premature repetition of part of them), revealing a pattern. In the worst case, one can even partition the solution space, achieving perfect predictability after a very small number of observations.

Whether the PRNG is good or bad, the fact is that it will eventually begin to repeat itself, unless more entropy is added to the system. For semantic security, in general the amount of entropy needed does not have to be very large, since the "loss" after each new observation is small. For perfect secrecy, as I mentioned, the loss is always total, and the continuous addition of white noise would be the sole source of security for the system.
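The pigeonhole bound and the premature repetition described above can be illustrated with toy 8-bit generators (the parameters are made up for illustration and have nothing to do with PHP's actual rand()):

```python
# With 8 bits of state, the pigeonhole principle caps the period at 256;
# a flawed generator repeats long before reaching that cap.

def period(step, seed):
    """Number of steps until the generator's state first repeats."""
    seen = {}
    state, i = seed, 0
    while state not in seen:
        seen[state] = i
        state = step(state)
        i += 1
    return i - seen[state]

good = lambda x: (5 * x + 3) % 256  # full-period LCG parameters
bad  = lambda x: (5 * x + 2) % 256  # flawed: even seeds stay even

print(period(good, 42))  # 256 - the pigeonhole maximum
print(period(bad, 42))   # 128 - repeats before exhausting the state
```

The "bad" variant preserves the parity of its state, so from an even seed it can only ever visit half of the 256 possible values - a miniature version of the structural flaws that cause premature cycles in real generators.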
But for practical purposes, one can consider that as new entropy is added - taking the lost entropy into account - the total keeps growing, tending to infinity.

Summarizing

An external source of randomness ("noise") is indeed necessary to obtain - not "manage" - true randomness, not least because this source is solely responsible for all the randomness except that of the seed itself (and, from the theoretical point of view, restricted to the very first use of that seed). And it is not possible to obtain "absolute entropy" in any way, because from the moment you stop adding entropy to the system, it starts to decrease with use (slowly, from the semantic point of view, or very quickly, from the theoretical point of view), and eventually it will reach zero.

P.S.: I am assuming that any PRNG, cryptographically secure or not, is periodic. I may be wrong about that; even so, it does not change the fact that, for a 32-bit seed, at most 2^32 different sequences can be generated. And although the addition of noise does not change this periodic nature, a good mixing algorithm can greatly lengthen the period, while a bad one may merely reshuffle the sequence while keeping its period unchanged.

P.P.S.: I interpreted "absolute entropy" as an eternal, inexhaustible entropy, which was what I understood from your comment. If the concept is a different one, please clarify. In any case, even without going into the merits of the "calculation" (personally, I would call it "estimation") of entropy, I can still affirm, by the reasoning above, that entropy always leaks away, and will eventually reach zero unless new entropy is continually added to the system.
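The point of the P.S. - that mixing a second sequence into the state can greatly lengthen the period - can be sketched deterministically. Real noise is replaced here by a simple counter (a Weyl-like sequence) just so the result is reproducible; with true noise there would be no fixed period at all:

```python
# Mixing a period-255 counter into a period-128 LCG: the combined
# internal state does not repeat until lcm(128, 255) = 32640 steps.

def state_period(seed_a=42, seed_b=0):
    """Steps until the pair (a, b) of internal states first repeats."""
    a, b = seed_a, seed_b
    seen = {}
    i = 0
    while (a, b) not in seen:
        seen[(a, b)] = i
        a = (5 * a + 2) % 256  # weak LCG, period 128 from an even seed
        b = (b + 1) % 255      # counter, period 255 (coprime with 128)
        i += 1
    return i - seen[(a, b)]

print(state_period())  # 32640 = lcm(128, 255)
```

Because 128 and 255 share no common factor, the two cycles only realign after their product, which is exactly the period-lengthening effect a good mixing step is meant to achieve.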