Reply To: Support for large SLV/signed/unsigned?
In general, I dislike a solution that consumes an unpredictable number of random draws, even if the consumption is repeatable. It could get very expensive for words with more than a few chunks, depending on the actual constraints.
Alternatively, I could reduce the probability of the first draw being A, compared to each of the lesser values (0-9), to compensate for the increased probability of 0-3 relative to 0-15.
The flaw in the original approach was not that the lesser chunk’s probabilities were different, but that the greater chunk’s probabilities did not reflect that difference.
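As a concrete illustration of that compensation (my own two-chunk example, not from the thread): take an upper bound of 0xA3 with 4-bit chunks. Each high nibble 0-9 leaves 16 legal low nibbles, while A leaves only 4, so A's selection weight must be 4 against 16 for the others:

```python
# Hypothetical two-chunk example: uniform over 0x00..0xA3.
BOUND = 0xA3
total = BOUND + 1                # 164 equally likely words
weights = [16] * 10 + [4]       # high nibbles 0-9 vs. the max nibble A

# Integer weights sum to the word count, so every (nibble, suffix)
# pair comes out at exactly 1/164.
assert sum(weights) == total
assert weights[0xA] == (BOUND & 0xF) + 1
```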
So, all* I have to do is:
1) Efficiently reduce the probability of drawing a chunk's max value compared to the others (this will hurt the optimum chunk size, which will increase the number of chunks).
2) Develop an algorithm to calculate the reduced probability based on lesser chunk(s)’ constrained vs unconstrained probabilities for N-chunk words.
Does anything beyond chunk N-1's probabilities need to be considered when calculating the probability of drawing the max value for chunk N?
*That’s all… 🙂
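For what it's worth, here is how I'd sketch steps (1) and (2) as a Python model (a sketch only, under my own assumptions: 4-bit chunks, and names like `rand_uniform_chunked` are mine; real code would live in VHDL). As written, the weight of chunk N's max value depends on the bound's entire remaining suffix, not just chunk N-1 — which suggests one possible answer to the question above:

```python
import random

CHUNK_BITS = 4
CHUNK_VALS = 1 << CHUNK_BITS  # 16 values per chunk

def rand_uniform_chunked(max_val, rng=random):
    """Model of a chunk-wise uniform draw over [0, max_val].

    While every chunk drawn so far matches max_val's chunks, the
    current chunk is weighted: each value below that chunk's max
    admits a full 16**remaining suffixes, but the max value admits
    only (max_val's remaining suffix) + 1 of them.
    """
    # Split max_val into 4-bit chunks, most significant first.
    chunks = []
    v = max_val
    while True:
        chunks.append(v % CHUNK_VALS)
        v //= CHUNK_VALS
        if v == 0:
            break
    chunks.reverse()

    result = 0
    at_limit = True  # every chunk drawn so far equals max_val's chunk
    for i, m in enumerate(chunks):
        remaining = len(chunks) - i - 1
        if not at_limit:
            c = rng.randrange(CHUNK_VALS)  # fully unconstrained chunk
        else:
            # Weight for the max value = size of the constrained
            # suffix range it leaves open.
            suffix = 0
            for mm in chunks[i + 1:]:
                suffix = suffix * CHUNK_VALS + mm
            weights = [CHUNK_VALS ** remaining] * m + [suffix + 1]
            c = rng.choices(range(m + 1), weights=weights)[0]
            if c < m:
                at_limit = False
        result = result * CHUNK_VALS + c
    return result
```

Each word then lands with probability exactly 1/(max_val+1), at the cost of a weighted draw whenever the prefix is still riding the limit.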