Support for large SLV/signed/unsigned?
Tagged: RandSigned, RandSlv, RandUnsigned, Uniform
 This topic has 4 replies, 2 voices, and was last updated 9 years, 2 months ago by Andy Jones.


November 8, 2012 at 06:39 #435
Andy Jones (Member)
The current implementation supports creating larger-than-32-bit SLV/signed/unsigned random values, but the useful resolution is limited to 31/32 bits due to the natural/integer foundation.
Are there any plans or ideas for fully supporting arbitrary vector size with at least some of the methods/distributions?
Until/unless they are fully supported, should the current implementation issue a warning if the requested vector size exceeds 31 or 32 bits, or should we constrain the size argument appropriately?
Ideas for implementation:
First, I’m assuming we are only interested in numerically significant vectors, with no metavalues. We could use a vector of random integers to create larger-than-32-bit, fully populated random SLVs by “chunking” them 31/32 bits at a time.
This would take multiple calls to the internal random function, consuming/advancing multiple seed values. Would this consumption of multiple seed values be a problem? Note that the number of seeds consumed would be based only on the size of the vector. Multiple (variable) trials would not be required to satisfy constraints.
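To make the chunking scheme concrete, here is a Python sketch (illustrative only, not OSVVM code; the function name and the 31-bit chunk size are my assumptions):

```python
import random

CHUNK_BITS = 31  # usable resolution of one integer draw (assumed chunk size)

def rand_vector(num_bits: int, rng: random.Random) -> int:
    """Build a num_bits-wide random value by concatenating 31-bit chunks.

    Consumes ceil(num_bits / CHUNK_BITS) draws from the generator --
    a fixed count that depends only on the vector size, not on any
    range constraints.
    """
    value = 0
    remaining = num_bits
    while remaining > 0:
        width = min(CHUNK_BITS, remaining)
        chunk = rng.randrange(2 ** width)   # one "RandInt"-style draw
        value = (value << width) | chunk
        remaining -= width
    return value
```

For example, a 70-bit vector would consume exactly three draws (31 + 31 + 8 bits), regardless of constraints.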
I understand how to implement at least the min/max constraints and the arithmetic involved, and am willing to help with the implementation, I just don’t know enough about the potential randomization seed issues.
Andy
November 8, 2012 at 08:00 #436
Jim Lewis (Member)
Hi Andy,
Good catch. 🙂
Issue: RandSlv, RandUnsigned, and RandSigned all use RandInt and take integer inputs to specify the range; hence, going beyond 31 bits unsigned or 32 bits signed is not possible.
WRT seeds, I think we are ok. My understanding is that if the seed produces an independent random value, then every Nth draw from the randomization function is also an independent random value.
The range inputs need to be based on the return type. This becomes more practical with the VHDL-2008 bit string literal extensions.
If you write them, I will make sure they get added.
Jim
November 8, 2012 at 09:46 #437
Andy Jones (Member)
So there would be no impact on “randomness” if we built the result a bit at a time? That might actually be easier.
Yes the min & max arguments would be of the same type as the result. The length of the result would be the maximum of the min & max lengths (like signed/unsigned arithmetic). This allows, for instance a default min of “0” and a max of some larger vector that sets the length of the result.
Since it would be difficult to implement the non-uniform distributions available in the RandSlv() functions, these should be additional overloads of Uniform() with slv/signed/unsigned arguments and results.
Andy
November 9, 2012 at 08:54 #438
Jim Lewis (Member)
I have some second thoughts.
While multiple draws on the same seed will remain uniform, you will need to be careful about how you construct the number, as the construction itself may make the result non-uniform.
Consider a small example done in two 4-bit pieces: randomize a number between 0 and 16#A3#. First randomize the upper nibble in the range 0 to A. If the result is between 0 and 9, randomize the lower nibble between 0 and F; otherwise randomize the lower nibble between 0 and 3. This gives the numbers A0, A1, A2, and A3 a higher frequency of occurrence (4X) than the other values.
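That bias can be checked exactly. This Python sketch (illustrative only, not OSVVM code) enumerates the naive two-nibble construction for the range 0 to 16#A3#:

```python
from fractions import Fraction

# Naive two-nibble construction for the range 0 .. 16#A3#:
# upper nibble uniform over 0..A, lower nibble constrained only
# when the upper nibble happens to land on its max value.
prob = {}
for hi in range(0xB):                      # 11 equally likely upper nibbles
    lo_max = 0x3 if hi == 0xA else 0xF     # lower range depends on the upper draw
    for lo in range(lo_max + 1):
        prob[(hi << 4) | lo] = Fraction(1, 11) * Fraction(1, lo_max + 1)

print(prob[0xA0] / prob[0x00])   # → 4
```

The four values with the constrained lower nibble (A0 through A3) each carry 1/44 of the probability mass, versus 1/176 for every other value: exactly the 4X skew described above.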
I think this would work: randomize the upper word with the range constrained. Randomize the lower word(s) without the range constrained. If the resulting value is not within the range, repeat the steps until a value in range is found.
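A Python sketch of that rejection approach (the 16-bit chunk size and all names are my choices, not OSVVM's):

```python
import random

def rand_in_range(max_val: int, num_bits: int, rng: random.Random) -> int:
    """Uniform value in [0, max_val], built in chunks.

    The top chunk is drawn with its range constrained; the lower bits
    are drawn unconstrained. Out-of-range results are rejected and the
    whole value is redrawn, which keeps the distribution uniform.
    Assumes num_bits > 16.
    """
    chunk = 16
    low_bits = num_bits - chunk
    hi_max = max_val >> low_bits
    while True:
        hi = rng.randrange(hi_max + 1)        # constrained top chunk
        lo = rng.randrange(2 ** low_bits)     # unconstrained remainder
        value = (hi << low_bits) | lo
        if value <= max_val:                  # otherwise reject and retry
            return value
```

Because the top chunk is constrained, a rejection can only happen when the top chunk lands on its maximum value; but the number of retries is still variable, which is the consumption concern raised below.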
November 12, 2012 at 11:54 #442
Andy Jones (Member)
Good catch!
In general, I dislike a solution that consumes a random quantity of seeds, even if the consumption is repeatable. It could consume many draws for words with more than a few chunks, depending upon the actual constraints.
Alternatively, I could reduce the probability of the first draw being A, compared to each of the lesser values (0 to 9), to compensate for the increased probability of 0 to 3 relative to 0 to F.
The flaw in the original approach was not that the lesser chunk’s probabilities were different, but that the greater chunk’s probabilities did not reflect that difference.
So, all* I have to do is:
1) Efficiently reduce the probability of a max value compared to the others (this will negatively impact the optimum chunk size, which will increase the number of chunks).
2) Develop an algorithm to calculate the reduced probability based on the lesser chunk(s)’ constrained vs. unconstrained probabilities for N-chunk words.
Do the probabilities of chunks beyond chunk N-1 need to be considered when calculating the probability of drawing the max value for chunk N?
*That’s all… 🙂
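For a single top chunk, the compensation amounts to weighting each top-chunk value by the number of legal lower-chunk combinations beneath it. A Python sketch of that idea (my names; one 4-bit top chunk over an arbitrary remainder; not a full N-chunk solution):

```python
import random

def rand_weighted(max_val: int, num_bits: int, rng: random.Random) -> int:
    """Uniform value in [0, max_val] with no rejection/retry.

    The top nibble is drawn with a reduced weight on its max value,
    proportional to how many lower values remain legal under it; the
    lower bits are then drawn uniformly over their legal range.
    """
    low_bits = num_bits - 4
    hi_max = max_val >> low_bits
    lo_of_max = max_val & ((1 << low_bits) - 1)
    # Weight each upper value by its count of legal lower values.
    weights = [2 ** low_bits] * hi_max + [lo_of_max + 1]
    hi = rng.choices(range(hi_max + 1), weights=weights)[0]
    lo_range = lo_of_max + 1 if hi == hi_max else 2 ** low_bits
    lo = rng.randrange(lo_range)
    return (hi << low_bits) | lo
```

For the 16#A3# example, P(upper = A) becomes 4/164 while each of 0 through 9 gets 16/164, so every value from 0 to 16#A3# comes out at exactly 1/164. Extending this across N chunks is where the bookkeeping in steps 1 and 2 above comes in.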
Andy
