# Support for large SLV/signed/unsigned?


Viewing 5 posts - 1 through 5 (of 5 total)
• #435
Andy Jones
Member

The current implementation supports creating larger-than-32-bit SLV/signed/unsigned random values, but the useful resolution is limited to 31/32 bits due to the natural/integer foundation.

Are there any plans or ideas for fully supporting arbitrary vector size with at least some of the methods/distributions?

Until/unless they are fully supported, should the current implementation issue a warning if the requested vector size exceeds 31 or 32 bits, or should we constrain the size argument appropriately?

Ideas for implementation:

First, I’m assuming we are only interested in numerically significant vectors, with no meta-values. We could use a vector of random integers to create larger-than-32-bit, fully populated random SLVs by “chunking” them 31/32 bits at a time.

This would take multiple calls to the internal random function, consuming/advancing multiple seed values. Would this consumption of multiple seed values be a problem? Note that the number of seeds consumed would be based only on the size of the vector. Multiple (variable) trials would not be required to satisfy constraints.
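The chunking idea can be sketched in Python, with `random.Random` standing in for OSVVM's internal random function (the names `rand_wide` and `CHUNK` are illustrative, not OSVVM API):

```python
import random

CHUNK = 31  # bits per draw, mirroring the 31-bit positive range of a VHDL integer

def rand_wide(width, rng):
    """Sketch of the 'chunking' idea: build a fully populated width-bit
    random value from CHUNK-bit draws. Each draw stands in for one call
    to the internal random function (one seed advance per chunk)."""
    value = 0
    bits_left = width
    while bits_left > 0:
        n = min(CHUNK, bits_left)
        value = (value << n) | rng.randrange(1 << n)
        bits_left -= n
    return value
```

Note the number of draws is `ceil(width / CHUNK)`, fixed by the vector size alone, matching the point above that seed consumption would not be variable.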

I understand how to implement at least the min/max constraints and the arithmetic involved, and am willing to help with the implementation; I just don’t know enough about the potential randomization-seed issues.

Andy

#436
Jim Lewis
Member

Hi Andy,

Good catch. 🙂

Issue: RandSlv, RandUnsigned, and RandSigned all use RandInt, with integer inputs to specify the range; hence, going beyond 31 bits unsigned or 32 bits signed is not possible.

WRT seeds, I think we are ok.  My understanding is that if the seed produces an independent random value, then every Nth draw from the randomization function is also an independent random value.

The range inputs need to be based on the return type.  That becomes more practical with VHDL-2008’s bit string literal extensions.

If you write them, I will make sure they get added.

Jim

#437
Andy Jones
Member

So there would be no impact on “randomness” if we built the result a bit at a time? That might actually be easier.

Yes, the min & max arguments would be of the same type as the result. The length of the result would be the maximum of the min & max lengths (as in signed/unsigned arithmetic). This allows, for instance, a default min of “0” and a max of some larger vector that sets the length of the result.

Since it would be difficult to implement the non-uniform distributions available in the RandSlv() functions, these should be additional overloads of Uniform() with slv/signed/unsigned arguments and results.

Andy

#438
Jim Lewis
Member

I have some second thoughts.

While multiple draws on the same seed will remain uniform, you will need to be careful about how you construct the number, as the construction itself may make the result non-uniform.

Consider a small example done in two 4-bit pieces: randomize a number between 0 and 16#A3#. First randomize the upper nibble in the range 0 to A. If the result is between 0 and 9, then randomize the lower nibble between 0 and F; otherwise randomize the lower nibble between 0 and 3. This will result in the numbers A0, A1, A2, and A3 occurring with 4X the frequency of the other values.
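The skew is easy to confirm by simulation. Here is a Python sketch of that flawed two-nibble scheme (`biased_draw` is an illustrative name; Python's `random` stands in for the OSVVM calls):

```python
import random
from collections import Counter

def biased_draw(rng):
    # Flawed two-nibble randomization of 0 .. 16#A3#: the lower nibble's
    # range depends on the upper nibble, but the upper nibble's
    # probabilities do not reflect that dependence.
    hi = rng.randrange(0xB)                                    # 0 .. A, uniform
    lo = rng.randrange(0x10) if hi < 0xA else rng.randrange(0x4)
    return (hi << 4) | lo

rng = random.Random(1)
counts = Counter(biased_draw(rng) for _ in range(176_000))
# Values 00..9F each have probability 1/11 * 1/16 = 1/176,
# but A0..A3 each have probability 1/11 * 1/4 = 1/44 -- 4X higher.
```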

I think this would work: randomize the upper word with the range constrained, and randomize the lower word(s) unconstrained.  If the resulting value is not within the range, repeat the steps until a value in range is found.
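This retry scheme can be sketched the same way (a Python sketch; `rand_retry` and `chunk_bits` are illustrative, not OSVVM API):

```python
import random

def rand_retry(max_val, chunk_bits, rng):
    """Constrain only the top chunk, draw the lower bits unconstrained,
    and retry until the value falls within 0 .. max_val.  Every accepted
    value is uniform, at the cost of a variable number of draws."""
    width = max(max_val.bit_length(), 1)
    low_bits = ((width - 1) // chunk_bits) * chunk_bits  # bits below the top chunk
    top_max = max_val >> low_bits
    while True:
        value = rng.randrange(top_max + 1) << low_bits   # constrained top chunk
        if low_bits:
            value |= rng.randrange(1 << low_bits)        # unconstrained lower bits
        if value <= max_val:
            return value
```

For the 0 .. 16#A3# example this accepts with probability 164/176, so the expected retry count is small, but the worst case is unbounded.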

#442
Andy Jones
Member

Good catch!

In general, I dislike a solution that consumes a random number of draws, even if the consumption is repeatable. It could consume a great many draws for words with more than a few chunks, depending upon the actual constraints.

Alternatively, I could reduce the probability of the first draw being A, compared to each of the lesser values (0-9), to compensate for the increased probability of drawing 0-3 relative to 0-15.

The flaw in the original approach was not that the lesser chunk’s probabilities were different, but that the greater chunk’s probabilities did not reflect that difference.

So, all* I have to do is:

1) Efficiently reduce the probability of a max value compared to the others (this will negatively impact the optimum chunk size, which will increase the number of chunks).

2) Develop an algorithm to calculate the reduced probability based on lesser chunk(s)’ constrained vs unconstrained probabilities for N-chunk words.

Do more than chunk N-1’s probabilities need to be considered when calculating the probability of drawing the max value for chunk N?

*That’s all… 🙂
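For what it’s worth, one way the weighted approach could work out is sketched below (a hypothetical Python sketch, not OSVVM code): at each level, the top chunk’s max value gets a weight equal to its number of in-range completions, every lesser value gets the full weight of `2**low_bits`, and the recursion descends into the lower chunks only when the max was drawn. Note that in this formulation the reduced weight depends on the entire remaining max value (all lesser chunks together, via `low_max + 1`), not only chunk N-1:

```python
import random

def rand_weighted(max_val, chunk_bits, rng):
    """Draw uniformly from 0 .. max_val with a bounded number of draws:
    the top chunk's max value gets a reduced weight equal to its number
    of in-range completions; every lesser value gets the full weight."""
    width = max(max_val.bit_length(), 1)
    if width <= chunk_bits:
        return rng.randrange(max_val + 1)
    low_bits = ((width - 1) // chunk_bits) * chunk_bits
    top_max = max_val >> low_bits
    low_max = max_val & ((1 << low_bits) - 1)
    full = 1 << low_bits          # completions below a non-max top chunk
    partial = low_max + 1         # completions below the max top chunk
    # Weighted draw over 0..top_max.  Python's bignum randrange stands in
    # here; a VHDL implementation would have to approximate this single
    # weighted draw (e.g. with a real-valued Uniform), with precision
    # caveats for very wide vectors.
    t = rng.randrange(top_max * full + partial) // full
    if t < top_max:
        return (t << low_bits) | rng.randrange(full)
    return (top_max << low_bits) | rand_weighted(low_max, chunk_bits, rng)
```

Each level consumes at most two draws and there are at most `ceil(width / chunk_bits)` levels, so unlike the retry scheme the draw count has a fixed upper bound.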

Andy
