Andy Jones
Forum Replies Created
AuthorPosts
June 30, 2016 at 10:09 · #1175 · Andy Jones (Member)
For option 2 to be meaningful, the entity needs to have visibility of the type (for its port declaration), in addition to the architecture.
Also, somewhere outside the entity, will other signals or entities’ ports need this same type definition? IINM, if two instances of a generic package define a type, those two types are not the same type, and are not compatible, even if the same generic actuals were used on both package instances.
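A minimal sketch of that pitfall (VHDL-2008; the package and type names are hypothetical): each instantiation of a generic package elaborates its own copy of any type declared inside it, so the two `word_t` types below are distinct and incompatible even though the generic actuals match.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

package word_pkg is
  generic (width : positive);
  -- A full type declaration: each package instance gets its own type.
  type word_t is array (width - 1 downto 0) of std_logic;
end package word_pkg;

package word_pkg_a is new work.word_pkg generic map (width => 8);
package word_pkg_b is new work.word_pkg generic map (width => 8);

-- word_pkg_a.word_t and word_pkg_b.word_t are two different types:
-- a signal of one cannot be connected to a port of the other without
-- an explicit conversion, even though both are 8 bits wide.
```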
Andy
June 29, 2016 at 09:25 · #1174 · Andy Jones (Member)
Jim’s suggestion is good, but there are limitations:
You can’t use multiple index ranges, or an index range and individual index, in separate associations of parts of the same formal.
All formals that are elements of the same port must appear in consecutive associations (but the associations of a single aggregate port need not appear in any specific order).
For example, you can’t do this:
input(0) => in_a,
output(0) => out_a,
input(1) => in_b,
output(1) => out_b,
The restrictions were probably intended to make it easier on the compiler to confirm that all elements of an aggregate formal are mapped, and none are repeated.
Note that elemental formal associations also work with formals of a record type.
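For contrast, a legal version of the same map keeps each formal's associations together (the entity, port, and signal names here are hypothetical):

```vhdl
u1 : entity work.dut
  port map (
    -- all associations for formal 'input' are consecutive...
    input(0)  => in_a,
    input(1)  => in_b,
    -- ...and likewise for formal 'output'
    output(0) => out_a,
    output(1) => out_b
  );
```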
Andy
May 11, 2014 at 11:45 · #796 · Andy Jones (Member)
Since this question has nothing to do with OSVVM, I suggest reposting it in the VHDL forum.
Andy
March 27, 2014 at 18:29 · #783 · Andy Jones (Member)
Amos,
1st issue: I have seen very few designs that use a signed numeric interpretation of every std_logic_vector in the design. Most designs that use signed interpretation also use unsigned interpretations, which would not be possible if such a package were used. If you need signed interpretations, it is best to use either integer subtypes or explicit signed data types to clearly define where signed interpretations and arithmetic are desired.
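A sketch of the explicit style, with hypothetical signal names; the conversion at the use site documents which interpretation is intended:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;  -- explicit signed/unsigned types

-- inside an architecture declarative region:
signal raw    : std_logic_vector(7 downto 0);  -- no numeric meaning
signal offset : signed(7 downto 0);            -- signed arithmetic intended
signal count  : unsigned(7 downto 0);          -- unsigned arithmetic intended
signal level  : integer range -128 to 127;     -- integer-subtype alternative

-- conversions state the interpretation where it is used:
--   offset <= signed(raw);
--   level  <= to_integer(offset);
```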
2nd issue: Unlike the mathematical operators that have a widely accepted operator precedence, boolean operators do not. Therefore explicitly defining the desired precedence of boolean operators using parentheses provides an unambiguous description of the desired result. When you wrote “a or b and c” I had no idea whether you meant “(a or b) and c” or “a or (b and c)”.
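VHDL itself enforces this: `and` and `or` sit at the same precedence level and may not be mixed without parentheses, so the writer must state which grouping is meant:

```vhdl
-- y <= a or b and c;     -- illegal in VHDL: the grouping is ambiguous
y1 <= (a or b) and c;     -- one possible intent
y2 <= a or (b and c);     -- the other
```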
Think less about how little you can type, and more about how clearly you can express the desired behavior. Those who will have to review, reuse or maintain your design will thank you for it.
Andy
January 18, 2014 at 08:50 · #739 · Andy Jones (Member)
Yeah, the Mentor Graphics survey indicates design verification takes more than twice the effort of design itself (70% of total project effort).
Just because you have a default parameter does not mean that you can’t have a simple check on the parameter and choose an optimal algorithm based on its value.
With separate functions you have to test the erstwhile default value case on two different functions (unless you test the value in the larger function and call the smaller function from within the larger one).
But consistency with precedents is extremely important for usability. Even if defaulted parameters are not in the optimal order to allow simpler use of positional association, they still come in handy for named association.
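A hypothetical sketch of the point above: one function with a defaulted parameter that dispatches to an optimal algorithm (the helper names are invented for illustration, not OSVVM code):

```vhdl
impure function RandIntV (
  Min, Max : integer;
  Size     : natural;
  Unique   : natural := 0  -- defaulted: callers may omit it
) return integer_vector is
begin
  if Unique = 0 then
    -- fast path: no uniqueness bookkeeping needed
    return RandIntV_Simple(Min, Max, Size);        -- hypothetical helper
  end if;
  return RandIntV_Unique(Min, Max, Size, Unique);  -- hypothetical helper
end function RandIntV;
```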
Andy
January 16, 2014 at 21:19 · #735 · Andy Jones (Member)
Why is it desirable to have size as the last parameter in all of the variants of RandIntV(), when it is always required?
If the additional (optional) parameters were last, then they could be defaulted, while still allowing positional notation, and fewer functions could be defined, thereby reducing maintenance and code bulk.
Unique and exclude could be defaulted to 0 and a null vector, respectively.
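The suggested parameter order, sketched as a single declaration (not the actual OSVVM signature):

```vhdl
-- an empty vector to serve as the "no excludes" default
constant NULL_IV : integer_vector(1 to 0) := (others => 0);

-- required parameters first, optional ones defaulted at the end:
impure function RandIntV (
  Min, Max : integer;
  Size     : natural;                   -- always required
  Unique   : natural        := 0;       -- optional
  Exclude  : integer_vector := NULL_IV  -- optional: empty exclude list
) return integer_vector;
```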
Andy
January 16, 2014 at 19:34 · #734 · Andy Jones (Member)
I think 1 ns is a reasonable default unit.
If you default the exclude parameter to a null time_vector, then do you need versions without exclude? Same for units?
If you wanted no excludes but ms units, you’d have to use named association for the units param.
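With a hypothetical `RandTime(Min, Max, Unit, Exclude)` ordering and defaults as above, that case would look like:

```vhdl
-- all defaults taken: positional association suffices
t1 := RV.RandTime(0 ns, 100 ns);
-- ms units but no excludes: name the Unit parameter explicitly
t2 := RV.RandTime(Min => 0 ms, Max => 10 ms, Unit => 1 ms);
```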
Andy
October 23, 2013 at 19:34 · #727 · Andy Jones (Member)
Unfortunately, protected type method calls cannot span time (e.g. execute a wait) because they are mutually exclusive: multiple calls to the same method of the same variable execute one at a time, from start to finish. Supporting a wait would require some sort of implicit wait (across time) in one call while another call finished. While possible, the effects of this are not well understood, and therefore not (yet) allowed.
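A sketch of the restriction, with hypothetical names; the blocking variant is exactly what the language disallows:

```vhdl
type mailbox_t is protected
  procedure put (d : in integer);          -- legal: runs atomically
  impure function try_get return integer;  -- legal: returns immediately
  -- A blocking 'get' that waits until data arrives is NOT possible:
  -- wait statements are not allowed inside protected type method
  -- bodies, since each call must run to completion exclusively.
end protected mailbox_t;
```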
Andy
May 28, 2013 at 07:06 · #645 · Andy Jones (Member)
Srini,
Yep, you touched it! And for a reason that is completely, absolutely unnecessary. The advised code (qualified expression) works on all tools, so there is no reason to use a pre-processor/compiler directive.
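For reference, a qualified expression looks like this (hypothetical context): the type mark before the tick tells every tool, unambiguously, which type the expression has.

```vhdl
-- qualifying a bit-string aggregate that would otherwise be ambiguous:
data <= std_logic_vector'(x"A5");
-- disambiguating an overloaded call:
report_values(integer_vector'(1, 2, 3));
```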
Such crutches are often needed in poorly specified languages like Verilog/SV, because of misunderstandings about what the correct compiler/language behavior actually should be, due to the lack of a complete, formal specification for the language.
Furthermore, once allowed, these hacks work their way into standard libraries and usage models (UVM!), instead of fixing the shortcomings of the language and/or SV library itself.
Andy
May 1, 2013 at 07:23 · #582 · Andy Jones (Member)
Jim,
Very nice!
Thanks,
Andy
November 12, 2012 at 11:54 · #442 · Andy Jones (Member)
Good catch!
In general, I dislike a solution that consumes random quantities of seeds, even if the consumption is repeatable. It could consume a great many seeds for words with more than a few chunks, depending upon the actual constraints.
Alternatively, I could reduce the probability of the first draw being A, compared to each of the lesser values (0-9), to compensate for the increased probability of 0-3 relative to 0-15.
The flaw in the original approach was not that the lesser chunk’s probabilities were different, but that the greater chunk’s probabilities did not reflect that difference.
So, all* I have to do is:
1) Efficiently reduce the probability of a max value compared to the others (this will negatively impact the optimum chunk size, which will increase the number of chunks).
2) Develop an algorithm to calculate the reduced probability based on lesser chunk(s)’ constrained vs unconstrained probabilities for N-chunk words.
Do the probabilities of chunks other than chunk N-1 need to be considered when calculating the probability of pulling the max value for chunk N?
*That’s all… 🙂
Andy
November 8, 2012 at 09:46 · #437 · Andy Jones (Member)
So there would be no impact on “randomness” if we built the result a bit at a time? That might actually be easier.
Yes, the min & max arguments would be of the same type as the result. The length of the result would be the maximum of the min & max lengths (like signed/unsigned arithmetic). This allows, for instance, a default min of “0” and a max of some larger vector that sets the length of the result.
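Sketching that sizing rule as a hypothetical overload (`maximum` is the VHDL-2008 predefined function):

```vhdl
-- hypothetical overload: result sized like signed/unsigned arithmetic,
-- i.e. result'length = maximum(Min'length, Max'length)
impure function RandUnsigned (Min, Max : unsigned) return unsigned;

-- a one-bit default Min and a 16-bit Max give a 16-bit result:
--   v := RandUnsigned("0", x"FFFF");
```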
Since it would be difficult to implement the non-uniform distributions available in the RandSlv() functions, these should be additional overloads of Uniform() with slv/signed/unsigned arguments and results.
Andy
November 6, 2012 at 13:54 · #432 · Andy Jones (Member)
I think if you use indices of 'low & 'high instead of 'left & 'right, and 'low + 1 instead of 1, then the direction of the supplied array will not matter. The actual left-right direction with which you build the DistArray is irrelevant, so long as you search it in the same direction.
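Sketched as a search loop (a hypothetical function, not the library code): using 'low and 'high makes the body indifferent to whether the caller declared the array `to` or `downto`.

```vhdl
-- find the first index whose cumulative weight exceeds 'pick'
function find_index (DistArray : integer_vector; pick : integer)
  return integer is
  variable acc : integer := 0;
begin
  for i in DistArray'low to DistArray'high loop  -- direction-independent
    acc := acc + DistArray(i);
    if pick < acc then
      return i;
    end if;
  end loop;
  return DistArray'high;
end function;
```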
You’re right, DistValInt() will let me do what I want, just not nearly as cleanly. DistValInt() is best for getting random members from a set of significantly non-contiguous integers, where an array with a big enough index range would be too big, and have lots of zero weights (even if it is easy to specify). A good example would be a set of prime numbers.
BTW, thank you very much for your work on this library!
Andy
November 6, 2012 at 10:55 · #430 · Andy Jones (Member)
Jim,
Thanks for your quick response.
I found the issues while reading the source code.
I don’t have any uses for negative weights, and I’m not really sure conceptually how they would work anyway, but I’m not sure that others won’t have a use for them. If there is a viable usage model for negative weights, then we should either support it or error out (like we do for all-zero weights).
The other (3rd?) issue I brought up was that it would be useful to return a result within the index range of the Weight array supplied, rather than always returning 0 to 'length - 1.
I think it would add useful utility/flexibility, and would not cause problems with most existing code. It would simplify many use cases that currently require supplying value-weight pairs, and would allow use of “others” in those cases too.
Example:
constant weighting : integer_vector(8 to 15) := (9 => 25, 13 => 51, others => 4);
variable value : natural range weighting'range;
…
value := myRnd.DistInt(weighting); -- returns a value in weighting'range
Andy