
OSVVM Webinar (June 25th) and Classes

June 22, 2015 in Event, Functional Coverage, OS-VVM in general, Randomization

Webinar OSVVM for VHDL Testbenches. Thursday June 25, 2015
Open Source VHDL Verification Methodology (OSVVM) is a comprehensive, advanced VHDL verification methodology. Like UVM, OSVVM is a library of free, open-source code (packages). OSVVM uses this library to implement functional coverage, constrained random tests, and Intelligent Coverage random tests with a conciseness, simplicity, and capability that rival other verification languages.

In 2015, OSVVM added comprehensive error and message reporting (January, 2015.01) and memory modeling (June, 2015.06). With this expanded capability, the presentation takes a look at the big-picture methodology, progressing from transactions to randomization, functional coverage, Intelligent Coverage, alerts (error reporting), logs (message reporting), and memory modeling.

Worried about keeping up with the latest trends in verification? With Intelligent Coverage, OSVVM has a portable, VHDL-based, intelligent testbench solution built into the library. While Accellera is still working on its intelligent-testbench-based portable stimulus solution (in the Portable Stimulus Working Group, PSWG), for OSVVM it is already here. Best of all, OSVVM is free and works in any VHDL simulator that supports a minimal amount of VHDL-2008.

Europe Session: 3-4 pm CEST / 6-7 am PDT / 9-10 am EDT. Enroll with Aldec.
US Session: 10-11 am PDT / 1-2 pm EDT / 7-8 pm CEST. Enroll with Aldec.

OSVVM World Tour Dates
VHDL Testbenches and Verification – OSVVM+ Boot Camp
Learn the latest VHDL verification techniques, including transaction-level modeling (TLM), self-checking, scoreboards, memory modeling, functional coverage, and directed, algorithmic, constrained random, and intelligent testbench test generation. Create a VHDL testbench environment that is competitive with other verification languages, such as SystemVerilog or ‘e’. Our techniques work on VHDL simulators without additional licenses and are accessible to RTL engineers.

July 20-24 and August 3-7 online class Enroll with SynthWorks
September 14-18 Bracknell, UK Enroll with FirstEDA
September 21-25 and October 5-9 online class Enroll with SynthWorks
October 26-30 Portland, OR (Tigard/Tualatin) Enroll with SynthWorks
November 9-13 Copenhagen, Denmark Enroll with FirstEDA
November 16-20 and November 30 – December 4 online class Enroll with SynthWorks

Presented by:
Jim Lewis, SynthWorks VHDL Training Expert, IEEE 1076 Working Group Chair, and OSVVM Chief Architect

Functional Coverage Goals and Randomization Weights

August 3, 2013 in Functional Coverage, Randomization

In a constrained random approach, randomization weights control how often different items are selected: items with a higher randomization weight are selected more frequently.

In Intelligent Coverage, the same effect can be achieved by using coverage goals. A coverage goal specifies how many times a value must land in a bin before the bin is considered covered. Each bin within the coverage model can have a different coverage goal. By default, coverage goals also serve as randomization weights: bins with a higher goal/weight are selected more frequently. When a bin reaches its goal, it is no longer selected by Intelligent Coverage randomization.
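As a small sketch of this (Bin1 is an illustrative name; AddBins and GenBin come from CoveragePkg), two bins with different goals might be set up like this:

Bin1.AddBins( 5, GenBin(0) ) ;  -- goal/weight of 5 for value 0
Bin1.AddBins( 1, GenBin(1) ) ;  -- goal/weight of 1 for value 1
-- RandCovPoint selects bin 0 roughly five times as often as bin 1;
-- once a bin reaches its goal, it is no longer selected.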

Post continues at SynthWorks OSVVM Blog

Using OSVVM for DVB-S2 IP Core Validation

May 2, 2013 in Functional Coverage, OS-VVM in general, Randomization

Hi,

My name is Matthias Alles, and I’m CEO and co-founder of Creonic, a Germany-based IP core provider in the field of communications. In this blog post I will show how we are using the Intelligent Coverage provided by OSVVM for validation of our IP cores. I will use the DVB-S2 standard as an example. DVB-S2 is the de facto standard for satellite communications, e.g., for broadcasting HDTV signals.

The Creonic DVB-S2 IP core performs forward error correction as defined within the standard, i.e., LDPC and BCH decoding. Forward error correction is a technique to correct errors that occur during storage or transmission of digital data. Before data transmission, one adds redundant parity information to the payload information. The payload information plus the parity information is denoted as a code block. During transmission, multiple bits of the code block can flip as they are disturbed, for instance, by clouds or rain. The receiver exploits the parity information to decode the original payload information.

DVB-S2 defines about 50 configurations that have to be supported by such an IP core. These configurations define parameters like code block size and amount of parity information. In order to achieve functional coverage during validation of a DVB-S2 IP core one obviously has to go through all the configurations. For each configuration, multiple blocks must be tested.

A straightforward approach is to go through all cases in linear order, i.e., first we test, say, 1000 code blocks for configuration 1, then 1000 blocks for configuration 2, and so on, until all configurations have been covered with at least 1000 blocks.

for configuration in 1 to 52 loop
    -- Configure IP core now
    for blk in 0 to 999 loop  -- "blk", since "block" is a VHDL reserved word
        -- Write block of configuration to the IP core now
    end loop;
end loop;

The drawback of this approach is that it doesn’t reflect how a DVB-S2 system works. In reality, the system adapts at run time to the signal-to-noise ratio currently achieved while communicating with the satellite. These conditions can change quickly; for instance, consider a cloud or even rain that suddenly disturbs your transmission. A more realistic scenario is that the configurations switch randomly, and we want our IP core test to reflect this by changing the configuration from one block to the next. We use CoveragePkg with a two-dimensional coverage variable to fulfill this requirement:

constant NUM_BLOCKS : natural := 1000; -- number of different blocks per configuration
constant AT_LEAST   : natural := 10;   -- how often to test each block

shared variable v_cover_cfg_blk : CovPType; -- the coverage variable

variable v_cov_current : integer_vector(0 to 1);
variable v_cov_current_cfg : natural;
variable v_cov_current_blk : natural;

-- Generate two-dimensional coverage matrix: (configuration x block)
v_cover_cfg_blk.AddCross(AT_LEAST, GenBin(1, 52), GenBin(0, NUM_BLOCKS - 1));

-- Check whether we achieved functional coverage
while not v_cover_cfg_blk.IsCovered loop

    -- Get the point in the coverage matrix we are supposed to test next
    v_cov_current := v_cover_cfg_blk.RandCovPoint;
    v_cov_current_cfg := v_cov_current(0); -- index 0 is for the configuration
    v_cov_current_blk := v_cov_current(1); -- index 1 is for the block

    --
    -- Now configure IP core with configuration v_cov_current_cfg and
    -- write block v_cov_current_blk.
    --

    -- Tell the coverage model that this (configuration x block) combination was exercised
    v_cover_cfg_blk.ICover(v_cov_current);
end loop;

With just a few lines of code we ensure that each of the NUM_BLOCKS blocks is tested at least AT_LEAST times for every configuration.

For the real IP core validation, things are a bit more complicated. Not all of the 52 configurations are valid; there are holes that must not be exercised. We deal with this fact when defining the coverage matrix.

v_cover_cfg_blk.AddCross(AT_LEAST, GenBin(1, 28) & GenBin(33, 52), GenBin(0, NUM_BLOCKS - 1));

With this definition we exclude configurations 29 to 32 from the coverage matrix, so we will never exercise the IP core with these invalid configurations.

At Creonic we have used OSVVM for more than a year now and have made it an integral part of our verification plan. Intelligent Coverage saves us a significant amount of validation time. The fact that it is open source allowed us to contribute to the code, and we are happy to see that our contributions became part of the new 2013.04 release.

Happy coding!
Matthias

Why no constraint solver? Are you going to add one?

December 7, 2012 in Functional Coverage, OS-VVM in general, Randomization

Nope. No constraint solver. Instead, OS-VVM implements an innovative “Intelligent Testbench” feature that does a random walk across the functional coverage holes. We call this feature “Intelligent Coverage”.
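A minimal sketch of that random walk (cov, TxValue, and the stimulus step are illustrative names; CovPType, GenBin, RandCovPoint, ICover, and IsCovered come from CoveragePkg):

CovWalk : process
    variable cov     : CovPType ;
    variable TxValue : integer ;
begin
    cov.AddBins( GenBin(0, 255) ) ;            -- one bin per stimulus value
    while not cov.IsCovered loop
        (1 => TxValue) := cov.RandCovPoint ;   -- randomly pick an uncovered bin
        -- drive the DUT with TxValue here
        cov.ICover( TxValue ) ;                -- record what was exercised
    end loop ;
    wait ;
end process CovWalk ;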

Constraint solvers are yesterday’s verification technology. Intelligent testbenches are the way forward. In his 2011 DVCon address, Mentor Graphics CEO Wally Rhines noted that constrained random environments generate a significant number of redundant vectors on their way to functional coverage closure. Randomization theory (the coupon collector’s problem) tells us that a good constraint solver takes O(N log N) randomizations to generate N different test cases; for N = 100 cases, that is roughly N ln N ≈ 460 randomizations. The log N factor correlates with Mentor’s observation of a 10X to 100X speedup when using an intelligent testbench tool.

With OS-VVM, Intelligent Coverage is built into CoveragePkg. As a result, it is a code feature rather than a tool feature, and it is free.

The focus of the next set of enhancements planned for the OS-VVM packages is to further extend the Intelligent Coverage features. These steps will move OS-VVM further ahead of other verification languages.

Furthermore, since OS-VVM randomizes across the functional coverage holes, it provides a naturally balanced solution. As a supplement to Intelligent Coverage, the sequential constrained random methodology provided by RandomPkg is sufficient for the time being.
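For reference, that sequential constrained random style looks like this (RV and Delay are illustrative names; RandInt and DistInt are RandomPkg methods):

variable RV    : RandomPType ;
variable Delay : integer ;
. . .
Delay := RV.RandInt(1, 10) ;           -- uniform value in 1 to 10
Delay := RV.RandInt(1, 10, (5, 7)) ;   -- uniform in 1 to 10, excluding 5 and 7
Delay := RV.DistInt( (7, 2, 1) ) ;     -- returns 0, 1, or 2 with weights 7:2:1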

For more information on OS-VVM’s coverage modeling and randomization methods, see CoveragePkg_user_guide.pdf and RandomPkg_user_guide.pdf  (both in the download zip file).  This material plus additional advanced techniques are covered in the class, VHDL Testbenches and Verification – OSVVM Bootcamp.

Webinar about OS-VVM coming soon. Any questions?

July 12, 2012 in Event, Functional Coverage, OS-VVM in general, Randomization

Hello Fellow OS-VVMers,

A webinar introducing OS-VVM is coming soon. It will be broadcast twice on Thursday, July 19th, 2012.

If you plan to attend, please register using the links above. There will be a chance to ask questions during the webinar, but if you already have questions, please ask them now in the replies (comments) to this post. The most interesting ones will be answered live during the webinar; your name will not be mentioned unless you explicitly give us permission in your reply (something like this: “You can call me John from Chicago while answering my question.”)

Hope to see you during the webinar!

Your Friendly Admin.

RandCovPoint for Item/Point coverage

April 24, 2012 in OS-VVM in general, Randomization

A question in the forums came up, “Why is there no RandCovPoint function for simple bins [aka point or item coverage]?”

RandCovPoint works for either cross coverage or item/point coverage. The issue is that RandCovPoint only returns integer_vector. This happened due to an ambiguity issue with a variant that returned integer. As a result, when calling it for item/point coverage (simple bins), call it in one of the following ways:


TestProc : process
    variable val_int : integer ;
begin
    . . .
    (1 => val_int) := Bin1.RandCovPoint ;

TestProc : process
    variable val_intvec : integer_vector(0 downto 0) ;
begin
    . . .
    val_intvec := Bin1.RandCovPoint ;

This issue needs to be revisited to determine whether it was a language issue or a tool issue. Being pragmatic at the time, I removed the variant that returned integer so that the package would work with any simulator that supported protected types.

Jim

randc in OSVVM – another view

April 20, 2012 in Functional Coverage, OS-VVM in general, Randomization

OSVVM has a generalized form of randc, called Intelligent Coverage, which is accessed through the method RandCovPoint.

Intelligent Coverage is a superset of randc. While randc concerns itself with a single object, Intelligent Coverage works across the entire set of coverage bins. As Jerry pointed out in his post, “The myth of ‘randc’”, for large data sets (such as a 32-bit value) there can be algorithmic challenges. However, most verification problems do not have this many bins, and most are only concerned with hitting each bin once. To emulate randc in OSVVM, add functional coverage on each value of a single object, and use RandCovPoint to randomize values. To generate each value once before repeating, set the threshold value to 0.0 (meaning only consider values at the minimum percentage of coverage, or at the minimum coverage count if all the coverage weights are the same). With thresholding, a limited amount of value repetition can be enabled by increasing the threshold, which is also a valuable thing to do in verification.
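As a sketch of this recipe (RandCVar and Val are illustrative names; one bin per value of a 4-bit object, each with the default goal of 1):

RandCVar.AddBins( GenBin(0, 15) ) ;        -- one bin per 4-bit value
while not RandCVar.IsCovered loop
    (1 => Val) := RandCVar.RandCovPoint ;  -- each value generated exactly once
    RandCVar.ICover( Val ) ;               -- mark the value as covered
end loop ;
-- to start a new "cycle", clear the coverage counts and loop again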

Since Intelligent Coverage works across all of the coverage bins, it actually does what I suspect SystemVerilog wanted out of randc: generate each test case once without repeating. This is important since, without intelligent testbench/coverage capability, generating a sequence of N distinct transactions in theory takes on the order of N log N randomizations. Removing the log N factor matters if your testbenches take hours to run: even for a small number of sequences, say 25, the log N factor represents a theoretical 3X speedup (ln 25 ≈ 3.2) due to a reduction in redundant vectors executed. With VHDL and OSVVM, Intelligent Coverage is built into CoveragePkg and is language accessible; with SystemVerilog, to get intelligent testbench capability, plan on paying for a tool feature.

Intelligent Coverage goes well beyond randc. While randc prioritizes each value equally, Intelligent Coverage allows each bin to have a different coverage goal and/or weight. While randc can only think in terms of individual values, Intelligent Coverage allows us to aggregate values into ranges of values.

Need examples?  See the CoveragePkg_users_guide.pdf in the doc directory of the package download.

The myth of ‘randc’

April 17, 2012 in OS-VVM in general, Randomization

Hello fellow VHDL coders!

We have recently received a question about doing randc in VHDL. Let’s have a look at it.

Anybody who has worked with the SystemVerilog implementation of randomization has probably heard about randc, which marks variables as cyclic random. Not everybody fully understands how it works, so let’s analyze a simple example.

Let’s say that we want to randomize a 2-bit logic variable v. (The number of bits will be denoted k, so in our case k = 2.)

  1. First we have to establish the maximum possible number of values variable v can take, which of course is n = 2^k = 4. We will need an n-element internal array A to handle the randomization.
  2. We fill array A with the values 0 to 3 in random order, let’s say 1, 3, 0, 2.
  3. Four subsequent calls of randc return the numbers in the order specified in step 2 above.
  4. Then we shuffle the values inside A to get a different permutation, let’s say 3, 2, 0, 1.
  5. The next four calls of randc return the numbers in the order specified in step 4 above.
  6. We repeat the two previous steps until we cover all n! = 24 possible permutations.

As you can see, for larger k (and n) this process gets memory hungry: for 8-bit values we need a 256-element array, and for 16-bit values a 65536-element array.
Processor resources are also used in a very uneven manner: while getting the next value from the array is quick, we should suspect that shuffling takes more time.

The shuffling of array A is actually the source of some not-so-obvious problems. The most efficient algorithm for shuffling an n-element array is the Fisher-Yates shuffle, popularized by Durstenfeld and Knuth.
It looks more or less like this:

  1. Create an n-element array A (let’s say it is indexed 0 to n-1)
  2. Fill array A with the desired numbers, ordered from minimal to maximal
  3. Create a for loop with counter i and the range n-1 downto 1
  4. Within the loop, generate a random number j in the range 0 to i
  5. Swap A(i) with A(j) and go to the next loop iteration
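In VHDL, a minimal sketch of these steps using OSVVM’s RandomPkg could look as follows (ShuffleProc and the other identifiers are illustrative, and RandomPkg must be made visible according to how you compiled it):

-- Assumes RandomPkg is visible, e.g. via "use work.RandomPkg.all;"
ShuffleProc : process
    constant n  : natural := 16 ;               -- k = 4, so n = 2**4
    variable A  : integer_vector(0 to n - 1) ;  -- VHDL-2008 integer_vector
    variable RV : RandomPType ;                 -- RandomPkg's protected type
    variable j, tmp : integer ;
begin
    for i in A'range loop
        A(i) := i ;                             -- step 2: fill 0 to n-1 in order
    end loop ;
    for i in n - 1 downto 1 loop                -- step 3
        j := RV.RandInt(0, i) ;                 -- step 4: random index 0 to i
        tmp  := A(i) ;                          -- step 5: swap A(i) and A(j)
        A(i) := A(j) ;
        A(j) := tmp ;
    end loop ;
    wait ;                                      -- A now holds one random permutation
end process ShuffleProc ;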

This algorithm has optimal complexity O(n) and is very easy to implement. It has one hidden danger, though: to generate unbiased permutations (all n! of them) you have to use a really good random number generator.

  • If the RNG quality is poor and it shows hidden short-term cycles, subsequent permutations will be similar or will occur too frequently
  • If the period of the RNG is not long enough, some permutations will never show up

The second problem was overlooked by the creators of the SystemVerilog LRM, which states that the size of numbers randomized with randc can be restricted to conserve memory, but that at least 8-bit numbers should be supported.
Let’s see how it works…

We have k = 8, which translates to n = 2^k = 256. There are n! = 256! ≈ 8.5*10^506 permutations of the array of 8-bit values.

As we have seen above, Fisher-Yates needs n random numbers to generate one permutation, so we need a random number generator with a period of at least p_min = n * n! = 256 * 256! ≈ 2.2*10^509 to get truly random results of randc for 8-bit numbers.
The problem is that all popular (= quick) versions of RNGs use a 32-bit or 64-bit state variable, so their period is at best p_real = 2^64 ≈ 1.8*10^19. This gives us a p_real/p_min ratio of 8.4*10^-491, which is astronomically small!

In other words, randc-randomized 8-bit numbers meeting the SV LRM requirements will be about as random as three-card Monte played in some shady marketplace…

If, however, you restrict yourself to more realistic numbers (k = 4), randc results should be fine.

The current version of OS-VVM does not have a direct implementation of randc-style randomization. We can add it in future versions, but it would have to be restricted to realistic sizes. What do you think? Please share your opinions in the comments.

Happy coding.

Your Friendly Admin.

Are random number generators really random?

March 28, 2012 in OS-VVM in general, Randomization

Anybody who has attempted programming has probably run into this problem: how to emulate a random event such as a coin toss. Typical programming languages have special procedures, called random number generators (RNGs), that let you generate so-called pseudo-random numbers.

Why not truly random? Because the numbers generated by those RNGs eventually repeat. The period of repetition is usually very long and not noticeable to the human eye, but some advanced applications may be sensitive to short-period RNGs.

Some freely available RNGs, e.g. the one built into the C programming language, are considered extremely weak due to additional repeatable patterns in the stream of generated random numbers that are shorter than the nominal period of the generator. If you ever played a computer poker game and were surprised that there were always at least 3 cards of the same suit in each ‘randomly generated’ hand, you experienced a weak RNG in action.

If you have ever simulated an LFSR (Linear Feedback Shift Register), generally considered a good hardware implementation of an RNG, the numbers at the output port probably looked quite random. If you looked at the waveform of the LFSR output expanded into individual bits, however, you probably noticed repeating patterns.

Fortunately for OS-VVM users, the RNG used in our RandomPkg package is of significantly better quality. It is not perfect (e.g., it should not be used for cryptographic applications), but it is fast and good enough for testbench applications.