Jim Lewis
Forum Replies Created
October 19, 2025 at 20:02 #2792
Jim Lewis, Member

Hi Hassan,
What version of OSVVM are you using? The current version is 2025.06a.
What simulator and version are you using?

> 1. When only the testbench code was compiled, why did it say “failed”?
Perhaps you can share the log files with me so I can look at it. Ultimately, I need a test case that reproduces this.
Also try:

puts $::errorInfo

Did you do the exact sequence as above, or were there other builds before it?
> 2. How can one open the HTML report that shows details of the build and also the single test that was run?
You need to run the test case with build. So put the RunTest in a *.pro file.
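For example, a minimal *.pro sketch (the file, library, and path names here are hypothetical – adjust them to your project):

# MyTest.pro -- hypothetical name; list your sources, then the test to run
library my_test_lib
analyze  ./src/MyDut.vhd
RunTest  ./testbench/Test1.vhd

Running it with build MyTest.pro then produces the HTML build report with that test listed in it.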
> 3. Without changing the VHDL code, how can one cause the log and debug messages to be
> printed only for failing tests but not when the test passes? I am referring to the
> terminal in which the test is running and not the HTML report or any other file generated
> with the transcript.

Ultimately you need to know that a test case is going to fail – this may mean running it twice: once observing that it failed, and a second time rerunning it with a method to change the settings.
You can change the settings by:
1) Add a generic to TestCtrl that turns on info and debug messages (see the sketch below).
2) Read the log enables from a file using ReadLogEnables. This may not allow test-wide settings.

I am looking for what is next and am open to suggestions. One thing I need to do is to make sure ReadLogEnables allows test-wide settings – either independent of or matching the test case name. Another might be a mechanism to turn on mirroring.
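A rough sketch of option 1, assuming a hypothetical generic named DEBUG_LOGS (untested; all names here are illustrative):

entity TestCtrl is
  generic (
    DEBUG_LOGS : boolean := FALSE  -- hypothetical: set TRUE when rerunning a failed test
  ) ;
  . . .
end entity TestCtrl ;

-- In the control process, before any wait statements:
ControlProc : process
begin
  SetTestName("MyTest") ;        -- hypothetical test name
  if DEBUG_LOGS then
    SetLogEnable(INFO,  TRUE) ;  -- print INFO messages on the debug rerun
    SetLogEnable(DEBUG, TRUE) ;  -- likewise for DEBUG
  end if ;
  . . .

If your script version supports passing generics to simulate/RunTest, the generic can then be set only on the rerun.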
October 16, 2025 at 20:35 #2790
Jim Lewis, Member

Hi Nigel,
In the OSVVM 2024.09 and 2025.02 releases, the 2019 compile switch was turned on if the Questa release was greater than 2024.2. With the OSVVM 2025.04a release and beyond, the 2019 switch is turned back off again. So I recommend getting the newest release of OSVVM (2025.06a); this resolves the problem.

In Questa, your issue is happening because OSVVM was compiled with 2019 and your project was compiled with a different version. So in Questa, when you turn on VHDL-2019, you must turn it on for your whole project. You should be able to compile the other parts of your project with 2019, as the VHDL-2019 standard should be backward compatible with older versions.
If you do not want to change your OSVVM revision, simply add the following to either your scripts or to your OsvvmSettingsLocal.tcl file:
SetVHDLVersion 2008

So how did this happen? In the 2024.2 release of Questa, Questa claimed 2019 support. So I tested the VHDL-2019 based RandomPkg2019.vhd package. It passed all of our regressions and worked fine on Questa 2024.2 and 2024.3. In the 2025 releases, Questa started flagging the VHDL-2019 code in RandomPkg2019.vhd as unsupported.
Best Regards,
Jim

October 14, 2025 at 18:00 #2786
Jim Lewis, Member

Hi Mikael,
This is a good point. In VHDL-2019, we put in features that will allow OSVVM to collect error information from VHDL and PSL assertions. Sounds like a great addition for the next release.

Do you know which version of Questa started supporting the following subprograms from std.env?
For VHDL Asserts:
GetVhdlAssertCount, IsVhdlAssertFailed, ClearVhdlAssert,
SetVhdlAssertEnable, GetVhdlAssertEnable
For PSL:
PslAssertFailed, PslIsCovered, ClearPslState

With these, OSVVM should be able to report the results directly in our reports.
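As a rough sketch of how such a query might eventually look (the severity-level overloads are assumed from the VHDL-2019 std.env; untested):

use std.env.all ;
. . .
-- At end of test, fold simulator-counted VHDL asserts into the result
if IsVhdlAssertFailed(ERROR) then
  Alert("VHDL assert errors: " & to_string(GetVhdlAssertCount(ERROR))) ;
end if ;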
From the VHDL-2019 presentation, I am led to believe these are at least supported in the current version.
Best Regards,
Jim

September 27, 2025 at 15:50 #2776
Jim Lewis, Member

Hi Mikkie,
Use a negative value in the ExternalErrors parameter to EndOfTestReports. For example, if you have 3 expected errors:

EndOfTestReports(ExternalErrors => (FAILURE => 0, ERROR => -3, WARNING => 0)) ;

Each value is summed with its corresponding field, then the absolute value is taken, and finally the separate values are summed to determine the number of test errors. Hence, if you expect 3 errors and get 2 errors and 1 failure, the test will fail with 2 test case errors.
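To make that arithmetic concrete for the example above (3 expected errors; 2 actual errors and 1 actual failure):

FAILURE: |0 + 1| = 1
ERROR:   |-3 + 2| = 1
WARNING: |0 + 0| = 0

Total test case errors: 1 + 1 + 0 = 2, so the test fails.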
OSVVM also has a request for a mechanism that would be used for known failures that we will eventually fix. I am looking at how to address this in the reporting.
Cheers,
Jim

September 2, 2025 at 15:45 #2768
Jim Lewis, Member

Hi Marvin,
What Avalon MM capability are you looking for? Basic access is easy – registers. The curious and clever pipelining modes looked challenging when I looked at the interface. If you could enumerate what you need, it may break it loose.

Currently I am tied up working on Questa scripts due to their decision to make numerous non-backward-compatible changes and to have different variants in different tool versions (Questa Classic vs QuestaOne). I think I will wrap up on that shortly.
If you would like Avalon MM prioritized, you could fund its development. Part of my concern is investing in an interface that Altera is gradually replacing with AXI – I was actually expecting them to do that more quickly.
Best Regards,
Jim

August 11, 2025 at 17:29 #2762
Jim Lewis, Member

Hi Francois,
Look at the package OsvvmLibraries/Common/src/StreamTransactionPkg.vhd and see StreamRecType.
Note that each type in there is either a special type from OSVVM’s ResolutionPkg (see OsvvmLibraries/osvvm)
or a type with a custom resolution function, as was done for the enumerated type StreamOperationType
that is defined in the package. Use what was done in this package as a template.

Note that AddressBusTransactionPkg.vhd is very similar.
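As a rough illustration of that custom-resolution pattern (hypothetical type names; the real template is StreamOperationType and its resolution function in StreamTransactionPkg.vhd):

type MyOperationType is (NO_OP, WRITE_OP, READ_OP) ;  -- hypothetical enumerated type
type MyOperationVectorType is array (natural range <>) of MyOperationType ;

-- Resolve multiple drivers by taking the maximum value
function resolved_max (A : MyOperationVectorType) return MyOperationType is
begin
  return maximum(A) ;  -- VHDL-2008 predefined maximum over a 1-D array
end function resolved_max ;

subtype MyOperationRType is resolved_max MyOperationType ;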
Best Regards,
Jim

August 7, 2025 at 16:17 #2760
Jim Lewis, Member

Hi Francois,
The VCs in the OSVVM library use either osvvm_common.StreamTransactionPkg.StreamRecType (for send and get type transactions – used by UART, AxiStream, xMii) or osvvm_common.AddressBusTransactionPkg.AddressBusRecType (for read and write type transactions – used by Axi4Manager, Axi4Memory, DpRam, WishboneManager, WishboneSubordinate).

You can access either one using:
library osvvm_common ;
context osvvm_common.OsvvmCommonContext ;

A testbench for a counter may only need directive transactions, for which either record will work.
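For instance, a rough sketch of a directive-only StreamRecType hookup (the names and widths here are illustrative and untested):

-- The unconstrained record fields must be constrained at the signal declaration
signal TransRec : StreamRecType(
  DataToModel   (7 downto 0),
  DataFromModel (7 downto 0),
  ParamToModel  (0 downto 0),
  ParamFromModel(0 downto 0)
) ;
. . .
WaitForClock(TransRec, 4) ;  -- directive transaction: wait 4 clocks in the VC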
You can create your own record. Use those packages as an example if you like.

Best Regards,
Jim

July 16, 2025 at 15:25 #2747
Jim Lewis, Member

If the CoverReport process is removed, then the “Stim” process can end with std.env.stop – which is typically how OSVVM ends its test cases.
-- Stimulus Generator
Stim : process
  variable RandA : RandomPType ;
  variable RandB : RandomPType ;
  variable allDone : boolean := false ;
  variable nCycles : natural := 0 ;
begin
  SetTestName("tb_osvvm_comparator_VHDL") ;
  SetLogEnable(INFO, TRUE) ;
  SetLogEnable(PASSED, TRUE) ;
  RandA.InitSeed(RandA'instance_name) ;
  RandB.InitSeed(RandB'instance_name) ;
  while not allDone and (NOW < 1 ms) loop
    A <= RandA.Randslv(0, 3, 2) ;
    B <= RandB.Randslv(0, 3, 2) ;
    wait for OP_DELAY ;
    allDone := IsCovered(cp_A_B) ;
    nCycles := nCycles + 1 ;
  end loop ;
  wait for 1 ns ;
  log("Number of simulation cycles = " & to_string(nCycles)) ;
  AffirmIfEqual(CountCovHoles(cp_A_B), 0, "Coverage holes") ;
  EndOfTestReports(
    ReportAll      => TRUE,
    ExternalErrors => (0, 0, 0),
    Stop           => FALSE,
    TimeOut        => FALSE
  ) ;
  std.env.stop ;
  wait ;
end process ;

When the stimulus generation gets more complex than this, I will move the OSVVM runner items to a separate process called “ControlProc”, but that is not necessary in a simple test case like this one.
July 16, 2025 at 15:19 #2746
Jim Lewis, Member

One final thing to try. If you like the Alert/AffirmIf printing, you might also like the log printing better than the VHDL report statement. In the “Stim” process you could change these to logs:
log("Number of simulation cycles = " & to_string(nCycles)); log("Coverage holes: " & to_string(CountCovHoles(cp_A_B)));Logs have levels. These are level ALWAYS, which like report always print. You could give them a level and make them print only when enabled. But here ALWAYS is probably appropriate.
You could also make it a requirement that the coverage holes are zero by replacing the log for CountCovHoles with:
AffirmIfEqual(CountCovHoles(cp_A_B), 0, "Coverage holes") ;

AffirmIfEqual is a shorthand for what you were doing with AffirmIf. Your AffirmIf could be updated to AffirmIfEqual as follows. This will add the word “Actual” before the first value, but otherwise the output is the same as yours.
AffirmIfEqual(A_less_B, expected_less, "A_less_B") ;

July 16, 2025 at 15:07 #2745
Jim Lewis, Member

This step is recommended, but not required. Do it after you get the above running.
To get the HTML based Functional Coverage reports, you need to use the singleton rather than the older, deprecated shared variable approach.
This requires updating your architecture declarations and the “InitCoverage”, “Sample”, and “CoverReport” processes as follows. I would also recommend that you put your “InitCoverage” code in the same process as “Sample”.
architecture Behavioral of tb_osvvm_comparator_VHDL is
  ------------------------------------------------------
  -- Coverage IDs
  signal cp_A   : CoverageIDType ;
  signal cp_B   : CoverageIDType ;
  signal cp_A_B : CoverageIDType ;
begin
  ------------------------------------------------------
  -- Coverage Bin Setup
  InitCoverage : process
  begin
    cp_A   <= NewID("cp_A") ;
    cp_B   <= NewID("cp_B") ;
    cp_A_B <= NewID("cp_A_B") ;
    wait for 0 ns ;  -- since cp_A, ... are signals
    AddBins(cp_A, GenBin(0, 3)) ;
    AddBins(cp_B, GenBin(0, 3)) ;
    AddCross(cp_A_B, GenBin(0, 3), GenBin(0, 3)) ;
    wait ;
  end process ;

  ------------------------------------------------------
  -- Sampling Coverage
  Sample : process
  begin
    loop
      wait on A, B ;
      wait for 1 ns ;  -- not needed: A and B are updated; the output is not necessarily stable, but we are not looking at it, so OK
      ICover(cp_A, to_integer(unsigned(A))) ;
      ICover(cp_B, to_integer(unsigned(B))) ;
      ICover(cp_A_B, (to_integer(unsigned(A)), to_integer(unsigned(B)))) ;
    end loop ;
  end process ;

  ------------------------------------------------------
  -- Report Coverage
  CoverReport : process
  begin
    wait until STOP ;
    report "A Coverage details" ;
    WriteBin(cp_A) ;
    report "B Coverage details" ;
    WriteBin(cp_B) ;
    report "AxB Coverage details" ;
    WriteBin(cp_A_B) ;
    report "Coverage holes: " & to_string(CountCovHoles(cp_A_B)) ;
    ReportAlerts ;
  end process ;

Note, I have not tested this code, so there may be some typos. Found one and fixed it WRT CountCovHoles.
Note that the “CoverReport” process is not really needed, since you now have HTML reports that are much better than the text reports produced by WriteBin. The only thing that may be interesting is the report at the end of the “CoverReport” process. In the first step, I moved this to the end of the “Stim” process – this one also needs to be updated to:
report "Coverage holes: " & to_string(CountCovHoles(cp_A_B));July 16, 2025 at 15:06 #2744
Jim Lewis, Member

This step is recommended, but not required. Do it after you get the above running.
While SetTestName and SetLogEnable can be called concurrently, it is more normal to call them in the process that is controlling the overall test. Currently this is the “Stim” process, but we will return to this:
Stim : process
  variable RandA : RandomPType ;
  variable RandB : RandomPType ;
  variable allDone : boolean := false ;
  variable nCycles : natural := 0 ;
begin
  SetTestName("tb_osvvm_comparator_VHDL") ;
  SetLogEnable(INFO, TRUE) ;
  SetLogEnable(PASSED, TRUE) ;
  RandA.InitSeed(RandA'instance_name) ;
  RandB.InitSeed(RandB'instance_name) ;

July 16, 2025 at 14:43 #2742
Jim Lewis, Member

Thanks for posting your whole testbench, as it makes the issue clear.
You are calling EndOfTestReports concurrently. So it runs at time 0 ns. You do not want it to run until your test case has finished. Same goes for WriteAlertYaml and WriteAlertSummaryYaml. However, you do not need to call these or ReportAlerts as they are all called by EndOfTestReports.
So a quick fix is to update your stimulus process as follows:
-- Stimulus Generator
Stim : process
  variable RandA : RandomPType ;
  variable RandB : RandomPType ;
  variable allDone : boolean := false ;
  variable nCycles : natural := 0 ;
begin
  RandA.InitSeed(RandA'instance_name) ;
  RandB.InitSeed(RandB'instance_name) ;
  while not allDone and (NOW < 1 ms) loop
    A <= RandA.Randslv(0, 3, 2) ;
    B <= RandB.Randslv(0, 3, 2) ;
    wait for OP_DELAY ;
    allDone := cp_A_B.isCovered ;
    nCycles := nCycles + 1 ;
  end loop ;
  wait for 1 ns ;
  report "Number of simulation cycles = " & to_string(nCycles) ;
  report "Coverage holes: " & to_string(cp_A_B.CountCovHoles) ;
  EndOfTestReports(
    ReportAll      => TRUE,
    ExternalErrors => (0, 0, 0),
    Stop           => FALSE,
    TimeOut        => FALSE
  ) ;
  STOP <= true ;
  wait ;
end process ;

Try that; it should get you running.
June 18, 2025 at 19:51 #2737
Jim Lewis, Member

Hi Charlie,
What I suspect you are doing is that in the checker model you have created your requirements as follows:

Req1ID <= NewReqID("Req1", 1, ...) ;
Req2ID <= NewReqID("Req2", 1, ...) ;

When the build finishes, the requirements from the tests that use this checker module are merged into the final report. However, that report also has the items that it merged together.
With the merged results, how should the requirement goal be tabulated? The default behavior (because of history) is to take the highest requirement level and use this as the total requirement goal. This requires that we merge the requirement specification into the merged requirements report. The other behavior is to sum the coverage goals from each test case to formulate the total goal for that requirement. For example, if two test cases each record a goal of 1 for Req1, the default keeps the total goal at 1, whereas summing makes it 2.
These behaviors are controlled by a Tcl setting named USE_SUM_OF_GOALS. Its default is set in Scripts/OsvvmSettingsDefault.tcl. You override it by creating an OsvvmSettingsLocal.tcl. See Documentation/OsvvmSettings_user_guide.pdf for details on which directory OsvvmSettingsLocal.tcl goes in.
You can also change the setting for a particular build by doing the following in your *.pro script:
set ::osvvm::USE_SUM_OF_GOALS "true"

If you want the requirements to be handled separately, you need to give them different names. If you are running with OSVVM scripts, you should already be setting the test name in the Control process of the test sequencer:
architecture T1 of TestCtrl is
  . . .
begin
  ControlProc : process
  begin
    SetTestName("T1") ;  -- make sure this is done before any wait statements
    . . .

Then in your checker model you can do:
wait for 0 ns ;  -- make sure TestName is set before reading it
Req1ID <= NewReqID(GetTestName & ".Req1", 1, ...) ;
Req2ID <= NewReqID(GetTestName & ".Req2", 1, ...) ;

This will now give you unique requirements for each test case.
Note that by default, if a requirement in a given test case does not reach its goal, the test fails with an error.
Best Regards,
Jim

June 3, 2025 at 00:26 #2722
Jim Lewis, Member

Hi Isaac,
I think Patrick’s solution will be the de facto solution for integration with VUnit, particularly since he is working on generating reports too – really cool.

Before I talked to Patrick today, I worked on the script I mentioned above. For each OSVVM library, it will create a list of the files associated with that library. Note the file list may be different for different simulators, since there are files with workarounds for particular simulator issues.
Run this by starting up your simulator with the OSVVM scripts, per the OSVVM Script_users_guide.pdf in the OsvvmLibraries/Documentation directory. For Questa, ModelSim or Riviera-PRO, all you do is:
source <path-to-OsvvmLibraries>/OsvvmLibraries/Scripts/StartUp.tcl

Then do:
source $OsvvmLibraries/Scripts/VendorScripts_CompileList.tcl

And then run the OSVVM scripts (takes only seconds):
build $OsvvmLibraries

You will find the scripts in files named:
<library>_<simulator>.files

You will find VendorScripts_CompileList.tcl on the dev branch of OsvvmLibraries.
Best Regards,
Jim