Unique Requirement Pass Goals For Each Test Case
June 18, 2025 at 11:58 #2736
Charlie (Member)

I have two separate test cases in my testbench – randomised_tests.vhd and directed_tests.vhd – and a separate checker module incorporating scoreboards and requirements tracking. Each requirement and its pass goal is declared inside the checker module, so the pass goals are common to both test cases.
I’m wondering if it’s possible to set unique pass goals for each test case, given my testbench architecture.
Thanks
June 18, 2025 at 19:51 #2737
Jim Lewis (Member)

Hi Charlie,
What I suspect you are doing is that, in the checker model, you have created your requirements as follows:

    Req1ID <= NewReqID("Req1", 1, ...) ;
    Req2ID <= NewReqID("Req2", 1, ...) ;
When the build finishes, the requirements from every test that used this checker module are merged into the final report. That report also lists the individual items that were merged together.
With merged results, how should the requirement goal be tabulated? The default behavior (for historical reasons) is to take the highest goal among the merged requirements and use it as the total goal for that requirement; this requires merging the requirement specification into the merged requirements report. The alternative behavior is to sum the goals from each test case to form the total goal for that requirement.
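For example (the numbers here are illustrative), if randomised_tests and directed_tests each declare Req1 with a pass goal of 1, the default behavior gives the merged Req1 a total goal of max(1, 1) = 1, whereas summing gives it a total goal of 1 + 1 = 2, so the requirement must pass once in each test case.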
These behaviors are controlled by a Tcl setting named USE_SUM_OF_GOALS. Its default is set in Scripts/OsvvmSettingsDefault.tcl. You override it by creating an OsvvmSettingsLocal.tcl; see Documentation/OsvvmSettings_user_guide.pdf for details on which directory OsvvmSettingsLocal.tcl goes in.
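Assuming OsvvmSettingsLocal.tcl uses the same Tcl syntax as the defaults file, the override is the single line set ::osvvm::USE_SUM_OF_GOALS "true" (the same command shown below for a per-build override).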
You can also change the setting for a particular build by doing the following in your *.pro script:
    set ::osvvm::USE_SUM_OF_GOALS "true"
If you want the requirements to be handled separately, you need to give them different names. If you are running with OSVVM scripts, you should already be setting the test name in the Control process of the test sequencer:
    architecture T1 of TestCtrl is
      . . .
    begin
      ControlProc : process
      begin
        SetTestName("T1") ;  -- make sure this is done before any wait statements
        . . .
Then in your checker model you can do:
    wait for 0 ns ;  -- make sure TestName is set before reading it
    Req1ID <= NewReqID(GetTestName & ".Req1", 1, ...) ;
    Req2ID <= NewReqID(GetTestName & ".Req2", 1, ...) ;
This will give you unique requirements for each test case.
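To see how the two fragments fit together, here is a minimal sketch of the checker side. It assumes the OSVVM context and a two-argument form of NewReqID; the entity name Checker, the process name InitProc, the signal declarations, and the AffirmIf comment are illustrative, not from the original post:

    library osvvm ;
      context osvvm.OsvvmContext ;

    architecture CheckerSketch of Checker is
      signal Req1ID, Req2ID : AlertLogIDType ;
    begin
      InitProc : process
      begin
        wait for 0 ns ;  -- let the sequencer's SetTestName run first
        Req1ID <= NewReqID(GetTestName & ".Req1", 1) ;  -- e.g. "T1.Req1" with a pass goal of 1
        Req2ID <= NewReqID(GetTestName & ".Req2", 1) ;
        wait ;
      end process InitProc ;

      -- Elsewhere in the checker, passing affirmations credit the goal, e.g.:
      --   AffirmIf(Req1ID, Actual = Expected, "Req1 data check") ;
    end architecture CheckerSketch ;

Because GetTestName returns whatever name each test sequencer set, the same checker elaborated under T1 and T2 produces "T1.Req1" and "T2.Req1" as distinct requirements, each tracked against its own goal.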
Note that by default, if a requirement in a given test case does not reach its goal, the test fails with an error.
Best Regards,
Jim