OSVVM 2021.10: Build Summary Reports
When we run a set of tests, we need to be able to assess whether all test cases passed or quickly identify which test cases failed. This is the purpose of the OSVVM 2021.10 Build Summary Reports.
When tests fail, we need to have detailed information for each test case in a separate report that helps reveal the source of the issue. This is the purpose of the OSVVM 2021.10 Detailed Test Case Reports.
Just as in FPGA and ASIC verification, in OSVVM we write numerous test cases to test a particular item, such as a verification component. We group all of the test cases for an item into a test suite. The OSVVM regression tests have public test suites for each family of verification components: AXI4 Full, Axi4Lite, AxiStream, and UART. We also have private test suites for each package in the OSVVM utility library.
In OSVVM, when we run one or more tests or test suites, we call that a build. A Build Summary Report contains completion status for the build, for each test suite in the build, and for each test case in a given test suite. I will go into more detail on these in this article.
The Detailed Test Case Reports have tables that summarize all of the recorded information for each AlertLogID and all of the coverage models defined in the test case. There is a separate Detailed Test Case Report for each test case. I will go into the details of these in the next article on the 2021.10 release.
Early Work
In the 2020.05 release, we added a CSV test tracking capability in the form of WriteTestSummary, ReadTestSummaries, and ReportTestSummaries. This provided us with text-based reporting for each test in a single test suite. However, CSV is not a format suitable for supporting multiple test suites, and the result is not significantly better than filtering test logs.
As a result, for new OSVVM reporting features, we are using YAML as an intermediate format, which allows us to use hierarchy in our reports.
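For illustration, the hierarchy might be captured along these lines; this is a simplified sketch of the idea, not the exact schema OSVVM writes:
# Simplified, illustrative sketch of a hierarchical build summary
Build: RunAllTests
TestSuites:
  - Name: UART
    TestCases:
      - Name: TbUart_SendGet1
        Status: PASSED
      - Name: TbUart_SendGet2
        Status: FAILED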
Build Summary Report
The Build Summary Report allows us to confirm that the build passed or to quickly identify which test cases did not PASS. This report is presented hierarchically in HTML tables with the following information:
- Completion status of the build
- Completion status for each test suite in the build
- Completion status for each test case in a given test suite
The HTML version of the Build Summary Report uses links and advanced HTML techniques to make navigation easy. In the test suite completion status table, there are links to the corresponding test case status table for that test suite. In the summary of each test case, there are links to each Detailed Test Case Report. Any place you see a triangle, a click will rotate the triangle to either hide information or expose hidden information.
[Figure: the top half of the Build Summary Report produced when running the OSVVM verification component regression tests. It shows all elements of the Build Summary Report.]
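The rotating triangles behave like standard HTML disclosure elements. OSVVM's actual markup may differ, but a minimal sketch of the idea is:
<!-- Collapsible test suite section with a link to a test case report -->
<details open>
  <summary>Test Suite: UART</summary>
  <table>
    <tr><td><a href="TbUart_SendGet1.html">TbUart_SendGet1</a></td><td>PASSED</td></tr>
  </table>
</details>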
Exploring OSVVM’s Reports
Seeing is believing. To see the full version of the Build Summary Report, run the OSVVM verification component regression suite. For instructions on how to do this, see the section titled “Getting Started Running OSVVM Scripts and OSVVM Regressions” in the article “OSVVM Scripting: One Script to Run them All”.
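As a quick orientation, a typical session looks something like the following; the exact paths depend on where OsvvmLibraries is checked out, and the referenced article is the authoritative guide:
# Assumes OsvvmLibraries is checked out next to your simulation directory
source ../OsvvmLibraries/Scripts/StartUp.tcl
build  ../OsvvmLibraries/OsvvmLibraries.pro
build  ../OsvvmLibraries/RunAllTests.pro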
The remainder of the article will assume that you have read the article referenced above.
VHDL Aspects of Generating OSVVM Reports
To generate reports, you need to have the following OSVVM elements in your VHDL testbench. More details are in the OSVVM Test Writers User Guide in the documentation repository.
-- Reference the OSVVM Utility Library
library OSVVM ;
context OSVVM.OsvvmContext ;
. . .
TestProc : process
begin
  -- Give the test a name that matches the test case name
  SetAlertLogName("TbUart_SendGet1") ;
  . . .
  -- Do some checks using Affirmations or Alerts
  AffirmIfEqual(Data, X"4A", "Check Data") ;
  . . .
  -- Generate reports with EndOfTestReports (replaces ReportAlerts)
  EndOfTestReports ;
  std.env.stop(GetAlertCount) ;
end process TestProc ;
Using a Simple Script
If we have a simple test where the design is named Dut, the testbench is named TbDut, and each is in a file of the form “name”.vhd, then we can run the testbench with the following script.
# File name: TbDut.pro
analyze Dut.vhd
analyze TbDut.vhd
simulate TbDut
If we run this test using “build TbDut.pro”, Dut and TbDut will be compiled into the library named default. The simulation TbDut will run, and a Build Summary Report will be created with only one test case in it. The test suite will be named Default. When not explicitly named, the test case name matches the name used in simulate, so here it is TbDut. Be sure to name the test TbDut using SetAlertLogName; otherwise, a NAME_MISMATCH failure will be generated in the Build Summary Report.
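Following the template shown earlier, a minimal sketch of the matching VHDL is (the elided parts are the test's stimulus and checks):
TestProc : process
begin
  -- Name the test to match the simulate name; avoids NAME_MISMATCH
  SetAlertLogName("TbDut") ;
  . . .
  EndOfTestReports ;
  std.env.stop(GetAlertCount) ;
end process TestProc ;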
Scripting in OSVVM without Configurations
In OSVVM, we use the following testbench framework: the test harness is named TbUart; the test sequencer entity is in TestCtrl_e.vhd; and the tests are in architectures of TestCtrl in the files TestCtrl_SendGet1.vhd, TestCtrl_SendGet2.vhd, and TestCtrl_Scoreboard1.vhd.
In this simulation, the tests are run by calling “simulate TbUart”. We name the Test Suite UART using the TestSuite API command. We name the test cases TbUart_”TestName” using the TestCase API command.
# UART/Test.pro
TestSuite UART
library osvvm_TbUart
analyze TestCtrl_e.vhd
analyze TbUart.vhd
#
TestCase TbUart_SendGet1
analyze TestCtrl_SendGet1.vhd
simulate TbUart
#
TestCase TbUart_SendGet2
analyze TestCtrl_SendGet2.vhd
simulate TbUart
#
TestCase TbUart_Scoreboard1
analyze TestCtrl_Scoreboard1.vhd
simulate TbUart
The above call to TestCase puts the test case name into the build summary YAML file. If the simulation fails to run for any reason, there will be no test status information in the YAML file. As a result, when the Build Summary Report is created, this is detected as a test failure.
Another possibility in the above scenario is that a particular test case fails to analyze, but the script continues and simulate runs the previously compiled test case. In this situation, the test case name (set by TestCase in the script) will not match the VHDL test name (set by SetAlertLogName in the VHDL code), and a NAME_MISMATCH failure will be generated in the Build Summary Report.
Scripting in OSVVM with Configurations
The OSVVM verification component regression suite uses configurations to specify an exact architecture to run in a given test. We give the configuration, the test case, and the file the same name. We also put the configuration declaration at the end of the file containing the test case (try it, you will understand why). When we run a test that uses a configuration, simulate specifies the configuration’s design unit name. Hence, we use the following sequence to run one test case.
TestCase TbUart_SendGet1
analyze TbUart_SendGet1.vhd
simulate TbUart_SendGet1
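As an illustration, the configuration declaration at the end of TbUart_SendGet1.vhd might look like the following; the harness architecture name and instance label here are assumptions for illustration:
-- Selects the SendGet1 architecture of TestCtrl for this test case
configuration TbUart_SendGet1 of TbUart is
  for TestHarness
    for TestCtrl_1 : TestCtrl
      use entity work.TestCtrl(SendGet1) ;
    end for ;
  end for ;
end TbUart_SendGet1 ;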
When running a large test suite, this gets tedious, so we added a shortcut named RunTest that encapsulates the above three steps into a single step. If the name in RunTest has a path, the path is only used with analyze. The following script runs the three test cases when configurations are used.
TestSuite UART
library osvvm_TbUart
analyze TestCtrl_e.vhd
analyze TbUart.vhd
#
RunTest TbUart_SendGet1.vhd
RunTest TbUart_SendGet2.vhd
RunTest TbUart_Scoreboard1.vhd
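As noted above, when the name given to RunTest includes a path, the path is used only by analyze; for example (the testcases directory is hypothetical):
RunTest testcases/TbUart_SendGet1.vhd
# equivalent to:
#   TestCase TbUart_SendGet1
#   analyze  testcases/TbUart_SendGet1.vhd
#   simulate TbUart_SendGet1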
Note how configurations and RunTest simplify the scripting. Configurations also guarantee that only the specified architecture can run. As a result, if a test case fails to analyze, then the corresponding configuration will fail to analyze, the simulation will fail to run, and the Build Summary Report will report this as a test case failure.
Summary
We are excited to add Build Summary and Detailed Test Case Reports to OSVVM. Be sure to check out my next blog post where I will talk about our Detailed Test Case Reports.
1 Comment
Torsten
The new reports let me finally ditch an internal branch with XML-based reports that I created some years ago. I’m very happy that OSVVM has finally added such a feature. I will test it soon with some of our testbenches!