Is there a built-in way in OSVVM to detect build failure without parsing logs?
Tagged: Questions
This topic has 7 replies, 3 voices, and was last updated by Jim Lewis.
January 17, 2026 at 11:22 #2860
Amine, Adel (Member)

Hi everyone,
I’m trying to determine whether a build/simulation has failed in OSVVM, without having to manually inspect or parse the log files.
At the moment, my approach is to parse the generated logs to infer whether the build failed or not, but this feels a bit brittle. I was wondering if OSVVM already provides an integrated mechanism, API, or status flag that allows a script to directly know whether the build or simulation failed.
In other words:
Does OSVVM expose a built-in function or result that indicates build/simulation success or failure at the script level?
Or is log parsing currently the expected/recommended approach?
Any guidance or best practices would be appreciated.
Thanks!

January 17, 2026 at 19:11 #2861
Jim Lewis (Member)

By "failed", do you mean a test case failed, or that the build process itself errored out for some reason? OSVVM has controls for both of these.
From the OSVVM Settings User Guide, you will find that they are controlled by variables:
```tcl
variable FailOnBuildErrors    "true"  ;# simulator command had errors
variable FailOnReportErrors   "false" ;# yaml reports caused html failure
variable FailOnTestCaseErrors "false" ;# one or more test case(s) had errors
```

Change these settings in your OsvvmSettingsLocal.tcl. See the OSVVM Settings User Guide for details.
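As an illustration (the exact override values are up to you, not prescribed by OSVVM), an OsvvmSettingsLocal.tcl that makes test case failures fail the build might look like:

```tcl
# Hypothetical OsvvmSettingsLocal.tcl - override only what differs from defaults
variable FailOnBuildErrors    "true"   ;# keep: simulator command errors fail the build
variable FailOnTestCaseErrors "true"   ;# changed: test case errors also fail the build
variable FailOnReportErrors   "false"  ;# keep default
```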
If you look at the proc build in OsvvmScriptsCore.tcl, you will find the following code at the end. This code shows how the variable settings work.
```tcl
#
#  Wrap up with error handling via call backs
#
# Run Callbacks on Error after trying to produce all reports
if {$BuildErrorCode != 0} {
  CallbackOnError_Build $Path_Or_File $BuildErrMsg $LocalBuildErrorInfo
}
if {$AnalyzeErrorCount > 0 || $SimulateErrorCount > 0} {
  CallbackOnError_Build $Path_Or_File "Failed with Analyze Errors: $AnalyzeErrorCount and/or Simulate Errors: $SimulateErrorCount" $LocalBuildErrorInfo
}
if {($ReportErrorCode != 0) || ($ScriptErrorCount != 0)} {
  CallbackOnError_AfterBuildReports $LocalReportErrorInfo
}

# Fail on Test Case Errors
if {($::osvvm::BuildStatus ne "PASSED") && ($::osvvm::FailOnTestCaseErrors)} {
  error "Test finished with Test Case Errors"
}

# Fail on Report / Script Errors?
if {($ReportYamlErrorCode != 0) || ($ReportErrorCode != 0) || ($Log2ErrorCode != 0) || ($ScriptErrorCount != 0)} {
  # End Simulation with errors
  if {$::osvvm::FailOnReportErrors} {
    error "Test finished with either Report or Script (wave.do) errors."
  }
}
```

While these controls are enough to get the information to Tcl, they are sometimes not enough to get the errors signaled outside of your simulator and on to your Continuous Integration tools. For example, if you are running Questa (or Riviera-PRO) you need to run the simulation by doing:
```shell
vsim -c -do "exit -code [catch {source MyScript.tcl}]"
```

I usually break out the steps as follows (with OsvvmLibraries set as an environment variable):
```shell
vsim -c -do "exit -code [catch {source $OsvvmLibraries/Scripts/StartUp.tcl ; LinkLibraryDirectory ; build RunTest.pro}]"
```

If you are running another simulator and have to do something like this, please share what you did with the community.
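The key point is that the wrapper invoking the simulator must propagate the simulator's exit code unchanged so the CI job fails when the build fails. A minimal sketch of that idea in Python (the `run_sim` helper is illustrative, and a child Python process stands in for the real `vsim` call):

```python
import subprocess
import sys

def run_sim(cmd):
    """Run a command and return its exit code, so a CI job can propagate it."""
    return subprocess.run(cmd).returncode

# Stand-in for a real simulator invocation such as:
#   run_sim(["vsim", "-c", "-do", "exit -code [catch {source MyScript.tcl}]"])
# Here a child Python process fakes a failing build (exit code 2).
code = run_sim([sys.executable, "-c", "import sys; sys.exit(2)"])
print(code)
```

A CI script would then end with `sys.exit(code)` (or the shell equivalent `exit $?`) so the runner marks the step as failed.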
January 17, 2026 at 21:35 #2862
Amine, Adel (Member)

Hi Jim,
Thanks for the detailed explanation and references to the OSVVM settings.
I’ve built a CI framework using GitHub Actions, running ModelSim on an Ubuntu runner. I’ll publish the template once the framework is complete. In my setup:
Initialize OSVVM and build the library via build.pro (example included in the template):
```tcl
library osvvm_example
analyze ../src/example.vhd
```

Build RunAllTests.pro:
```tcl
library osvvm_example
analyze ../tb/TestCtrl_e.vhd
analyze ../tb/example_tb.vhd
RunTest ../tb/example_tb_SimpleTest.vhd
RunTest ../tb/example_tb_RandomTest.vhd
```

End of simulation: I quit ModelSim and run checkXML.tcl to parse the XML files in both the sim_build and sim_runalltests directories. Merges to main are allowed only if the simulation passes; otherwise, they are rejected.
```tcl
package require tdom

# Get arguments
set xmlFile [lindex $argv 0]
set DEBUG   [lindex $argv 1]

# Read XML file
set fh [open $xmlFile r]
set xmlData [read $fh]
close $fh

if {$DEBUG} {
  puts "\033\[33mDEBUG\033\[0m .xml file found : $xmlFile\nParsing..."
}

# Parse XML using tdom
set doc  [dom parse $xmlData]
set root [$doc documentElement] ;# <testsuites>

# Extract attributes
set errors   [$root getAttribute errors 0]
set failures [$root getAttribute failures 0]
set skipped  [$root getAttribute skipped 0]

if {$DEBUG} {
  puts "\033\[33mDEBUG\033\[0m errors=$errors, failures=$failures, skipped=$skipped"
}

# Print results
puts ".xml File: \033\[35m$xmlFile\033\[0m"
if {$errors != 0 || $failures != 0 || $skipped != 0} {
  puts "Build: \033\[31mFAILED\033\[0m, Errors: $errors, Failures: $failures, Skipped: $skipped"
  exit 1
} else {
  puts "Build: \033\[32mPASSED\033\[0m, Errors: $errors, Failures: $failures, Skipped: $skipped"
  exit 0
}
```

My main question is: how can I make ModelSim exit with an error code when a test fails, so that the GitHub Actions workflow correctly marks the CI run as failed?
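For readers without tdom, the same attribute check on a JUnit-style `<testsuites>` root can be sketched with Python's standard library (the `junit_failed` helper and the sample XML string are illustrative, not part of the checkXML.tcl above):

```python
import xml.etree.ElementTree as ET

def junit_failed(xml_text):
    """Return True if a JUnit-style <testsuites> root reports any
    errors, failures, or skipped tests (mirrors the Tcl check)."""
    root = ET.fromstring(xml_text)
    errors   = int(root.get("errors", 0))
    failures = int(root.get("failures", 0))
    skipped  = int(root.get("skipped", 0))
    return errors != 0 or failures != 0 or skipped != 0

# Hypothetical report content with one failing test
sample = '<testsuites errors="0" failures="1" skipped="0"></testsuites>'
print(junit_failed(sample))
```

A CI wrapper would call `sys.exit(1)` when `junit_failed` returns True, exactly as the Tcl script does with `exit 1`.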
I’ll review my current simulation script to ensure errors are propagated correctly to the CI tool and that the OSVVM variables are set appropriately. If I run into simulator-specific issues, I’ll share the exact commands so we can discuss adjustments for proper CI signaling.
Thanks again for your guidance!
Best regards,
Amine

February 3, 2026 at 17:45 #2866
Jim Lewis (Member)

Hi Amine,
CI will be running ModelSim from the bash command line, and you can use:
```shell
vsim -c -do "exit -code [catch {source $OsvvmLibraries/Scripts/StartUp.tcl ; LinkLibraryDirectory ; build RunTest.pro}]"
```

The "exit -code" is required to get the return status. What I treat as an error depends on which reporter I am using.
If I am using a JUnit reporter and the build finished (independent of test case errors), I want the test case reporter to run; hence, I only stop on a build failure (in which case the JUnit report is either corrupted or does not exist). You need this report to determine which test cases passed and which did not.
If instead your CI run displays the OSVVM build summary report (html), then by the time the build finishes the report has already run, and you can report any failure, including test case failures, as a failed build. Note this test case reporter is specific to OSVVM output and will detect things like the *.pro test case name (TestName Fred) not matching the VHDL test case name (SetTestName("Fred") ;), which indicates the wrong test architecture ran. This can happen if you are using multiple test architectures of TestCtrl and are not using configurations.

Best Regards,
Jim

February 14, 2026 at 20:41 #2868
Patrick (Member)

There is also pyEDAA.OSVVM for reading OSVVM’s YAML files (or XML files). You get a data model in which you can access fields and aggregated results as well as status properties.
February 19, 2026 at 16:26 #2874
Jim Lewis (Member)

Hi Amine,
If you are still in Tcl, there are a number of variables you can check for the build status. Currently these are not documented, but they should be. Let's start here:

::osvvm::ReportBuildName – Name of Build
::osvvm::BuildStatus – PASSED / FAILED
::osvvm::TestCasesPassed
::osvvm::TestCasesFailed
::osvvm::TestCasesSkipped
::osvvm::ReportBuildErrorCode – 0 if passed, > 0 if errors
::osvvm::ReportAnalyzeErrorCount
::osvvm::ReportSimulateErrorCount

If you are not in Tcl, you will have to get them from the JUnit XML file. The Yaml file is unprocessed – ie: it provides individual test passed/failed, but not whether the entire build has passed or failed. Maybe OSVVM should keep a running count of these things so they are available in Tcl, and then in Yaml, without the reporting.
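Outside of Tcl, the same aggregation that produces ::osvvm::BuildStatus and the TestCases* counts can be sketched over per-test results, for example ones pulled from the unprocessed Yaml. The result list and status strings below are assumptions for illustration:

```python
# Illustrative per-test results, e.g. as parsed from OSVVM's Yaml output
results = [
    {"name": "example_tb_SimpleTest", "status": "PASSED"},
    {"name": "example_tb_RandomTest", "status": "FAILED"},
]

# Aggregate counts, mirroring ::osvvm::TestCasesPassed / Failed / Skipped
passed  = sum(r["status"] == "PASSED"  for r in results)
failed  = sum(r["status"] == "FAILED"  for r in results)
skipped = sum(r["status"] == "SKIPPED" for r in results)

# Overall status, mirroring ::osvvm::BuildStatus: any failure fails the build
build_status = "PASSED" if failed == 0 and passed > 0 else "FAILED"
print(build_status, passed, failed, skipped)
```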
The Tcl is becoming a project of its own. It is currently around 15K lines.
Best Regards,
Jim

February 21, 2026 at 21:51 #2875
Amine, Adel (Member)

Hi Jim,
Thanks for the information!
I’ve actually switched to using the built-in JUnit GitHub Action for test reporting. In this setup, I simply specify the location of the XML report files, and the action handles the rest.
Here is the current method I’m using:
```yaml
# Setup OSVVM and launch simulation
- name: Run Simulation
  run: |
    cd $GITHUB_WORKSPACE/sim/
    vsim -c -do 'set OSVVM_DIR $env(HOME)/OsvvmLibraries; source $OSVVM_DIR/Scripts/StartUp.tcl; build $OSVVM_DIR; build build.pro; build RunAllTests.pro; quit -code 0 -f'

- name: Publish Tests Report
  uses: mikepenz/action-junit-report@v5
  if: success() || failure()   # Always run even if the previous step fails
  with:
    report_paths: sim/sim_RunAllTests/sim_RunAllTests.xml
    token: ${{ secrets.GITHUB_TOKEN }}
    check_annotations: true
    check_name: OSVVM Tests Report
    include_passed: true
```

Best regards,
Amine

February 21, 2026 at 23:15 #2876
Jim Lewis (Member)

Using a test reporter is the intended method to get this information in CI.
Ending with error codes has seemed to create more problems than it solves – particularly since some simulators eat the error codes and do not return them outside of their environment.