Merging coverage databases
This topic has 9 replies, 3 voices, and was last updated 10 years, 7 months ago by Jim Lewis.
October 4, 2013 at 15:58 #715
Lyle Swanson (Member)
Hi,
I could not find any OSVVM literature that describes the process for merging/combining coverage files over multiple simulation runs. Are there built-in packages/methods to merge databases, or is this something that needs to be done with custom scripting? E.g., how do you merge/combine coverage databases if it takes 100 simulation runs to cover all possible traffic configurations for a DUT?
– Lyle
October 4, 2013 at 16:42 #716
Jim Lewis (Member)
Hi Lyle,
What is your application?
There are a ReadCovDb and a WriteCovDb. I will be adding a merge option to ReadCovDb in the next revision; however, with Intelligent Coverage I do not expect merging to be a mainstream methodology.
Let me explain. OSVVM’s Intelligent Coverage is a closed-loop test: it generates stimulus based on the current coverage holes. As long as there is a good correlation between the stimulus generation and the coverage model (for inputs there is 100% correlation), the test will achieve coverage closure in the sum of the coverage goals for each coverage bin. A functional coverage model that is too big to run in a single simulation may be partitioned into multiple independent models and run separately. You can even run them separately for your daily fast simulations and combine them for your longer regression tests.
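The closed loop described above can be sketched directly with CoveragePkg. In this sketch the bin ranges, the coverage goals, and the DoBurst stimulus call are illustrative assumptions, not anything from this thread:

```vhdl
-- Sketch of a closed-loop Intelligent Coverage test.
-- Assumes OSVVM's CoveragePkg is compiled into library osvvm;
-- bin layout, goals, and DoBurst are hypothetical.
library osvvm ;
use osvvm.CoveragePkg.all ;
...
TestProc : process
  variable BurstCov : CovPType ;  -- the functional coverage model
  variable BurstLen : integer ;
begin
  -- goal of 5 samples in each burst-length bin
  BurstCov.AddBins(5, GenBin(1, 4)) ;
  BurstCov.AddBins(5, GenBin(5, 8)) ;
  BurstCov.AddBins(5, GenBin(9, 16)) ;
  while not BurstCov.IsCovered loop
    BurstLen := BurstCov.RandCovPoint ;  -- randomize only among the coverage holes
    -- DoBurst(BurstLen) ;               -- drive the DUT (hypothetical)
    BurstCov.ICover(BurstLen) ;          -- record what was actually generated
  end loop ;
  wait ;
end process ;
```

Because RandCovPoint only selects from bins that have not yet met their goal, the loop terminates in at most the sum of the bin goals.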
Intelligent Coverage, then, is a lot different from SystemVerilog Constrained Random. Constrained Random is an open-loop test: there is no guarantee that a given set of constraints will achieve coverage closure, and indeed, to achieve coverage closure you often end up with multiple testbenches that use different constraints, different controls, and perhaps even different seeds (gag).
There are situations where we use Intelligent Coverage to configure a test. In this situation, the configuration is randomized at the beginning. The steps are: read the coverage database, randomize the configuration, and write out the coverage database (before running the test). In this case, we can launch multiple simulations, provided that each waits long enough for the prior one to read and then write out the coverage database.
If you show me an application where you need merging with OSVVM, I can make sure that it gets added sooner. Adding it is a trivial change (where ReadCovDb currently replaces the current value of the coverage with the read value, simply replace that with the current value plus the read value); the more interesting part is testing it.
Jim
October 7, 2013 at 09:03 #718
Lyle Swanson (Member)
Hi Jim,
Thanks for the quick response. Here are two scenarios where merging databases could be used:
1) The second-to-last paragraph of your response appears to answer this application; can you confirm or correct:
A DUT processes a layered protocol: “Prot_x” over Ethernet. In a given operating session, there can be 1-20 “Prot_x” streams coming from the traffic generator, and the streams operate at one of 100 different data rates. So, in a given simulation run, the DUT is configured to process 1-20 data streams at specified data rates.
Therefore, it would take a large number of simulation runs to functionally cover the different DUT configurations. Is database merging required here, or can the OSVVM packages already manage this application?

2) As done in simulators like VCS and Questa, merging databases from different testcases is performed in order to show the overall progress of a regression.
Quoting online VCS literature, “Unified coverage aggregates all aspects of coverage in a common database, thereby allowing powerful queries and useful unified report generation.”
https://www.synopsys.com/Tools/Verification/FunctionalVerification/Pages/VCS.aspx
thanks,
Lyle.

October 13, 2013 at 21:28 #722
Jim Lewis (Member)
Hi Lyle,
Case 1:
To be able to run many simulations at the same time, a test randomizes its configuration, saves the coverage database, and then runs its test. The next test reads the previous test's database, randomizes, updates the database, then runs its test. The limitation, then, is that tests must be started in a structured manner to ensure that each test works with an updated version of the database.
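That read / randomize / update / run sequence might look like the following sketch. The process name, bin layout, and file name are assumptions, and the very first run has to seed the database with WriteCovDb, since ReadCovDb fails on a missing file:

```vhdl
-- Sketch: randomize a test configuration against a shared coverage database.
-- ConfigCov, NumStreams, and the file name are hypothetical.
ConfigProc : process
  variable ConfigCov  : CovPType ;
  variable NumStreams : integer ;
begin
  ConfigCov.AddBins(GenBin(1, 20)) ;        -- 1 to 20 streams, one bin per value
  ConfigCov.ReadCovDb("ConfigCovDb.txt") ;  -- coverage accumulated by prior runs
  NumStreams := ConfigCov.RandCovPoint ;    -- pick an as-yet-uncovered configuration
  ConfigCov.ICover(NumStreams) ;
  ConfigCov.WriteCovDb("ConfigCovDb.txt") ; -- publish before running the test proper
  -- ... configure the DUT for NumStreams streams and run the test
  wait ;
end process ;
```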
Case 2:
This case does not really apply to OSVVM. The reason for this approach is all about constrained random. With constrained random, there are no constraints derived from the coverage model. Nothing in this approach drives a simulation toward coverage closure other than running many, many tests. Hence, we create different test cases that use different seeds, or slightly different constraints or controls. We run tests and each test collects functional coverage across all aspects we are interested in.
With OSVVM, within each testcase, we create a targeted functional coverage model for what we want to see during that test. Intelligent Coverage drives a test to coverage closure by only randomly selecting items among the coverage holes.
Nonetheless, it is simple enough to add merging, so I plan on adding it in the next revision. I would not want someone to skip trying OSVVM because merging was not there, even if, after they try OSVVM, they never end up using merging.
Jim
March 20, 2014 at 03:53 #763
Mikael (Member)
So I can’t help but elaborate on this subject.
Why are we using constrained random data generation?
* Because we want to write fewer tests
* Run the same test with multiple seeds and hit different things
* We want to use random data (and a random order of data)

So functional coverage is there because that is the only way of knowing what you really have tested.
In any but the simplest cases, reaching 100% coverage is not easily done.
E.g., if we talk about sequential scenarios (an ADD followed by a SUB followed by a MULT), it almost immediately becomes necessary to run multiple tests and merge the data.
Many times you need to limit your goals to what is practically reachable and target your corner cases.

Also, to make this even more interesting: in the end we only want to run the simulations with specific seeds that actually contribute to coverage. This means that we also need to be able to do test-associated merging and ranking of simulations.
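For the sequential-scenario case, one partial OSVVM-style approach is to model consecutive operation pairs as a cross; covering the full three-deep ADD/SUB/MULT sequence would need a higher-order cross. A sketch, with hypothetical opcode constants and stimulus:

```vhdl
-- Sketch: cover (previous op, current op) pairs with an OSVVM cross.
-- OP_* constants and DoOp are hypothetical.
SeqCovProc : process
  constant OP_ADD  : integer := 0 ;
  constant OP_SUB  : integer := 1 ;
  constant OP_MULT : integer := 2 ;
  variable SeqCov  : CovPType ;
  variable OpPair  : integer_vector(1 to 2) ;
begin
  -- one bin per (previous, current) pair: 3 x 3 = 9 bins
  SeqCov.AddCross(GenBin(OP_ADD, OP_MULT), GenBin(OP_ADD, OP_MULT)) ;
  while not SeqCov.IsCovered loop
    OpPair := SeqCov.RandCovPoint ;         -- pick an uncovered pair
    -- DoOp(OpPair(1)) ; DoOp(OpPair(2)) ;  -- drive the two operations (hypothetical)
    SeqCov.ICover(OpPair) ;
  end loop ;
  wait ;
end process ;
```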
So doing as you say:
<i>With OSVVM, within each testcase, we create a targeted functional coverage model for what we want to see during that test. Intelligent Coverage drives a test to coverage closure by only randomly selecting items among the coverage holes.</i>
This means that you need to write a lot more tests, while the whole idea with CR methodology is to rerun the same test(s) with different seeds and not spend time writing a lot of tests. I.e., you spend your time writing coverage goals instead of tests. Your way, you are writing coverage goals AND more tests.

BR
Mikael Andersson
European Application Engineer
Digital Design & Verification Solutions
Mentor Graphics (Scandinavia) AB
Kista Science Tower
S-164 51 Kista, Sweden
mailto:Mikael_Andersson@mentor.com
Mobile: +46 (0)709 329516
Office: +46 (0)8 6329516
———————————————————

March 20, 2014 at 18:11 #769
Jim Lewis (Member)

March 21, 2014 at 03:19 #776
Mikael (Member)

March 21, 2014 at 22:21 #782
Jim Lewis (Member)

April 4, 2014 at 10:11 #790
Lyle Swanson (Member)
Jim,
Is there any documentation or are there use examples for ReadCovDb, or ReadCovDb with Merge? I’ve been trying to get it to do something, but it always returns errors when reading my CovDb.txt file (see the error message below). WriteCovDb works fine, appending a coverage report to the file for each simulation run.
And along with the ReadCovDb error, I’m not even sure what that subprogram is supposed to do. We are thinking of writing a script to accumulate coverage for all testcase runs, but I’m hoping that the coverage package can perform this accumulation function.

UartRx_RxCov.ReadCovDb ("CovDb.txt") ; -- causes simulation error
UartRx_RxCov.WriteCovDb ("CovDb.txt", APPEND_MODE) ;
# ** Failure: ReadCovDb: Failed while reading Seed
# Time: 12477500 ns Iteration: 2 Process: /tbmemio/U_TestCtrl/CpuTestProc File: ../SynthWorks/CoveragePkg.vhd
# Break in Subprogram failed at ../SynthWorks/CoveragePkg.vhd line 768
# Stopped at ../SynthWorks/CoveragePkg.vhd line 768
# Simulation Breakpoint: Break in Subprogram failed at ../SynthWorks/CoveragePkg.vhd line 768
# MACRO ./bfm.do PAUSED at line 99

April 4, 2014 at 11:11 #791
Jim Lewis (Member)
Hi Lyle,
One use model is to write out the coverage database from each simulation run to a separate file. Hence, at the end of test1 we do:
... UartRx_RxCov.WriteCovDb("Test1_CovDb.txt") ;
Likewise for test2 and so on:
... UartRx_RxCov.WriteCovDb("Test2_CovDb.txt") ;
Then accumulate the coverage using a separate VHDL program:
UartRx_RxCov.ReadCovDb("Test1_CovDb.txt") ;
for i in 2 to NUM_TESTS_TO_MERGE loop
  UartRx_RxCov.ReadCovDb("Test" & to_string(i) & "_CovDb.txt", MERGE => TRUE) ;
end loop ;
...
-- do stuff with the merged database
With the current implementation, there is no practical use for having multiple databases written to the same file. APPEND_MODE was added to WriteCovDb for symmetry with WriteBin (with very little thought about its application, which is unfortunate and, in your case, misleading). Your use model of allowing ReadCovDb to merge a set of databases from the same file is interesting; it just was not implemented that way in this version. I just looked at ReadCovDb, and I don’t think it would take much to modify it to make your use model work. It may be:
------------------------------------------------------------
procedure ReadCovDb (FileName : string; Merge : boolean := FALSE) is
------------------------------------------------------------
  file CovDbFile : text open READ_MODE is FileName ;
begin
  ReadCovDb(CovDbFile, Merge) ;
  if Merge then
    while not EndFile(CovDbFile) loop
      ReadCovDb(CovDbFile, Merge) ;
    end loop ;
  end if ;
end procedure ReadCovDb ;
I have not tried this, so there may be subtle issues. Let me know if you try it.
Jim