Forum Replies Created
December 23, 2020 at 11:52 #1766
Hi Jim, sorry for the late reply (I got no notifications and I don't check here often).
We currently use GitLab (hosted locally) as our main repo and for our firmware pipelines. I have it running well, and I encourage people to add their testbenches to the pipeline when complete, so that we find broken testbenches sooner rather than two years after the fact! The pipeline currently has 4 stages – checks, sims, build, release. Checks are a combination of syntax checks and elaborations; simulations are all the self-checking testbenches (set to only appear when code they rely on changes). There is talk of migrating back to Jenkins, but we will see. It has grown from an initial couple of check jobs and fewer than 10 simulations to 7 check jobs and ~30 testbenches in just over a year.
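As a rough sketch, that four-stage layout might look like the following in a .gitlab-ci.yml (the job names, script paths, and changes globs here are hypothetical, not the actual in-house scripts; the rules:changes clause is what makes a simulation job appear only when the code it relies on changes):

```yaml
# Hypothetical .gitlab-ci.yml sketch of the pipeline described above.
stages:
  - checks
  - sims
  - build
  - release

syntax_check:
  stage: checks
  script:
    - tclsh scripts/syntax_check.tcl

uart_tb:
  stage: sims
  script:
    - tclsh scripts/run_tb.tcl uart_tb
  rules:
    - changes:
        - src/uart/**/*
        - tb/uart/**/*
```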
I've looked into pytest because, while TCL scripts are fine at a basic level, when you have several test cases for a single UUT the CI just fails at the first error unless you build a TCL test framework. pytest will always run all tests and report the errors at the end, and it was very easy to set up and wrap around the existing TCL (plus it helps with the current Aldec bug where errors are not properly thrown for the caller to catch). It is also really easy to generate the test cases from Python itself. My current UUT has 73 test cases, and it's really useful to see what passed and failed without working through the bugs one by one. I have looked briefly into VUnit for this, but the test setups appear to need to be in VHDL, which then doesn't allow generics as different test parameters (e.g. bus widths). Another plus: pytest is also used for our unit tests.
I have in-house TCL scripts for compiling and running testbenches; the Python needed to run pytest on my current UUT is then only 90 lines, and that generates all of the config files for all of the tests – and it would be fairly straightforward to put most of the code in a generic library. I doubt the TCL could be so compact.
October 23, 2020 at 16:18 #1736
If you want to prohibit the testbench from running further after it stops, use std.env.finish instead. It also accepts an integer parameter, so you can call it as, for example, finish(0);
Given @Chengshan mentioned Aldec, I thought I would mention that ActiveHDL in batch mode doesn't currently return the value from finish() or stop(), or from assertion failures, to the system, so you cannot use this to detect a test failure. I've worked around it by having a process that sets a "tb_fail" signal to '1' at the end of the test, and using the surrounding TCL to examine its value after the test finishes. The failure is then thrown from TCL as an error.
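The same workaround translated into a small Python sketch (the tb_fail signal name matches the post, but the transcript format and script structure are invented for illustration): examine the signal's final value after the run and convert it into a real process exit code, since the simulator won't propagate the failure itself.

```python
# Hypothetical post-run check: Active-HDL batch mode does not return
# finish()/stop()/assertion failures to the system, so instead we look
# at the final value of the "tb_fail" signal, here assumed to be
# echoed into the saved transcript by the run script.
import sys


def tb_failed(transcript: str) -> bool:
    """Return True if the testbench drove tb_fail to '1'.

    Assumes the run script prints a line like "tb_fail = 1" at the end
    of the simulation (an invented convention for this sketch).
    """
    for line in transcript.splitlines():
        stripped = line.strip()
        if stripped.startswith("tb_fail") and stripped.endswith("1"):
            return True
    return False


def main(transcript_path: str) -> None:
    with open(transcript_path) as f:
        if tb_failed(f.read()):
            # Exit nonzero so CI sees the failure.
            sys.exit("tb_fail was asserted: test FAILED")


if __name__ == "__main__":
    main(sys.argv[1])
```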
My ticket says it's targeted for ActiveHDL 11.2, but I raised it 18 months ago.
April 4, 2020 at 11:42 #1627
I am answering as someone who was in a similar position to you, and who now has a good working knowledge of OSVVM and a full AXI4 verification BFM available to me, hand-written using my own interfacing around OSVVM features. I have never done any OSVVM training, but I do have a background with some experience of SV and UVM.
1. I cannot comment on any training, but I think one big advantage of OSVVM is its toolbox-like approach. It is very easy to add certain parts to existing testbenches. I have seen several older testbenches where I work where people have picked up OSVVM purely for the randomisation package and, to a lesser extent, the logging. These are probably the two easiest parts to start using. A lot of the knowledge you need to use OSVVM is more testbench theory than OSVVM itself. I did a Doulos advanced verification course about 15 years ago, plus my UVM experience, which made my take-up of OSVVM much easier. Often it is these concepts that are the hardest to learn.
2. I custom-built an entire AXI verification package around OSVVM. I do not use the OSVVM transaction techniques, but it is full of scoreboards and logging. My main reasoning was that simulating a MIG and DDR is SLOOOOOW, so I needed something much more lightweight. It is now capable of bursts, transaction queuing and timeout alerts. And because I went all in with AXI4, AXI4-Lite came for free (it's just a limited subset of AXI4 – my AXI4-Lite master BFMs are little more than wrappers around the AXI4 BFM). I have two slave BFMs: a simple one that simply absorbs transactions and returns responses (randomised data for reads), which is useful if you are testing only the write side, for example; and another that has a memory model underneath. It is extremely fast (plugging a master BFM into a slave BFM, I can write an entire 4k picture in about 5 seconds with AXI transactions). This initially took about 3 months of work pretty much dedicated to this BFM, with plenty of additions and tweaks in the 2 years since I wrote it.
3. Getting a scoreboard working first, comparing actual vs expected data, would be a good first step. Then you could probably use the Coverage package to reduce the need for externally generated data (it would likely require a VHDL model, though).
4. I think Jim covered this.
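The memory-model slave BFM in point 2 maps naturally onto a sparse store: only addresses that have been written are kept, which is what makes filling a whole 4k frame fast and memory-light. The real BFM is VHDL; this is just an illustrative Python sketch of the data structure, with invented names.

```python
# Illustrative sparse memory model, as used behind a slave BFM.
# Only touched addresses are stored; untouched reads return a default.
class SparseMemory:
    def __init__(self, default: int = 0):
        self._mem: dict[int, int] = {}  # address -> byte value
        self._default = default

    def write(self, addr: int, data: bytes) -> None:
        """Store each byte of a (burst) write at consecutive addresses."""
        for i, b in enumerate(data):
            self._mem[addr + i] = b

    def read(self, addr: int, length: int) -> bytes:
        """Return stored bytes, with the default for untouched addresses."""
        return bytes(
            self._mem.get(addr + i, self._default) for i in range(length)
        )
```

A dict keyed by address means memory usage scales with data actually written, not with the addressable range, so a full 4 GiB AXI address space costs nothing until it is used.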
I think ActiveHDL is pretty cheap for a single-language licence, at about 1/4 the price of Questa. It is very fast and very capable.
Good luck with your OSVVM journey