Dissecting the OSVVM AXI Master BFM
May 31, 2024 at 22:21 #2440
Hassan (Member)
This thread has been created to ask questions about the Axi4Manager found in the OSVVM AXI4 verification component library. This is the full AXI4 Manager (Master) BFM; there is a separate BFM for AXI4-Lite. There are two primary variants of the Axi4Manager: one without VTI (Axi4Manager) and one with VTI (Axi4ManagerVti). They differ only in how the testbench test control module sends requests to the BFM: the VTI version uses VHDL-2008 external names, while the other uses a direct connection via the port map.
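To make the difference concrete, here is a hedged sketch of how a test reaches the two variants (port names and the hierarchy path are illustrative, not copied from the source):

-- Direct connection: the transaction record is a testbench-level signal wired through the port map.
Manager_1 : Axi4Manager
  port map (
    Clk      => Clk,
    nReset   => nReset,
    AxiBus   => AxiBus,
    TransRec => ManagerRec   -- AddressBusRecType signal declared in the test harness
  ) ;

-- VTI: the transaction record lives inside the VC, so it disappears from the port map and
-- TestCtrl reaches it through a VHDL-2008 external name (hierarchy path is hypothetical):
alias ManagerRec is << signal .TbAxi4.Manager_1.TransRec : AddressBusRecType >> ;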
The first question is about the Initialize process.
------------------------------------------------------------
--  Initialize alerts
------------------------------------------------------------
Initialize : process
  variable ID      : AlertLogIDType ;
  variable vParams : ModelParametersIDType ;
begin
  -- Alerts
  ID          := NewID(MODEL_INSTANCE_NAME) ;
  ModelID     <= ID ;
  ProtocolID  <= NewID("Protocol Error", ID ) ;
  DataCheckID <= NewID("Data Check", ID ) ;
  BusFailedID <= NewID("No response", ID ) ;

  vParams := NewID("Axi4 Parameters", to_integer(OPTIONS_MARKER), ID) ;
  InitAxiOptions(vParams) ;
  Params  <= vParams ;

  WriteResponseScoreboard <= NewID("WriteResponse Scoreboard", ID, Search => PRIVATE_NAME) ;
  ReadResponseScoreboard  <= NewID("ReadResponse Scoreboard",  ID, Search => PRIVATE_NAME) ;

  -- FIFOs get an AlertLogID with NewID, however, it does not print in ReportAlerts (due to DoNotReport)
  -- FIFOs only generate usage type errors
  WriteAddressFifo           <= NewID("WriteAddressFIFO",           ID, ReportMode => DISABLED, Search => PRIVATE_NAME) ;
  WriteDataFifo              <= NewID("WriteDataFifo",              ID, ReportMode => DISABLED, Search => PRIVATE_NAME) ;
  ReadAddressFifo            <= NewID("ReadAddressFifo",            ID, ReportMode => DISABLED, Search => PRIVATE_NAME) ;
  ReadAddressTransactionFifo <= NewID("ReadAddressTransactionFifo", ID, ReportMode => DISABLED, Search => PRIVATE_NAME) ;
  ReadDataFifo               <= NewID("ReadDataFifo",               ID, ReportMode => DISABLED, Search => PRIVATE_NAME) ;
  wait ;
end process Initialize ;

Q1. Why are so many different IDs required: ModelID, ProtocolID, DataCheckID, BusFailedID? These are all of AlertLogIDType.
Q2. What is Params of ModelParametersIDType?

May 31, 2024 at 23:14 #2443
Jim Lewis (Member)
Hi Hassan
OSVVM VCs create an AlertLogID for different classes of checkers. This helps track the sources of errors – hence it accelerates debug. In your own VC, you can get by with just a ModelID.

Params is another singleton data structure that holds settings for the VC. It is set up like a generalized union of values.
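As a minimal sketch of that simpler case (illustrative only, not the Axi4Manager source), a small VC can create one ID and attach every alert, log, and check to it:

MinimalInit : process
  variable ID : AlertLogIDType ;
begin
  ID      := NewID("MyVc_1") ;   -- one AlertLogID for the whole VC
  ModelID <= ID ;
  wait ;
end process MinimalInit ;

-- elsewhere in the VC, everything reports against ModelID, for example:
--   Log          (ModelID, "Starting transfer", INFO) ;
--   AffirmIfEqual(ModelID, ReadData, ExpectedData, "Read data") ;
--   Alert        (ModelID, "No response", FAILURE) ;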
Best Regards,
Jim

June 2, 2024 at 01:20 #2446
Hassan (Member)
It seems that so many IDs are needed to create a hierarchical output for the different aspects of the BFM in the test result report.
June 2, 2024 at 01:25 #2447
Hassan (Member)
The Axi4Manager source code contains these lines in the architecture declarative part:
signal WriteAddressFifo            : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal WriteDataFifo               : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal ReadAddressFifo             : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal ReadAddressTransactionFifo  : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal ReadDataFifo                : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal WriteResponseScoreboard     : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal ReadResponseScoreboard      : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;

It is clear that these are being used as FIFOs to accommodate latency in the system.
Then there is this:
signal WriteAddressDelayCov, WriteDataDelayCov, WriteResponseDelayCov : DelayCoverageIDType ;
signal ReadAddressDelayCov, ReadDataDelayCov : DelayCoverageIDType ;

What is the reason for the coverage being gathered inside this BFM rather than inside a monitor BFM?
June 2, 2024 at 20:08 #2452
Jim Lewis (Member)
The Axi4Manager has a TransactionDispatcher, which receives transactions from the Test Sequencer (TestCtrl) and dispatches these out to interface handlers (such as WriteAddressHandler). The interface handlers connect to the DUT and represent independently running pieces of the VC. The communication between the TransactionDispatcher and interface handlers uses the FIFOs and/or Scoreboards, hence, for AXI4 with all of its independent interfaces, there are quite a few of them.
signal WriteAddressFifo            : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal WriteDataFifo               : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal ReadAddressFifo             : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal ReadAddressTransactionFifo  : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal ReadDataFifo                : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal WriteResponseScoreboard     : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
signal ReadResponseScoreboard      : osvvm.ScoreboardPkg_slv.ScoreboardIDType ;
There are also corresponding integer based handshaking signals (they do a little more than handshaking too).
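As a rough picture of that hand-off, here is a heavily simplified sketch (not the actual source; see the TransactionDispatcher and WriteAddressHandler processes in Axi4Manager.vhd for the real code):

-- TransactionDispatcher side, on a write transaction:
Push(WriteAddressFifo, WriteAddress) ;        -- queue the address for the handler
Increment(WriteAddressRequestCount) ;         -- handshake: one more request outstanding

-- WriteAddressHandler side:
if Empty(WriteAddressFifo) then
  WaitForToggle(WriteAddressRequestCount) ;   -- sleep until the dispatcher queues work
end if ;
WriteAddress := Pop(WriteAddressFifo) ;
-- ... drive AWADDR/AWVALID and wait for AWREADY here ...
Increment(WriteAddressDoneCount) ;            -- handshake: this request is complete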
The “*DelayCov” signals handle putting randomized delays on the XxValid signals for AXI outputs or XxReady signals for AXI inputs.
June 3, 2024 at 01:28 #2456
Hassan (Member)
This code, found in the Axi4Manager source, confused me:
-- Initialize DelayCoverage Models
AddBins (WriteAddressDelayCov.BurstLengthCov, GenBin(2,10,1)) ;
AddBins (WriteAddressDelayCov.BeatDelayCov,   GenBin(0)) ;
AddBins (WriteAddressDelayCov.BurstDelayCov,  GenBin(2,5,1)) ;
...
-- Valid Delay between Transfers
if UseCoverageDelays then
  -- BurstCoverage Delay
  DelayCycles := GetRandDelay(WriteAddressDelayCov) ;
  WaitForClock(Clk, DelayCycles) ;

AddBins and GenBin are used for functional coverage, aren't they? So why are they here?
June 3, 2024 at 03:54 #2457
Jim Lewis (Member)
The DelayCoverage models divide a transfer into segments that are BurstLength in length. For beats (transfers within that burst length) there is a BeatDelay between each item transferred on the interface. After BurstLength items there is a BurstDelay before the next item is transferred. Each of BurstLength, BeatDelay, and BurstDelay is randomized using a coverage model (which may have more than the single bin that is used by default).
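A test can also reshape that randomized timing by replacing the default bins. A hedged sketch (check DelayCoveragePkg_user_guide.pdf for the exact API before relying on it):

-- replace the default WriteAddress timing with longer bursts and occasional beat stalls
DeallocateBins (WriteAddressDelayCov.BurstLengthCov) ;
AddBins        (WriteAddressDelayCov.BurstLengthCov, GenBin(8,16,1)) ;  -- 8 to 16 beats per burst
DeallocateBins (WriteAddressDelayCov.BeatDelayCov) ;
AddBins        (WriteAddressDelayCov.BeatDelayCov,   GenBin(0,2,1)) ;   -- 0 to 2 idle cycles between beats
DeallocateBins (WriteAddressDelayCov.BurstDelayCov) ;
AddBins        (WriteAddressDelayCov.BurstDelayCov,  GenBin(4,10,1)) ;  -- 4 to 10 idle cycles between bursts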
For more details see: OsvvmLibraries/Documentation/DelayCoveragePkg_user_guide.pdf
June 5, 2024 at 16:13 #2462
Hassan (Member)
The MIT document for streaming and address mapped interfaces states this: “One of the challenges of using a single record, such as AddressBusRecType, as an interface is dealing with multiple drivers on each record element. OSVVM does this giving each element a resolved type, such as bit_max, std_logic_vector_max_c, integer_max, time_max, and boolean_max. These are defined in the OSVVM package ResolutionPkg. These types allow the record to support multiple drivers and use resolution functions based on function maximum (return largest value).”
From my knowledge, VHDL already contains resolution functions for scenarios where something has multiple drivers. So are new functions required to resolve records with multiple drivers?
June 5, 2024 at 16:20 #2463
Hassan (Member)
The functions that are used to perform interface transactions and directive transactions are quite flexible and numerous. They include blocking and non-blocking (asynchronous) calls. These apply to the manager and the subordinate, to both read and write, and also to the check functions.
Why is there a need to have both blocking and non-blocking function calls? I am sure that one can issue a non-blocking function call and then enter a loop (in the test program) that waits until the transaction is complete, and from the user's perspective it will behave like a blocking call.
Besides these two, there are also Try* functions that return a flag indicating whether the specified transaction (read/write) had actually taken place when the call returned. Why are these required when other ways to check data already exist in both blocking and non-blocking (asynchronous) form?
June 5, 2024 at 21:56 #2469
Jim Lewis (Member)
> From my knowledge, VHDL already contains resolution functions for scenarios where something has multiple drivers. So are new functions required to resolve records with multiple drivers?
VHDL has a resolution function for std_ulogic named resolved. Its non-driving element is ‘Z’. Its default value is ‘U’. If you do not initialize it, then the ‘U’ will be the resolved value. As a result, ports have to be initialized with a ‘Z’. This is a tedious methodology. The foundation work I did on transaction interfaces did it this way. But then, what do we do for integer, real, time, character, …? If you want to be able to send transaction information through a record, you need more types than std_logic and std_logic_vector.
ResolutionPkg uses maximum resolution. If all vendors properly implemented VHDL-2008, we would not need any resolution functions as internally they just call maximum. For maximum resolution, the non-driving element is type’left. The default value is type’left. Hence, no initializations required. It is the easy path to resolution of inout records without using VHDL-2019 features.
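To illustrate the idea, a simplified sketch of a maximum-based resolution for integer (not the actual ResolutionPkg code, which covers many more types):

-- return the largest driver value; a side that leaves its driver at the default integer'left
-- can never override an active driver, so no initialization is needed
function resolved_max ( s : integer_vector ) return integer is
  variable result : integer := integer'left ;
begin
  for i in s'range loop
    result := maximum(result, s(i)) ;
  end loop ;
  return result ;
end function resolved_max ;

subtype integer_max is resolved_max integer ;

-- record elements declared as integer_max (and the other *_max types) can then have drivers
-- in both TestCtrl and the VC without any 'Z'-style initialization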
Once vendors implement VHDL-2019 features, we will start using mode views.
> Why is there need to have both blocking and non-blocking function calls?
Why blocking? It is the simplest to implement. Some VCs need nothing more than blocking. Hence, OSVVM MIT supports blocking.

Blocking transactions are interface independent. Let's say I have two versions of a design, one that has an AXI subordinate interface and one that has an Avalon interface. Let's say I create two test harnesses for the two different designs that are mirror images of each other, except one uses the AXI-based design and an Axi4Manager VC and the other uses the Avalon-based design and an Avalon VC.
If I use blocking transactions to write the tests of the functionality of the design, then I can use the test cases with either the AXI4 or the Avalon test harness. This is why both VCs support the same blocking transaction API. The only test cases that are design specific are the ones that verify that the design is either AXI4 or Avalon interface compliant.
Why non-blocking? The full AXI4 interface can support dispatching write and read addresses in the same cycle. OSVVM supports this by using non-blocking transactions and queuing up a number of transactions in the VC. Non-blocking transactions are required to test the full capability of an interface.
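From the test writer's point of view, the contrast looks roughly like this (a hedged sketch using OSVVM address-bus transaction calls; ManagerRec, Data, and the 32-bit values are illustrative):

-- blocking: each call returns only after the transfer completes on the interface
Write (ManagerRec, X"0000_1000", X"A5A5_A5A5") ;
Read  (ManagerRec, X"0000_1000", Data) ;

-- non-blocking: queue operations so the VC can overlap them on the bus,
-- for example dispatching a write address and a read address in the same cycle
WriteAsync       (ManagerRec, X"0000_2000", X"0000_0001") ;
ReadAddressAsync (ManagerRec, X"0000_1000") ;
ReadData         (ManagerRec, Data) ;   -- blocking pick-up of the queued read's data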
A lot of testing can be done with a blocking interface. Writing a VC that only does blocking transactions is the easiest thing to do. Implementing non-blocking transactions in a VC is a level (or more) harder – Axi4Manager, I am thinking of you.
Hence, if a project must develop a VC to verify their design, I recommend that they start by using OSVVM MIT and writing a basic blocking VC. This quickly brings us to the point where we are ready to start functionality testing. With OSVVM MIT, the VC developer just has to focus on writing the interface behavior in the VC's TransactionHandler. The complexity of doing this is not any more than writing a procedure.
Later, if needed, grow the capability of the VC to support any needed non-blocking transactions. For interfaces that support independent actions on different aspects of the interface, non-blocking transactions are likely to be necessary. For other interfaces, though, they may not be needed at all.
June 7, 2024 at 01:56 #2480
Hassan (Member)
I can see that the scripts use the explicit name of the file for the analyze and RunTest TCL commands. However, it is also possible to just do a file search and, with the filenames in a list, iterate over the list and call analyze or RunTest for each of them.
Why isn’t it done in this way?
Also, I was expecting to see a whole lot of commands that only compile the files and then a whole lot of commands to run tests. But then I realized that, since each test case is a different architecture selected by its own configuration, we compile one file and run the simulation and then move on to the next file. So the TCL scripts compile all files except the test cases; these are compiled using the RunTest command, which causes the test to be run as soon as that file has been compiled.
Is my understanding correct?
June 7, 2024 at 03:00 #2482
Jim Lewis (Member)
> I can see that the scripts use the explicit name of the file for the analyze and RunTest TCL commands. However, it is also possible to just do a file search and, with the filenames in a list, iterate over the list and call analyze or RunTest for each of them.
> Why isn’t it done in this way?
It comes down to testing philosophy. I see the scripts as an exact specification of what must be analyzed and/or simulated. No matter what else changes, I can depend on these exact things being analyzed or run.
In a CI/regression flow, we always analyze and simulate everything – so there is no advantage to using a make flow. I saw a post from a simulation vendor a while back suggesting that we should be using their clever file-based compile flow – let's get real, compiling all of OSVVM takes 20 to 60 seconds depending on the simulator. Compiling the whole thing is not so painful.
> Also, I was expecting to see a whole lot of commands that only compile the files and then a whole lot of commands to run tests. But then I realized that, since each test case is a different architecture selected by its own configuration, we compile one file and run the simulation and then move on to the next file. So the TCL scripts compile all files except the test cases; these are compiled using the RunTest command, which causes the test to be run as soon as that file has been compiled.
The pro scripts that start with OsvvmLibraries/OsvvmLibraries.pro analyze all of OSVVM – so this hierarchy of scripts is a whole lot of compile.
The pro scripts that start with OsvvmLibraries/RunAllTests.pro compile the OSVVM public test suite and run the test cases – this set of scripts is a little compile to build the test harnesses, and then it is RunTest (which calls TestName, analyze, and simulate). You should note that to use RunTest you have to follow a naming pattern where the TestName matches the file name and matches the configuration name that is used to run the test case. This is explained in the Script_user_guide.pdf.
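For example (a hedged sketch of the naming pattern with illustrative names), a test case file TbAxi4_BasicReadWrite.vhd would contain an architecture of TestCtrl plus a configuration whose name matches the file name, so RunTest TbAxi4_BasicReadWrite.vhd can analyze the file and then simulate the configuration of the same name:

architecture BasicReadWrite of TestCtrl is
begin
  -- test sequencer and transaction processes go here
end architecture BasicReadWrite ;

configuration TbAxi4_BasicReadWrite of TbAxi4 is
  for TestHarness
    for TestCtrl_1 : TestCtrl
      use entity work.TestCtrl(BasicReadWrite) ;
    end for ;
  end for ;
end configuration TbAxi4_BasicReadWrite ;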
As I am developing tests, I keep a build.pro for the design, a build.pro for the test infrastructure, a RunAllTests.pro that runs all the debugged test cases, and a debug.pro for the test cases I am developing. Once a test case is debugged, it moves from debug.pro to RunAllTests.pro. If a test case fails regression and takes a lot of debug, it might move back to debug.pro temporarily, or alternately I can just rerun it using RunTest from the command line if needed.