Possible Metrics for Ranking Assertions
Existing Metrics
GoldMine: Automatic Assertion Generation Using Data Mining and Static Analysis (2010)
- Since they have access to the RTL code, they can check which assertions are correct using a formal verification engine.
- In their case the ranking is based more on how good the generated assertions are.
- Are not all correct assertions good?
- No. Many assertions are simply regurgitations of the source code; a human would never write these assertions.
- The original paper’s ranking is subjective: they asked verification experts to rank the assertions on a scale from 1 to 4.
- 1. Trivial assertion that the designer would not write.
- 2. Assertion that the designer would write.
- 3. Assertion that the designer would write and that captures subtle design intent.
- 4. Complex assertion that the designer would not be able to write.
Automatic Generation of Assertions from System Level Design Using Data Mining (2011)
The ranking system was not changed much from the original GoldMine paper.
Word Level Feature Discovery to Enhance Quality of Assertion Mining
They defined quality as:
- Number of generated candidate assertions.
- Percentage of true assertions.
- Average number of propositions in assertion’s antecedent.
- Input space coverage analysis of generated assertions.
- Analyzing relationship between word level assertions and bit level assertions.
- Injecting bugs in RTL and using the generated assertions to detect the injected bugs. (Fault coverage)
They argue that word-level assertions perform better than bit-level assertions in these areas.
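Two of the listed measures are simple aggregates over the candidate set. A minimal sketch, assuming a hypothetical representation where each candidate is an (antecedent-size, is-true) pair:

```python
# Hedged sketch of two listed quality measures: percentage of true
# assertions and average antecedent size. The (num_props, is_true)
# representation is an illustrative assumption, not the paper's format.

def quality_summary(candidates):
    """candidates: list of (num_antecedent_props, passed_formal_check).
    Returns (% true assertions, average antecedent size)."""
    true_pct = 100 * sum(t for _, t in candidates) / len(candidates)
    avg_props = sum(p for p, _ in candidates) / len(candidates)
    return true_pct, avg_props

# Four candidates: three true, antecedent sizes 1 through 4.
cands = [(1, True), (2, True), (3, False), (4, True)]
print(quality_summary(cands))  # (75.0, 2.5)
```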
Generating concise assertions with complete coverage (2013)
They convert the problem into an ALL-SAT problem and compare against GoldMine on coverage and the number of assertions. The authors aimed for fewer, higher-coverage assertions than GoldMine.
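ALL-SAT asks for every satisfying assignment of a formula rather than just one. As an illustration only, a brute-force enumeration (real ALL-SAT solvers add blocking clauses per solution instead of scanning the whole input space):

```python
from itertools import product

# Illustrative brute-force ALL-SAT: enumerate every satisfying assignment
# of a Boolean formula over the given variables. This toy version scans
# all 2^n assignments; it is not how a real solver works.

def all_sat(formula, variables):
    """Return all assignments (as dicts) for which `formula` is true."""
    solutions = []
    for values in product([0, 1], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            solutions.append(assignment)
    return solutions

# Example formula: (a OR b) AND NOT c has 3 satisfying assignments.
sols = all_sat(lambda s: (s["a"] or s["b"]) and not s["c"], ["a", "b", "c"])
print(len(sols))  # 3
```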
Mining Hardware Assertions With Guidance From Static Analysis (2013)
- This is the original GoldMine paper with more details and testing on OpenSPARC RTL code.
- They use a metric called input space coverage, defined as $\frac{1}{2^{|P|}}$, where $|P|$ is the number of propositions in the antecedent of an assertion. It can be thought of as the fraction of rows covered by that assertion's antecedent if the signals were laid out as a truth table.
Example:
Let us consider the assertion $(a = 1) \wedge (b = 1) \rightarrow (c = 1)$. The truth table over the antecedent signals is
| a | b |
|---|---|
| 0 | 0 |
| 0 | 1 |
| 1 | 0 |
| 1 | 1 |
Here $(a = 1) \wedge (b = 1)$ covers 1 of the 4 rows, i.e. $\frac{1}{2^2} = 25\%$ of the input space.
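The metric itself is trivial to compute; a small sketch, with an enumeration cross-check of the example above (single-bit propositions assumed, as in the GoldMine definition):

```python
from itertools import product

# Input space coverage of an assertion with |P| antecedent propositions:
# 1 / 2^|P|, i.e. the fraction of truth-table rows the antecedent selects.

def input_space_coverage(num_propositions):
    return 1 / 2 ** num_propositions

# (a = 1) AND (b = 1): two propositions.
print(input_space_coverage(2))  # 0.25

# Cross-check by enumerating the truth table over {a, b} and counting
# the rows where the antecedent holds.
rows = [dict(zip("ab", bits)) for bits in product([0, 1], repeat=2)]
covered = sum(r["a"] == 1 and r["b"] == 1 for r in rows)
print(covered / len(rows))  # 0.25
```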
Automatic Generation of System Level Assertions from Transaction Level Models
They mine from a Transaction Level Model instead of RTL. They have again used a subjective ranking mechanism.
A Coverage Guided Mining Approach for Automatic Generation of Succinct Assertions
- GoldMine with an alternative to the decision-tree (DT) extraction engine
- Uses input space coverage as the metric
- Assertions are only added if they have more gain than the existing assertions
- Assertions are considered succinct if they have fewer propositions and thus higher input space coverage.
- To compare sets of assertions, they run a directed test and measure the number of assertions that are triggered.
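The gain-based selection idea can be sketched as a greedy pass, assuming a simplified representation in which each candidate assertion is described by the set of truth-table rows its antecedent covers (an illustrative simplification, not the paper's data structure):

```python
# Hedged sketch of coverage-guided selection: accept a candidate only if
# it covers input-space rows not already covered by the accepted set.

def select_assertions(candidates):
    """candidates: list of (name, covered_rows) pairs, covered_rows a set.
    Returns the names of assertions that add new coverage."""
    accepted, covered = [], set()
    for name, rows in candidates:
        gain = rows - covered  # rows this assertion newly covers
        if gain:
            accepted.append(name)
            covered |= rows
    return accepted

# Three candidates over a 4-row input space {0, 1, 2, 3}.
cands = [("A1", {0, 1}), ("A2", {1}), ("A3", {2, 3})]
print(select_assertions(cands))  # ['A1', 'A3']  (A2 adds no new coverage)
```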
Code Coverage of Assertions Using RTL Source Code Analysis
Assertion Ranking Using RTL Source Code Analysis
- PageRank using Variable dependency graphs
- Assertion Importance
- An assertion is important if it is connected to many important variables
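A minimal power-iteration sketch of this idea, on a made-up variable dependency graph (the graph, damping factor, and importance aggregation are illustrative assumptions, not the paper's setup):

```python
# Hedged sketch: PageRank over a variable dependency graph, so that a
# variable's rank is high when many high-rank variables feed into it.

def pagerank(graph, damping=0.85, iters=50):
    """graph: {node: [nodes it points to]}. Returns {node: rank}."""
    nodes = list(graph)
    rank = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, targets in graph.items():
            if targets:
                share = damping * rank[n] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling node: spread its rank uniformly
                for t in nodes:
                    new[t] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# Toy dependency graph: a and b both feed c, c feeds out.
deps = {"a": ["c"], "b": ["c"], "c": ["out"], "out": []}
ranks = pagerank(deps)
# An assertion's importance could then be, e.g., the sum of the ranks of
# the variables it references.
print(max(ranks, key=ranks.get))  # 'out'
```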
A-TEAM
- Fault coverage
- Induce a fault and check whether the assertions pick it up
- An assertion “picks up” a fault if it fails on the faulty design but holds on the fault-free design
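This detection criterion can be sketched directly, with hypothetical traces standing in for simulations of the fault-free and faulty designs:

```python
# Hedged sketch of the fault-detection criterion: an assertion detects an
# injected fault when it holds on the fault-free trace but fails on the
# faulty trace. Traces and signals here are made-up stand-ins.

def holds_on(trace, antecedent, consequent):
    """An implication holds on a trace if the consequent is true in every
    cycle where the antecedent is true."""
    return all(consequent(s) for s in trace if antecedent(s))

def detects(assertion, golden_trace, faulty_trace):
    ant, con = assertion
    return holds_on(golden_trace, ant, con) and not holds_on(faulty_trace, ant, con)

# Design intent: req implies grant. The injected fault drops the grant.
a = (lambda s: s["req"], lambda s: s["grant"])
golden = [{"req": 1, "grant": 1}, {"req": 0, "grant": 0}]
faulty = [{"req": 1, "grant": 0}, {"req": 0, "grant": 0}]
print(detects(a, golden, faulty))  # True
```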