Unraveling debug and design verification snags
Paul notes that with FPGAs taking on the role of much denser, more versatile embedded platforms, designers are seeking advanced controls and views into the embedded system.
Tools that serve the wider spectrum of system validation, software design/debug, and hardware engineering teams are needed to deliver a more sophisticated on-chip instrumentation scheme along with an off-chip development and test environment, as illustrated in Figure 1. On-chip stimulus, performance monitoring, hardware-software correlation, assertions, transaction analysis, and multi-FPGA visibility are needed. And for good measure, we should toss in the ability to use the same technology in simulation and emulation environments or even ASICs, if required.
The tools needed to successfully address today’s broader range of challenges are flexible and programmable RTL instruments inserted into the design pre-synthesis and then utilized once the design is synthesized for a particular target. The ability to reconfigure the instrument function is a significant advantage over existing solutions, which require an incremental synthesis of the design with every change to the database of signals tapped. This new approach not only increases observability of internal signals in the FPGA design, but also significantly reduces the time and effort that go into debug and validation.
Multi-FPGA design support
In order to ensure full system visibility and control of complex systems, tools must be able to work even over designs partitioned across multiple devices. A distributed and programmable RTL instrumentation approach allows a designer to use a single console (through a single JTAG TAP) to control instruments scattered throughout the partitioned design. This way, designers have a single view into transactions occurring in multiple chips. Ideally, the number of primary I/O used for instrumentation should be controlled at insertion time and adjusted as needed, and a satisfactory solution should be able to accommodate multiple configurations, as shown in Figure 2. Such configurations can include a single transaction engine spanning multiple FPGAs (with a multiplexor structure distributed throughout the other chips), multiple transaction engines with cross-triggering across multiple FPGAs, or multiple transaction engines (one per FPGA) with cross-triggering across the FPGAs. With multiple transaction engines, multiple debug applications can operate concurrently. With four transaction engines running, for example, three can run performance monitoring functions while the fourth stands ready with a trigger function to capture system state when a specified performance abnormality is detected.
On-chip instruments can be used to create a variety of off-chip analysis applications, including logic analysis, transaction stimulus, assertions in silicon, event analysis, and performance monitoring. The same instruments that perform logic analysis can generate internal stimulus and apply it conditionally to the system under test. A designer can create a transaction stimulus that monitors a bus for a particular memory read cycle, for example, and upon detecting the cycle, drive stimulus vectors onto the data bus to substitute the original values produced by the memory resource.
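As a rough software model of that conditional transaction stimulus (the bus-cycle format, watched address, and substitute value below are illustrative assumptions, not the vendor's API), the substitution logic can be sketched as:

```python
# Software sketch of conditional transaction stimulus: watch a bus for a
# read to a target address and substitute the returned data. The cycle
# format, watched address, and substitute value are hypothetical.
from dataclasses import dataclass

@dataclass
class BusCycle:
    op: str    # "read" or "write"
    addr: int
    data: int

WATCH_ADDR = 0x1000  # assumed memory location to intercept
SUBSTITUTE = 0xDEAD  # value driven in place of the memory's response

def apply_stimulus(cycles):
    """Pass cycles through, overriding read data at the watched address."""
    for c in cycles:
        if c.op == "read" and c.addr == WATCH_ADDR:
            c = BusCycle(c.op, c.addr, SUBSTITUTE)  # drive stimulus vector
        yield c

traffic = [BusCycle("write", 0x1000, 7), BusCycle("read", 0x1000, 7)]
out = list(apply_stimulus(traffic))
# the read cycle now carries the substituted value rather than the stored one
```

In the real instrument this comparison and substitution happen at speed in the fabric; the sketch only shows the decision being made per transaction.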
The performance monitor allows the designer to see the number, sequence, latency, and frequency of certain on-chip events. Configuration over the JTAG connection allows the instrumented design to operate multiple functions concurrently, with each transaction engine programmed to measure activity within its own region. Again, because the instruments are reprogrammable, these functions can be changed at any time without affecting the system under test. This allows the user to cycle through different performance monitors to gain a real-time view into overall performance, helping to validate expected behavior or diagnose a wide range of problems that become readily apparent in this multi-level view.
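The kind of measurement a transaction engine performs here (count events, measure request-to-response latency) can be illustrated with a small software pass over a trace; the (cycle, event) trace format and event names are assumptions for illustration:

```python
# Sketch of a performance-monitor pass: count occurrences of a start
# event and measure latency to the matching end event, mirroring what
# a transaction engine would measure in hardware.
def monitor(trace, start_evt, end_evt):
    """Return (count, latencies) for start->end pairs in a
    list of (cycle, event) samples."""
    count, latencies, pending = 0, [], None
    for cycle, event in trace:
        if event == start_evt:
            pending = cycle
            count += 1
        elif event == end_evt and pending is not None:
            latencies.append(cycle - pending)  # cycles from request to response
            pending = None
    return count, latencies

trace = [(0, "req"), (3, "ack"), (10, "req"), (12, "ack")]
count, lat = monitor(trace, "req", "ack")
# count == 2 requests observed, with latencies of 3 and 2 cycles
```

With reprogrammable engines, swapping `start_evt`/`end_evt` for a different event pair is the software analogue of reconfiguring a monitor without touching the system under test.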
If an unexpected performance problem is uncovered while using the RTL instruments' performance monitoring functions, designers may need to take a closer look at the pieces of the system that seem to be involved. With the configurable instruments, the same transaction engines used to carry out performance monitoring can be reprogrammed to serve as logic analyzers. Users then have the ability to view each transaction by extracting information at the bus or signal level. The result is better spatial and temporal visibility, analysis, and control of all on-chip functions.
On-chip stimulus includes a number of different capabilities, including transaction stimulus, fault insertion, and stress testing. The flexibility to create stimulus conditions in multiple ways can help designers create error conditions to test the software under all potential operating conditions, for example. It’s also possible to rapidly recreate or emulate a hard-to-reach state or control a shared resource in a manner that is difficult or impossible using software alone. For instance, say a designer needs to view a four-channel arbitration circuit and know when all four resource requests hit at once within the same clock cycle. In this case, one transaction engine can be used to initiate the test condition, while the response is measured with the remaining engines. The user can employ transaction stimulus, performance monitoring, and logic analysis at the same time.
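The arbitration scenario can be modeled in a few lines: one "engine" forces all request lines high in the same cycle, and the others check the arbiter's response. The fixed-priority arbiter below is an assumed model, not a description of any particular design:

```python
# Sketch of the arbitration stress test: force all requests in one cycle
# (stimulus) and check that the arbiter grants exactly one (measurement).
def fixed_priority_arbiter(reqs):
    """Grant the lowest-numbered asserted request (assumed policy)."""
    grants = [False] * len(reqs)
    for i, r in enumerate(reqs):
        if r:
            grants[i] = True
            break
    return grants

reqs = [True, True, True, True]   # stimulus: all four requests hit at once
grants = fixed_priority_arbiter(reqs)
one_grant = sum(grants) == 1      # measured response: exactly one grant
```

In silicon, forcing simultaneous requests like this is exactly the kind of corner case that is hard to reach from software alone.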
Assertions in silicon
The transaction engine can also be used for assertion checking. Designers can create multiple sequential assertion conditions for one or more transaction engines in a similar manner to creating trigger conditions. Applying the assertions either concurrently to multiple transaction engines or serially on one verifies correct behavior over a period of time. Engines can even be tied together with GPIO signals so that more elaborate event sequences occurring in multiple regions of the design and/or chip can be traced and analyzed using cross-triggering. And since the instruments are reprogrammable, time-sharing the resource is possible.
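A common sequential assertion of the kind described, "after event A fires, event B must follow within N cycles", can be sketched as a software checker; the trace format and window parameter are illustrative assumptions:

```python
# Software stand-in for a sequential assertion running in a transaction
# engine: every occurrence of event a must be followed by event b
# within `window` cycles, or the assertion fails.
def check_sequence(trace, a, b, window):
    """Return True if the assertion holds over a list of
    (cycle, event) samples."""
    deadline = None
    for cycle, event in trace:
        if deadline is not None and cycle > deadline:
            return False              # b did not arrive in time
        if event == a:
            deadline = cycle + window
        elif event == b:
            deadline = None           # sequence satisfied, re-arm
    return deadline is None           # fail if still waiting at end of trace

ok = check_sequence([(0, "A"), (2, "B")], "A", "B", 4)   # within window
bad = check_sequence([(0, "A"), (9, "B")], "A", "B", 4)  # too late
```

Running several such checkers at once, each on its own region, corresponds to applying assertions concurrently across multiple transaction engines.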
A good example of a reconfigurable instrumentation IP suite is one that includes a Signal Probe Network (SPN), Programmable Trigger Engine (PTE), Tracer, and Trace/Capture and Stimulus instrument (CapStim).
An SPN is a powerful set of multiplexors (with integrated configuration registers, pipeline registers, and FIFOs) used to route user logic signals to a PTE, Tracer, and CapStim. Unlike conventional FPGA tools, advanced FPGA validation tools allow the user to tap (observe) as many signals in the design as deemed necessary for validation; there is no imposed limit on the number of signals.
An SPN traverses multiple clock domains, using FIFOs to handle clock domain crossings. The groups of user logic signals to be observed are determined at the pre-silicon stage and are connected through the SPN structure to one or more monitoring or analysis instruments.
A PTE is a state machine that can be programmed post-silicon to create on-chip triggers, assertions, stimulus generators, and performance monitoring functions. The number of trigger states and resources such as counters, timers, and general-purpose inputs/outputs can be specified by the user pre-silicon.
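The behavior of such a post-silicon-programmable trigger state machine can be sketched in software: each state waits for a condition (optionally a count of matches) before advancing, and the final state fires the trigger. The "program" encoding below is an assumption for illustration, not the PTE's actual programming model:

```python
# Minimal sketch of a programmable trigger state machine in the spirit
# of a PTE: a program is a list of (predicate, match_count) states.
def run_pte(program, samples):
    """Return the sample index at which the final state is satisfied
    (the trigger point), or None if it never fires."""
    state, matches = 0, 0
    for i, s in enumerate(samples):
        pred, need = program[state]
        if pred(s):
            matches += 1
            if matches == need:       # this state satisfied; advance
                state, matches = state + 1, 0
                if state == len(program):
                    return i          # trigger fires here
    return None

# hypothetical program: trigger after two writes, then one read
prog = [(lambda s: s == "wr", 2), (lambda s: s == "rd", 1)]
fired = run_pte(prog, ["wr", "rd", "wr", "rd"])
# fires at sample index 3
```

Because the program is just data, reloading it changes the trigger, assertion, or monitor function without resynthesizing anything, which is the key property claimed for the PTE.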
The Tracer, as implied by the name, records data into its trace buffer. The capture is initiated and can be controlled by the PTE.
Another powerful debug instrument is the CapStim. Similar to the Tracer, the CapStim records data, but it has the additional ability to extract stimulus vectors from embedded memory and dynamically apply them at speed to selected user logic signals, under the control of assertions, triggers, and/or performance monitors running in the PTE. This functionality is a significant benefit over conventional FPGA tools, which provide observability but lack a comparable mechanism for controlling signals.
Engineers developing more complex FPGA designs must begin to use debug and validation tools that include these on-chip instruments to meet the more advanced requirements for successful hardware and software verification.
The growing complexity of FPGAs requires a new approach to their debugging and validation. The tools described in this article offer the features and capabilities that can help system validation, software design/debug, and hardware engineering teams streamline the process of validating and securing FPGAs, thereby reducing costs and time to market.
Paul Bradley is Chief Technical Officer of DAFCA, Inc. Paul has more than 20 years’ experience in electronics and systems design, and specializes in product development and engineering leadership in emerging technology markets. Prior to joining DAFCA, he held numerous engineering and technical leadership positions at Motorola, Nortel, CrossComm, Sonoma Systems, and Internet Photonics.