Creating a new field programmable gate array (FPGA) is no small feat. FPGA vendors spend tens of thousands of engineering hours simply researching markets to determine the feature set a given device will require and the silicon process they will use to manufacture it. This work starts years before they embark on the even more difficult task of actually designing the IC and the software that allows users to program it.

While creating a next-generation FPGA is difficult, creating a device that combines a microprocessor and programmable logic on the same device is even more daunting. Vendors not only have to figure out the most efficient way to integrate programmable logic with a microprocessor in a tiny square of silicon, they must also create an infrastructure that allows users to quickly and easily program both the programmable logic and on-chip microprocessor portions of the device. For silicon vendors, the Holy Grail is to create a device that appeals not only to traditional FPGA users but to embedded system architects and software programmers as well. A new class of device called an Extensible Processing Platform (EPP) is making great strides on this journey.

Hard IP and Soft IP

Implementing microprocessors on FPGAs isn't new. In fact, for almost two decades now FPGA vendors have offered soft cores (silicon intellectual property, commonly called IP) that users can program (with logic synthesis and place and route tools) into the programmable logic in FPGAs. About a decade ago, FPGA vendors took that a step further and designed microprocessor hard cores (PowerPC, ARM, and MIPS processors) into the silicon alongside traditional programmable logic blocks. Both soft IP and hard IP approaches have their advantages and disadvantages.

Implementing soft IP in an FPGA offers maximum flexibility at the expense of performance, power consumption, and area utilization. Users can determine what processor functionality they need for their specific design and synthesize a single processor core, or multiple processor cores (8-bit, 16-bit, or 32-bit MCU, MPU, or DSP), into the programmable logic in their designs. This, however, means they have to give up programmable logic real estate and design around the core they implement. In designing around the soft MPU, they must also be very mindful of timing constraints and power budgets, which can change depending on what applications ultimately run on the microprocessor — a task that, unfortunately, typically comes after the hardware has been designed.

Figure 1. Unlike previous chips that combine MPUs in an FPGA fabric, Xilinx’s new Zynq-7000 Extensible Processing Platform device family lets the ARM processor, rather than the programmable logic, run the show.
A hard IP-based device has several advantages over a soft IP approach. Hard IP implementations tend to raise both the performance of the microprocessor and the entire chip's functionality significantly (over soft implementations) while lowering the device's overall power consumption. Because FPGA vendors implement the core directly in the silicon (instead of in programmable logic), more real estate is left on the chip for programmable logic and additional integrated functionality. While hard IP implementations are significantly faster than soft IP implementations, earlier generations of these devices didn't have the most elegant interconnections between the processor core, the programmable logic blocks, onboard memory, and (if any) onboard peripherals. This meant they were faster than soft implementations but weren't as fast as they could have been. Of course, the most significant downside of the hard IP architecture is lack of flexibility. Users are pretty much limited to the microprocessor and peripherals the vendor chooses to implement in the device. If additional peripherals are needed, users have to add soft IP versions of those peripherals to the programmable logic on the device or add other chips on the PCB.

One of the biggest challenges with both of these implementations, of course, is that users had to be fairly well versed in hardware description languages (HDLs) such as Verilog or VHDL, as well as hardware design techniques (especially timing and clocking), to program the programmable logic portion of these devices. The last generation of these devices wouldn't even allow users to access the MPU core until the programmable logic had been programmed to a significant extent. This practically eliminated any potential for using the device for hardware/software co-design and prototyping.

The EPP Era

Having learned lessons from offering both soft and hard implementations over the last two decades, chip companies, mainly FPGA vendors, are starting to offer a new class of device called Extensible Processing Platforms (EPPs).

Figure 2. The Zynq-7000 Extensible Processing Platform relies on a familiar tool flow for system architects, software developers, and hardware designers.
EPPs are not FPGAs with a hard or soft core implemented on them (Figure 1). While EPPs do have processors and FPGA logic on the same device, a number of things differentiate them from FPGAs. Most significant is the fact that the MPU cores in the EPP run the show. That is, the microprocessor boots first and then — depending on how the user programmed the device — initializes the programmable logic. This processor-first architecture is a significant differentiator in that it allows system designers to program their systems into the processor and then decide which functions to speed up by implementing them in the programmable logic portion of the device. The processor-first EPP programming model also appeals to a wider audience, allowing users more attuned to software programming to use the devices.

Because EPP vendors implement their devices in the latest process geometries, they can also implement larger memory blocks and a broader range of hard IP peripherals directly in the silicon to complement the microprocessor core and serve a wider range of application spaces. This lifts even more of the hardware design burden from users and, because the functionality is all on the same piece of silicon, improves system performance and lowers power consumption.

EPPs also have faster and more elegant on-chip interconnects, speeding communications between MPU cores, programmable logic, memory, and peripherals. EPP vendors are also including programmable point-to-point interconnect, such as ARM AXI4, in these devices to further speed transfers between various blocks on a device, from MPU to peripherals and peripheral to peripheral. This greater amount of interconnect, and of programmable interconnect, adds flexibility and lets users essentially deactivate unneeded peripheral functions depending on their design needs.

Because EPP vendors are emphasizing system functionality rather than just hardware functionality, they are ensuring the devices support commonly used software flows — compilers, debuggers, operating systems, and so on — as well as traditional FPGA programming flows (Figure 2). While programming the logic blocks of EPPs still requires some hardware design knowledge today, vendors are hard at work creating flows that will lessen these hardware programming challenges and appeal to a rapidly expanding user base.

While FPGA vendors have had considerable success helping users implement processors in programmable logic, EPPs will take this marriage of processing and programmable logic to a new level. Indeed, as companies roll out EPPs and mature the use model and tool flows, it will be interesting to see what innovations a much broader user base will achieve with these devices.

This article was written by Larry Getman, Vice President, Processing Platforms, Xilinx Inc. (San Jose, CA).