In many applications, processors must receive data from sensors, data ports, or other sources while a program is executing. Data can arrive at a high rate (e.g., more than 1 million data points per second), at a low rate (e.g., once per second), or anywhere in between. As examples, high-rate data might come from an ADC sampling an RF signal, or from a moving-media storage device (like a hard disk); low-rate data might come from a keyboard or a temperature sensor. Data can arrive at regular, repetitive intervals, like from a digital audio source, or randomly, like from a metal detector. It can be expected, like from a timer, or unexpected, like from a Geiger counter. Whatever the situation, there are two main ways to check for input signals: polling and interrupts.
Polling involves creating a software loop that regularly and continuously reads an input signal to see if it has changed. Polling can be done in a high-speed, tight loop that occupies virtually all of the processor's bandwidth, or it can be done at regularly scheduled, less frequent intervals. For example, a sonar system might launch a 40 kHz burst of audio energy from a speaker, and then start polling an input signal from a threshold detector in a tight loop to see if and when an echo signal exceeds a predetermined magnitude. Since the echo might return within a few microseconds, or at most a few milliseconds, running a polling loop that consumes all the processor's bandwidth for a relatively short time may be acceptable. At the other end of the spectrum, a processor might check a motion detector once every second to see if someone has entered a room. For this low data rate, it makes little sense to have the processor execute a short loop that repetitively and continuously checks the input signal (and thereby consumes all the processor's bandwidth).
An example polling loop is shown below. This loop assumes r0 holds the address of the polled data register, and that we want to check for any change (1 to 0 or 0 to 1).
Start_Poll:
            LDR  r1, [r0]      @ Initialize poll: read current data
Poll_loop:
            LDR  r2, [r0]      @ Get new data
            CMP  r1, r2        @ Compare new data with old data
            BEQ  Poll_loop     @ Loop until data changes
            .
            .                  @ Program continues
            .
            B    Start_Poll    @ Go poll again
Interrupt signals are asserted by peripheral devices when they need immediate attention. Some peripheral devices produce data, some consume data, and others need to perform some time-critical task. Data producers (like ADCs or sensors) have limited local data storage, so they must transfer their latest recorded data to the processor in a timely fashion, or risk skipping over new data or overwriting previously recorded data points. Data producers assert interrupt signals to tell the processor new data is available, and the processor typically has a limited amount of time to respond.
Data consumers (like speakers or motors) also have little or no data storage, and they must receive new data points within a tight time window, or risk failure. For example, if a speaker needs a new data point at, say, 96 kHz, the processor must deliver the next point on time or an audio "pop" may be heard on the speaker. Data consumers assert interrupts to tell the processor it's feeding time.
Whatever the source, interrupts must generally be dealt with relatively quickly. By definition, the processor is performing some other task when the interrupt occurs. The currently executing task may be more important or less important than the interrupt, and it may be on a time-critical tight schedule, or it may not be. This leads to a basic question - should the interrupt be taken? For that matter, should any interrupt be taken at any given time? And how can this general situation be dealt with systematically?
Interrupts can be individually enabled or disabled, and enabled interrupts can be assigned any one of several priority levels. If an interrupt occurs that has higher priority than the currently executing task, it will be taken; if not, it will be left pending until the operating priority level falls. Priority levels are assigned to each potential interrupt by the programmer, and they can be changed during runtime.
When an interrupt is asserted, it is requesting service from the CPU. The CPU’s currently executing process has a priority level - if the interrupt’s priority is higher, it will be taken. If not, it will be left pending until the CPU is able to process it.
Interrupt Service Routine (ISR)
An interrupt is taken (or “serviced”) by inserting the address of a new instruction into the PC. This obviously causes an abrupt change – whatever processing was happening is suddenly halted, and the “interrupt service routine” (ISR) begins executing. This context switch is sudden and unexpected, so it is imperative that the ISR preserve all register contents. Then when the ISR terminates, processing can resume seamlessly right where it left off.
The ISR is much like a subroutine, except it is unexpected and disruptive to program flow. Like subroutines, ISRs must have a label on the first line to associate their entry point with an address. Since ISRs aren't called by a branch instruction, these labels aren't used to form offset addresses for branches. Rather, these entry-point addresses are collected into a "vector table" stored at a predetermined address. Each interrupt must have its own entry point, or vector. Then, if and when the interrupt occurs, the associated vector is loaded into the PC (and the current PC is loaded into the LR). At the end of the ISR, after context has been restored, the final instruction is BX LR (note that the LR (R14) must be pushed onto the stack as well, in case another, higher-priority interrupt occurs while the ISR is running).
Some ISRs perform critical functions that must execute without interruption. These ISRs typically change priority levels (and/or disable other interrupts) so they cannot be interrupted.
Managing interrupts has become more challenging and complex as the number and type of interrupts has increased. Many processors (and processes) used in real-time systems spend the majority of their CPU bandwidth dealing with ISRs. Because of this increased complexity, interrupt controllers have become large and important parts of the processing system. A typical interrupt controller can have dozens of inputs, any number of which could be active at any one time. Any given interrupt may need to temporarily alter priority levels and other interrupt options, allow or disable nested interrupts, and temporarily manage other system resources.
Much of the interrupt controller’s work involves managing priorities, and determining which interrupt to service. Since setting up an interrupt controller is more or less a required task, and much of the work is detailed and somewhat obscure, it is fairly typical that design tools automatically insert “preamble” code into the execution environment to configure the controller into a default state. That is the case with the Xilinx® SDK that you are using. In the next module, you will look further into the ARM interrupt controller.
Interrupts vs. Polling
Most real-time systems use interrupts, simply because it is far more efficient to use interrupts than to burn CPU time in polling loops. But there are some situations where polling loops make sense: for "quick and dirty" prototype work, where you are learning about a hardware system and don't want to invest coding time until you have a better overall understanding of the system; for short-duration data gathering, when an event is expected in a narrow time window; for very slow data, when an occasional check suffices and no accurate internal time base is needed; and for slow-to-medium-rate data, when a time stamp is included with the data. There are other situations as well.