The database concepts, tools, and runtime software described in this article provide a methodology for creating advanced avionics human-machine interfaces (HMIs), such as those found in modern flat-panel display systems. The key innovation described is the ability to treat the avionics displays as databases rather than as compiled and linked software. This new technology facilitates the goals of open architecture for avionics, a key avionics industry trend.
Much of the development effort in a modern avionics system, such as the one illustrated in Figure 1, is expended creating graphical HMIs to display data and control the avionics system. In recent years, the complexity of HMIs has dramatically increased with new capabilities such as situational awareness fusion of terrain, radar, or traffic data; user-interactive control of systems and navigation; and complex systems displays to assist in managing the aircraft.
A key trend in avionics is change. Increasingly, avionics systems have to deal with new regulatory guidance and technologies being developed to address such issues as:
- Capacity of the Air Transport Management system (RNP, CNS/ATM)
- Integrated Hazard Awareness and new traffic management systems (ADS/B, TCAS, TAWS)
- Improved Flight Deck Communications (CPDLC)
Adapting to such change invariably impacts the HMI of the flight deck system. Multifunction controllers in the system require the ability to adapt and grow to meet new requirements. At the same time, the methods used to build these systems have not kept pace, and increasing complexity has made avionics systems more difficult and costly to change. New methods of creating avionics HMIs are required to address these emerging needs.
Avionics Display Development
An HMI is composed of dynamic graphical content that displays data and may handle user interaction. Developing an HMI typically involves hand-coding its graphical constructs and user-interaction elements against APIs from standard graphics and windowing libraries. An HMI may need to run on an embedded system with limited resources, making a correct implementation difficult. Writing HMI code by hand is tedious and error-prone.
For dynamic graphics in industries such as avionics, code generator tools have been employed. These take a graphical specification of the HMI (and, optionally, its logical behavior) created by a developer in a modeling tool and generate source code to implement the HMI. This code is integrated through a compilation and linking process into an embedded system, and may also be retargeted to desktop training and prototyping systems. An advantage of code generators is that they are often designed to work within a known software architecture or application framework, easing the design burden on the developer, who can use the generator to customize the design for a particular application.
When used for HMIs, code generators most often generate code that is conformant to a graphics API, such as OpenGL. This is done so the generated code can take advantage of hardware-accelerated graphics drivers. Unfortunately, directly generating graphics driver calls does not support open architecture, because graphics drivers are too close to the hardware and are not standardized. A more robust solution would support usage of an HMI in an efficient manner, but without binding the HMI to a particular hardware implementation.
For user-interface elements, including commonly used widgets, dialog boxes, and menus, a more common approach is to use a GUI builder tool to create the display; the tool produces a resource file that can be loaded at runtime by a UI system. Developing HMIs this way is typically faster than code generation, but it tends to limit the developer to a very narrow widget set, which is good for standardization but poor for freedom of development, and it leaves little room for optimization. The approach also depends heavily on the existence of an underlying UI system, such as X/Motif, Microsoft Windows, or, for avionics, an ARINC 661 CDS.
Other industries based on advanced, high-performance graphics that must be portable across different platforms have evolved better methods for tackling their HMI and graphic rendering requirements. Video games almost exclusively rely on model-based development techniques for specifying and rendering graphics. A set of standard model formats, such as those output from 3D modeling tools including 3D Studio Max and Maya, are available as the medium of exchange between the modeling tool and the runtime software. Graphical objects are never represented by handwritten code, but rather as a database conforming to a model specification. A software platform, typically referred to as a game engine, is written to ingest and present the models in real time on the target system. Game engines also support scripted models to allow non-programmers to create a large percentage of the visual content. Without using code generators, game engines support high-performance graphical content on resource-constrained embedded systems, such as cell phones. Because games share many of the same requirements as high-performance embedded systems, it is worthwhile to consider their development paradigm for embedded systems.
All HMIs implement some requirement that specifies the data and functions the HMI must perform. More often than not, the details are left to the HMI developer, who may be a software engineer, a systems engineer, or a human-interface specialist. A key step is refining the high-level requirements into detailed requirements that capture the expected behavior of the system. A model-based development tool suite may be used to allow a system designer to define the HMI and its behaviors. This tool suite should support WYSIWYG editing with graphical and logical modeling capabilities. Ideally, the tool would provide real-time feedback of the data dynamics and would allow the HMI developer to validate the dynamics of the system as the model is developed.
In high-integrity applications such as avionics, traceability of each stage of implementation to a higher requirement is essential. A typical requirements statement might read:
<RR 2.1.2> The NP gauge shall graphically indicate the engine speed (input NPData) and shall indicate abnormal conditions by color change.
To implement this requirement in a tool, graphics that represent the intended appearance of the HMI must be defined. This graphical content of an HMI may be represented as a tree composed of geometric elements. In the example in Figure 2, the pie shape is tied to an input called NPAngle, and the color of the pie is tied to a variable called NPColor. The textual readout is directly tied to the incoming NPData value.
The expected behavior of an HMI element can involve higher-level HMI dynamics that cannot be directly specified as transformations or attributes of the geometry itself. This might include mathematical expressions used to derive values for display, event- or state-based logic governing HMI behavior, or invocation of output behaviors based on user inputs. This may be referred to as presentation or backing logic and is often implemented by writing textual requirements followed by implementing code in a language such as C, C++, or Ada. Unfortunately, representing such logic as source code defeats the purpose of open architecture, because custom, untrusted code cannot be allowed to execute in an open-architecture certified avionics system. Another approach is required, one that supports logical specification and execution without code.
To specify the logic that governs the pie’s visual appearance and color change, logical transformations based on the input data must be defined. In order to do so, the HMI model must specify how the color of the element changes based on the input data. The structure of the logic tree allows one to specify a complex set of if/then/else conditions to fully specify each condition and the resulting actions.
The above process can be repeated for all elements within the HMI. The logical conditions can include logic that executes based on user-input events. The actions resulting from logic can include calls to external functions, allowing the developer to specify the behavior of the system based on its inputs. When the specification is complete, it can be reviewed and tested. At this point, the model represents the low-level requirements of the HMI, and it is ready to be moved to the next stages of the process including testing and deployment in an open-architecture HMI system.
The data format is the key to an open-architecture HMI solution. Because of the high level of dynamics, the HMI database format must represent more than just geometry; it must have logical and behavioral representations, as created during the model-based development phase. In addition to fully representing the HMI definition, the data format must also allow for efficient run-time processing of the HMI to meet the goals of an efficient open-architecture solution. A data format that merely echoes the model would require extensive run-time optimization to be effectively deployed on a limited embedded system.
The solution must allow for the equivalent of a compilation step while retaining the data-driven nature demanded by open architecture and certification. To realize the goals of our system, we designed a database format, called the HMI Specification Language (HSL). HSL is an efficient, compact HMI representation format that captures geometry, logic, and user interactivity in a data-driven format. A graphical view of HSL is shown in Figure 3.
We implemented the run-time system, or HSL Rendering Library (HRL), to handle the task of rendering the HMI geometry and logic, maintaining the value of data inputs, and managing user interaction.
In our design, the HRL renders with OpenGL to take advantage of hardware acceleration, and supplies an API that allows HMI embedded logic or external applications to manipulate elements of the HSL database at run time. The user application interacts primarily with the variables that define the HMI's interface, by any means available, including passing variables and events over an avionics data bus if required. A key feature of the HRL is that, as the underlying graphics API implementation changes, the HRL can be modified to address different underlying graphics APIs with no change to the database format or capabilities of the system. This provides a significant advantage in managing obsolescence.
The HRL also contains the control state management system, otherwise known as the “window” manager. This component has the capability to accept user input and reflect user events from a cursor control device or keyboard back to the variable system. The variable system maintains a table of events that are user interactions with the system. Events can be accessed at different levels of the HMI logic to accomplish both low-level control and higher-level system actions. Figure 4 shows how the tools and run-time components of the dynamic display system fit together.
The data-driven avionics HMI solution provides extensive benefits:
- Early Testing Advantages — While modeling the HMI components, the ability to fully exercise the HMI using the tool environment is valuable. System designers are able to create complete HMI models without being concerned with underlying system details or code.
- Code Elimination — The system eliminates much coding in the development and integration of applications. Up-front modeling replaces coding, and integration based on HSL proves very flexible in reusing HMIs across application form factors.
- Increased Performance — The rendering library has been deployed on embedded systems ranging from PDAs and automotive processors to embedded avionics computers. The optimization approach supported by the HSL solution has resulted in very good performance on a wide range of avionics target computers.
Overall, a data-based HMI approach has demonstrated substantial benefits that are key to an open-architecture avionics solution. The elimination of code, while retaining the optimized nature of a code solution, is the key benefit of the technology. The benefits include reduced system complexity, simpler system architecture, and easier system development and maintenance. An example of a set of avionics displays is shown in Figure 5. Each display is a separate HSL database output from the modeling tool; the displays are composited together.
This article was written by Mark Snyder, director of Embedded SW Engineering for Quantum3D, Glendale, AZ.