How to efficiently combine test methods for an
automated ISO 26262 compliant software
unit/integration test
Markus Gros
Vice President Marketing & Sales
BTC Embedded Systems AG
Berlin, Germany
markus.gros@btc-es.de
Abstract— The verification of embedded software in today’s
development projects is becoming more and more of a challenge.
This is particularly true for the automotive industry, where
rapidly growing software complexity is combined with shortened
development cycles and an increasing number of safety-critical
applications. New methodologies like model-based design or agile
processes clearly help to make development more efficient, but at
the same time they bring additional challenges for the test
process. One effect, for example, is that tests need to be executed
earlier, more often, and, due to the model-based development
approach, on more execution levels such as MIL, SIL and PIL. A
further dimension of complexity comes from the fact that one test
method alone is not enough to achieve the necessary confidence in
the correctness and robustness of the system under test. This
conclusion is also reflected in standards like ISO 26262, which
recommend a combination of different test activities on model
and code level.
This paper presents a concept for an integrated verification
platform for models and production code which addresses the
challenges explained above by focusing on three main aspects:
integration, separation and automation. The integration aspect
can be divided into two different approaches. First of all, the
platform should be integrated with other development tools such
as the modelling tool, the requirements management tool or the
code generator. All information needed for the verification of a
component should be extracted as automatically as possible,
including information about interfaces, data types, data ranges,
requirements and code files. As this kind of information is needed
in a similar way by different verification methods, the second
integration approach consists of integrating different test
methodologies on top of a shared database within one
environment. The first obvious benefit is that the information
described above needs to be extracted only once for all
verification activities, which can include guideline checking, static
analysis, dynamic analysis and formal methods. We will also
describe a second benefit arising from the fact that these
different methods can benefit substantially from each other’s
results.
Separation means that software units shall be thoroughly verified
before they are integrated into software components. Integrated
components are then verified against the software architecture
definition. The verification platform should support this divide-
and-conquer approach as recommended and described in ISO
26262 and Automotive SPICE. One final topic to be discussed is
automation, which should be enabled by a complete API as well
as integration with technologies like Jenkins. The discussed
verification platform approach automates many testing activities,
from the more mundane work of developing model- and code-
centric test harnesses to the more sophisticated task of automatic
test generation.
Keywords—Model-based Development, ISO 26262, Software
Unit Test, Software Integration Test
I. INTRODUCTION
In today’s development projects for embedded software,
complexity is growing in many dimensions, which brings
particular challenges for the test and verification process.
The size of software in terms of lines of code and number of
software components is constantly growing, which obviously
also increases the number of test projects and test cases. On top
of this, the Model-based development approach is becoming
more and more popular and, despite all advantages, it brings
some additional challenges to the testing workflow because test
activities need to be done on model level as well as on code
level. While these observations seem to lead to an increasing
test effort, it is also obvious that the competitive pressure in the
industry leads to a need to control or even reduce development
cost and time. The amount of test activities is even further
increased by the adoption of agile development methods,
which require a frequent repletion of test tasks on slightly
modified components. As a consequence, software tools are
introduced in the process in order to automate tasks like test
execution, test evaluation or report generation.
A further challenge, seen in particular in the automotive
industry, is that software is increasingly taking over safety-
critical features related to steering or braking, slowly paving
the way towards fully autonomous vehicles. The level of
confidence needed for these kinds of features can only be
achieved by combining multiple test methods. This is also
reflected in standards like ISO 26262 and leads to a growing
number of software tools which contribute to the
overall quality metrics. While specialized software tools for
individual verification tasks are available, the growing number
of tools inside a development project becomes more and more
difficult to manage. Reasons include:
• Every software tool comes with specific limitations
regarding the supported environment (e.g. versions
of Microsoft Windows, Matlab etc.) and the
supported language subset (e.g. supported
Simulink blocks). Cross-checking all limitations
before selecting the tools and tool versions for a
specific project is a time-consuming and error-
prone task.
• While different software tools in a project address
different use cases, they often also have features
and needs in common. One example in the
verification context is that every tool needs
information about the system under test (SUT),
which typically includes details about the
interface, data ranges or the list of files needed to
compile or simulate. Importing the SUT into
different tools is not only a redundant task, it is
also error-prone, as the user needs to learn and
apply different workflows for the same task.
• As software tools often use different file formats
for storing data or reports, users need to learn
different tool-specific conventions and need to
store and analyze reports in different environments
and formats. For automation, APIs, if available at
all, might be based on different concepts or only
be available in different programming languages.
• When different test methods in a model-based
process are applied independently, they typically
do not benefit from each other’s results.
This paper presents the concept of a test platform for
software unit test and software integration test within a
model-based development process including automatic code
generation. While Section II presents the core features of the
platform, Sections III to VI focus on the main benefits, which
we call integration, separation and automation. Several aspects
of the described approach have already been integrated in the
commercial tool BTC EmbeddedPlatform.
II. CORE FEATURES
This section describes common needs and features that are
implemented redundantly in different tools designed for
different test methods. The benefits of providing these features
once and making them available to all test methods are
described in Section III.
A. Import of the system under test
The starting point of any test activity is to provide
information about the SUT to the test tool. As we assume a
model-based development process, we will consider at least
two levels for test activities: Simulink/Stateflow models as well
as production C code. Relevant information includes:
• List of needed files and libraries for model
(models, libraries, data dictionaries, .m/.mat files)
and code level (.c/.h files, include paths)
• Structure of subsystems in the model and structure
of functions in the production code
• List of interface objects on both levels. The main
interface types are inputs, outputs as well as
calibration parameters and observable internal
signals. Interface objects can be scalar variables,
vectors or arrays, or they can be structured in the
form of bus signals or C code structures. Additional
important information for each interface object
includes data types, scalings and data ranges.
• For test execution, a test frame needs to be
available on both levels. In particular on unit test
level, this might include the need to generate stub
implementations for external functions and
variables; a minimal sketch of such a generator
follows below.
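To illustrate the last point, the following minimal sketch generates C stubs for external functions of a unit under test from interface information the platform would hold. The interface description format and the function names are hypothetical illustrations, not taken from any specific tool.

```python
# Minimal sketch: generate C stubs for external functions of a unit under test.
# The interface description below is hypothetical example data, not a tool format.

EXTERNALS = [
    # (return type, function name, parameter list)
    ("int16_t", "ReadSensorRaw", "void"),
    ("void", "SetActuator", "uint8_t duty_cycle"),
]

def generate_stub(ret_type: str, name: str, params: str) -> str:
    """Emit a C stub that counts calls and returns a controllable value."""
    if ret_type == "void":
        body = "    stub_calls++;"
        decl = ""
    else:
        body = f"    stub_calls++;\n    return {name}_stub_return;"
        decl = f"{ret_type} {name}_stub_return;\n"
    return f"{decl}{ret_type} {name}({params})\n{{\n{body}\n}}\n"

if __name__ == "__main__":
    print("#include <stdint.h>")
    print("static unsigned stub_calls;")
    for ext in EXTERNALS:
        print(generate_stub(*ext))
```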
B. Requirements
The traceability to requirements is an important aspect of
test methods like requirements-based testing or formal
verification. The platform should be able to link test artifacts to
requirements in a bi-directional way.
C. Debugging
If tests fail, the platform should support debugging
activities on model and code level.
D. Reporting
It should be possible to generate report documents for all
test activities in an open format like HTML. Creating the
different types of reports with a common look and feel supports
clarity and makes them easier to read.
III. TOOL INTEGRATION
A tight integration between the test platform and other tools
used inside the development project is a key prerequisite for an
efficient and automated workflow. In this context, we can
identify three main types of tools to connect to.
In the context of a model-based development approach with
automatic code generation, the most important tools to
integrate with are the modelling environment (e.g.
Simulink/Stateflow) and the code generator (e.g. dSPACE
TargetLink or EmbeddedCoder). This integration should
enable a highly automated import and analysis of the SUT as
described in II.A. A manual setup of the test project, or a semi-
automated approach via third-party formats like Excel, should
be avoided for efficiency reasons and to avoid errors.
As requirements play an important role, the platform should
provide a direct connection to requirements management tools
like IBM DOORS or PTC Integrity. It should be possible to
automatically import the desired subset of requirements and to
write information about test results back to the requirements
management tool as additional attributes.
Especially in larger projects where a lot of developers and
test engineers are involved, a global data management platform
might be available providing features like centralized access to
all development and
test artifacts, version and variant
management or the control of access rights. This kind of tool
also has the potential to collect quality metrics for different
components and make them accessible on a project wide level.
Therefore, the test platform should be able to integrate with
such a data management platform in a bi-directional way in
order to obtain information about the SUT and in order to
provide test metrics back to it.
IV. INTEGRATION OF TEST METHODS
As already mentioned above, the confidence needed for the
development of embedded systems can only be achieved by a
combination of different test methods. Combining different test
methods inside one platform brings two main benefits. The
first obvious benefit is that the features described in Section II
can be accessed and shared by all test methods, increasing
efficiency and avoiding redundant tasks. Being located in the
same environment, some of the relevant test methods also have
the potential to benefit from information about each other’s
results. Relevant tasks in this context are:
a. Requirements-based Testing: Functional test cases
should be derived from requirements and applied on
model and code level. The creation of these test cases
clearly benefits from the detailed information the
platform has about the SUT including available
interface variables and data ranges. This way, the test
editor can already protect the user against invalid data
entry. Other platform features needed for this task
include the capability to run simulations, the
availability of requirements, and the debugging and
reporting features.
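A minimal sketch of how such a test editor could use the imported interface information to reject invalid input data; the signal names and ranges are hypothetical stand-ins for what the platform would extract from the model.

```python
# Sketch: validate a requirements-based test step against imported data ranges.
# The interface description (names, ranges) is hypothetical example data.

INTERFACE = {
    "speed_kmh":   {"type": "uint8", "min": 0,   "max": 250},
    "temperature": {"type": "int16", "min": -40, "max": 125},
}

def validate_step(step: dict) -> list:
    """Return a list of violations for one test step (empty list = valid)."""
    errors = []
    for signal, value in step.items():
        spec = INTERFACE.get(signal)
        if spec is None:
            errors.append(f"unknown signal '{signal}'")
        elif not (spec["min"] <= value <= spec["max"]):
            errors.append(f"{signal}={value} outside [{spec['min']}, {spec['max']}]")
    return errors

print(validate_step({"speed_kmh": 120, "temperature": 20}))  # []
print(validate_step({"speed_kmh": 300}))                     # range violation
```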
b. Analysis of equivalence classes and boundary values:
Both methods are recommended by ISO 26262 and
target an analysis of different values and value ranges
for interface variables. These tasks will benefit from
the fact that the platform already contains information
about all available functions, their interface signals and
the data ranges. The outcome of this activity should be
a set of test cases which cover the defined variable
ranges and values; it therefore makes sense to combine
this analysis with the requirements-based testing
activity.
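For a scalar input with a declared range, such values can be derived mechanically. The sketch below, using hypothetical range data, picks one representative per equivalence class (below, inside, above the valid range) plus the boundary values themselves.

```python
# Sketch: derive equivalence-class representatives and boundary values
# from a declared data range. The range data is hypothetical example input.

def boundary_values(lo: int, hi: int) -> list:
    """Values at and just inside/outside the valid range [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def class_representatives(lo: int, hi: int) -> list:
    """One value from each class: below range, inside range, above range."""
    return [lo - 10, (lo + hi) // 2, hi + 10]

# Example: an 8-bit speed signal limited to 0..250 km/h
print(boundary_values(0, 250))        # [-1, 0, 1, 249, 250, 251]
print(class_representatives(0, 250))  # [-10, 125, 260]
```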
c. Analysis of model and code coverage: In order to
assess the completeness of the test activities, structural
coverage metrics should be measured on model and
code level. Thanks to the integration with the
Matlab/Simulink environment, model coverage can
easily be measured via standard mechanisms. For code
coverage, the code needs to be instrumented and all
available tests need to be executed on the instrumented
code. As the platform has access to a compilable set of
code and header files, this analysis can be handled
fully automatically.
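The instrumentation itself is tool-specific, but the bookkeeping behind a coverage report is simple. A sketch under the assumption that each instrumented branch reports a probe ID when executed; probe names and test logs are hypothetical example data.

```python
# Sketch: compute branch coverage from probe hits reported by instrumented code.
# Probe IDs and test logs are hypothetical example data.

ALL_PROBES = {"f1_if_true", "f1_if_false", "f2_case_a", "f2_case_b", "f2_default"}

# Probe IDs reported during execution of each test case
test_logs = {
    "tc_001": {"f1_if_true", "f2_case_a"},
    "tc_002": {"f1_if_false", "f2_case_b"},
}

covered = set().union(*test_logs.values())
missing = ALL_PROBES - covered

print(f"branch coverage: {100.0 * len(covered) / len(ALL_PROBES):.1f}%")
print(f"uncovered branches: {sorted(missing)}")
```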
d. Check for modelling and coding guidelines: These
kinds of static analysis checks can be fully automated,
provided the list of model and code artifacts is available.
Modelling guidelines for example can check for
prohibited block types, wrong configuration settings or
violations of naming rules. An example for coding
guidelines are the widely used MISRA C rules.
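To give a flavour of such checks, the sketch below scans C source lines for two simple rule violations. Both rules are simplified illustrations in the spirit of MISRA C, not implementations of actual MISRA rules.

```python
# Sketch: simplistic static guideline checks on C source text.
# The two rules are illustrative simplifications, not real MISRA C rules.

import re

RULES = [
    (re.compile(r"\bgoto\b"), "use of 'goto' is prohibited"),
    (re.compile(r"^\s*#define\s+[a-z]"), "macro names shall be upper case"),
]

def check(source: str) -> list:
    """Return (line number, message) pairs for every rule violation found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

example = "#define max_speed 250\nif (err) goto cleanup;\n"
for lineno, message in check(example):
    print(f"line {lineno}: {message}")
```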
e. Analysis of runtime errors: This static analysis is
typically done on code level by applying the abstract
interpretation method. This methodology requires
access to the list of code and header files and it also
benefits from getting information about data ranges of
variables. If some analysis goals are already covered
by existing tests, it might be possible to exclude them
from the analysis to increase efficiency.
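The core idea of combining abstract interpretation with data ranges can be illustrated with interval arithmetic: if the analysis can bound a divisor away from zero, the division is proven safe for all inputs. A minimal sketch with hypothetical signal ranges:

```python
# Sketch: interval arithmetic as used by abstract interpretation to
# prove the absence of runtime errors. Ranges are hypothetical example data.

def interval_add(a, b):
    """Sum of two intervals: lower bounds add, upper bounds add."""
    return (a[0] + b[0], a[1] + b[1])

def division_safe(divisor_range) -> bool:
    """A division is safe if the divisor interval excludes zero."""
    lo, hi = divisor_range
    return lo > 0 or hi < 0

# Suppose the platform knows: rpm in [800, 6000], offset in [100, 500]
rpm, offset = (800, 6000), (100, 500)
divisor = interval_add(rpm, offset)  # [900, 6500] -- cannot be zero
print("division by (rpm + offset) safe:", division_safe(divisor))  # True
```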
f. Resource consumption: This means analyzing the
resource consumption on the target processor regarding
RAM, ROM, stack size and execution time. One
option is to measure these metrics during the test
execution on a real or virtual processor, which the
platform should be able to call. This measurement is of
course only possible if a sufficient set of test cases
covering different paths in the software is available.
g. Structural Test Generation: In order to maximize
structural coverage metrics on model and code level,
test cases can be generated automatically either by
random methods or using model checking. This task
can benefit dramatically from the availability of
requirements-based test cases, as only uncovered parts
need to be analyzed. Structural tests can be used e.g.
for showing robustness of the SUT and for Back-to-
Back as well as regression testing.
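A minimal sketch of the random flavour of this method: inputs are drawn at random and a test is kept only if it increases branch coverage of a small example function. (Model checking, the second option mentioned above, would instead derive such inputs by exhaustive reasoning.) The SUT and probe names are hypothetical illustrations.

```python
# Sketch: random structural test generation for a small example function.
# The SUT and its coverage probes are hypothetical illustrations.

import random

def sut(x: int, hits: set) -> int:
    """Example unit under test with two branches, reporting probe IDs."""
    if x > 100:
        hits.add("gt_100")
        return x - 100
    hits.add("le_100")
    return x

random.seed(0)
covered, kept_tests = set(), []
for _ in range(1000):
    x = random.randint(-32768, 32767)
    hits = set()
    sut(x, hits)
    if not hits <= covered:  # keep the test only if it adds new coverage
        covered |= hits
        kept_tests.append(x)
    if covered == {"gt_100", "le_100"}:
        break

print("generated tests:", kept_tests, "covered:", sorted(covered))
```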
h. Back-to-Back Testing: Back-to-Back Testing between
models and code is (highly) recommended by ISO
26262 and it obviously requires test cases (functional
and/or structural), the ability to run them on the
different execution levels and the generation of
corresponding reports.
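Conceptually, a back-to-back check reduces to comparing the output traces of two execution levels sample by sample, typically with a tolerance that absorbs fixed-point quantization in the generated code. A hedged sketch with made-up traces and tolerance:

```python
# Sketch: back-to-back comparison of MIL and SIL output traces.
# Traces and tolerance are hypothetical example data.

def back_to_back(mil: list, sil: list, tol: float) -> list:
    """Return the sample indices where the two traces diverge beyond tol."""
    assert len(mil) == len(sil), "traces must have equal length"
    return [i for i, (m, s) in enumerate(zip(mil, sil)) if abs(m - s) > tol]

mil_trace = [0.0, 1.50, 3.00, 4.51]  # floating-point model outputs
sil_trace = [0.0, 1.50, 3.02, 4.50]  # fixed-point code outputs

deviations = back_to_back(mil_trace, sil_trace, tol=0.01)
print("PASSED" if not deviations else f"FAILED at samples {deviations}")
```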
i. Formal Specification: Textual (or informal)
requirements often leave some room for ambiguities or
misunderstandings. Expressing requirements in semi-
formal or formal notation (as recommended by ISO
26262) not only improves their quality, it also allows
them to be used as a starting point for some highly
automated and efficient verification methods (see
below). The formalization process requires information
provide traceability to existing informal requirements
from which the formal notation is derived. Both are
already provided by the platform concept.
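One common way to make a textual requirement machine-readable is a trigger/response pattern over signal traces. The sketch below encodes the hypothetical requirement "whenever the brake pedal is pressed, the brake light shall be on within one step" as an executable monitor; the pattern style is illustrative, not a specific tool's notation.

```python
# Sketch: a formalized trigger/response requirement as an executable monitor.
# Requirement (hypothetical): if brake_pedal is pressed, brake_light
# shall be on within at most one step.

def monitor(trace: list, max_delay: int = 1) -> list:
    """Return the step indices where the requirement is violated."""
    violations = []
    for i, step in enumerate(trace):
        if step["brake_pedal"]:
            window = trace[i:i + max_delay + 1]
            if not any(s["brake_light"] for s in window):
                violations.append(i)
    return violations

trace = [
    {"brake_pedal": False, "brake_light": False},
    {"brake_pedal": True,  "brake_light": False},
    {"brake_pedal": True,  "brake_light": True},  # response within 1 step
]
print("violations at steps:", monitor(trace))  # []
```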
j. Requirements-based Test Generation: As the
previously described formalized requirements are
machine-readable, they can be used as a starting point
for an automatic generation of test cases which will test
and cover the requirements. If these requirements do not
describe the full behavior of the system, the SUT itself
(available in the platform) can contribute to the
process. If manual test cases already exist, they can be
analyzed regarding their requirements coverage, so that
only missing tests need to be generated.
k. Formal Test: In a Requirements-based Testing process,
every test case is usually only evaluated with respect to
the requirement from which it has been derived. A
situation where a particular test case violates a
different requirement typically goes undetected. By
performing a Formal Test, all test cases are evaluated
against all requirements, which dramatically increases
the testing depth without the need to create additional
test data. Obviously, this method benefits from a
platform in which formalized requirements and
functional/structural test cases are managed together
for a particular SUT.
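With executable monitors like the one sketched under item i., a formal test is conceptually a nested loop: every stimulus trace is replayed and every monitor is evaluated on the result, not only the one it was derived from. A minimal sketch with hypothetical monitors and traces:

```python
# Sketch: Formal Test = evaluate every test trace against every
# formalized requirement. Monitors and traces are hypothetical data.

def req_light(trace):  # brake light follows pedal within one step
    return all(not s["pedal"] or
               any(t["light"] for t in trace[i:i + 2])
               for i, s in enumerate(trace))

def req_no_ghost(trace):  # light must never be on without the pedal
    return all(s["light"] <= s["pedal"] for s in trace)

MONITORS = {"REQ_LIGHT": req_light, "REQ_NO_GHOST": req_no_ghost}
TRACES = {
    "tc_001": [{"pedal": True,  "light": True}],
    "tc_002": [{"pedal": False, "light": True}],  # violates REQ_NO_GHOST
}

for tc, trace in TRACES.items():
    for req, holds in MONITORS.items():
        if not holds(trace):
            print(f"{tc} violates {req}")
```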
l. Formal Verification: The number of possible value
combinations for input signals and calibration values is
almost infinite for a typical software component. It is
therefore obvious that even a large number of test
cases can never cover all possible paths through the
component. Formal Verification with Model-Checking
technology can automatically provide a complete
mathematical proof that shows a requirement cannot be
violated by the analyzed SUT. This guarantees that
there is no combination of input signals and calibration
values that would drive the system to a state in which
the requirement is violated. The analysis takes the SUT
as well as the formalized requirement(s) as an input. If
a requirement can be violated, a counterexample is
provided in the form of a test case, which can then be
debugged to find the root cause for the possible
requirement violation.
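Model checking achieves this proof symbolically; for a toy SUT with a small finite input domain, the same guarantee can be illustrated by brute-force enumeration, which is all the sketch below does. Real components are far beyond enumeration, which is exactly why model-checking technology is needed; the SUT and requirement here are hypothetical.

```python
# Sketch: exhaustive proof that a requirement holds for ALL inputs of a
# toy SUT. Illustrates the guarantee a model checker provides symbolically.

from itertools import product

def sut(speed: int, limiter_on: bool) -> int:
    """Toy component: limit the output speed to 250 when the limiter is on."""
    return min(speed, 250) if limiter_on else speed

def requirement(speed: int, limiter_on: bool, out: int) -> bool:
    """If the limiter is on, the output shall never exceed 250."""
    return (not limiter_on) or out <= 250

counterexample = next(
    ((s, l) for s, l in product(range(0, 1024), [False, True])
     if not requirement(s, l, sut(s, l))),
    None,
)
print("proved" if counterexample is None else f"violated for {counterexample}")
```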
V. SEPARATION
The growing complexity in today’s embedded software
development projects can only be managed by a divide and
conquer approach. This concerns different disciplines including
requirement authoring, software architecture design, software
development and also testing. System requirements need to be
broken down to smaller units as part of a bigger architecture.
Afterwards, these units should be developed and tested
independently before being integrated. This process is also
reflected in the so-called V-Cycle as well as in ISO 26262
which on software level contains a clear separation between
software unit test and software integration test.
The test platform should support this approach mainly in
two ways. First of all, the tool should be flexible enough to
separate the SUT structure from the model structure. This
means it should be possible to individually test subsystems or
subcomponents which are managed inside one single model or
code file. Therefore, it is necessary to separate individual
subsystems from their original model and embed them in a
newly created test frame. A similar approach is also
needed on code level. When it comes to the integration testing
phase, the tool should be able to focus on the new tasks related
to potential integration issues. It should not be necessary to
repeat activities (like importing unit test cases) on the
integration level. This also means, for example, that metrics
like MC/DC coverage on individual units should be excluded
from the integration test process, as they have already been
demonstrated in the unit test. This can be achieved by skipping
the code annotation for those units during integration testing.
VI. AUTOMATION
As mentioned before, the number of test executions needed
within a project is growing constantly. One obvious reason is
the growing number of functions and features that need to be
tested. Also, the introduction of model-based development with
its different simulation levels MIL, SIL and PIL contributes to
this effect. However, probably the biggest contribution comes
from the fact that agile development methods become more
popular, which leads to tests being created early and more
frequently within a project, up to a situation where tests (at
least for the modified modules) run automatically as part of
nightly builds within a continuous integration approach.
For maximum flexibility in this context, the platform
should provide a complete API, allowing all tool features,
including test execution and reporting, to be automated. An
integration
with established continuous integration environments like
Jenkins is also helpful and can reduce the need to manually
script standard workflows.
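The concrete API differs from tool to tool. The sketch below shows the kind of scripted workflow a complete API enables, as it might run inside a Jenkins job; btc_api and all of its functions are made-up placeholder names, not the actual API of BTC EmbeddedPlatform or any other tool.

```python
# Sketch: automated test workflow as it could run inside a CI job.
# 'btc_api' and all of its functions are hypothetical placeholder
# names, not the actual API of any specific tool.

import sys
import btc_api  # hypothetical Python binding of the test platform

def nightly_run(profile_path: str) -> bool:
    project = btc_api.open_profile(profile_path)
    project.update_sut()                    # re-import the changed model/code
    results = project.execute_tests(levels=["MIL", "SIL"])
    project.export_report("report.html")    # open HTML format, see II.D
    return all(r.verdict == "PASSED" for r in results)

if __name__ == "__main__":
    # A non-zero exit code makes the Jenkins build fail on test failures
    sys.exit(0 if nightly_run("unit_xyz.epp") else 1)
```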
VII. CONCLUSION
This paper presented a concept for a verification platform
focusing on the software unit test and software integration test
of embedded software as part of an ISO 26262 compliant
model-based development process. While software becomes
more and more safety-critical in automotive applications, more
test methods need to be combined to achieve sufficient
confidence, leading to more tools being introduced into the
process. This number of independent tools leads to several
challenges and problems, which were described in section I. As
a solution, we propose a platform concept which provides some
common core features (described in section II) on top of which
the different test methods can be realized. This way they can
benefit from a shared database which provides general and
reusable information about the system under test, avoiding
redundant tasks that would need to be repeated for every test
method in different tool environments. We also described three
key features of this platform: Integration, separation and
automation. Several aspects of this concept are already
implemented in the commercially available product BTC
EmbeddedPlatform, which is also certified for ISO 26262 by
German TÜV Süd. Thanks to an open Eclipse-based
architecture, additional test methods described in this paper
could be added in the future either by BTC Embedded Systems
or by 3rd parties.