There is still some discussion over the definitions of the four terms: Integration, Interoperability, Compatibility and Portability. The aim of this note is to provide an explanation of the four terms as used by the Testing Standards Working Party.
To explain the terms, two basic entities are required: a component is one of the parts that make up a system, while a system is a collection of components organised to accomplish a specific function or set of functions (both definitions from IEEE 610).
Integration is concerned with the process of combining components into an overall system (after IEEE 610). In software, we are normally concerned with integration at two levels. First, there is the integration of components at the module level into a system – sometimes known as component integration testing or integration testing in the small. Second, there is the integration of systems into a larger system – sometimes known as system integration testing or integration testing in the large.
Integration testing is all-encompassing in that it is concerned not only with whether the interface between components is correctly implemented, but also with whether the integrated components – now a system – behave as specified. This behaviour covers both functional and non-functional aspects of the integrated system.
Figure 1 shows two components interacting to form an integrated system. Integration testing is concerned with whether the two components, when combined (integrated) to form an integrated system, behave as the system as a whole is expected to behave.
Figure 1: Integration
An example of integration testing would be where the aperture control component and the shutter control component were integrated and tested to ensure that together they performed correctly as part of the camera control system.
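The camera example can be sketched in code. This is a minimal illustration only, assuming hypothetical `ApertureControl`, `ShutterControl` and `CameraControlSystem` classes (none of these names come from the article): the integration test checks not just that the two components can call each other, but that the combined system produces the specified exposure.

```python
class ApertureControl:
    """Hypothetical component: chooses an aperture (f-number) for a light level."""
    def select_aperture(self, light_level: float) -> float:
        # Brighter scene -> smaller aperture (larger f-number).
        return 16.0 if light_level > 0.5 else 2.8

class ShutterControl:
    """Hypothetical component: chooses a shutter speed for a given aperture."""
    def select_speed(self, f_number: float) -> float:
        # Smaller aperture (larger f-number) -> longer exposure.
        return 1 / 60 if f_number >= 8.0 else 1 / 500

class CameraControlSystem:
    """The integrated system: combines both components to set an exposure."""
    def __init__(self):
        self.aperture = ApertureControl()
        self.shutter = ShutterControl()

    def expose(self, light_level: float):
        f_number = self.aperture.select_aperture(light_level)
        return f_number, self.shutter.select_speed(f_number)

# Integration test: not merely "does the interface work?" but "does the
# integrated system behave as specified?"
camera = CameraControlSystem()
assert camera.expose(0.9) == (16.0, 1 / 60)   # bright scene
assert camera.expose(0.1) == (2.8, 1 / 500)   # dark scene
```

The assertions at the end are the integration test proper: they exercise the combined behaviour of both components through the system's interface, which is exactly the scope the article gives to integration testing.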
Interoperability is the ability of two or more systems (or components) to exchange information and subsequently use the information that has been exchanged (after IEEE 610). So interoperability is concerned with the ability of systems to communicate – it requires that the communicated information can be understood by the receiving system – but it is not concerned with whether the communicating systems do anything sensible as a whole. The interoperability between two systems could be fine, yet whether the two systems together performed any useful function would be irrelevant as far as interoperability was concerned. Interoperability is therefore concerned with the interfaces (as is integration) but not with whether the communicating systems as a whole behave as specified. Thus interoperability testing is a subset of integration testing.
Figure 2: Interoperability
Figure 2 shows two systems communicating with an interface in each system to handle the communication. The interface provides the information for use by the receiving system at the point marked ‘X’. Interoperability testing is limited to checking that information is correctly communicated from one system and arrives at the other system at the point marked ‘X’ in a state in which it could be used.
An example of interoperability testing would be where flight information is passed between the (separate) booking systems for two airlines. Interoperability testing would test whether the information reached the target system and still meant the same thing to the target system as the sending system. It would not test whether the target system subsequently used the booking information in a reasonable manner.
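The airline example can be sketched as follows. This is an illustrative assumption, not the article's implementation: two hypothetical booking systems exchange a flight record as JSON, and the interoperability test stops at point 'X' – the record arrives and still means the same thing to the receiver. It deliberately does not check what the receiver goes on to do with the booking.

```python
import json

def send_booking(flight: dict) -> str:
    """Sending system serialises a booking record for transmission."""
    return json.dumps(flight)

def receive_booking(message: str) -> dict:
    """Receiving system decodes the record into a form it can use (point 'X')."""
    return json.loads(message)

original = {"flight": "BA123", "date": "2024-07-01", "seats": 2}
received = receive_booking(send_booking(original))

# Interoperability test: the information reached the target system and
# still means the same thing as it did to the sender - nothing more.
assert received == original
```

A fuller integration test would go further and also assert that the target system subsequently used the booking in a reasonable manner; that extra step is precisely what separates the two terms.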
Compatibility is concerned with the ability of two or more systems or components to perform their required functions while sharing the same environment (after IEEE 610). The two components (or systems) do not need to communicate with each other, but simply be resident in the same environment – so compatibility is not concerned with interoperability. Two components (or systems) can also be compatible but perform completely separate functions – so compatibility is not concerned with integration, which considers whether the two components together perform a function correctly.
Figure 3: Compatibility
Figure 3 shows two components in the same environment. They are compatible with each other as long as both can run (or simply reside) on the environment without adversely affecting the behaviour of the other.
An example of compatibility testing would be to test whether word processor and calculator applications (two separate functions) could both work correctly on a PC at the same time.
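A sketch of that example, under loud assumptions: the `Environment`, `WordProcessor` and `Calculator` classes below are invented stand-ins, and the shared environment is modelled as a simple container. The point is that the two applications never communicate; the compatibility test only checks that each still performs its own function while both are resident.

```python
class Environment:
    """Stand-in for a shared environment (e.g. a PC) hosting applications."""
    def __init__(self):
        self.installed = []

    def install(self, app):
        self.installed.append(app)

class WordProcessor:
    """One application with its own, separate function."""
    def word_count(self, text: str) -> int:
        return len(text.split())

class Calculator:
    """An unrelated application sharing the same environment."""
    def add(self, a: float, b: float) -> float:
        return a + b

env = Environment()
wp, calc = WordProcessor(), Calculator()
env.install(wp)
env.install(calc)

# Compatibility test: each application works correctly while the other is
# also resident - no communication between them is required or checked.
assert wp.word_count("testing standards working party") == 4
assert calc.add(2, 3) == 5
```

Note that nothing here asserts anything about an interface between the two applications; that absence is what makes this compatibility testing rather than interoperability or integration testing.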
Portability is concerned with the ease of moving components or systems between environments (hardware and/or software environments). In Figure 4, component X can be seen in two different environments.
Figure 4: Portability - I
Similarly, in Figure 5, system P can be seen in two different environments. As long as component X and system P can work correctly in the different environments then they are considered to be portable components and systems.
Figure 5: Portability - II
An example of portability testing would be when a computer game that worked on a PC running Windows 98 was then tested to determine whether it worked on a PC running Windows XP.
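The same idea in code, with the caveat that everything here is a made-up stand-in: a hypothetical game save routine (component X) is exercised unchanged against two environment descriptions standing in for Windows 98 and Windows XP. The portability test simply runs the identical component in each environment and checks it still works correctly.

```python
def save_path(env: dict, filename: str) -> str:
    """Component X: builds a save-file path from whatever environment hosts it."""
    return env["save_dir"] + env["separator"] + filename

# Two hypothetical environments; the component itself is unchanged.
win98 = {"separator": "\\", "save_dir": "C:\\GAMES"}
winxp = {"separator": "\\", "save_dir": "C:\\Documents and Settings\\player"}

# Portability test: the same component works correctly in both environments.
assert save_path(win98, "save1.dat") == "C:\\GAMES\\save1.dat"
assert save_path(winxp, "save1.dat") == (
    "C:\\Documents and Settings\\player\\save1.dat"
)
```

In practice, of course, the environments differ in far more than a path convention; the sketch only shows the shape of the test – one component, several environments, the same expected behaviour in each.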
Comment from Grenville Johns: Compatibility Testing – Testing that two or more computer applications can run in the same environment without affecting each other's behaviour.
My reason for this suggestion is that the present definition is more akin to interface testing than compatibility testing and this definition was seen as helpful at a recent technique review.
Comment from Neil Hudson: (1) No comment on the general meaning of each term; they are distinct and fine.
(2.1) IEEE 610 refers to 'exchange information'. This terminology implies the exchange of data, which is too limited a scope. Interactions between components take other forms, including:
(a) Triggering actions by another component/system.
(b) Invoking a service on another component/system that generates either a synchronous or an asynchronous response.
Referring to these types of interactions as an 'exchange of information' is too abstract. It is probably inappropriate to try to list each type of interaction in a definition, so rather than 'exchange information' it is suggested the term 'interact' be used.
(2.2) IEEE 610 states 'to use the information that has been exchanged'. Again this is restricted to 'information' and also does not indicate what 'use' the information is put to. Interoperability is an aspect of integration, and the 'use' should be achieving the correct behaviour of the integrated system.
(2.3) Based on the above two points it is suggested that interoperability should be:
"Interoperability: The ability of two or more systems or components to interact in the ways required to implement the correct operation of the higher level integrated system."
(3) COMPATIBILITY v INTEROPERABILITY
At the last meeting a discussion arose regarding checking for interference due to a mix of versions of the same applications (present during a roll-out) within a large network environment. It could not be agreed whether this was compatibility or interoperability testing. A suggested view on this is:
(1) Checking that, if version Fred writes data into a database, version Charlie can operate on the data correctly is INTEROPERABILITY or INTEGRATION testing. Because they operate on shared data, the different versions of the components are in reality different components of the system. They interact through shared data.
(2) Checking that, if version Fred is introduced, version Charlie can still access the database across the network is a COMPATIBILITY issue. The two could be working on totally separate data, possibly even totally separate database instances. There is no intentional interaction between them in this mode.
(3) Perhaps a view is that interoperability checks that the interactions a system is intended to support work correctly, while compatibility checks for unintended interactions that disrupt normal operation.
If you have any comments on the above article then please use the feedback form.