Figure 1: Theoretical basis for the development of object-oriented product metrics.
In the arena of object-oriented metrics, a slightly more detailed articulation of a theoretical basis for
developing quantitative models relating product metrics and external quality metrics has been provided in
[19], and is summarized in Figure 1. There, it is hypothesized that the structural properties of a software
component (such as its coupling) have an impact on its cognitive complexity. Cognitive complexity is
defined as the mental burden of the individuals who have to deal with the component, for example, the
developers, testers, inspectors, and maintainers. High cognitive complexity leads to a component6exhibiting undesirable external qualities, such as increased fault-proneness and reduced maintainability.
Certain structural features of the object-oriented paradigm have been implicated in reducing the understandability of object-oriented programs, and hence in raising cognitive complexity. We describe these below.

⁶ To reflect the likelihood that structural properties are not the only influences on a component's external qualities, some authors have included additional metrics as predictor variables in their quantitative models, such as reuse [69], the history of corrected faults [70], and the experience of developers [72][71]. However, this does not detract from the importance of the primary relationship between product metrics and a component's external qualities.
2.1.1.1 Distribution of Functionality
In traditional applications developed using functional decomposition, functionality is localized in specific
procedures, the contents of data structures are accessed directly, and data central to an application is
often globally accessible [110]. Functional decomposition makes procedural programs easier to
understand because it is based on a hierarchy in which a top-level function calls lower level functions to
carry out smaller chunks of the overall task [109]. This hierarchy makes it easier to trace through a program and understand its global functionality.
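To make this concrete, the following is a small, hypothetical C++ sketch (purely illustrative; it is not taken from any of the studies cited here) of a functionally decomposed program. Functionality is localized in individual procedures and the top-level function exposes the whole computation, so a reader can trace the global behaviour top-down from a single place:

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical payroll example: each procedure handles one chunk of the task.
    struct Employee { std::string name; double hours; double rate; };

    double computeGrossPay(const Employee& e) { return e.hours * e.rate; }
    double computeTax(double gross) { return gross * 0.20; }  // flat-rate tax, assumed for illustration
    void printPaySlip(const Employee& e, double net) {
        std::cout << e.name << " net pay: " << net << '\n';
    }

    // Top-level function: calls the lower-level functions in turn, so the
    // overall flow of the program can be read directly from the call hierarchy.
    void runPayroll(const std::vector<Employee>& staff) {
        for (const Employee& e : staff) {
            double gross = computeGrossPay(e);
            double net = gross - computeTax(gross);
            printPaySlip(e, net);
        }
    }

    int main() {
        runPayroll({{"Ada", 40, 50.0}, {"Grace", 35, 60.0}});
    }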
In one experimental study with students and professional programmers [11], the authors compared maintenance time across three equivalent versions of each of three different applications (nine programs in all): one version consisted of a straight serial structure (i.e., one main function), one was developed following the principles of functional decomposition, and one was object-oriented (without inheritance). In general, it took the students more time to change the object-oriented programs, and the professionals exhibited the same effect, although not as strongly. Furthermore, both the students and the professionals reported that program units were hardest to recognize in the object-oriented programs, and the students also felt that information was hardest to find in the object-oriented programs. Wiedenbeck et al. [109] make a distinction between program functionality at the local
level and at the global (application) level. At the local level they argue that the object-oriented paradigm’s
concept of encapsulation ensures that methods are bundled together with the data that they operate on,
making it easier to construct appropriate mental models and specifically to understand a class’ individual
functionality. At the global level, functionality is dispersed amongst many interacting classes, making it harder to understand what the program is doing. They support this claim with an experiment using equivalent small C++ (with no inheritance) and Pascal programs, in which the subjects were better able to answer
questions about the functionality of the C++ program. They also performed an experiment with larger
programs. Here the subjects with the C++ program (with inheritance) were unable to answer questions
about its functionality much better than guessing. While this study was done with novices, it supports the
general notions that high cohesion makes object-oriented programs easier to understand, and high
coupling makes them more difficult to understand. Wilde et al.'s conclusions [110], based on an interview-based study of two object-oriented C++ systems at Bellcore and an investigation of a PC Smalltalk environment, all in different application domains, are concordant with this finding: programmers have to understand a method's context of use by tracing back through the chain of calls that reach it and by tracing the chain of methods it uses. When there are many interactions, this
exacerbates the understandability problem. An investigation of a C and a C++ system, both developed by
the same staff in the same organization, concluded that “The developers found it much harder to trace
faults in the OO C++ design than in the conventional C design. Although this may simply be a feature of
C++, it appears to be more generally observed in the testing of OO systems, largely due to the distorted
and frequently nonlocal relationships between cause and effect: the manifestation of a failure may be a
‘long way away’ from the fault that led to it. […] Overall, each C++ correction took more than twice as long
to fix as each C correction.” [59].
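As a purely illustrative counterpart to the procedural sketch above (again hypothetical, and not taken from the cited studies), the same payroll task can be dispersed amongst several interacting classes. To understand what PayrollRun::process ultimately does, a reader must trace the methods it uses in Employee, TaxPolicy and PaySlipPrinter; conversely, to understand why TaxPolicy::taxFor is called with a particular employee, the reader must trace back through the chain of calls that reach it:

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical object-oriented version of the payroll task: the overall
    // functionality is spread across several interacting classes, so no single
    // class shows the whole computation.
    class Employee {
    public:
        Employee(std::string name, double hours, double rate)
            : name_(std::move(name)), hours_(hours), rate_(rate) {}
        double grossPay() const { return hours_ * rate_; }
        const std::string& name() const { return name_; }
    private:
        std::string name_;
        double hours_, rate_;
    };

    class TaxPolicy {
    public:
        // Understanding when and why this runs requires tracing back to its callers.
        double taxFor(const Employee& e) const { return e.grossPay() * 0.20; }
    };

    class PaySlipPrinter {
    public:
        void print(const Employee& e, double net) const {
            std::cout << e.name() << " net pay: " << net << '\n';
        }
    };

    class PayrollRun {
    public:
        // Understanding this method requires tracing the methods it uses
        // in Employee, TaxPolicy and PaySlipPrinter.
        void process(const std::vector<Employee>& staff) const {
            for (const Employee& e : staff) {
                double net = e.grossPay() - tax_.taxFor(e);
                printer_.print(e, net);
            }
        }
    private:
        TaxPolicy tax_;
        PaySlipPrinter printer_;
    };

    int main() {
        PayrollRun run;
        run.process({Employee("Ada", 40, 50.0), Employee("Grace", 35, 60.0)});
    }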
2.1.1.2 Inheritance Complications
As noted in [43], there has been a preoccupation within the community with inheritance, and therefore
more studies have investigated that particular feature of the object-oriented paradigm.
Inheritance introduces a further level of delocalization, making programs even more difficult to understand. It
has been noted that “Inheritance gives rise to distributed class descriptions. That is, the complete
description for a class C can only be assembled by examining C as well as each of C’s superclasses.
Because different classes are described at different places in the source code of a program (often spread
across several different files), there is no single place a programmer can turn to get a complete
description of a class” [77]. While this argument is stated in terms of source code, it is not difficult to
generalize it to design documents. Wilde et al.’s study [110] indicated that to understand the behavior of
a method one has to trace inheritance dependencies, which is considerably complicated due to dynamic
binding. A similar point was made in [77] about the understandability of programs in languages that
support dynamic binding, such as C++.
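The following hypothetical C++ fragment (an illustrative sketch, not drawn from [77] or [110]) shows both complications. The complete behaviour of SalariedEmployee can only be assembled by also reading its superclass Employee, and because grossPay() is dynamically bound, the call site in main() does not by itself reveal which implementation will execute:

    #include <iostream>
    #include <memory>
    #include <vector>

    class Employee {
    public:
        virtual ~Employee() = default;
        // Part of every subclass's behaviour is described here, in the superclass.
        virtual double grossPay() const { return 0.0; }
        double netPay() const { return grossPay() * 0.80; }  // calls the dynamically bound grossPay()
    };

    class SalariedEmployee : public Employee {
    public:
        explicit SalariedEmployee(double monthly) : monthly_(monthly) {}
        // Overrides the superclass method: the complete description of this class
        // is distributed between SalariedEmployee and Employee.
        double grossPay() const override { return monthly_; }
    private:
        double monthly_;
    };

    class HourlyEmployee : public Employee {
    public:
        HourlyEmployee(double hours, double rate) : hours_(hours), rate_(rate) {}
        double grossPay() const override { return hours_ * rate_; }
    private:
        double hours_, rate_;
    };

    int main() {
        std::vector<std::unique_ptr<Employee>> staff;
        staff.push_back(std::make_unique<SalariedEmployee>(3000.0));
        staff.push_back(std::make_unique<HourlyEmployee>(40, 50.0));
        for (const auto& e : staff) {
            // The static type here is Employee; which grossPay() runs is
            // decided at run time, so tracing the call requires knowing the
            // dynamic type of *e.
            std::cout << e->netPay() << '\n';
        }
    }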
In a set of interviews with 13 experienced users of object-oriented programming, Daly et al. [40] noted
that if the inheritance hierarchy is designed properly then the effect of distributing functionality over the
inheritance hierarchy would not be detrimental to understanding. However, it has been argued that there
exists increasing conceptual inconsistency as one travels down an inheritance hierarchy (i.e., deeper
levels in the hierarchy are characterized by inconsistent extensions and/or specializations of superclasses) [45], so inheritance hierarchies may not be designed properly in practice. In one study, Dvorak [45] found that subjects were more inconsistent when placing classes at deeper levels of the inheritance hierarchy than when placing them at higher levels.
An experimental investigation found that making changes to a C++ program with inheritance consumed more effort than changing a program without inheritance, and the author attributed this to the subjects finding the
inheritance program more difficult to understand based on responses to a questionnaire [26]. A
contradictory result was found in [41], where the authors conducted a series of classroom experiments
comparing the time to perform maintenance tasks on a ‘flat’ C++ program and a program with three levels
of inheritance. This was premised on a survey of object-oriented practitioners showing 55% of
respondents agreeing that inheritance depth is a factor when attempting to understand object-oriented
software [39]. The result was a significant reduction in maintenance effort for the inheritance program.
An internal replication by the same authors found results in the same direction, albeit with a larger p-value. The second experiment in [41] found that C++ programs with five levels of inheritance
took more time to maintain than those with no inheritance, although the effect was not statistically
significant. The authors explain this by observing that searching/tracing through the bigger inheritance
hierarchy takes longer. Two experiments that were partial replications of the Daly et al. experiments
produced different conclusions [107]. In both experiments the subjects were given three equivalent Java
programs to make changes to, and the maintenance time was measured. One of the Java programs was
‘flat’, one had an inheritance depth of 3, and one had an inheritance depth of 5. The results for the first
experiment indicate that the programs with inheritance depth of 3 took longer to maintain than the ‘flat’
program, but the program with inheritance depth of 5 took as much time as the ‘flat’ program. The authors
attribute this to the fact that the amount of changes required to complete the maintenance task for the
deepest inheritance program was smaller. The results for a second task in the first experiment and the
results of the second experiment indicate that it took longer to maintain the programs with inheritance. To
explain this finding and its difference from the Daly et al. results, the authors showed that the “number of
methods relevant for understanding” (which is the number of methods that have to be traced in order to
perform the maintenance task) was strongly correlated with the maintenance time, and this value was
much larger in their study compared with the Daly et al. programs. The authors conclude that inheritance