1. Surveys and critiques of most similarity measures can be found in Castrén (1994), Isaacson (1990, 1992), and Scott and Isaacson (1998).
2. Isaacson (1996) raises some basic conceptual questions. Quinn (2001) is a well-argued critique of several fundamental issues.
3. Rahn (1989) relates an early attempt to apply computation to the analysis of similarity relations; he was defeated by the speeds of then-available processors. Computational complexity has not completely disappeared since then, in that commercially-available software still has limits that prevent, e.g., a single analysis of all ratings for trichords through nonachords (a 208 x 208 matrix); but excessive length of computer runs is no longer an obstacle to such analyses.
4. Correlations (denoted by "r") can vary from a
perfect negative of -1.0 (as quantity A goes up, quantity B goes down in the
same proportion) to a perfect positive of +1.0. For these functions over the
integers 1 to 50, a domain size comparable to the number of pcsets studied here,
y = x compared with y = √x gives r = .983, y = x compared with y = x² gives
r = .969, and y = x² compared with y = √x gives r = .911--all close to perfect.
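Those figures can be checked directly; the following is a minimal sketch of my own, using only the standard Pearson formula (nothing here is from the article itself):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

x  = [float(i) for i in range(1, 51)]   # the integers 1 to 50
x2 = [i ** 2 for i in x]
rt = [math.sqrt(i) for i in x]

r1 = pearson_r(x, rt)    # x vs. sqrt(x): near-perfect
r2 = pearson_r(x, x2)    # x vs. x^2: slightly lower
r3 = pearson_r(x2, rt)   # x^2 vs. sqrt(x): lowest of the three, still high
```

All three correlations come out well above .9, which is the footnote's point: monotone transformations of the same quantity remain highly correlated.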
5. Narmour's (1990, 1992)
implication-realization model for melodic expectation is a case in point. He
proposes five interacting factors to account for listeners' melodic
expectancies; studies by Krumhansl (1995) and Schellenberg (1996, 1997) indicate
that the model can be simplified to just two factors without significant loss of
6. In doing so, I fulfill the promise in my previous
article in this journal (Samplaski, 2004, fn. 7) to provide "a detailed but
non-technical tutorial" about MDS and some of the issues in its use.
Nonetheless, this overview must perforce still be very superficial and exclude
the theoretical underpinnings necessary to use the techniques appropriately; I
can but hope that it will stimulate some readers to investigate possible
applications to their own areas of music-theoretic research. The best initial
pointer into the MDS literature remains Kruskal and Wish (1978), even though
there have been a number of developments in the field since then.
7. Quinn (2004) has recently completed a dissertation that analyzes in depth the
mathematics behind pcset genera, and by extension, similarity functions. Since
he considers not only twelve-fold division of the octave but other equal
temperaments, his results clarify issues that remain obscured if one examines
only the 12-ET universe. Interested readers should consult this important work.
8. The ratings can be obtained in any of several ways: as
direct estimates or impressions of similarity or distance; as same-different
confusion rates (non-identical but similar stimuli are more likely to be
mistaken for each other than dissimilar ones, so high confusion rates correlate
with high similarity); etc.
9. Without going deeply into topology, a torus can be
embedded in variously-dimensioned spaces. In a three-dimensional space an
observer on a torus' surface would notice that the surface was non-Euclidean:
the angle sum of a triangle would not be 180 degrees, etc. If a torus is
embedded in a four-dimensional space, its surface will be "flat" in the sense of
Euclidean geometry. This was the nature of the configuration found by Krumhansl
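The flatness claim can be made concrete; the standard computation (assuming the usual "Clifford torus" parametrization, which the footnote does not name) runs:

```latex
% Clifford torus in R^4:
\mathbf{x}(u,v) = (\cos u,\ \sin u,\ \cos v,\ \sin v), \qquad 0 \le u, v < 2\pi.
% First fundamental form:
E = \mathbf{x}_u \cdot \mathbf{x}_u = \sin^2 u + \cos^2 u = 1, \qquad
F = \mathbf{x}_u \cdot \mathbf{x}_v = 0, \qquad
G = \mathbf{x}_v \cdot \mathbf{x}_v = 1,
% hence the induced metric
ds^2 = du^2 + dv^2
```

is exactly the Euclidean metric of the plane, so triangles drawn on this surface have angle sums of precisely 180 degrees.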
10. Other distance metrics are possible. One, intuitively
used daily by millions, is the "city-block" or "taxi-cab" metric, where to get
from point A to point B you go J units in the first dimension, then K units in
the next, etc. The general formula for such distance metrics is called the "Minkowski
metric."
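The two cases the footnote describes can be sketched as follows (the code is my own illustration, not from the article):

```python
def minkowski(a, b, p):
    """Minkowski distance of order p between two equal-length points.

    p = 1 gives the city-block (taxi-cab) metric;
    p = 2 gives the ordinary straight-line Euclidean metric.
    """
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

d_city   = minkowski((0, 0), (3, 4), 1)  # 3 units in one dimension, then 4 in the next
d_euclid = minkowski((0, 0), (3, 4), 2)  # distance "as the crow flies"
```

The taxi-cab distance (7) exceeds the Euclidean distance (5) for the same pair of points, as one would expect from driving a street grid rather than flying over it.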
11. Even this assumption fails in the real world depending
on the precision needed: the builders of the Verrazano Narrows Bridge, between
the NYC boroughs of Staten Island and Brooklyn, had to take the earth's
curvature into account.
12. Note that models such as ASCAL and INDSCAL are not
invariant with respect to rescaling, or to rotation or reflection about axes.
They also have a significantly larger number of free parameters to be estimated
than simpler models. Moreover, for INDSCAL, because individual biases are not
canceled out by averaging responses into a single matrix, it can produce a much
poorer fit to the data. A researcher should thus not automatically use one of
these models in hopes of obtaining the most general result.
13. In technical terms, as the number of free parameters
being calculated increases relative to the number of data points, there are
too few constraints on the possible configuration.
14. If questions remain, the researcher should examine
several different dimensionalities and be prepared to choose on the basis of
clarity and logical interpretation. There are times when a solution one
dimension above or below "optimal" as indicated by the stress/r²
values might be better: 1) if there is a clear interpretation given an added
dimension; or 2) if one configuration is easier to visualize (e.g., a 3-D vs. a
4-D solution), especially in a situation where it is unclear what can be gained
in explanatory power by using the extra dimension.
15. In an MDS analysis of N objects, one of which is an
exemplar, the only way to minimize distortion (i.e., stress) is to place the
exemplar at the center of the configuration and arrange the other objects around
it, along the rim of a circle/sphere/equivalent higher surface. One can always
draw a circle through three non-collinear points, a sphere through four
non-coplanar points, etc.; so, adding in the exemplar, an N-dimensional MDS
solution can in general accommodate N+2 objects (where one is an exemplar)
without problems. Of course, for some datasets one will be able to fit more
objects than this along the surface's rim; and in other cases the limit can be
somewhat liberalized by locating the exemplar off-center in the
circle/sphere/etc., or by using an ellipse/oblate spheroid/etc. as the surface.
This will, though, only go so far: if you try to fit a set of 23 objects
including an exemplar in two or three dimensions, you must expect a fair amount
of stress in the solution.
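The geometric fact underlying this note--that three non-collinear points always determine a circle, on whose center an exemplar can sit equidistant from all of them--can be verified in the plane (a sketch of my own; the helper name circumcenter is not from the article):

```python
import math

def circumcenter(p1, p2, p3):
    """Center of the unique circle through three non-collinear points."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

pts = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]    # any non-collinear triple
center = circumcenter(*pts)
radii = [math.dist(center, p) for p in pts]   # all equal: an exemplar placed at
                                              # the center is equidistant from all three
```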
16. An example dating back to the great nineteenth-century
psychologist William James is: the moon is like a ball because they are both
round; the moon is also like a gas lantern because they both illuminate; but we
do not think of a ball as being like a gas lantern.
17. This study considers only results from MDS, so no
further discussion of CA is necessary; it is mentioned to raise awareness of the
issues that render it appropriate for various situations. Some of Quinn's (1997,
2001) analyses of pcset similarity measures use CA; as discussed in paragraphs
61-63, he obtains results compatible with those reported here.
18. This model underlies MacKay and Zinnes'
(1999) PROSCAL program. A different model underlies an older PMDS program called
MULTISCALE (Ramsay, 1977), but we need not worry about the distinctions.
19. The present article is a case in point: analyses of
the datasets herein took a few seconds on a PC of moderate speed, using the
statistical package SPSS; the
equivalent analyses on the same machine using PROSCAL took up to forty minutes.
20. AMEMB2 is a modification of Rahn's (1979-80) MEMB2
function by Isaacson for inclusion in the latter's Winsims calculator, available
online. Isaacson applies a normalization factor equivalent to that used by Rahn to
derive ATMEMB from his TMEMB function. For narrative simplicity, it seemed
preferable to refer to it as Rahn's function. Note that while AMEMB2 is
concerned with cardinality-two sets, i.e., interval-classes, it is not an icv-based
function in the sense of ANGLE et al., since it only counts how many
instances of each dyad are mutually imbedded in two pcsets--for example, in
returning a comparison for  and , only one ic6 in the latter is
counted. I thus treat it as a subset-based measure.
21. That is, the icv of any nonachord is a
function of the icv of its trichord complement, and similarly for octachords/tetrachords
and heptachords/pentachords; ratings produced by icv-based similarity functions
are therefore also related. The precise functions involved differ for each pair
of cardinalities, but they are systematic for those pairs.
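The complement relation behind this note can be sketched as follows, assuming the standard interval-class-vector definition (the code is my own illustration): the icv of a set and the icv of its complement differ by the constant 12 - 2n for ic1-ic5, and by half that for ic6.

```python
from itertools import combinations

def icv(pcset):
    """Interval-class vector: counts of interval classes 1-6 among all pc pairs."""
    vec = [0] * 6
    for a, b in combinations(pcset, 2):
        d = abs(a - b) % 12
        vec[min(d, 12 - d) - 1] += 1
    return vec

trichord = [0, 3, 7]                                # the minor triad
nonachord = [pc for pc in range(12) if pc not in trichord]

v3, v9 = icv(trichord), icv(nonachord)
diff = 12 - 2 * len(trichord)                       # = 6 for a trichord
# v9 is fully determined by v3: add diff to ic1-ic5, diff/2 to ic6.
```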
22. That value is actually a severe understatement; it is
the limit of accuracy reported by the statistical package used. Obtaining an r
of better than .96 (between ANGLE and RECREL) over 3160 observations is so close
to perfection as to be basically unheard-of for a "real-life" dataset.
23. Various additional types of cross-check analyses were
carried out using PROSCAL to the extent possible given the version of the
program available--the 4x5 and 3x5 datasets were too large. These are omitted
for considerations of space, minimization of technical detail, and reader
patience.
24. In particular, the 3x3 dataset, with only 12 objects,
is far too sketchy to understand what is going on for most of its configuration.
The same holds true to lesser extents for the other smaller datasets.
25. If one is determined to try to visualize a
four-dimensional object, the best starting place is Edwin Abbott's classic story Flatland
(Abbott, 1885/1952). With modern computer graphics, it is now possible to get a
direct visceral appreciation of such objects via programs that manipulate their
projections onto the screen.
26. Since the Procrustes rotation deals with the derived
configurations, which involve the final relative set-class locations in abstract
space, we need not worry that the latter two functions rate similarity while
RECREL rates dissimilarity.
27. A p-value of .05 is the typical cutoff value
for empirical studies, in that researchers are usually willing to risk a 1 in 20
chance of reporting a false positive result. (Certain types of studies, e.g.,
clinical drug trials, obviously must set far more stringent standards.)
28. There is no possibility of problems due to axial
reflection, since the Procrustes rotation would have handled that.
29. The most surprising of these cases is the
disappearance of the ic1/ic5 dimension for cardinality-three set-classes in the
3x4 dataset. Given the coordinates of the rotated subconfiguration, that
dimension appears to involve an "ic3/anti-ic6 vs. ic6/anti-ic3" opposition, as
seen on the second section of
30. Forte only lists hexachords up to 6-35 in his table on
pp. 264-266 that summarizes the makeup of the genera. From the inclusion rules
on p.192 one can deduce that the remaining 15 hexachords belong to the same
genera as their Z-related counterparts, but it would have been better just to
list them explicitly, since he lists all Z-related tetrachords and pentachords.
31. Harris (1989) follows this same procedure, except that
he draws from a wide body of musical literature, and he is explicitly not
working from a pcset-influenced background. His system of chord families is much
more complicated than Parks', but it has a great deal of solid and thoughtful
musicianship behind it. His proposal deserves closer attention by the
music-theoretic community.
32. If we adjust cutoff values, we can re-include 5-2 and
5-3, although 5-4 and 5-8 ( and ) will also come along.
33. The term refers to a probabilistic process that is
repeatedly carried out (as in, "repeated throws of the dice at Monte Carlo") to
determine a result. The technique is often used to simulate physical processes;
it has some relation to the stochastic algorithms used to generate a number of
compositions.
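A minimal illustration of the Monte Carlo idea (my own toy example, unrelated to the article's datasets) is the classic estimate of pi by repeated random trials:

```python
import random

def estimate_pi(trials, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in the unit
    square that land inside the quarter unit circle approaches pi/4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / trials

approx = estimate_pi(100_000)   # repeated random trials determine the result
```

With 100,000 trials the estimate lands reliably near 3.14; accuracy improves only as the square root of the number of trials, a characteristic limitation of the technique.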
34. The speculation two paragraphs ago about Quinn's
hexachord group B, whose members all had fairly high connections to other
clusters (unlike members of the remaining six groups), applies: might an MDS
analysis of hexachords show all his group B hexachords to be "garbage"?
35. As an example, if I alternately play 0137/0146 in
closest spacing with the same bass note I feel a definite sense of
tonic/dominant, presumably due to the imbedded minor triad in 0137 and the
semitone neighbor in the soprano in 0146 acting like a leading tone. I could
easily envision exploiting this type of perceived relationship in a composition.
36. Using Tn-equivalence raises a number of
questions; in particular, would the dimensionality of the configurations change?
Since a major priority for this essay was to compare RECREL with several other
functions, all of which use Tn/I-equivalence, it was necessary to eliminate the
B-forms of asymmetrical set-classes from consideration, obviating all such
issues. Morris (1995) lists several other possible levels of abstraction. Most
of these have as yet received little or no attention by music theorists, a
situation which sorely needs correction.
37. To cite one example, Samplaski (2004) found support for grouping interval-classes by category of acoustical dissonance, rather than treating them as separate isolated entities. If other studies confirm this result, that would strongly imply a need for substantial modification of existing similarity measures.