Talking Scores and the Dismediation of Music Notation*

Floris Schuiling



KEYWORDS: notation, blindness, disability, talking scores, assistive technology, remediation, transcription, performance, Debussy

ABSTRACT: In the 1990s, specialized libraries in the Netherlands developed an audio-based form of notation known as “talking scores” to be used as an assistive technology for blind and visually impaired musicians. A talking score, similar to the audiobooks on which it was based, narrates the information found in staff notation that would otherwise be accessed visually. The spoken instructions are divided into fragments and alternated with audio examples. This article discusses the working of such scores and analyzes the score for Debussy’s “Clair de Lune” to make detailed observations about the transformation from staff notation to talking score. This discussion is theoretically framed by a consideration of the “dismediation” of notation, i.e., of the centrality of questions of (dis)ability to discussions of the interactions between human bodies and media infrastructures, which forms the basis of a critique of Peter Szendy’s theory of musical arrangement.

DOI: 10.30535/mto.29.1.3

Received July 2022
Volume 29, Number 1, March 2023
Copyright © 2023 Society for Music Theory


[1.1] In 1989, Paul Houdijk initiated a project for the development of “talking scores” at the Academic Library for the Visually and Otherwise Impaired in Amsterdam. Houdijk was an organist, regularly playing for the Saturday evening mass at St Catherine’s Cathedral in Utrecht from 1978 onwards, as well as a musicologist specializing in late-nineteenth-century organs, specifically those of the Maarschalkerweerd family. Although he was born sighted, he gradually lost his sight; when he initiated the development of talking scores, he knew he would soon be completely blind. Unable to read Braille, he required another means of reading music notation, and approached the library to develop an audio-based form of notation, analogous to the audiobooks and other audio-based literature on which he relied. Throughout the 1990s, Houdijk and other employees of specialized libraries in the Netherlands further developed this adapted sheet music format, which is still available in the Netherlands and has also spread to other countries. Initially issued on cassette tapes, talking scores are currently produced in the DAISY format, a technical standard for audio substitutes of print material for people with print disabilities. The Amsterdam library has since closed, having merged with other libraries for the visually impaired into Dedicon, the main provider of alternative reading formats in the Netherlands. A talking score reads out all the information contained in a printed score in staff notation, divided into fragments and alternated with audio examples.

[1.2] In this article, I take talking scores as a case study for a consideration of how the material format of music notation conditions its capacity for representation. I draw on interviews with Houdijk as well as others involved in the development and production of talking scores, including Sijo Dijkstra, who designed the format in its early stages, and Fred Mom, who has been the voice heard on most talking scores produced in the Netherlands. I analyze their workings in detail through a discussion of my own experience of learning to play, from a talking score, Claude Debussy’s “Clair de Lune” from the Suite Bergamasque for piano. My aim is to highlight how notations, as means of inscription, are embedded in a web of relations between music, bodies, and media technologies, and to argue that the relation between sign and signified is at least partly a function of this embeddedness. In this regard, my argument contributes to and extends the recent movement towards a methodological consideration of music as a form of technological mediation (Born 2005), and particularly of music notation as assembling such technical infrastructures (Magnusson 2019; Schuiling 2019; Devine and Boudreault-Fournier 2021). However, through a critique of Peter Szendy’s arguments on musical arrangement, drawing on Mara Mills and Jonathan Sterne’s concept of dismediation (2017), I highlight the fundamental role of questions of ability and disability in such mediating processes, questions that are negotiated in the interfaces that enable these processes of assemblage.

[1.3] Talking scores are produced for a very specific audience, namely people who are completely blind or for other reasons have no access to staff notation, yet who do know the principles of staff notation and the terminology required to read it. Since blind musicians are usually able to learn from recordings, users of talking scores commonly also make music (from the score-based repertoire of Western Art Music) at such a professional level that they require access to the various details that are contained in a score and are not generally audible in a recording.(1) Moreover, they are intended for people who do not know Braille music notation, or do not wish to use it—at least, that is the reason that libraries for the visually impaired have taken them into production alongside the already existing format of Braille music. As such, talking scores are used by very few musicians worldwide. Houdijk was the only person I have encountered to use talking scores regularly and consistently, and the format is also barely used in other countries, despite being generally available (at least across Europe). Using talking scores as a basis for rethinking music notation therefore runs the risk of what Nicholas Mathew and Mary Ann Smart (2015) have termed “quirk criticism”—and the ableist resonances of the word “quirk” underline the ethical considerations that must be addressed here. Mathew and Smart argue that such quirks work in two opposite directions; on the one hand, they produce a sense of estrangement that leads us to suspend our common assumptions and open ourselves up to the experiences of others, but on the other their quirkiness simultaneously captures our imagination in such a way that leaves little actual room for a genuine concern for these same others. Blindness, and especially the interaction of blind bodies with assistive technologies, has been the subject of such quirk criticism at least since Descartes’ discussion of a blind man’s perception through his walking stick in 1637. Georgina Kleege (2016) criticizes the philosophical discussions of what she calls the “Hypothetical Blind Man,” who lives such a secluded life that he has supposedly never heard visual terminology before philosophers brought it up, and whose primary purpose is to highlight the importance of sight. However, as Rod Michalko (2010) points out, we might also understand the quirky attraction of blindness as an indication of its transformative potential: “When sight ‘looks blindness in the eye’ it does not see its opposite, it sees itself. Blindness reflects sight and it shows sight to itself, something it cannot see without blindness.” The analysis of an inscription system for visually impaired musicians such as talking scores therefore may help reconsider ideas of normalcy in our understanding of music notation and put into perspective assumptions about the supposed “ocular-centrism” of classical music culture.

[1.4] I hope that the considerations below, based not just on hypothetical speculation but on an actual engagement with blind musicians and the people who work to produce accessible materials for them, go beyond merely rethinking vision, and contribute to a broader awareness of the complex world of blind musicianship. Still, there is an element of my research that must remain hypothetical: early on in my fieldwork I learned about talking scores and met with Houdijk, who agreed to do an interview with me. Several months later, after I had learned much more about talking scores and other music notations for the visually impaired, I hoped to return to him with more questions, and perhaps even to study his process of learning a new piece more closely, but he had passed away. Since I could not find anyone else who used talking scores frequently, and certainly not as much as Houdijk did, I partly rely on my own experience using them, even though I am not visually impaired. I should emphasize that this is not in any way intended, and should not be read, as a simulation of blindness or of a blind person’s experience with these scores. Rather, it is an inquiry into what it is like to use this kind of score, a practical method of experiencing it “in action.”

[1.5] My choice of “Clair de Lune” was motivated by several factors. I had learned this piece before, some ten to fifteen years ago, but had not played it for so long that I could only play the first eight bars from memory, as well as the first two bars of the main theme starting at bar 27. Although this prior knowledge may have influenced the way I learned it from the talking score, it also meant that I knew it was not too difficult for me as an amateur musician, while still being difficult enough to function as a proper test of the talking score. I also knew that the score poses particular notational challenges: it is notated in 9/8 time, with various duplets, sextuplets, and other rhythmically interesting elements, and it contains a variety of homophonic, contrapuntal, and pianistic textures. Finally, the piece is so well known that I thought it likely to be available in talking score format. Additionally—though I only realized this much later—the title metaphorically expresses some of the aims of this article. According to Marshall McLuhan (2013, 9), “it is only too typical that the ‘content’ of any medium blinds us to the character of the medium.” The moon does not shine—it has no content—but only reflects the sun’s light; it can be considered the primordial medium. If sighted musicians are “blinded” by staff notation as they are blinded by looking directly into the sun, moonlight offers a reflection on representation in a way that, to use Michalko’s phrase, “shows sight to itself.”

[1.6] In what follows, I will first situate my argument theoretically between music scholarship, media studies, and disability studies. I then illustrate some of these ideas by further describing how talking scores work in general terms, concentrating on how the various intermedial connections that shape its design condition its particular capabilities for representation. Finally, I take a more in-depth look at the talking score of “Clair de Lune” and discuss my experience of learning this piece with it. In my conclusion, I reflect on the concept of literacy and the relation between notation and the performer’s agency and discuss future possibilities for audio-based notation.

Remediation, dismediation, and the “scriptive”

[2.1] Szendy (2008, 59), in a discussion of musical arrangement, cites Liszt as he discusses the importance of recent developments in the construction of the piano for his arrangements of the symphonies of Beethoven; he concludes that it is “a mutation of bodies—of the instrumental body as well as the interpretative body—that opens new possibilities to translate music to the letter.” For Szendy, arrangement is a writing of one’s listening, and a writing that underlines that listening is never full or complete, but is always characterized by displacement and distraction, as he argues throughout his book. Drawing on Walter Benjamin’s theories of translation, he sees arrangements as exposing an incompleteness in the original; both original and arrangement, therefore, share an incompleteness that, to Szendy, is central to the notion of a musical work. Since his topic is the transcription of music from one instrumentation to another, he is mainly interested in the mutation of the instrumental body and sees any adaptations in the interpretative body as a corollary of that. But what if we consider the arrangement of music for non-normative bodies?(2) If “the disabled body changes the process of representation itself” (Siebers 2010, 54), how does this affect the reciprocal relations between instrument, music, and letter alluded to by Szendy?

[2.2] Szendy’s argument is that the content of a musical score is not some disembodied musical idea, but is an effect of the score’s relation to material supports such as musical instruments. His assertion has significant consequences for the way we conceive of accessibility of information for disabled people. One of the central principles in the development of talking scores was the assumption that music, instrument, and letter can be treated independently of each other. As David Crombie and Roger Lenoir (2008, 590) write in their guide to different ways of making music accessible to visually impaired musicians, a central guideline should be that “the print disabled should have access to the same information as the sighted, and that only the format in which the information is presented should change.” This formulation suggests a categorical distinction between disembodied information and material format. Crombie and Lenoir connect it to the aims of the Universal Design (UD) movement, and it underlines the epistemological basis of this movement in an Enlightenment model of subjectivity that is rational, individualistic, and disembodied. Some scholars have criticized UD on this ground, arguing that it is precisely this understanding of subjectivity that disability rights activists have sought to overturn (Imrie 2012). As Tanya Titchkosky (2011, 6) points out, discussions of accessibility for people with impairments often assume that immediate, unquestioned access to everyday life is natural. The myth of universally accessible design thus risks re-inscribing a kind of normalcy into the movement towards greater accessibility. Indeed, the very existence of talking scores underlines this tension between universality and particularity: only a small percentage of visually impaired musicians can read Braille, let alone Braille music notation, since many who become blind later in life lack the necessary finger sensitivity—which is why it was necessary for Houdijk to investigate other options. Richard Godden and Jonathan Hsy (2018, 105) argue, however, that UD might also be understood as a way of reconsidering universalism as a “motivating fiction or tantalizing impossibility,” an unavoidable irony that is fundamentally productive. Since disability potentially affects everyone at some point in their lives, “in its association with temporal deferral, UD suggests a close association with the very concept of disability as unrealized futurity” (Godden and Hsy 2018, 105). Similarly, following Szendy’s arguments concerning musical arrangement, the notion of a disembodied musical “content” that is immediately and universally accessible only appears as an effect of its transcription into a different writing system.

[2.3] Szendy’s notion of arrangement thus appears a useful conceptual tool for thinking through the complexities of accessibility. It forms part of a broader recent reconsideration of music as essentially a process of technological mediation, inspired by work in media studies and studies of science and technology, that we might broadly refer to as theories of remediation. Jay David Bolter and Richard Grusin use this term to build on McLuhan’s suggestion that “the content of any medium is always another medium” (2013, 8), arguing that all mediation is remediation (Bolter and Grusin 1999). They theorize the “double logic of remediation,” where the immediate experience of “content” does not come before mediation, but is its effect—an effect of a process of mediation to which, to recall McLuhan’s formulation, we are simultaneously “blinded.” Another inspiration for their argument is the work of Bruno Latour, particularly the actor-network-theoretical premise that scientific truths become disembodied and universal not because of a process of dematerialization but precisely through the multiplication of material mediators. For Latour (1999, 310), “reference” is not a relation between a word and the world, but “the quality of the chain of transformation” of material practices into propositions expressing a worldview. These transformations are dependent on the existence of “immutable mobiles,” which, like Szendy’s arrangements, both fix and displace at the same time.

[2.4] Indeed, from a post-humanist perspective, any body is constituted through a multiplicity of material and discursive formations, not just disabled bodies, as disability scholars and activists have long argued. Szendy’s argument for the “plasticity” of listening puts this social and technological construction of the body at the heart of musical experience, and could be considered a critique of the ableism inherent in idealizations of “structural listening.” Moreover, his account of the history of listening features a critical discussion of the ocularcentrism of classical music as well as an argument in favour of a tactile relation to sound, both of which point to an approach that seems very suitable for a consideration of blind musical experiences. However, even though tropes of disability play an important role throughout his book, there is hardly any serious engagement with disability itself, which is frequently invoked merely to symbolize alienation or otherness. In his discussion of arrangement, Szendy aims to theorize its critical function, as opposed to the purely practical function commonly ascribed to it, according to which arrangement went into decline after the rise of recording technologies, when it was no longer necessary to adapt music for domestic music making. He subsequently discusses definitions of copyright since the nineteenth century, and extends his arguments on the critical function of arrangement to the critical function of sampling. He discusses the rise of mechanical music in the decades before the invention of the phonograph (an area which, incidentally, was an important source of income for disabled people, although Szendy does not mention this—see, for instance, Accinno 2016). These mechanical reproductions had initially been excluded from French copyright laws; the perforated cards, cylinders, and similar storage devices were not considered a publication because they could not be “read” except by the machines for which they were designed. As Szendy describes it: “they made up a kind of machine-language, a writing that was too idiomatic—that is to say, speaking etymologically, too idiotic—to be decoded and interpreted” (2008, 78). One lawyer contended that limited accessibility, and especially accessibility to senses other than sight, is unrelated to a notation’s status as a form of writing, citing Braille as his example. Noting this, Szendy (2008, 80) describes the “machine-language” of mechanical instruments as a writing “addressed to a blind, tactile, groping body: that of the ordinary person who, turning a crank or manipulating levers and buttons, still felt in his limbs an unprecedented elasticity of musical time.”

[2.5] Again, Szendy’s position is in principle broadly compatible with a critical disability studies perspective, and this particular argument is even directed against identifications of sight with rationality, or of music with its visual representation. However, his strongly ableist language betrays that his imagined listener, despite all this, is clearly an idealized non-disabled subject. As Alison Kafer (2013, 103–28) has argued, cyborg theory and other theories about the technological constitution of the human body, while theoretically holding much potential for disability politics, often rely on essentialising tropes of disability, reinscribing ideas of normalcy in their very argument for the constructed and heterogeneous nature of the body. Mara Mills and Jonathan Sterne (2017) have suggested the term “dismediation” as a counterpart to the concept of remediation, to address the centrality of disability to theorizations of media technologies. The aforementioned use of blindness as a metaphor for ignorance of mediation is only one example of how disability has been fundamental to media theory, serving as a “narrative prosthesis” (Mitchell and Snyder 2000) for purposes of titillation or to symbolize breakdown or otherness, but never in order to consider disability as such—much like Kleege’s hypothetical blind man (Mills and Sterne 2017, 368–70).(3) Indeed, the concept of the prosthesis itself may be the most prominent example of the reliance of media studies on disability tropes, functioning only to underline that the imagined user of media technologies is usually a hypothetical “undamaged” subject. Identifying and rethinking such narrative prostheses for media theory is the first of Mills and Sterne’s propositions for bringing a disability consciousness to media studies. The second is to describe the actual (rather than hypothetical) centrality of disability to media, and the third is, conversely, to describe the actual centrality of media to disability.

[2.6] If, as William Cheng (2019, 62) suggests, “music studies is a kind of ability studies,” then music literacy, as a marker of “true” and “professional” musicianship, has been one area in which its ableism has been most prominently expressed. This ableism takes many forms but presents particular challenges for people with visual impairments. Shersten Johnson (2009, 2016) has called attention to the ways in which our understanding of notation and literacy privileges visual modes of engagement with music. Obviously, standard Western staff notation is a visual medium, but language about music in general is infused with visual metaphors, and the ways in which they structure our understanding of music have been built into notation itself. Not only is staff notation premised on the visual metaphor of “higher” and “lower” pitches; Johnson also argues that sighted people can refer to the “contour” of a melody by visually imagining a path between individual notes, in a way that is not easily translated to Braille or talking scores. Moreover, given the aforementioned metaphorical identification of understanding with seeing and ignorance with blindness, music analysis has also strongly privileged visual perception as a way to achieve an understanding of musical structure.

[2.7] Picking up on the second and third of Mills and Sterne’s propositions, we could in fact argue that the visualization of music through staff notation is itself a kind of assistive technology. Joseph Straus (2011, 179) has suggested as much, arguing from a disability studies perspective against standard accounts of listening as a solitary and autonomous activity, and writing that “notation functions as a technological way of gaining access to a built environment, that is, a work of music that would be relatively inaccessible without it.” This perspective highlights how, since all bodies are technologically and discursively constructed, disability and ability are not simply opposites, but may be co-constitutive and interdependent in unexpected ways (Goodley 2014). In contrast to Crombie and Lenoir’s distinction between format and information, and broadly in line with Szendy’s arguments on arrangement, such a perspective suggests that the original “content” of a score is already shaped by the way this content is made accessible through its formatting. One of Straus’s central ideas is that we may interpret musical works as presenting narratives of disability; given the organicist metaphors of music theory, many works of classical music suggest a living being going through a process of “disruption” of a situation of stability and wholeness. His comment that notation is a kind of prosthesis raises the question of the extent to which notations enable or disable the capacities, not just of musicians, but of the works themselves: what are the supporting technologies of a musical work, and how do these condition its dis/abilities? Ability is always a relational concept, and a contingent state of affairs. The representation of a disembodied musical “content” in a score, therefore, is a function of the capacity of notation for interaction with different bodies (instrumental, interpretative, or otherwise) and for its mutation when this capacity is impaired. This is a subtly but significantly different argument from Szendy’s; if the object of the critical potential that he finds in arrangement and sampling is still the work, a consideration of disability focuses attention on the interfaces that provide access to it.(4) If notations are media infrastructures (Devine and Boudreault-Fournier 2021), then their accessibility has ontological implications.

[2.8] In my analysis of the “Clair de Lune” talking score, I draw attention to how the physical shape of a sign is a way of playing into existing protocols of use. In criticizing the distinction between format and information, I do not just mean that talking scores mistake the prescriptive nature of staff notation for a descriptive function (to recall Charles Seeger’s [1958] famous argument), although this will be an important aspect of my analysis. Rather, they invite a reflection on what we might call the scriptive aspect of notation, on how the material shape of the sign and its embeddedness in an infrastructure of media technologies and protocols construct its capacity for representation. Madeleine Akrich (1992) argues that the physical shape and construction of any technical object can be understood as a “script,” inscribing a particular user through assumptions about their conduct and abilities. All technology therefore contains a particular vision of what is human, and of what humans can do. Each script (or material system of signs), then, as a form of technology, embodies scripts (or protocols) for its envisioned use. In music, the relation between fixing reality in an inscription and the scripting of future realities, between the capturing of an ephemeral moment and the projection of that moment into the future, is perhaps more obvious than in any other artistic or scientific practice. To clarify, my argument is not merely that there is no such thing as “the music itself,” a musical fact-of-the-matter before interpretation. Critical perspectives in musicology have sufficiently established that the whole idea of such objective musical fact is itself a political move, a social construction made in particular historical contexts. Rather, I am trying to understand how this process of construction is technically achieved, how the musical work emerges through material displacement, and to do so in a way that does not begin with a separation of mind and matter, but that sees meaning and politics permeating technological designs and the constitution of the human body.

Talking scores’ remediations

[3.1] How do these various scriptive properties shape the way in which talking scores represent music? Before diving into the particular example of “Clair de Lune,” I will describe the various remediations that shape talking scores in general terms. The main formats that are remediated by a talking score are staff notation, Braille music, audiobooks, and MIDI. The talking score aims to be a transliteration of the staff notation and makes use of various principles developed for Braille music notation. Houdijk was initially inspired by audiobooks when he conceived of talking scores as an alternative audio-based format for communicating written text; talking scores are currently produced using the same technical standard developed for audiobooks for the print disabled (the DAISY standard, or Digital Accessible Information System) and are thus designed to be played back on a DAISY device. Finally, the audio examples that are interspersed between the spoken text are created with MIDI.

[3.2] As a transliteration of a published score, a talking score contains the same information as its source as far as possible. This means that not only the notes, accidentals, dynamic markings, key signatures, time signatures and other diacritical markings are communicated, but also any other information given in the publication concerning the composer, the composition, the publisher, and so on. If the source publication contains a biographical note about the composer or a brief analysis of the music, this is also included. As described earlier, staff notation relies on its visual nature in the way that it transmits musical information. Considering the aim of exact transliteration, this occasionally raises questions about where the line between information and format should be drawn. For instance, talking scores will usually start with some general information about the key signature, time signature, initial tempo marking, and so on, and this section also includes the clef or clefs used in the piece. The inclusion of clefs in a talking score is a skeuomorph, an unnecessary remainder: clefs are only a visual orientation point for determining the octave of a given note, and since talking scores use spoken octave designations, mentioning the clefs is superfluous. One could nevertheless argue that users of talking scores should have access to this piece of information. The inclusion is largely symbolic, however, since clef changes are usually not included, nor are octave indications such as “8va” or “8vb,” which would presumably only confuse users of the talking score. This is a first indication that the distinction between information and format is not always easily drawn; the two notation systems have different ways of constructing what we might call the “gestalt” of the sign, and the translation from one system to another confronts us with the fact that what we initially consider one sign may actually consist of a number of constituent elements. Another example would be tied notes; when a note is held for two bars, for instance, we would visually perceive that as one note, while the talking score would mention its constituent elements (notes, note durations, tie, bars) separately.

[3.3] The visual nature of staff notation allows for various pieces of information to be transmitted simultaneously; a note is placed on a staff to indicate its pitch, while its shape determines its duration, and dynamic markings indicate its volume. If multiple notes are played at the same time, they are placed in alignment within a musical system or on a staff. In a talking score, such simultaneous presentation is impossible, and the various bits of musical information must be broken up and presented step by step. This same problem occurs with Braille music, and many of the solutions used in talking scores are modelled on (or directly taken from) principles of Braille music. Braille music notation uses the signs for the letters D to J to represent the notes C to B (Krolick 1997). These letters all use only the top four dots of a Braille cell, and the bottom two are used to indicate length: if both dots are present the note is a whole note, if only the left dot is present a half note, if only the right a quarter note, and if neither an eighth note. Depending on context, these may also represent sixteenth to hundred-and-twenty-eighth notes. Further symbols are used to indicate octaves, accidentals, dynamics, ties, and other markings. Since Braille is a completely one-dimensional sign system, there is a specified, logical way of ordering these various signs. This one-dimensionality also means that there must be specific rules for how to notate simultaneously occurring notes and melodies; whereas homophonic textures are dealt with quite easily, by specifying which other notes sound at the same time as a given note, contrapuntal textures and the different parts in keyboard music have to be notated separately. Talking scores use many of the same principles as Braille in turning staff notation into a one-dimensional, serial representation. To some extent, talking scores can be considered a text-to-speech variant of Braille music, and reading Braille notation aloud would yield something quite close to a talking score, even though this is not how talking scores are produced. In fact, in the production of Braille music the reading aloud of scores was already an important step; after a Braille score had been produced, two producers would sit down together and one would read out the score step by step, so that the other could check the Braille music for any mistakes.
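
To make this encoding concrete, the following minimal sketch (in Python, purely for illustration; the function and names are mine and not part of any Braille or talking-score software) models the pitch and duration logic just described, using the standard convention of numbering the dots of a Braille cell 1–3 down the left column and 4–6 down the right.

    # Illustrative model of the Braille music cell logic described above.
    # The upper four dots (1, 2, 4, 5) carry the pitch letter; the lower two
    # dots (3 and 6) carry the basic note value.

    PITCH_DOTS = {
        "C": {1, 4, 5},     # letter D
        "D": {1, 5},        # letter E
        "E": {1, 2, 4},     # letter F
        "F": {1, 2, 4, 5},  # letter G
        "G": {1, 2, 5},     # letter H
        "A": {2, 4},        # letter I
        "B": {2, 4, 5},     # letter J
    }

    VALUE_DOTS = {
        "whole":   {3, 6},  # both lower dots
        "half":    {3},     # left lower dot only
        "quarter": {6},     # right lower dot only
        "eighth":  set(),   # neither lower dot
    }
    # Shorter values (sixteenth notes and below) reuse these same patterns
    # and are disambiguated by context, as noted above.

    def braille_note_cell(pitch, value):
        """Return the set of raised dots for a single pitch-plus-value cell."""
        return PITCH_DOTS[pitch] | VALUE_DOTS[value]

    # A quarter-note C is the letter D (dots 1, 4, 5) plus dot 6:
    assert braille_note_cell("C", "quarter") == {1, 4, 5, 6}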

[3.4] There are some important differences between Braille and talking scores. Some of these have to do with the specifics of the Braille system vis-à-vis spoken language. For instance, in Braille music, as explained above, one Braille cell contains information about both pitch and note value, but it does not represent accidentals. This is similar to staff notation, where the placement of a note on the staff indicates its pitch while its shape indicates its note value, and any accidentals have to be given separately, either before the note or by a key signature. Conversely, in Dutch (as in German), pitches with an accidental have their own name—a D-flat is a “des,” a D-sharp a “dis.” Therefore, whereas in Braille (and in staff notation), one needs one sign for the accidental and one sign for the pitch and note value, a talking score gives one sign for the pitch (including its possible accidental), followed by its note value. This also affects the order in which signs are given: in Braille, one gives the accidental first, then the octave designation if necessary, then the pitch, while talking scores give an octave designation first (to orientate the musician on their instrument) and then the pitch and note value. Sijo Dijkstra, an organist and music psychologist, played an important role in designing such principles, in which he took into account not only the exactitude of translation but also the cognitive demand put on the user (as he discussed in an interview with the author). For instance, whereas in Braille chords are usually written top to bottom, talking scores always write them bottom to top, and where Braille uses intervals, talking scores use note names. A C major chord, for instance, is not written “G with a third and fifth” as the top-down Braille notation would suggest (requiring the reader to have the music-theoretical knowledge to know which third and fifth are meant, especially in scores with many accidentals) but as “C with E and G.” To represent the alignment of multiple melodies, for instance a polyphonic texture within a staff or the alignment of the two staves used in piano music, talking scores use more or less the same technique as Braille. For the former, it signals a “voice division” (stemverdeling), giving the top melody first, followed by the sign “in accord with” and then the melody played in the same stretch of time. For the latter, it divides the two hands into two fragments, which are described separately and have to be brought back together by the musician.
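
The difference in sign order and in chord spelling can likewise be sketched schematically. The token lists below are illustrative stand-ins for Braille cells and for the (Dutch) narration, rendered here in English apart from the pitch names; they follow the ordering principles just described rather than any official house style.

    # Illustrative contrast between the two sign orders described above.

    DUTCH_NAMES = {("D", "flat"): "des", ("D", "sharp"): "dis",
                   ("E", "flat"): "es", ("A", "flat"): "as"}  # a few examples

    def braille_tokens(octave, pitch, accidental, value):
        """Braille order: accidental, then octave mark, then one cell
        combining pitch and note value."""
        tokens = [accidental] if accidental else []
        tokens.append(f"octave-{octave} mark")
        tokens.append(f"{pitch} {value} cell")
        return tokens

    def talking_score_tokens(octave, pitch, accidental, value):
        """Talking-score order: octave first (to orientate the player), then a
        pitch name that already contains the accidental, then the note value."""
        name = DUTCH_NAMES.get((pitch, accidental), pitch.lower())
        return [f"{octave}-line", name, f"{value} note"]

    print(braille_tokens(1, "D", "flat", "quarter"))
    # ['flat', 'octave-1 mark', 'D quarter cell']
    print(talking_score_tokens(1, "D", "flat", "quarter"))
    # ['1-line', 'des', 'quarter note']

    # Chords: Braille spells a C major chord from the written note downwards
    # in intervals; the talking score spells it bottom to top in note names.
    braille_chord = ["G", "with a third and a fifth"]
    talking_score_chord = ["C", "with E and G"]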

[3.5] The first talking scores, like early audiobooks, were published on cassette tapes. From the end of the 1990s onwards, however, talking scores were produced in the newly developed DAISY format. On a DAISY disc, fragments can be arranged into different levels, so that a user can for example switch between multiple books on one disc, between chapters within a book, and between sections within a chapter. Talking scores are divided into fragments, which are further subdivided into audio examples and spoken text; in the case of a piano piece like “Clair de Lune,” each fragment contains an audio example for the right hand, the text for the right hand, an audio example and text for the left hand, and finally an audio example of both hands playing the fragment together. This division into fragments creates a possibility for navigation that, for blind musicians, is quite rare. Moreover, the fragments suggest a division of the musical surface into discrete phrases that is commonplace in music analysis, but which is not usually represented in staff notation. This fragmentation, as it were, builds a form of analysis into the design of the notation.
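
The navigational hierarchy described here can be outlined as nested data. The sketch below is a hypothetical schematic of a keyboard-music talking score; the actual DAISY format encodes this kind of navigation in its own XML and SMIL files rather than in Python objects.

    from dataclasses import dataclass, field

    # Hypothetical schematic of a talking score for keyboard music, mirroring
    # the fragment structure described above (illustration only).

    DEFAULT_ITEMS = [
        "right-hand audio example",
        "right-hand text",
        "left-hand audio example",
        "left-hand text",
        "both-hands audio example",
    ]

    @dataclass
    class Fragment:
        number: int
        first_bar: int
        last_bar: int
        items: list = field(default_factory=lambda: list(DEFAULT_ITEMS))

    @dataclass
    class TalkingScore:
        title: str
        front_matter: list      # title/composer, production and book details
        complete_audio: str     # one audio example of the whole piece
        general_info: str       # key, time signature, tempo, clefs, bar count
        fragments: list         # the navigable list of Fragment objects

    clair_de_lune = TalkingScore(
        title="Clair de Lune",
        front_matter=["title and composer", "production details", "book details"],
        complete_audio="complete audio example",
        general_info="key and time signature, tempo marking, bars, fragments, clefs",
        fragments=[Fragment(number=1, first_bar=1, last_bar=4)],  # fragment 1 = bars 1-4
    )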

Example 1. Debussy’s Arabesque No. 1, pp. 1–2, with annotations by Fred Mom in preparation for the narration of its contents

[3.6] Fred Mom, who has been the main (and almost the sole) narrator heard on talking scores since their invention, told me how he negotiates between dividing a piece into structurally logical fragments and keeping those fragments at a sensible length for a user to remember. “It’s important to make it as easy as possible for the user, because it puts a really heavy demand on their memory, so the fragments should not be too long. . . . It can be quite difficult, it presents music-analytical questions. There can be clues in the text, for instance when there is a marking ‘ritenuto. . . a tempo,’ but still you have to cut up musical phrases in a way that is musically logical.” Example 1 presents the first two pages of a copy of the piece he had just recorded when I visited him (coincidentally also a piece by Debussy) with the annotations he made in preparation for his recording. He reads the score directly from this annotated copy. Every bar is numbered, and every fragment is marked in green. At the top of the first page, below the title, Mom has written that the piece contains 107 bars, divided into 27 fragments, and that everything is read in eighth notes unless specified otherwise. Although the first couple of fragments present no particular challenge, corresponding to short musical phrases of three to five bars, from fragment 5 (bar 17) onwards there is a potential problem, as the next section up to bar 26 (fragment 8) is not easily divided into phrases, and nine bars is too long for a fragment to be memorized. Mom has chosen to start a new fragment (fragment 7) at the “a tempo” marking in the middle of bar 23, as he has done earlier in the piece, and we can see one more division (the start of fragment 6) about whose placement he was unsure, first notating it over bar 20 and later deciding to start it at bar 19. This is not the only way in which this could be solved, but it indicates some of the tensions that may exist between the representation of musical structure and the usability of the talking score: the various scriptive qualities of the score that must be negotiated in its transformation into a new format.

[3.7] Finally, the audio fragments are made with MIDI on a digital audio workstation. In the early 1990s, the producers of the first talking scores used a digital keyboard as a MIDI controller, playing the music they were transcribing in order to enter it into Cubase. Today, Mom can find MIDI files of staple repertoire online, or he can scan his sheet music using OCR to create a MusicXML file that, usually after some corrections, can be exported to MIDI, saving him the time and effort of playing the piece himself. Although MIDI in principle affords possibilities for more expressive performances, the audio examples used on talking scores are designed to be as “neutral” and objective as possible, with exact timings and no dynamics. The audio example is an aid for comprehending the spoken musical information in a way that makes musical sense, and for checking whether one has understood the information correctly, but it is purposely not intended as an example to be imitated, so that the musician using a talking score can develop their own interpretation of a given piece.
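
As a rough analogue of this workflow, the sketch below uses the open-source music21 library (purely as a stand-in for whatever software is actually used in production) to turn a scanned-and-corrected MusicXML file into a deliberately “neutral” MIDI rendering: dynamic markings are stripped and every note receives the same fixed velocity. The file names are hypothetical, and the click track heard on the talking score would be added separately.

    from music21 import converter, dynamics

    # Hypothetical input: a MusicXML file produced by scanning the sheet music.
    score = converter.parse("clair_de_lune.musicxml")

    # Strip dynamic markings so the rendering does not suggest an interpretation.
    for dyn in list(score.recurse().getElementsByClass(dynamics.Dynamic)):
        dyn.activeSite.remove(dyn)

    # Give every note and chord the same fixed velocity ("no dynamics").
    for n in score.recurse().notes:
        n.volume.velocity = 64

    # Export to MIDI with exact timings taken straight from the notation.
    score.write("midi", fp="clair_de_lune_example.mid")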

Clair de Lune

[4.1] The talking score for “Clair de Lune” is divided into the following sections:

  1. Title and composer; including its year of composition, its being part of the Suite Bergamasque, its Lesure number, and a note on François Lesure in French (taken from the published sheet music);

  2. Production details; a standard note on the Dedicon library service, copyright, and a telephone number for questions and comments;

  3. Book details; including the title, its composer, and the details of publisher Henle, as well as details about the fragmentation and navigation of the DAISY disc;

  4. Complete audio example;

  5. Key signature, time signature, tempo marking; including number of bars and number of fragments, a note that it will be read in eighth notes unless stated otherwise, and the clefs for the right-hand and left-hand staves; and

  6. Fragments 1–22, containing:

    1. Bar numbers contained in the fragment;

    2. RH audio;

    3. RH text;

    4. LH audio;

    5. LH text; and

    6. RH + LH audio.

Example 2. “Clair de Lune,” bars 1–4

[4.2] As an illustration of how the talking score works, here is a transcription of the text as narrated for the first fragment (bars 1–4), shown in Example 2.

Fragment 1, bars 1 to 4.

Right hand audio example. [audio plays with a click track, at a slower tempo than the full audio example]

Right hand text.

Bar 1. Pianissimo, con sordina. Two rests. Two-line F with A, first and fifth finger, tied, F dotted quarter note with A, D dotted quarter note with F, second and fourth finger, tied.

Bar 2. D with F, C with E, D with F. C dotted half-note with E, first and third finger, switch to second and fourth finger, tied.

Bar 3. C with E, B with D, C with E. Voice division. One-line B dotted half-note, tied, in accord with duplet two-line D F tied, duplet F D fourth finger, tied. End of voice division.

Bar 4. One-line B with D, A with C, second and third finger, B with D first and fourth finger, A dotted half-note with C, tied.

Left hand audio example. [audio plays again, as for the right hand]

Left hand text.

Bar 1. Rest, one-line F fourth finger with A, tied, F dotted half note with A.

Bar 2. G dotted half note with A, tied, G dotted quarter note with A.

Bar 3. F dotted half note with A, tied, F dotted quarter note with A.

Bar 4. E dotted half note with G, tied, E dotted quarter note with G.

Audio example, right and left hand [audio example is played for both hands].

[4.3] This transcription illustrates that note names are preceded by their octave designations, which makes the inclusion of clefs unnecessary. In fact, the G clef at the start of the first bar for the left hand is not mentioned, even though it is in the score. Almost all information from the score is communicated, with the notable exception of slurs. Mom told me that these are generally left out, since they would probably only confuse the musician using a talking score. That is, with a long melody, one might announce the start of a slur in one fragment, and note its end only two or three fragments later, which would probably require the musician to scroll back to find out where the slur started unless they have an exceptionally good memory. The slur beginning in bar 1 spans the first 14 bars, meaning that it would only end at the end of the third fragment. The complete and exact transmission of the information contained in the score is therefore sacrificed for the sake of comprehensibility and user-friendliness. For Mom, this indicated that a talking score should ideally not be used for independent study, but always with a music teacher who can help the visually impaired musician with their phrasing by telling them about the slurs contained in the original score. Incidentally, we can see that ties are mentioned, but only when they start and not when they end. To some extent, this is a logical choice, since a tie only ever connects to the next note, and mentioning its end would add superfluous information and unnecessary cognitive load for the user. However, when learning this piece from the talking score, I often had to remind myself that notes had been tied to notes played previously, especially when a tie crossed a bar line.(5)
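
To connect this transcription back to the serialization principles described earlier, the following toy renderer (illustrative only; the actual narration is read aloud in Dutch by a human narrator, not generated by software) turns a structured representation of the opening right-hand events of bar 1 into a spoken-style line, applying the conventions visible above: the default note value stays unspoken, octave designations precede pitches, chord companions and fingerings follow the note they belong to, and ties are announced only where they begin.

    # Toy renderer mirroring the conventions visible in the transcription above.

    def render_event(ev):
        head = ev["pitch"]
        if "octave" in ev:                    # octave designation precedes the pitch
            head = f"{ev['octave']}-line {head}"
        if ev.get("value"):                   # default eighth notes stay unspoken
            head += f" {ev['value']}"
        words = [head]
        for companion in ev.get("with", []):  # chord companions, bottom to top
            words.append(f"with {companion}")
        if ev.get("fingers"):
            words.append(" and ".join(ev["fingers"]) + " finger")
        if ev.get("tied"):                    # ties are announced only where they begin
            words.append("tied")
        return ", ".join(words)

    # The opening right-hand events of bar 1, after the two rests.
    bar1_right_hand = [
        {"octave": "two", "pitch": "F", "with": ["A"],
         "fingers": ["first", "fifth"], "tied": True},
        {"pitch": "F", "value": "dotted quarter note", "with": ["A"]},
        {"pitch": "D", "value": "dotted quarter note", "with": ["F"],
         "fingers": ["second", "fourth"], "tied": True},
    ]

    print("Bar 1. Two rests. " + ". ".join(render_event(e) for e in bar1_right_hand) + ".")
    # Bar 1. Two rests. two-line F, with A, first and fifth finger, tied.
    # F dotted quarter note, with A. D dotted quarter note, with F,
    # second and fourth finger, tied.

The output only approximates the narrated text (it omits the dynamic and “con sordina” indications and punctuates differently), but it makes explicit how many decisions the format has already taken on the reader’s behalf.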

Example 3. “Clair de Lune,” bars 5–8

[4.4] In the text for this first fragment, the right hand in bar 3 contains polyphonic textures that are communicated by using a so-called “voice division.” In this case, it is a solution to a problem in the transmission from one format to another that is easy enough to understand. In the course of the piece, however, the use of voice divisions led to some discrepancies and difficulties that I wish to highlight. In the next fragment, containing bars 5 to 8, we see some contrapuntal textures in both hands (see Example 3). Essentially, this is a mostly homophonic accompaniment of dotted half notes and dotted quarter notes under a melody of mostly eighth notes. On the talking score, the right hand is consistently communicated using the voice divisions we heard earlier. In the left hand, however, the narrator is faced with a choice. The texture is mostly homophonic (except for the final dotted quarter note with the duplet underneath), but unlike the homophonic texture in bars 1 to 4, it is notated contrapuntally, with stems suggesting two melodic lines. The talking score notates these as “D with E, C with E” and so on, thereby ignoring the implied counterpoint, except for the final dotted quarter note. Again, the complete and exact transmission of information is sacrificed for the sake of user-friendliness. Unlike the example of slurs discussed earlier, however, what is missing here is not a sign, but rather a pragmatic implication afforded by a given disposition of the signs. Here, it is a sensible enough solution: the musician playing this can always discover the stepwise downward motion of the bass note simply by playing it.

Example 4. “Clair de Lune,” bars 27–28

[4.5] However, at other points the narrator did decide to mention this kind of counterpoint, leading to some genuine confusion on my part when learning this piece. For instance, in bars 27–28, the bass note of each arpeggio is held (see Example 4). In the talking score, this too is dictated with a voice division, where the D, F and A are first given as dotted quarter notes, and then again as part of the sixteenth-note arpeggios. Throughout the rest of the piece, the many arpeggios are often played with a held note at the bottom, and they are all read using the voice division technique. However, it is not entirely clear to me that this is indeed a contrapuntal texture. Rather, Debussy has used a notational technique originally designed for notating counterpoint for the purpose of indicating that the bass note should be held—it is as much a kind of tablature as a representation of musical structure. This shows how, as I argued above, notation’s descriptive function is inseparable from its scriptive qualities, the anticipation of the abilities of musicians and instruments to perform it. In this case, it highlights the ambiguities that emerge from the repurposing of a notation system designed for contrapuntal relations between individual voices to represent pianistic textures designed for the interaction between different fingers and hands.

Example 5. “Clair de Lune,” bars 19–24

[4.6] If this example seems like something of a borderline case—that is to say, one might argue it either way—in other places this creates more of a problem. For instance, in bars 19 to 24, the left hand plays a bass note in octaves, after which both hands play a homophonic melody in block chords (see Example 5). However, in bars 21 and 23, for example, the left and right hands play a melody in double octaves, with some notes added here and there to fill out the harmony. Again, Debussy uses contrapuntal notation for a kind of tablature purpose; the notes filling out the harmonies in no way constitute an independent melody. However, the talking score treats these, too, as voice divisions, leading to a three-part texture in the left hand: one part for the octave in the bass, one for the melody in sextuplets, and one for the harmonic filler.

[4.7] This tension between descriptive and prescriptive properties of the notation points to another pitfall, namely that the two staves of a piano score do not always correspond to the right and left hands in a completely logical way. The ambiguity of this correspondence also occurs in the very first bar, where the attentive reader may have noticed that the first notes in the left hand are read in the talking score as “F with A” with no further indication of note value, suggesting they are eighth notes. However, with the eighth-note rest before it and the dotted half note that follows, this bar is technically one eighth note too short as notated in the talking score, resulting in an incomplete bar. In the staff notation, this presents no problem as the beam is carried over to the right hand “F with A” an octave above to suggest the phrasing of this opening gesture—as well as filling in the missing eighth note in the bottom staff—but this subtle use of notation is not easily transmitted in the talking score.

Example 6. “Clair de Lune,” bars 39–40

[4.8] There is no uniform answer to the question of how to translate such problems. A large part of this piece consists of a typical pianistic texture of a melody and bass line, with a third layer of arpeggios in between, and this texture presents particular problems considering this discrepancy between staffs and hands. In bars 27 and 28, the text for the right hand only gives the notes of the melody (with the accompanying thirds) while the arpeggios are all given in the “left hand text” fragment, even the notes that are notated for the right hand, although it is mentioned in that fragment that they are to be played in the right hand. The audio examples for the right and left hand for these two bars, however, follow the notation—likely because they were generated automatically from the sheet music. In bars 39 and 40, there is also an arpeggio spread out over two hands (see Example 6). Here, the notes are read in the hand for which they are written. However, since both hands contain a voice division and the arpeggio is thus considered as a contrapuntal line, rests are inserted in the first three sixteenths of the right hand and the second three sixteenths of the left hand (and so on), even though these are never indicated in the score. At moments like these, with multiple contrapuntal textures that did not always relate to each other logically, and with a multitude of sixteenth notes and rests spread across different octaves, using the talking score became particularly frustrating, and I frequently had to take breaks to give my mind some rest. For the most difficult sections, I had to return to the same couple of bars a few days later, so that I could fully visualize, memorize, and perform the music.(6)

[4.9] The audio examples, in such cases, played a crucial role for me. Without the audio examples (and especially if I had never heard the piece before), I would probably not have been able to learn this piece from a talking score. I said earlier that these audio examples serve mainly to illustrate the notation: they present the occasionally very large amount of information in a way that makes musical sense, combining the various elements and presenting them together in a way that the step-by-step narration of a talking score cannot. This is certainly true, but I would like to suggest that they do more than that, namely, that their function approaches that of a kind of notation. They presented the musical content in a way that was frequently more immediately understandable than the spoken text, showing, for instance, how different “voice divisions” were actually related, or (because of the click track) how certain rhythms were placed within the meter.

[4.10] This idea needs some qualification, however; certainly, as sonic samples of the music, they exemplify the music in a way that notation does not. If notation may be defined as “interfaces for imagining virtual musical relations” (Schuiling 2019, 432), then these samples seemingly lack the property of virtuality, being themselves actual performances of (part of) the music. With some ear training we can listen to any performance as a kind of “notation” that we can subsequently perform ourselves—any performance of music is also a sample of that music.(7) However, that does not mean that any performance is itself a notation; with regard to these audio examples, what is especially significant is that they are performances of yet another transcription of the music, namely the MIDI file that was created for this purpose.(8) As a transcription, it makes particular use of Szendy’s double logic of representation and transformation. By translating the musical content to a different material format, it exposes something that is at once inherent to and lacking in the original, such as the relations between the different parts of a voice division, while necessarily (and paradoxically) doing away with other features such as dynamics in order to achieve a more accurate representation. In fact, the latter is not entirely true, and this points yet again to the reciprocity between transformation and accuracy of representation; Mom told me that he does include slight dynamic differences in the audio examples, not to express the dynamic indications given in the score (which they do not match), but rather to highlight the different textures within the music, making accompanying harmonies slightly softer than the accompanied melody, or making a melody hidden within a contrapuntal texture louder so that it is more easily noticeable. Where dynamics in staff notation are prescriptive aspects directing the musician in their performance, in this case they serve a descriptive function, distinguishing between various elements of the musical structure. Again, the representational function of notation depends on its scriptive anticipation of the abilities of its user to distinguish tones and melodic lines.
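
In code terms, this descriptive use of dynamics amounts to assigning different fixed velocities to different textural layers rather than realizing the score’s dynamic markings. The sketch below again uses music21 purely as a stand-in, with a hypothetical file name and part layout, to make an accompanying layer slightly softer than the melody.

    from music21 import converter

    # Hypothetical excerpt with the melody in the first part and the
    # accompaniment in the second; file name and part layout are assumptions.
    score = converter.parse("fragment.musicxml")

    MELODY, ACCOMPANIMENT = 0, 1
    layer_velocity = {MELODY: 80, ACCOMPANIMENT: 56}   # descriptive, not expressive

    for index, part in enumerate(score.parts):
        velocity = layer_velocity.get(index, 64)
        for n in part.recurse().notes:
            n.volume.velocity = velocity               # one fixed level per layer

    score.write("midi", fp="fragment_layered.mid")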

Conclusion: rethinking access and musical literacy

[5.1] All in all, using the talking score, while an interesting experiment, was a frustrating experience for me. In my interview with Houdijk, after he played me an example of a talking score on his DAISY device, I remarked that “it goes quite quickly.” He firmly replied: “It does not go quickly.” At the time I did not really pursue this remark further, but it came back to mind when I used a talking score myself. Whereas I had meant that the spoken text was read quite quickly, in such a way that I was completely unable to form a mental image (let alone a sonic image) of what I had just heard, Houdijk’s remark must have expressed a sense of frustration that was an unavoidable part of his everyday musical activity. The complete talking score has a duration of about 1 hour and 45 minutes, and that does not include all the repetitions and rewinding in the course of learning the music, while the audio example for the entire piece is only 4 minutes and 29 seconds (though most pianists would probably play it slightly more slowly). To be sure, having worked with talking scores for nearly three decades, Houdijk had probably developed a more efficient way of working with them than I could in this limited time, training himself to quickly form a mental image of the music based on the spoken text. Indeed, many blind users of text-to-speech technologies such as computer screen readers speed up the spoken text to such an extent that it becomes unintelligible to untrained ears, and it is possible that Houdijk also adjusted the speed of his talking scores. Still, acquiring repertoire for an hour-long concert performance must have required a vast investment of time and effort—especially considering that, as an organist, Houdijk had to memorize many contrapuntal lines divided over multiple manuals and the pedals. He made other similar remarks, for instance about the fact that the first thing his DAISY device said when he turned it on was “Please wait,” after which it proceeded to play what Houdijk called “new age jingles” while it loaded the information on the CD. Learning a piece of music in this manner, he explained, takes time. “The audio examples are a great help, and of course it helps that I know a thing or two about music theory. I have also become more adept at it than I was in the beginning, but it is not easy. You need to take time for it, and be patient.” This is a tension that can probably never be fully resolved; as David Baker and Lucy Green remark in their book on the lives of visually impaired musicians, the question of time “threads in and out of many aspects of visually impaired music-making, in ways that are different for sighted musicians,” and this applies much more broadly than to music alone (2017, 67).

[5.2] The demand that a talking score puts on its user invites reflection on the hierarchical nature of much classical music practice, and on how this creates particular problems for musicians with an impairment. Carolyn Abbate writes that musical performance “upends assumptions about human subjectivity by invoking mechanism,” and speaks of “human bodies wired to notational prescriptions” (2004, 508). This emphasis on the dehumanizing character of much classical music discourse, with its requirement of complete faithfulness to the score, is a painfully accurate description of the experience of using a talking score, more so perhaps than of using staff notation. Listening to a talking score makes one aware of the wealth of information that is communicated in staff notation, and while practicing I frequently came across aspects of the piece that I was certain I had not noticed the first time I learned it from staff notation. To some extent, then, the technology enforces a kind of faithfulness to the score that is less pronounced when reading staff notation. Much work in musical performance studies has criticized this hierarchical way of working, and Abbate’s phrase is a particularly resonant formulation of that critique. Daniel Leech-Wilkinson has written extensively about classical music’s obsession with the “one true performance” of a musical work, which functions not only as a questionable aesthetic ideal but equally as a pernicious moral obligation. His website Challenging Performance (Leech-Wilkinson n.d.) collects his various arguments about the influence of this discourse on performance, criticism, and musicology, but especially on music education and musicians’ wellbeing, as well as some of his experiments with alternative forms of interpretation in classical music.

[5.3] This raises a complex and delicate issue in the context of accessibility and universal design. Houdijk was certainly a performer who valued this sense of Werktreue. He emphasized that the inability to read and play at the same time required him to memorize the music extremely well, and he prided himself on knowing every detail about the pieces he played; indeed, unlike sighted performers, he could not perform music at all without memorizing it in detail. Baker and Green, in their book on blind musicianship, quote recorder player James Risdon expressing his bemusement that rehearsals with sighted musicians often start with long stretches of what he calls “note-bashing,” where they just try to get to the end of their parts without really listening to each other (2017, 131–32). Talking scores and other forms of assistive technology therefore allow blind performers to meet existing ideals of excellent musicianship even more fully than sighted performers do; indeed, like other forms of universal design, talking scores could have positive side effects by helping sighted musicians train their memorization of detailed information, as was the case in my own experience. On the one hand, then, it is vital for blind and visually impaired musicians to retain access to music notation; accommodating them not only allows them to excel according to particular standards in classical music discourse, but may also improve such access for sighted musicians. For the few musicians like Houdijk, as for other musicians who use Braille music or have other means of reading music, access to music notation is not only a question of basic rights; it also provides an autonomy and creative agency in making interpretive decisions that they would not have if they learned only by ear from recordings.

[5.4] On the other hand, it is undeniable, as some have argued in recent debates about musical literacy in music education, that demands for reading skills have a potentially exclusionary effect and can be harmful to the pleasures that musicians derive from making music. The idea that “true musicianship” in classical music is conditional on a high level of musical literacy is deeply ableist, and the role of music notation in music education for the blind and visually impaired should be considered with care. In a radio documentary on talking scores, Houdijk expresses some of the anxiety that this reliance on his memory instilled in him: “Dear me—when you’re up there during a concert, and you lose it, then it’s just blackness. You have nowhere to look, not even on the keyboard. That is an image that haunts me, every single performance. You are literally clutching at a black hole.” Of course, many sighted performers play from memory, without a score in front of them, and so they face essentially the same problem. For Houdijk, however, and for other blind performers in classical music, the demand for memorization can become part of the social stigmas that surround their impairment.

[5.5] One potential response to this dilemma might be a more pluralistic conception of musical literacy. The concept of remediation can help us attend to the ways in which information and formats are entwined. If Abbate bleakly describes musicians as “wired to notational prescriptions,” then we might counter this with Bernhard Siegert’s argument that “only what remains stable in the ‘wiring’ of the immutable mobiles, is at all” (2015, 86).(9) Indeed, there is no music without the coupling of humans and technology (De Souza 2014). However, in its emphasis on the translation from one system to another, the concept of remediation can perpetuate a rather monadic and uniform conception of what a format is—as is evident, for example, in Szendy’s conception of arrangement as a writing of one’s listening. Even staff notation, however, is not just one interface but in fact consists of multiple interacting interfaces(10)—we saw this in the ambiguity between contrapuntal and homophonic notation, but more broadly we can point to the amalgam of diatonic arrangements of tonal space, divisive conceptions of rhythm and meter, ideas about accents and dynamics partly drawn from linguistics, and many other formats that have historically coalesced into what we now consider a uniform notation system. Even restricting ourselves to staff notation, then, we have to acknowledge its dependence on a multiplicity of interfaces—and therefore of literacies—and we can recognize the same multiplicity in the reliance on both spoken instructions and audio examples when learning from a talking score, where literacy depends on the successful integration of both interfaces.

[5.6] In fact, this monadic conception of formats also lies at the heart of the aim of talking scores to translate all of the information of staff notation into a different format, and it constitutes a potential problem. Talking scores seem to be a failed (or possibly a failing) technology. This is not to say they are worthless; Houdijk was able to sustain a lifelong career as a professional performing musician because of them, and Mom has spent most of his working life producing them. Indeed, as Braille literacy seems to be declining (because fewer people are blind from birth, and because text-to-speech technologies have become far more accessible thanks to mobile phones), talking scores might be more necessary than ever. However, as audiobooks increasingly rely on streaming, it seems unlikely that talking scores can continue to rely on the technologies currently used for their transmission, since streaming impedes the navigation necessary to use them successfully. Siegert points out that, with the rise of digital technology, there has been a dissolution of the concept of “a medium” (2015, 85). Individual pieces of technology (such as a television, or a DAISY player) have become far less significant than the digital codes that we may access through these various interfaces. Future forms of music notation for the blind and visually impaired will likely make use of such codes (such as MusicXML) that can be accessed through refreshable Braille displays, text-to-speech technologies, or other interfaces (Baker and Green 2017, 169–72). More ambitiously, in the spirit of Universal Design, the ideal solution would be for sheet music publishers to create source files that can be exported into various formats, including their own editions of staff notation, which would obviate the need for “piggyback solutions” like talking scores. The key consideration here is flexibility in adapting the information to a wide variety of interfaces.
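To make the idea of a single exportable source file concrete, the following minimal sketch (in Python, using the music21 toolkit) derives two different interfaces from one MusicXML file: a MIDI rendering, analogous to a talking score’s audio examples, and a plain-text, speech-ready description of each note, analogous to its spoken instructions. The file name and the phrasing of the description are invented for illustration; this does not reflect Dedicon’s or any publisher’s actual workflow.

    # Hypothetical sketch: one MusicXML source, several output interfaces.
    from music21 import converter

    score = converter.parse("clair_de_lune.musicxml")  # invented file name

    # Interface 1: an audio rendering, playable on any MIDI-capable device.
    score.write("midi", fp="clair_de_lune.mid")

    # Interface 2: a line-by-line verbal description, suitable for
    # text-to-speech output or a refreshable Braille display.
    lines = []
    for element in score.recurse().notes:
        if element.isChord:
            pitches = " ".join(p.nameWithOctave for p in element.pitches)
            lines.append(f"measure {element.measureNumber}: chord {pitches}, "
                         f"lasting {element.quarterLength} quarter notes")
        else:
            lines.append(f"measure {element.measureNumber}: {element.nameWithOctave}, "
                         f"lasting {element.quarterLength} quarter notes")

    with open("clair_de_lune_description.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(lines))

Whatever the technical details, the design principle is the one named above: the source encoding, not any particular rendering, is the stable object, and each interface (audio, speech, Braille, print) is derived from it as needed.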

[5.7] Another reason that more flexible formats are desirable has less to do with developments in media technologies than with the changing nature of blindness. As most blind people now become blind later in life, fewer are learning Braille or Braille music.(11) Considering the rapidly increased accessibility of text-to-speech technologies thanks to mobile phones, an audio-based format for communicating musical information is potentially more relevant to blind musicians than ever. At the same time, these are also people who may start to struggle with their memory in addition to their visual impairment, and the high cognitive demands of talking scores, at least in their current format, make using them too frustrating for many to continue making music with any enjoyment. The adaptability of music notation to different interfaces, by definition, should also entail an adaptability to the many different ways in which individual musicians may perform and incorporate their blindness. Media technologies are a crucial part of the ways that disability is constructed; assistive technologies are not developed for a generic class (or even subset) of blind people, but for people who have each already developed habits as well as creative practices in interaction with particular interfaces as part of their social interactions in everyday life. In Houdijk’s case, and that of others in his generation, this was the audiobook; today, it includes the many ways in which blind people have appropriated computers and smartphones to match their individual needs.

[5.8] This brings me to a final point. In a discussion of the importance of skill, including an uncharacteristically positive nod to Bourdieu’s habitus, Latour describes skills in terms of plug-ins, “a bit of software which, once installed on your system, will allow you to activate what you were unable to see before” (2005, 207). This metaphor leads Latour to imagine the human body as a cyborg, composed of elements and layers, patches and applications that together constitute a person. The metaphor, in its close affinity to Abbate’s image of a mechanistic “wiring” of the body in musical performance, clearly shows where ideas of the social and technical construction of the body can clash with considerations of disability. The plug-in metaphor casts the body as a blank slate, revealing the tacit assumptions of normalcy in its conception of the human body. Moreover, losing a skill is fundamentally not the same as losing a body part or bodily function, and the metaphor risks erasing this difference. Tim Ingold, who has frequently criticized actor-network theory for its handling of skill (2011), cites the blind author and theologian John Hull, writing: “Blind and deaf people, like everyone else, sense the world with their whole body, and like everyone else, too they have to cope with the resources available to them. . . . It is not like a round cake from which a substantial slice has been cut out. It is more like a smaller cake” (2000, 270). As I have suggested in my discussion of talking scores, assistive technologies are not simply “plugged in,” but have to be woven into a musical practice, joining with the other technologies and skills that are already in play. Accessibility is not simply a question of opening a door or a window; it requires sustained attention to the development of skills in the negotiation of social and technological structures. It is this process of transformation that partly constitutes musical literacy. If using talking scores takes a considerable amount of time and effort, they do ultimately afford their users an artistic and creative autonomy that is not easily achieved otherwise. As Houdijk put it, “There is an essential difference—not gradual, but categorical—between very difficult and impossible. As long as it’s very difficult, I’m still dancing in the streets, you see?”

    Return to beginning    



Floris Schuiling
Utrecht University
f.j.schuiling@uu.nl

    Return to beginning    



Works Cited

Abbate, Carolyn. 2004. “Music—Drastic or Gnostic?” Critical Inquiry 30 (3): 505–36. https://doi.org/10.1086/421160.

—————. 2016. “Sound Object Lessons.” Journal of the American Musicological Society 69 (3): 793–829. https://doi.org/10.1525/jams.2016.69.3.793.

Abbate, Carolyn, and Michael Gallope. 2020. “The Ineffable (And Beyond).” In The Oxford Handbook of Western Music and Philosophy, ed. Tomás McAuley, Nanette Nielsen, Jerrold Levinson, and Ariana Phillips-Hutton, 741–61. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199367313.013.36.

Accinno, Michael. 2016. “Disabled Union Veterans and the Performance of Martial Begging.” In The Oxford Handbook of Music and Disability Studies, ed. Blake Howe, Stephanie Jensen-Moulton, Neil William Lerner, and Joseph Nathan Straus, 403–22. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199331444.013.20.

Akrich, Madeleine. 1992. “The De-Scription of Technical Objects.” In Shaping Technology/Building Society: Studies in Sociotechnical Change, 205–24. MIT Press.

Baker, David, and Lucy Green. 2017. Insights in Sound: Visually Impaired Musicians’ Lives and Learning. Routledge. https://doi.org/10.4324/9781315266060.

Bolter, Jay David, and Richard Grusin. 1999. Remediation: Understanding New Media. MIT Press. https://doi.org/10.1108/ccij.1999.4.4.208.1.

Born, Georgina. 2005. “On Musical Mediation: Ontology, Technology and Creativity.” Twentieth-Century Music 2 (1): 7–36. https://doi.org/10.1017/S147857220500023X.

Cheng, William. 2019. Loving Music Till It Hurts. Oxford University Press. https://doi.org/10.1093/oso/9780190620134.001.0001.

Crombie, David, and Roger Lenoir. 2008. “Designing Accessible Music Software for Print Impaired People.” In Assistive Technology for Visually Impaired and Blind People, ed. Marion A. Hersh and Michael A. Johnson, 581–613. Springer. https://doi.org/10.1007/978-1-84628-867-8_16.

De Souza, Jonathan. 2014. “Voice and Instrument at the Origins of Music.” Current Musicology 97: 21–36. https://doi.org/10.7916/cm.v0i97.5322.

Devine, Kyle, and Alexandrine Boudreault-Fournier. 2021. “Making Infrastructures Audible.” In Audible Infrastructures: Music, Sound, Media, ed. Kyle Devine and Alexandrine Boudreault-Fournier, 3–55. Oxford University Press. https://doi.org/10.1093/oso/9780190932633.003.0001.

Galloway, Alexander R. 2013. The Interface Effect. Wiley.

Godden, Richard, and Jonathan Hsy. 2018. “Universal Design and Its Discontents.” In Disrupting the Digital Humanities, ed. Dorothy Kim and Jesse Stommel, 91–112. Punctum Books. https://doi.org/10.2307/j.ctv19cwdqv.9.

Goodley, Dan. 2014. Dis/Ability Studies: Theorising Disablism and Ableism. Routledge. https://doi.org/10.4324/9780203366974.

Goodman, Nelson. 1976. Languages of Art: An Approach to a Theory of Symbols. Hackett. https://doi.org/10.5040/9781350928541.

Howe, Blake. 2010. “Paul Wittgenstein and the Performance of Disability.” The Journal of Musicology 27 (2): 135–80. https://doi.org/10.1525/jm.2010.27.2.135.

Imrie, Rob. 2012. “Universalism, Universal Design and Equitable Access to the Built Environment.” Disability and Rehabilitation 34 (10): 873–82. https://doi.org/10.3109/09638288.2011.624250.

Ingold, Tim. 2000. The Perception of the Environment: Essays on Livelihood, Dwelling and Skill. Routledge.

—————. 2011. Being Alive: Essays on Movement, Knowledge and Description. Routledge.

Johnson, Shersten. 2009. “Notational Systems and Conceptualizing Music: A Case Study of Print and Braille Notation.” Music Theory Online 15 (3–4). https://doi.org/10.30535/mto.15.3.11.

—————. 2016. “Understanding Is Seeing: Music Analysis and Blindness.” In The Oxford Handbook of Music and Disability Studies, ed. Blake Howe, Stephanie Jensen-Moulton, Neil Lerner, and Joseph Straus. https://doi.org/10.1093/oxfordhb/9780199331444.013.7.

Kafer, Alison. 2013. Feminist, Queer, Crip. Indiana University Press.

Kleege, Georgina. 2016. “Blindness and Visual Culture: An Eyewitness Account.” Journal of Visual Culture 4 (2): 179–90. https://doi.org/10.1177/1470412905054672.

Kochavi, Jon. 2009. “How Do You Hear That? Autism, Blindness, and Teaching Music Theory.” Music Theory Online 15 (3–4). https://doi.org/10.30535/mto.15.3.10.

Krolick, Bettye. 1997. Music Braille Code. Braille Authority of North America.

Latour, Bruno. 1999. Pandora’s Hope: Essays on the Reality of Science Studies. Harvard University Press.

—————. 2005. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press.

Leech-Wilkinson, Daniel. n.d. “Challenging Performance—Re-Thinking Creativity in Classical Performance.” Accessed February 1, 2020. https://challengingperformance.com/.

Magnusson, Thor. 2019. Sonic Writing: Technologies of Material, Symbolic, and Signal Inscriptions. Bloomsbury. https://doi.org/10.5040/9781501313899.

Mathew, Nicholas, and Mary Ann Smart. 2015. “Elephants in the Music Room: The Future of Quirk Historicism.” Representations 132 (1): 61–78. https://doi.org/10.1525/rep.2015.132.1.61.

McLuhan, Marshall. 2013. Understanding Media: The Extensions of Man. Gingko Press.

Michalko, Rod. 2010. “What’s Cool about Blindness?” Disability Studies Quarterly 30 (3/4). https://doi.org/10.18061/dsq.v30i3/4.1296.

Mills, Mara. 2011a. “Deafening: Noise and the Engineering of Communication in the Telephone System.” Grey Room (43): 118–43. https://doi.org/10.1162/GREY_a_00028.

—————. 2011b. “On Disability and Cybernetics: Helen Keller, Norbert Wiener, and the Hearing Glove.” Differences 22 (2–3): 74–111. https://doi.org/10.1215/10407391-1428852.

—————. 2011c. “Do Signals Have Politics? Inscribing Abilities in Cochlear Implants.” In The Oxford Handbook of Sound Studies, ed. Trevor Pinch and Karin Theda Bijsterveld, 320–47. https://doi.org/10.1093/oxfordhb/9780195388947.013.0077.

Mills, Mara, and Jonathan Sterne. 2017. “Dismediation: Three Propositions and Six Tactics (Afterword).” In Disability Media Studies, ed. Elizabeth Ellcessor and Bill Kirkpatrick, 365–78. NYU Press.

Mitchell, David T., and Sharon L. Snyder. 2000. Narrative Prosthesis: Disability and the Dependencies of Discourse. University of Michigan Press. https://doi.org/10.3998/mpub.11523.

Pacun, David. 2009. “Reflections on and Some Recommendations for Visually Impaired Students.” Music Theory Online 15 (3–4). https://doi.org/10.30535/mto.15.3.13.

Patteson, Thomas. 2015. Instruments for New Music: Sound, Technology, and Modernism. University of California Press. https://doi.org/10.1525/luminos.7.

Rijksinstituut voor Volksgezondheid en Milieu. 2020. “Gezichtsstoornissen.” Volksgezondheidenzorg.info. https://www.volksgezondheidenzorg.info/onderwerp/gezichtsstoornissen.

Saslaw, Janna. 2009. “‘Teaching Blind’: Methods for Teaching Music Theory to Visually Impaired Students.” Music Theory Online 15 (3–4). https://doi.org/10.30535/mto.15.3.14.

Schuiling, Floris. 2019. “Notation Cultures: Towards an Ethnomusicology of Notation.” Journal of the Royal Musical Association 144 (2): 429–58. https://doi.org/10.1080/02690403.2019.1651508.

—————. 2022. “Music as Extended Agency: On Notation and Entextualization in Improvised Music.” Music and Letters 103 (2): 322–43. https://doi.org/10.1093/ml/gcab109.

Seeger, Charles. 1958. “Prescriptive and Descriptive Music-Writing.” The Musical Quarterly 44 (2): 184–95. https://doi.org/10.1093/mq/XLIV.2.184.

Siebers, Tobin. 2010. Disability Aesthetics. University of Michigan Press. https://doi.org/10.3998/mpub.1134097.

Siegert, Bernhard. 2015. “Media after Media.” In Media after Kittler, ed. Eleni Ikoniadou and Scott Wilson, 79–91. Rowman and Littlefield.

Sterne, Jonathan. 2003. The Audible Past: Cultural Origins of Sound Reproduction. Duke University Press. https://doi.org/10.1515/9780822384250.

—————. 2012. Mp3: The Meaning of a Format. Duke University Press. https://doi.org/10.2307/j.ctv1131dh6.

Stichting Vision 2020 Netherlands. n.d. “Situatie in Nederland.” https://www.vision2020.nl/situatie-in-nederland/.

Straus, Joseph. 2011. Extraordinary Measures: Disability in Music. Oxford University Press.

Szendy, Peter. 2008. Listen: A History of Our Ears. Fordham University Press.

Titchkosky, Tanya. 2011. The Question of Access: Disability, Space, Meaning. University of Toronto Press.

    Return to beginning    



Footnotes

* An early version of this article was presented at the musicology research groups of Utrecht University and Potsdam University in January 2020. It has benefited from feedback from many people, including Stephanie Probst, Christian Thorau, Wouter Capitain, Madelynn Hart, Nicolas Donin, Clément Canonne, and Pierre Saint-Germier. I am especially grateful to the anonymous reviewers of Music Theory Online for some very useful criticisms and suggestions for improvement.
Return to text

1. To be clear, I would not want to equate professionalism with musical literacy; this is, at the very least, an obviously ableist assumption. However, it is such a ubiquitous value in Western Art Music that blind people often uphold it themselves, and also take professional pride in conforming to it—for that reason, good access to notated music should be provided to them. I return to this question in the conclusion.
Return to text

2. In this context, see also Blake Howe’s (2010) excellent discussion of the music written or arranged for pianist Paul Wittgenstein, who lost his right arm as the result of an injury incurred in the First World War. Where Howe mainly describes the politics of such arrangements, variously embodying narratives of compensation, overcoming, ableism, and normalization, my interest here lies more with the ontological questions explored by Szendy—although, as will become clear, such ontological questions are by no means apolitical. Shersten Johnson (2009) pursues a similar aim in her discussion of Braille music notation; that same issue of Music Theory Online also contains useful articles by Jon Kochavi (2009), David Pacun (2009), and Janna Saslaw (2009) on the practicalities of teaching music theory and analysis to visually impaired students.
Return to text

3. One of his most intriguing arguments (2008, 100–128) concerns Beethoven’s deafness. Discussing the descriptions, by Wagner and others, of the deaf Beethoven as the ideal structural listener (somewhat analogous to the “blind seer” trope explicitly invoked by Wagner on page 121), Szendy argues that, by implication, the ideal structural listener is deaf. Although it is a fascinating argument, it uses deafness purely as a crutch for theoretical argument and shows no interest in deafness itself (Beethoven’s or anyone else’s).
Return to text

4. Alexander Galloway distinguishes his argument about interfaces from arguments concerning remediation on the grounds that it focuses attention on practical, physical engagement rather than on ideological questions about representations of truth and reality (Galloway 2013, 20–21).
Return to text

5. When a tie crosses a fragment division, the end of the tie is mentioned again at the start of the next fragment.
Return to text

6. As mentioned earlier, talking scores require a familiarity with the principles of staff notation. My visualizing the music in the learning process is obviously not possible for musicians who have never seen staff notation, which may or may not make the process more difficult. Houdijk stated he still visualized the music when using talking scores, even though he had been fully blind for over twenty years.
Return to text

7. Nelson Goodman (1976, 236) writes that a performance of a musical work by definition exemplifies the work or score. We can accept this notion without necessarily accepting Goodman’s notoriously strict conditions for what constitutes a performance. Freely improvised music might be thought to constitute a counter example, but this question lies beyond the scope of this article (though see Schuiling 2022 for a discussion of related considerations).
Return to text

8. Of course, any form of recording, as a means of material inscription, can be considered a form of notation, and will meet the condition of virtuality. As a protocol for the digital transmission of values for certain musical parameters, MIDI underlines Thor Magnusson’s argument that digital music blurs the lines between instrument, notation, and recording (Magnusson 2019). It should be noted that this fact is not unique to digital music: Thomas Patteson (2015) describes how European composers in the late 1920s experimented with the use of phonographs as musical instruments, which led not only to a consideration of phonograph records as a form of notation, but also to the exploration of new notational techniques that play into the workings of this particular means of inscription. Mara Mills and Jonathan Sterne have also each documented how technologies of sound reproduction can be considered as what Szendy calls a “writing of listening,” in that they inscribe particular ideals about listening, and questions of (dis)ability have been central to their studies (Mills 2011a, 2011b, 2011c; Sterne 2003, 2012).
Return to text

9. Indeed, Abbate herself has attended closely to the ways that music is shaped by processes of technological mediation in her more recent work; see Abbate 2016 and especially Abbate and Gallope 2020.
Return to text

10. Galloway coins the term “intraface” to refer to these interfaces inside interfaces, and argues that they are necessarily ambiguous (2013, 40).
Return to text

11. In 2005 in the Netherlands, the percentage of visually impaired people who were between ages 0 and 14 was only 0.9%, and only 6.1% were between ages 14 and 49 (Stichting Vision 2020 Netherlands). Moreover, the most important causes of blindness since 2005 (cataracts, diabetic retinopathy, glaucoma, and macular degeneration) largely affect elderly people (Rijksinstituut voor Volksgezondheid en Milieu 2020).
Return to text

    Return to beginning    



Copyright Statement

Copyright © 2023 by the Society for Music Theory. All rights reserved.

[1] Copyrights for individual items published in Music Theory Online (MTO) are held by their authors. Items appearing in MTO may be saved and stored in electronic or paper form, and may be shared among individuals for purposes of scholarly research or discussion, but may not be republished in any form, electronic or print, without prior, written permission from the author(s), and advance notification of the editors of MTO.

[2] Any redistributed form of items published in MTO must include the following information in a form appropriate to the medium in which the items are to appear:

This item appeared in Music Theory Online in [VOLUME #, ISSUE #] on [DAY/MONTH/YEAR]. It was authored by [FULL NAME, EMAIL ADDRESS], with whose written permission it is reprinted here.

[3] Libraries may archive issues of MTO in electronic or paper form for public access so long as each issue is stored in its entirety, and no access fee is charged. Exceptions to these requirements must be approved in writing by the editors of MTO, who will act in accordance with the decisions of the Society for Music Theory.

This document and all portions thereof are protected by U.S. and international copyright laws. Material contained herein may be copied and/or distributed for research purposes only.

    Return to beginning    



Prepared by Michael McClimon, Senior Editorial Assistant
