BAIN ATMI 2022

Mutational Music Project

TUTORIAL

Parameter Mapping Sonification
of Genetic Data using Max & MIDI

Reginald Bain
Professor of Composition & Theory
University of South Carolina
School of Music
Columbia, SC 29208
rbain@mozart.sc.edu



This tutorial was created for the University of South Carolina (USC) course BIOL 599 Topics in Biology: Chords and Codons (Instructor: Jeff Dudycha; Collaborating Instructor: Reginald Bain). To date, BIOL 599 has been jointly offered three times with the USC School of Music course MUSC 540 Projects in Computer Music to form an interdisciplinary research experience in which composers and biologists work in teams to design sonification projects that address the following question:

In what way(s) can basic processes of genetics and evolutionary biology (especially mutation)
be effectively represented through musical processes?

All of the biologists are upper-division undergraduate Biology majors – or students in related majors such as Pre-Med., Marine Biology, Microbiology, etc. As such, they possess the necessary general background in biology (Clark et al. 2018) and genetics (e.g., Brooker 2009), and are interviewed during the previous semester to be sure they have a background in music. To provide students with an appropriate introduction to genetics and music, BIOL 599 begins with six introductory lecture/discussion sessions that alternate between biology (Dudycha) and music (Bain) as follows:
  1. Course Introduction (Biology 1)
  2. Music as Organized Sound (Music 1)
  3. Genetics Review (Biology 2)
  4. Sonification and Data-Driven Music (Music 2)
  5. Mutation (Biology 3)
  6. Mutational Music Project Ideas (Music 3)
Note: Links to Wikipedia articles are included in the main body of the tutorial's text to provide the reader with convenient access to definitions of technical terms and concepts. For genetic terminology, the reader may also wish to consult the NIH National Human Genome Research Institute's:

National Human Genome Research Institute, Talking Glossary of Genomic and Genetic Terms
https://www.genome.gov/genetics-glossary



TUTORIAL


Table of Contents

  1. Introduction
  2. Music from DNA
  3. Genetic Data
  4. Working with FASTA files
  5. The "zika41.txt" file
  6. Max & MIDI
  7. Parameter Mapping Sonification
  8. Sonification Design & Aesthetics
  9. Two Model Experiments
    • Experiment 1: Zika Melody
    • Experiment 2: Zika Rhythm
  10. Conclusion



1. Introduction

In order to demonstrate how genetic data may be mapped to musical parameters, I created two model experiments using Cycling '74's Max & MIDI: Experiment 1: Zika Melody (see Example 2), and Experiment 2: Zika Rhythm (see Example 3). Scientists in the field of auditory display refer to this approach as parameter mapping sonification (Grond and Berger 2011). In addition to introducing the data, software tools, and sonification approach, this tutorial briefly engages issues of design and aesthetics in the context of these two model sonification experiments, providing recommendations for further reading along the way. It is assumed that the reader has an undergraduate-level understanding of biology, genetics, and music theory.

2. Music from DNA

Some of the earliest attempts to create music from DNA were executed in the MIDI domain, so that is where we begin our journey. The early writings and experiments that have directly informed this musico-scientific work include: Hofstadter 1979, Hayashi and Munakata 1984, Munakata and Hayashi 1995, Dunn and Clark 1999, and Takahashi and Miller 2007. For a survey of other early research literature, see Dunn and Clark 1999, Jensen 2008, and Temple 2017.

3. Genetic Data

To get started making music from DNA, we'll need some data. One place you can download publicly-available genetic data is the:

National Center for Biotechnology Information (NCBI)
https://www.ncbi.nlm.nih.gov


In the experiments below we will sonify a nucleic acid sequence, but be sure to keep in mind that the sonifications you create for the course may be built around any type of genetic data; e.g., a protein sequence, gene expression data, epigenetic data, etc. You may even design a sonification around a biological model such as protein folding (see Taylor 2017).

To make the experiments easier to follow, we will use a short initial segment of a DNA sequence rather than an entire gene. The fragment we will use is from the Zika virus isolate Z1106033 polyprotein gene whose complete sequence is available in NCBI's open-access sequence database GenBank:

Zika virus isolate Z1106033 polyprotein gene
GenBank: KU312312.1

Clicking on the GenBank link above will take you directly to the genetic data.
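For readers who prefer to fetch the record programmatically, NCBI also exposes its sequence database through the E-utilities web service. The sketch below (Python) only constructs the "efetch" request URL for the accession number given above; actually downloading the data requires network access and is left commented out:

```python
from urllib.parse import urlencode

# NCBI E-utilities "efetch" endpoint for retrieving sequence records.
EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fasta_url(accession: str) -> str:
    """Build an efetch URL that returns the record in plain-text FASTA format."""
    params = {
        "db": "nuccore",      # the nucleotide database
        "id": accession,      # e.g. "KU312312.1"
        "rettype": "fasta",   # return type: FASTA
        "retmode": "text",    # plain text, not XML
    }
    return EFETCH + "?" + urlencode(params)

url = fasta_url("KU312312.1")
# from urllib.request import urlopen
# fasta_text = urlopen(url).read().decode()  # requires network access
```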

4. Working with FASTA files

DNA sequences are commonly encoded using the text-based FASTA format for bioinformatics data. Learning how to work with FASTA files now will help prepare you to work with other types of data in the future. To store, edit, and save text files, you'll need a text editor. In the figures below, I used a free text editor for Mac OS by Bare Bones called BBEdit. I recommend the cross-platform text editor Atom for Windows users, or you may prefer one of the programs in this list of text editors available on Wikipedia.

The first four lines of the Zika virus isolate Z1106033 polyprotein gene (heretofore Zika) FASTA file are shown in Figure 1.

Figure 1.  FASTA data

Zika virus FASTA data

(Data credit: NCBI, GenBank: KU312312.1)

The first line is a header that describes the data. Notice that the header is marked by the leading character ">". The DNA sequence data begins on line 2. Each line of data in the FASTA file contains 70 characters, except for the last line, which may contain fewer. It is also important to know that each line in a FASTA file ends with a line break. Figure 2 shows the "hidden" line breaks at the end of each 70-character line.

Figure 2.  Line break characters

Line breaks
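The layout just described – a ">" header line followed by fixed-width sequence lines, each ending in a line break – can be parsed in a few lines of code. A minimal sketch in Python (for a single-record FASTA file such as ours):

```python
def read_fasta(path: str) -> tuple[str, str]:
    """Return (header, sequence) from a single-record FASTA file."""
    header = ""
    chunks = []
    with open(path) as f:
        for line in f:
            line = line.strip()          # drop the trailing line break
            if line.startswith(">"):     # the header line
                header = line[1:]
            elif line:                   # a 70-character sequence line
                chunks.append(line)
    return header, "".join(chunks)

# header, seq = read_fasta("zika.txt")
```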

Here is the complete FASTA file saved in a standard plain text format with the filename extension ".txt":

zika.txt

If you right-click on the "zika.txt" link above, you will be presented with a menu that allows you to download the file to your computer (Save Linked File As...),  or you may alternatively copy and paste the sequence into a new text file.

Download the "zika.txt" file now and open it in your text editor of choice. When I opened the file in BBEdit, I was able to quickly determine that the file contains 150 lines of data (in addition to the header line) and 10,522 nucleotide base symbols by highlighting various parts of the data. I then used BBEdit to delete the header information and remove all of the line breaks. Finally, I saved the new file (File > Save As...) as "zika_data_only.txt" so you can see what the data looks like without the header and line breaks:

zika_data_only.txt

Although it is possible to process the header information and line breaks in the Max programming language, pre-processing the data file in this manner has the following advantages in the context of our experiments: (1) It greatly simplifies the Max code that will be required to implement the mapping; and (2) It makes it possible to sonify the data in real time.
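The same pre-processing – delete the header line, remove all line breaks – can also be scripted rather than done by hand in a text editor. A sketch in Python, using the filenames from this section:

```python
def preprocess(in_path: str, out_path: str) -> int:
    """Strip the FASTA header and line breaks, writing bases only.

    Returns the number of nucleotide base symbols written
    (the tutorial reports 10,522 for the "zika.txt" file).
    """
    with open(in_path) as f:
        lines = [ln.strip() for ln in f if not ln.startswith(">")]
    data = "".join(lines)
    with open(out_path, "w") as f:
        f.write(data)
    return len(data)

# n = preprocess("zika.txt", "zika_data_only.txt")
```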

5. The "zika41.txt" data file

In my initial attempts to sonify this data, I used the complete Zika DNA sequence and made it my aesthetic goal to "create interesting music." However, this did not yield satisfying results, so I began to search for a short initial sequence segment that might be more manageable in size and refined the aesthetic goal to:

Create an interesting melody and rhythm from the data that demonstrates
the essential components of parameter mapping sonification.

As described in musical detail in section 9 (below), this approach led me to the initial sequence fragment shown in Figure 3:

Figure 3.  The initial DNA sequence fragment (Zika41) that is used in the two sonification experiments

ACAGGTTTTATTTTGGATTTGGAAACGAGAGTTTCTGGTCA

Finally, I stored the sequence data in a text file named:

zika41.txt

6. Max & MIDI

Max is a graphical programming language for music and media that is commonly used by composers, performers, and sound artists working in the field of experimental music (Nyman 1999). It is an object-oriented language that is optimized for real-time human-computer interaction and device-control mapping in the MIDI, audio, and video domains. Moreover, it can create standalone apps for the Mac and Windows operating systems. Of course, a wide variety of tools may be used to sonify data. For a comprehensive survey of design approaches and tools, see Bovermann et al. 2011 and Worrall 2019.

The Musical Instrument Digital Interface (MIDI) communications protocol was introduced in 1983 by a consortium of music instrument manufacturers to allow digital synthesizers to talk to each other. MIDI encodes musical performance data as a sequence of time-based events. With the advent of personal computers, MIDI data could be easily edited using programs called MIDI sequencers and scorewriters (or music notation programs). Since MIDI and DNA may both be encoded as a sequence, we have a common metaphorical ground on which two otherwise totally distinct disciplines – genetics and music – may meet. To learn more about the history of MIDI and its technical implementation, I recommend that you read Chapter 3 MIDI of Jeffery Hass's open-access book Introduction to Computer Music:

Hass, Chapter 3 MIDI, from Introduction to Computer Music (Hass 2021)
https://cmtext.indiana.edu/MIDI/chapter3_MIDI.php
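As a concrete illustration of MIDI's event-based encoding, a sounding note is represented by a three-byte note-on message (status byte, pitch, velocity) followed some time later by a matching note-off. A minimal sketch in Python:

```python
def note_on(pitch: int, velocity: int, channel: int = 0) -> bytes:
    """Three-byte MIDI note-on message: status, pitch number, velocity."""
    return bytes([0x90 | channel, pitch, velocity])

def note_off(pitch: int, channel: int = 0) -> bytes:
    """Matching note-off message (velocity 0 by convention here)."""
    return bytes([0x80 | channel, pitch, 0])

msg = note_on(69, 100)   # the note A4, moderately loud
```

The duration of a note is not stored in either message; it is simply the elapsed time between the note-on and its note-off, which is why MIDI data is naturally described as a sequence of time-based events.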

7. Parameter Mapping Sonification

The NSF Sonification Report (Kramer et al. 1999) defines sonification as:

"...the use of nonspeech audio to convey information. More specifically, sonification is the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation."

The Sonification Report describes the status of the emerging field of auditory display with the goal of setting an agenda for future research. Open-access resources, including

The Sonification Handbook (Hermann et al. 2011)
https://sonification.de/handbook/

and

Proceedings of the International Conference on Auditory Display (ICAD)
https://smartech.gatech.edu/handle/1853/49750

provide us with convenient online access to everything we need to get started designing sonification experiments.

In the experiments below, we use a sonification design approach researchers call parameter mapping sonification (Grond and Berger 2011). Specifically, we map the nucleotide base symbols (A, G, C & T) in the "zika41.txt" DNA sequence file to MIDI parameters in real time with the aesthetic goal of creating an interesting melody and rhythm. The MIDI event parameters we focus on are:

(1) Pitch number
(2) Velocity
(3) Duration
(4) Instrument
(5) Pan

These five MIDI parameters roughly correspond to the traditional parametric conception of the electronic music surface (Stockhausen 1962) employed in the MUSIC-N family of programming languages (Mathews 1963). In this model, the essential parameters of a musical tone typically include:

(1) Frequency
(2) Amplitude
(3) Duration
(4) Timbre
(5) Spatialization
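The correspondence between the two parameter lists above can be captured in a small data structure. A sketch in Python (the names are mine, not taken from the tutorial's Max apps):

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """One MIDI note event; comments give the MUSIC-N-style analogue."""
    pitch: int        # MIDI pitch number  -> frequency
    velocity: int     # MIDI velocity      -> amplitude
    duration_ms: int  # duration           -> duration
    program: int      # GM instrument      -> timbre
    pan: int          # MIDI pan (0-127)   -> spatialization

# e.g., A4 at a moderate dynamic, a quarter note at 120 b.p.m.,
# GM piano (program 0), panned to center
event = NoteEvent(pitch=69, velocity=100, duration_ms=500, program=0, pan=64)
```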

For more detailed information about PMSon, read:

"Chapter 15. Parameter Mapping Sonification," in The Sonification Handbook (Grond and Berger 2011)
<https://sonification.de/handbook/chapters/chapter15/>.

For an alternative point of view of the electronic musical surface, see Smalley 1997. For a systematic review of mapping strategies, see Dubus and Bresin 2013.

8. Sonification Design & Aesthetics

In the Mutational Music Project, we seek to understand genetic principles better through their realization in sound. However, our primary goal in the experiments below is to create sonic output that is recognizable as "interesting music." When the latter goal is emphasized, the term musification may be more appropriate than sonification (Grond and Berger 2011). When designing such experiments in sound, one must be keenly aware of where the line between sonification and music (science and art) should be drawn. On this topic, I have found Vickers 2016 and Scaletti 2018 to be incredibly illuminating. One must also keep in mind that music is a time-based art. And unlike other types of data – e.g., astronomical data, earthquake data, stock market data, etc. – a DNA sequence is not a time series.

As one begins to map genetic data to sound in order to create music from a DNA sequence, one will certainly be confronted with the question: "What makes music sound good?" In this project, I have searched for answers to this question in the disciplines of music theory and algorithmic composition. Jan LaRue's Guidelines for Style Analysis (1970) provides a general conceptual framework for the basic elements of music, but I have also employed models from music psychology (Deutsch 2013) and geometrical music theory (Hall 2008). I have found that the latter provides an analytical view that is immediately accessible to many scientists: its reliance on rigorous mathematics and visualization can give student scientists an easy entry into advanced music theory. I have also relied heavily on the work of composer/theorist Dmitri Tymoczko (Tymoczko 2011) and computer scientist Godfried Toussaint (Toussaint 2013). In the context of the two experiments below, I recommend the following article on Tymoczko's companion website:

Dmitri Tymoczko, "What makes music sound 'good?'"
https://dmitri.mycpanel.princeton.edu/whatmakesmusicsoundgood.html

For those readers seeking a brief introduction to algorithmic composition, I recommend Edwards 2011. For field guides to algorithmic composition and electronic music composition, I recommend Nierhaus 2008 and Roads 2015, respectively.

One of the best ways I have found to put the two experiments below into an appropriate musico-scientific context is the Ars Musica – Ars Informatica Aesthetic Perspective Space of Paul Vickers and Bennett Hogg (Vickers and Hogg 2006). This space is described in chapter 7 of The Sonification Handbook (Hermann et al. 2011):

Barrass and Vickers, "Sonification Design and Aesthetics" (Barrass and Vickers 2011)
<https://sonification.de/handbook/chapters/chapter7/>.

It situates sonifications in a two-dimensional space whose horizontal axis runs from Ars Informatica to Ars Musica (left-to-right) and whose vertical axis runs from Concrete to Abstract (bottom-to-top). Regarding matters of creativity and aesthetics in sonification and their impact on the broader public, I recommend: Ben-Tal and Berger 2004, Ballora 2014, and Supper 2014.

9. Two Model Experiments

As explained above, the two experiments below were created by mapping the four DNA nucleotide bases in the "zika41.txt" file to MIDI parameters in real time. In the first experiment, the aesthetic goal was to create an interesting melody. In the second experiment, the aesthetic goal was to create an interesting rhythm.

Experiment 1: Zika Melody

In Experiment 1, I mapped the nucleotide bases in the "zika41.txt" data file

ACAGGTTTTATTTTGGATTTGGAAACGAGAGTTTCTGGTCA

to MIDI pitch numbers (Wolfe 1997) using Mapping 1 (Figure 4).

Figure 4.  Mapping 1

(a)  DNA nucleotide base-to-pitch mapping

DNA base    MIDI pitch
A       ->  69
C       ->  60
G       ->  67
T       ->  70


(b)  The same using traditional music notation

This mapping was chosen, primarily, because it is easy to memorize. Notice that A maps to the note A, C maps to the note C, and G maps to the note G. Since our musical alphabet does not contain a T, T was mapped to the note Bb (a common mapping in post-tonal music theory, where T = 10 in a C = 0 pitch-class system). One reason this mapping is harmonically interesting is that the pitch classes G–A–Bb–C form a subset of a diatonic collection on F. One reason it is melodically interesting is that the pitches G4–A4–Bb4 group together in pitch space, leaving C4 isolated in a lower register. This creates a C4–Bb4 minor seventh boundary interval within which the mapping unfolds, and somewhat ambiguously implies a Mixolydian mode. The Max app that implements the mapping above, called ACGT Melody, is shown in Figure 5.

Figure 5.  ACGT Melody Max app

ACGT melody
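For readers who would like to experiment with Mapping 1 outside of Max, it can be sketched as a simple lookup table; applying it to the zika41 fragment yields the 41 pitch numbers of one melodic cycle. A Python sketch of the mapping only (not of the Max app's real-time behavior):

```python
# Mapping 1: DNA nucleotide base -> MIDI pitch number (Figure 4)
PITCH_MAP = {"A": 69, "C": 60, "G": 67, "T": 70}

# The zika41 fragment from Figure 3
ZIKA41 = "ACAGGTTTTATTTTGGATTTGGAAACGAGAGTTTCTGGTCA"

def to_pitches(seq: str) -> list[int]:
    """Map each base in the sequence to its MIDI pitch number."""
    return [PITCH_MAP[base] for base in seq]

pitches = to_pitches(ZIKA41)
# first five bases A C A G G -> pitches 69, 60, 69, 67, 67
```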


A recording of the app's sonic output may be heard in Example 1.

Example 1. Default sonic output of the Max app ACGT Melody


MIDI Realization


The instrument used in Example 1 is the piano timbre 1 Acoustic Grand. This timbre is from the Apple Audio Unit DLS (Downloadable Sounds) synthesizer – Max's built-in General MIDI (GM) synthesizer on the Mac OS. The recording in Example 1 includes two complete 41-note cycles of the mapping and then fades out over the beginning of the third cycle. To focus the listener's attention on the variation in pitch, the following non-pitch MIDI parameters were normalized (i.e., held constant):

It should be mentioned that a 500 ms. duration is equivalent to a constant quarter-note pulse at a tempo of 120 beats per minute (abbr. b.p.m.).
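The 500 ms figure comes directly from the tempo arithmetic: one beat at b beats per minute lasts 60,000 / b milliseconds. A quick check in Python:

```python
def beat_ms(bpm: float) -> float:
    """Duration of one beat, in milliseconds, at the given tempo."""
    return 60_000 / bpm

# quarter-note pulse at 120 b.p.m. -> 500 ms per note
```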

The Zika Melody notated in Example 2 was composed using a real-time interactive compositional process; i.e., running the ACGT Melody app numerous times, I tweaked the duration, timbre, and initial DNA sequence length to taste. The DNA sequence length of 41 was chosen so that the melody would: (1) have a length that is a prime number of bases, to achieve rhythmic complexity in the pitch grouping structure upon repeated cycles and to wrap around smoothly; and (2) imply an imperfect authentic cadence when a single cycle is stated. The non-pitch parameters were chosen as described below:

In the music notation, please note that the time signature (4/4) is arbitrary and was chosen simply to make the melody easy to read.



Example 2.  Sonification Experiment 1: Zika Melody


Bain, Zika Melody (© 2018 Reginald Bain)

MIDI Realization



Experiment 2: Zika Rhythm

In Experiment 2, I mapped the DNA nucleotide bases in the "zika41.txt" DNA sequence to durations in real time as shown in Figure 6.

Figure 6.  Mapping 2

(a)  Nucleotide base-to-duration mapping

DNA base    Duration (ms.)
A       ->  1000
C       ->  500
G       ->  250
T       ->  125


(b)  Mapping 2 using traditional music notation

DNA Base to Duration Mapping


Durations are expressed in milliseconds (ms.). This mapping strategy was inspired by a technique employed in integral serialism where a duration series is derived from a subharmonic series of proportions (Stockhausen 1959). Here the series 1, 1/2, 1/4, 1/8 is proportional to 1000, 500, 250, 125. Figure 6b shows the corresponding traditional music notation and the fractional note values that result when 1000 ms. is assigned to the quarter note. Figure 7 shows the Max app ACGT Rhythm that implements Mapping 2.


Figure 7.  The Max app ACGT Rhythm

ACGT Rhythm
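Like Mapping 1, Mapping 2 can be sketched as a lookup table outside of Max; summing the resulting durations gives the length of one rhythmic cycle. A Python sketch of the mapping only:

```python
# Mapping 2: DNA nucleotide base -> duration in milliseconds
DURATION_MAP = {"A": 1000, "C": 500, "G": 250, "T": 125}

# The zika41 fragment from Figure 3
ZIKA41 = "ACAGGTTTTATTTTGGATTTGGAAACGAGAGTTTCTGGTCA"

def to_durations(seq: str) -> list[int]:
    """Map each base in the sequence to its duration in ms."""
    return [DURATION_MAP[base] for base in seq]

durations = to_durations(ZIKA41)
total_ms = sum(durations)   # length of one 41-note cycle in milliseconds
```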


Example 3 shows the traditional music notation equivalent for the output of the Max app. To focus the listener's attention on the variation in duration, the following MIDI parameters were normalized:

The tempo (eighth note equals 120 b.p.m.) and changing meters in the traditional music notation are arbitrary. They were chosen to make the traditional music notation in Example 3 as readable as possible.



Example 3. Sonification Experiment 2: Zika Rhythm

Bain, Zika Rhythm (© 2018 Reginald Bain)


MIDI Realization


10. Conclusion

The sonification of a DNA sequence provides a simple generative model for experimentation in algorithmic composition. Students must choose the data, mapping strategy, and musical parameters in a manner that achieves aesthetically interesting results, balancing scientific and artistic concerns along the way. I have found this to be an efficient way to introduce students to the basic principles of parameter mapping sonification, and it provides them with a conceptual model for designing their own, more rigorous scientific experiments that map genetic data to sound.



Software Links

Cycling '74, Max

MakeMusic, Finale

Reason Studios, Reason

References

Ballora, Mark. 2014. “Sonification, Science and Popular Music: In search of the ‘wow’.” Organized Sound 19/1: 30–40.

Barrass, Stephen and Paul Vickers. 2011. “Sonification Design and Aesthetics”. In The Sonification Handbook, edited by T. Hermann, A. Hunt, J. G. Neuhoff. Berlin: Logos Verlag, pp. 363–397.

Ben-Tal, Oded and Jonathan Berger. 2004. “Creative Aspects of Sonification.” Leonardo 37/3 (June 2004): 229–233.

Bovermann, Till, Julian Rohrhuber and Alberto de Campo. 2011. "Chapter 10. Laboratory Methods for Experimental Sonfication." In The Sonification Handbook, T. Hermann, A. Hunt and J. G. Neuhoff, eds. Berlin: Logos Publishing House. Available online at: <https://sonification.de/handbook/chapters/chapter10/>.

Brooker, Robert J. 2009. Genetics: Analysis & Principles, 3rd ed. New York: McGraw Hill.

Clark, Mary Ann, and John Dunn. 1999. “Life Music: The Sonification of Proteins,” Leonardo 32/1 (February 1999): 25–32. {Leonardo Online}

Clark, Mary Ann, Jung Ho Choi, Matthew M. Douglas. 2018. Biology, 2nd ed. Houston: OpenStax. Available online at: <https://openstax.org/details/books/biology-2e>.

Cycling '74. Max 8 Documentation. Palo Alto, CA: Cycling '74. Available online at: <https://docs.cycling74.com/max8>.

Deamer, David. 1982. “Music: The Arts.” Omni Magazine (August 1982): 28 & 120.

Deutsch, Diana. 2013. The Psychology of Music, 3rd. ed. Cambridge, MA: Academic Press.

Dubus, Gaël and Roberto Bresin. 2013. "A Systematic Review of Mapping Strategies for the Sonification of Physical Quantities." PLOS ONE 8/12 (December 17, 2013). {PLOS One}

Edwards, Michael. 2011. "Algorithmic Composition: Computational Thinking in Music." Communications of the ACM 54/7: 58–67.

Grond, Florian, and Jonathan Berger. 2011. "Chapter 15. Parameter Mapping Sonification." In The Sonification Handbook, T. Hermann, A. Hunt and J. G. Neuhoff, eds. Berlin: Logos Publishing House. Available online at: <https://sonification.de/handbook/chapters/chapter15/>.

Hall, Rachel Wells. 2008. "Geometrical Music Theory." Science 320/5874: 328–329. {JSTOR}

Hass, Jeffery. 2021. Introduction to Computer Music: An Electronic Textbook, 3rd ed. Bloomington, IN: Indiana University. Available online at: <https://cmtext.indiana.edu>.

Hayashi, Kenshi and Nobuo Munakata. 1984. "Basically musical." Nature 310 (July 12, 1984): 96. {Nature}

Hermann, Thomas, A. Hunt and J. G. Neuhoff, eds. 2011. The Sonification Handbook. Berlin: Logos Publishing House. Available online at: <https://sonification.de/handbook/>.

Jensen, Marc. 2008. "Composing DNA Music Within the Aesthetics of Chance." Perspectives of New Music 46/2 (Summer 2008): 243–259.

Kramer, Gregory, et al. 1999. "Sonification Report: Status of the Field and Research Agenda." Available online at: <http://www.icad.org/websiteV2.0/References/nsf.html>.

LaRue, Jan. 1992/1970. Guidelines for Style Analysis, 2nd ed. Sterling Heights, MI: Harmonie Park Press. {GB}

Mathews, Max. 1963. "The Digital Computer as a Musical Instrument." Science 142/3592 (Nov. 1, 1963): 553–557.

Munakata, Nobuo. 2002. "Individuality, creativity and genetic information: a prelude to gene music." Available online at: <http://www.toshima.ne.jp/edogiku/InCrGehtml/AwhtmlInCrGe.html>.

NCBI. 2016. Zika virus isolate Z1106033 polyprotein gene. Available online at: <https://www.ncbi.nlm.nih.gov/nuccore/KU312312.1>.

Nierhaus, Gerhard. 2008. Algorithmic Composition: Paradigms of Automated Music Generation. New York: Springer.

Nyman, Michael. 1999. Experimental Music: Cage and Beyond, 2nd ed. Cambridge: Cambridge University Press.

Manzo, V.J. 2016. Max/MSP/Jitter for Music: A Practical Guide to Developing Interactive Music Systems for Education and More, 2nd ed. New York, Oxford.

Roads, Curtis. 2015. Composing Electronic Music: A New Aesthetic. New York: Oxford University Press.

Scaletti, Carla. 2018. “Sonification (is not equal to) Music.” In The Oxford Handbook of Algorithmic Music, edited by Roger T. Dean and Alex McLean. New York: Oxford.

Smalley, Denis. 1997. “Spectromorphology: explaining sound-shapes.” Organised Sound 2/2: 107–126. {Semantic Scholar}

Stockhausen, Karlheinz. 1962. "The Concept of Unity in Electronic Music," translated by Elaine Barkin. Perspectives of New Music 1/1 (Autumn 1962): 39–48.

Stockhausen, Karlheinz. 1959. "How Time Passes," translated by Cornelius Cardew. Die Reihe 3 (Musical Craftsmanship): 10–40.

Supper, Alexandra. 2014. "Sublime Frequencies: The construction of sublime listening experience in the sonification of scientific data." Social Studies of Science 44/1: 34–58.

Taylor, Stephen. 2017. L13 Protein Folding Sonification. Available online at: <http://www.stephenandrewtaylor.net/genetics.html>.

Takahashi, Rie and Jeffrey H. Miller. 2007. "Conversion of amino-acid sequence in proteins to classical music: search for auditory patterns." Genome Biology 8/5, Article 405 (2007).

Temple, Mark D. 2017. "An auditory display tool for DNA sequence analysis." BMC Bioinformatics 18/221. Available online at: <https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-017-1632-x>.

Toussaint, Godfried. 2013. The Geometry of Musical Rhythm: What Makes a "Good" Rhythm Good?, 1st ed. Boca Raton, FL: CRC Press.

Tymoczko, Dmitri. 2011a. A Geometry of Music: Harmony and Counterpoint in the Extended Common Practice. New York: Oxford University Press.

_______________. 2011b. "What makes music sound 'good?'" Available online at: <https://dmitri.mycpanel.princeton.edu/whatmakesmusicsoundgood.html>.

Winkler, Todd. 1998. Composing Interactive Music: Techniques and Ideas Using Max. Cambridge, MA: MIT Press.

Wolfe, Joe. 1997. “Note names, MIDI numbers and frequencies,” from UNSW Music Acoustics. Available online at: <https://newt.phys.unsw.edu.au/jw/notes.html>.

Worrall, David. 2019. Sonification Design: From Data to Intelligible Soundfields. New York: Springer.

Vickers, Paul. 2016. "Sonification and Music, Music and Sonification." In The Routledge Companion to Sounding Art, edited by Marcel Cobussen, Vincent Meelberg, and Barry Truax. New York: Routledge.

Vickers, Paul and Bennett Hogg. 2006. “Sonification Abstraite/Sonification Concrète: An ‘Aesthetic Perspective Space’ for Classifying Auditory Displays in the Ars Musica Domain.” Proceedings of the 12th International Conference on Auditory Display (2006).

© 2022 Reginald Bain
All rights reserved


Updated: April 7, 2023

Reginald Bain | University of South Carolina | School of Music
https://www.reginaldbain.com