Prototyping (first stages) of the Genome AudioPlayer
To start materializing the idea of building a Genome AudioPlayer, I first focused on prototyping all the stages involved, with emphasis on getting the technology to work in phase 1, then on user testing in phase 2, and finally on interface design and construction in phase 3 (Figure 1).
Figure 1. Diagram depicting the intended approach to developing version 1 of the Genome AudioPlayer
As a starting point, I chose to work through the 'Keyboard Instrument' example from the Arduino Projects Book (page 79) to get the backbone of a musical instrument composed of momentary switches connected in parallel. The important point in this example is the practical use of a resistor ladder, which allows several digital inputs to feed a single analog input and thus maximizes the input-to-pin ratio of a single Arduino board. In this circuit configuration, four momentary switches are connected to power through resistors of different values and read through one analog input. When each button is pressed, a different voltage level reaches the input pin (Figure 2 and Video 1).
Figure 2. Resistor ladder as a strategy to build a keyboard instrument. Each momentary switch, when pressed, produces a musical tone through a piezo element connected to digital pin 8. The tone() function can only create square waves for sound synthesis.
Video 1. Musical tones generated with the keyboard instrument. The tones differ from the original ones described in the Arduino Projects Book.
The Arduino code used can be found HERE_
Synthetic music from DNA segments
Building on the example above, the steps to implement are:
1. Create synthetic sound from strings of DNA sequence
Using Processing's sound library and oscillator objects, synthesize sound compositions and save them as audio files (.wav or .mp3).
2. Establish a dialogue between the Arduino and a Processing sketch on a multimedia computer
Using serial communication, send the status of the momentary switches (pressed or not) on the Arduino board to a Processing sketch on a multimedia computer. When a momentary switch on the Arduino board is pressed, the corresponding audio file created in step 1 will be played.
I successfully implemented PHASE 1 (Video 2), in the sense that synthetic music from a Processing sketch was altered by pressing momentary switches on the Arduino board. This time the multimedia computer, not the piezo element on the Arduino board, was responsible for generating the sound. The audio heard is a sonification of CG sequence-context occurrences at four different microRNA genes from rice (miR172a, miR172b, miR172c, and miR172d, respectively), generated with a triangle wave and an envelope.