# Arrays - Sound Processing

![main1.png](images/main1.png)
![main2.png](images/main2.png)
![main3.png](images/main3.png)

[Summer 2016 - Ive - Coralys]

Arrays help us store and work with groups of data of the same type. The data is stored in consecutive memory spaces that can be accessed by using the name of the array together with indexes or subscripts that indicate the position where each datum is stored. Repetition structures give us a simple way of accessing the data within an array. In this laboratory experience, you will be exposed to simple sound processing algorithms in order to practice the use of loops to manipulate arrays.

This laboratory experience is an adaptation of the nifty assignment presented by Daniel Zingaro in [1].

## Objectives

1. Practice the use of loops to manipulate arrays.

2. Learn simple algorithms to process sound.

3. Practice modular programming.


## Pre-Lab:

Before coming to the laboratory you should have:

1. Reviewed the basic concepts related to arrays and loops.

2. Studied the `left` and `right` attributes of the `QAudioBuffer::S16S` class in the `Qt` multimedia library.

3. Studied the concepts and instructions for the laboratory session.

4. Taken the Pre-Lab quiz, available in Moodle.

---

---

## Digital Sound Processing

Sounds are vibrations that propagate through elastic media such as air, water, and solids. Sound waves are generated by a sound source, such as the vibrating diaphragm of a stereo speaker [2]. Sound waves consist of areas of high and low pressure called compressions and rarefactions, respectively.

Microphones turn sound waves into electrical signals. These electrical signals can be digitized, i.e. converted to a stream of numbers, where each number is the intensity of the electrical signal at an instant in time. The *sample rate* is the number of samples of a sound signal taken per second. For example, a *sample rate* of 44,100 samples per second is used in CD-quality recordings. This means that every second, for each of the channels (left and right), 44,100 samples of the audio signal are taken and converted to numbers.

---

![image1.jpg](images/image1.jpg)

**Figure 1**: Illustration of the steps involved in sound digitization. The pressure wave is converted to a voltage signal by the microphone. The voltage signal is sampled and digitized by the analog-to-digital converter to obtain a numeric value for each sample. The stream of numbers constitutes the *digitized* sound. Taken from [3].

---

**Question:**

How many bytes would it take to store a song that is exactly 180 seconds long and is recorded in stereo at CD quality? Assume that each sample is converted to a number of 2 bytes.

**Answer:**

$$180 \,\text{seconds} \times 44,100 \,\text{samples/second} \times 2 \,\text{bytes/sample} \times 2 \,\text{channels}$$

$$= 31,752,000 \,\text{bytes} = 31.75 \,\text{MBytes}$$

Fortunately, there are sound data compression techniques, such as *MP3* and *Ogg*, that reduce the amount of memory required to store CD-quality music.

---

**Digital sound processing** techniques can be used to enhance sound quality by removing noise and echo, to perform data compression, and to improve transmission. Digital sound processing also plays an important role in voice recognition applications and in scientific research, such as biodiversity monitoring with sound sensors [4]. Digital sound can also be easily manipulated to produce special effects.

Since digital sound recordings are, in essence, a collection of numeric values that represent a sound wave, digital sound processing can be as simple as applying arithmetic operations to those values. For example, say that you are given a digital sound recording; the louder the recording, the higher the absolute values of the numbers it contains. To decrease the volume of the whole recording, we could multiply each value by a positive number smaller than 1.

---

![image2.png](images/image2.png)

**Figure 2.** One of the simplest sound processing tasks: changing the volume of a sound wave by multiplying each point by a positive number smaller than 1 (in this case 0.5).

---

## Libraries

For this laboratory experience you will use the multimedia libraries of `Qt`. To complete the exercises, you will need to understand the `left` and `right` members of the `QAudioBuffer::S16S` class. For the purposes of this laboratory experience, we use the name `AudioBuffer` to refer to `QAudioBuffer::S16S`.

Each object of the class `AudioBuffer` has the member variables `left` and `right`, which contain the left and right values of a stereo sound sample. These variables are public, and you can access their content by writing the name of the object, followed by a period and the name of the variable. To represent a sound signal, we use an array of `AudioBuffer` objects. Each element in the array is an object that contains the left and right values of the signal at an instant in time (remember that each second contains 44,100 samples). For instance, if we have an array of `AudioBuffer` objects called `frames`, then `frames[i].left` refers to the left channel value of the sound at sample `i`.

---

![image3.png](images/image3.png)

**Figure 3.** In the figure, `frame` is an array of `AudioBuffer` objects. During this laboratory experience, sound signals will be represented by an array of `AudioBuffer` objects. An object with index `i` stores the values of the left and right channels of sample `i`.

---

The `HalfVolume` function in the following example illustrates how to read and modify an array of `AudioBuffer` objects:

```cpp
void HalfVolume(AudioBuffer frames[], int N) {
    // For each sample in the signal, reduce both channels to half their value.
    for (int i = 0; i < N; i++) {
        frames[i].left  = frames[i].left / 2;
        frames[i].right = frames[i].right / 2;
    }
}
```

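The snippet below is a hypothetical usage sketch (in the actual project the `frames` array is filled from the wave file loaded through the graphical interface); it simply builds a tiny signal by hand and calls `HalfVolume` on it:

```cpp
// Hypothetical usage sketch: the real program fills `frames` from a wave
// file; here we build a tiny 4-sample stereo signal by hand.
AudioBuffer frames[4];
frames[0].left = 1000; frames[0].right = -800;
frames[1].left =  500; frames[1].right =  250;
frames[2].left = -400; frames[2].right =  120;
frames[3].left =   60; frames[3].right =  -30;

HalfVolume(frames, 4);   // every left/right value is now half of the above
```
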
---

---

!INCLUDE "../../eip-diagnostic/sound-processing/en/diag-sound-processing-01.html"
<br>

!INCLUDE "../../eip-diagnostic/sound-processing/en/diag-sound-processing-02.html"
<br>

!INCLUDE "../../eip-diagnostic/sound-processing/en/diag-sound-processing-03.html"
<br>

!INCLUDE "../../eip-diagnostic/sound-processing/en/diag-sound-processing-04.html"
<br>

---

---

## Laboratory Session:

The `SoundProcessing` project contains the skeleton of an application to process stereo sound. The application you will complete allows the user to apply four different algorithms to process sound. The sub-directory called `WaveSamples` contains sound files for you to test your implementations.


### Exercise 1 - Remove Vocals from a Recording

A cheap (but often ineffective) way to remove the vocals from a recording takes advantage of the fact that vocals are commonly recorded identically in both the left and right channels, while the rest of the instruments may not be. If this is the case, we can remove the vocals from a recording by subtracting the right channel from the left channel.

#### Instructions

1. Load the project `SoundProcessing` into `QtCreator`. There are two ways to do this:

    * Using the virtual machine: Double click the file `SoundProcessing.pro` located in the folder `/home/eip/labs/arrays-soundprocessing` of your virtual machine.
    * Downloading the project’s folder from `Bitbucket`: Use a terminal and write the command `git clone http://bitbucket.org/eip-uprrp/arrays-soundprocessing` to download the folder `arrays-soundprocessing` from `Bitbucket`. Double click the file `SoundProcessing.pro` located in the folder that you downloaded to your computer.

2. Compile and run the program. You will see a graphical interface for processing sound recordings.

3. Load any of the wave files `love.wav`, `cartoon.wav`, or `grace.wav` by clicking the `Search` button on the right side of the `Audio In` label, and play it by clicking the `Play Audio In` button.

4. In this exercise, your task is to complete the function `RemoveVocals` in the file `audiomanip.cpp` so it removes the vocals from a recording. The function receives an array of objects of the class `AudioBuffer` and the size of the array.

**Algorithm:**

For each sample in the array, compute the difference of the sample's left channel minus its right channel, divide it by 2, and use this value as the new value for both the left and right channels of the corresponding sample.

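The following is one possible sketch of `RemoveVocals` (not necessarily the only valid solution), assuming only the public `left` and `right` members of `AudioBuffer` described in the Libraries section:

```cpp
// Possible sketch of RemoveVocals: centered vocals (identical in both
// channels) cancel out in the difference; instruments that differ
// between channels remain.
void RemoveVocals(AudioBuffer frames[], int N) {
    for (int i = 0; i < N; i++) {
        int diff = (frames[i].left - frames[i].right) / 2;
        frames[i].left  = diff;
        frames[i].right = diff;
    }
}
```
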
Play the output sound file with the application by clicking the `Play Audio Out` button.

### Exercise 2 - Fade In

A common sound effect is the gradual intensification of a recording's volume, or fade in. This is the result of scaling consecutive samples in the array of sound samples by gradually increasing factors.

#### Instructions

1. Load and play either of the wave files `rain.wav` or `water.wav`, just as in Exercise 1.

2. Your task is to complete the function `AudioFadeIn` in the file `audiomanip.cpp` so it gradually intensifies the volume of a recording up to a certain moment. The function receives an array of objects of the class `AudioBuffer`, the size of the array, and a fade-in length that will be applied to the `AudioBuffer`. For example, if `fade_length` is `88200`, the fade in should not affect any sample in position `88200` or higher.

3. Reproduce the following recordings from the `WaveSamples` folder:

* `rain-fi.wav`
* `water-fi.wav`

The recordings were created using the fade-in filter with `fade_length` set to `88200`. You should be able to hear how the water and the rain fade in linearly over the first two seconds and then remain at the same volume throughout the recording. Notice that, since we are using sounds recorded at `44100` samples per second, `88200` corresponds to two seconds of the recording.

**Algorithm:**

To apply a fade in to a sound, we multiply successive samples by steadily increasing fractional numbers between `0` and `1`. Multiplying samples by `0` silences them, and multiplying by `1` keeps them the same; multiplying by a factor between `0` and `1` scales their volume by that factor. Note that both channels of each sample should be multiplied by the same factor.

For instance, if `fade_length` is 4, the filter will be applied to the first 4 samples:

| Sample Number | Multiply by factor |
|---|---|
| 0 | 0 |
| 1 | 0.25 |
| 2 | 0.5 |
| 3 | 0.75 |
| >= 4 | 1 (Do not modify the sample) |

Notice that we have 4 samples and that the factor used to multiply each channel of a sample starts at `0` and increases by `0.25` each time until reaching `1`.
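A minimal sketch of `AudioFadeIn` consistent with the table above (the factor `i / fade_length` and the loop bound are one way to realize it, not the only one):

```cpp
// Possible sketch of AudioFadeIn: the first fade_length samples are scaled
// by i / fade_length (0, 0.25, 0.5, 0.75 when fade_length is 4);
// later samples are left untouched.
void AudioFadeIn(AudioBuffer frames[], int N, int fade_length) {
    for (int i = 0; i < N && i < fade_length; i++) {
        double factor = static_cast<double>(i) / fade_length;
        frames[i].left  = frames[i].left  * factor;
        frames[i].right = frames[i].right * factor;
    }
}
```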

### Exercise 3 - Fade Out

Another common sound effect is the gradual decrease of a recording's volume, or fade out. This is the result of scaling consecutive samples in the array of sound samples by gradually decreasing factors.

#### Instructions

1. Load and play either of the wave files `rain.wav` or `water.wav`, just as in the previous exercises.

2. Your task in this exercise is to complete the function `AudioFadeOut` in the file `audiomanip.cpp` so it fades out the volume from a given sample to the end of the recording. The function receives an array of objects of the class `AudioBuffer`, the size of the array, and a fade-out length that will be applied to the `AudioBuffer`. For example, if `fade_length` is `88200`, the fade out should only affect the last `88200` samples of the recording.

3. Reproduce the following recordings from the `WaveSamples` folder:

* `rain.fo.wav`
* `water.fo.wav`

The recordings were created using the fade-out filter with `fade_length` set to `88200`. You should be able to hear how the water and the rain play at maximum volume and then, over the last two seconds, linearly fade out.

**Algorithm:**

The multiplicative factors for the fade out are the same as for the fade in, but they are applied in reverse order. For example, if `fade_length` were `4`, the fourth-to-last sample would be multiplied by `0.75` (in both channels), the third-to-last sample by `0.5`, the penultimate sample by `0.25`, and the final sample by `0.0`.
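A minimal sketch of `AudioFadeOut` under the same assumptions, walking backwards from the last sample so that the final sample is multiplied by 0:

```cpp
// Possible sketch of AudioFadeOut: the last fade_length samples are scaled
// by factors that decrease toward 0 (0.75, 0.5, 0.25, 0 when fade_length is 4).
void AudioFadeOut(AudioBuffer frames[], int N, int fade_length) {
    for (int i = 0; i < fade_length && i < N; i++) {
        int pos = N - 1 - i;                                   // i samples from the end
        double factor = static_cast<double>(i) / fade_length;  // 0 for the last sample
        frames[pos].left  = frames[pos].left  * factor;
        frames[pos].right = frames[pos].right * factor;
    }
}
```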

### Exercise 4 - Panning from Left to Right

The sound effect we want to produce in this exercise starts with the sound coming entirely from the left channel; the sound then fades out of the left channel while intensifying in the right channel, ending up completely on the right channel.

#### Instructions

1. Load and play the file `airplane.wav`, just as in the previous exercises.

2. Your task is to complete the function `LeftToRight` in the file `audiomanip.cpp` so the sound "moves" from the left channel to the right channel. The function receives an array of objects of the class `AudioBuffer`, the size of the array, and a pan length that will be applied to the `AudioBuffer`. For example, if `pan_length` is `88200`, the pan should not affect any sample in position `88200` or higher.

3. Play the `airplane.out.wav` recording. You should be able to hear how the airplane sound starts completely on the left, then slowly moves to the right, reaching the extreme right by the final sample. In this example the panning finishes at the last sample. This will not happen if the panning length is not equal to the number of samples; in that case, after the panning length is reached you will hear the normal sound in both channels.

**Algorithm:**

Getting a sound to move from left to right like this requires a fade out on the left channel and a fade in on the right channel. For instance, if `pan_length` is `4`, the filter will be applied to the first 4 samples:

| Sample Number | Multiply left channel by factor | Multiply right channel by factor |
|---|---|---|
| 0 | 0.75 | 0 |
| 1 | 0.5 | 0.25 |
| 2 | 0.25 | 0.5 |
| 3 | 0 | 0.75 |
| >= 4 | (Do not modify the sample) | (Do not modify the sample) |
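A minimal sketch of `LeftToRight` that follows the table above, fading the left channel out while the right channel fades in over the first `pan_length` samples:

```cpp
// Possible sketch of LeftToRight: within the pan length the left channel
// fades out while the right channel fades in; later samples are untouched.
void LeftToRight(AudioBuffer frames[], int N, int pan_length) {
    for (int i = 0; i < N && i < pan_length; i++) {
        double fade_in  = static_cast<double>(i) / pan_length;                  // 0, 0.25, ...
        double fade_out = static_cast<double>(pan_length - 1 - i) / pan_length; // 0.75, 0.5, ...
        frames[i].left  = frames[i].left  * fade_out;
        frames[i].right = frames[i].right * fade_in;
    }
}
```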

### Deliverables

Use "Deliverable" in Moodle to upload the `audiomanip.cpp` file. Remember to use good programming techniques, include the names of the programmers involved, and document your program.

---

---

### References

[1] Daniel Zingaro, http://nifty.stanford.edu/2012/zingaro-stereo-sound-processing/

[2] http://en.wikipedia.org/wiki/Sound

[3] http://homepages.udayton.edu/~hardierc/ece203/sound_files/image001.jpg

[4] Arbimon: A web-based network for storing, sharing, and analyzing acoustic information. http://arbimon.com/

[5] https://somnathbanik.wordpress.com/2012/10/22/digital-signal-processing-featured-project/

[6] http://www.hearingreview.com/2013/03/designing-hearing-aid-technology-to-support-benefits-in-demanding-situations-part-1/

[7] http://diveintodotnet.com/2014/12/02/programming-basics-what-are-strings/