Commit 0b5b147

combined tutorials 2 and 3 as they were redundant

1 parent 7a6316a

8 files changed
Lines changed: 395 additions & 220 deletions

README.md
Lines changed: 334 additions & 0 deletions
interactive interface allows the simulation to be explored freely; users can
examine the signals both visually, through numerous graphs, and aurally, by
listening to the test signals directly.

## Implementation

Since our demonstration takes place purely in the digital domain, we
unfortunately cannot use real continuous-time analog inputs and outputs.
Instead, we simulate the ADC-DAC processes in the discrete-time domain. The
analog input and output are represented as discrete-time signals with a high
sampling rate; at the time of writing, the maximum sampling rate supported
by WebAudio is 96 kHz.
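
Concretely, the simulated "analog" domain is just a discrete-time signal at WebAudio's maximum rate. A minimal sketch of the relationship between the downsampling factor and the simulated Nyquist frequency (the constant name matches the one used in the simulation code later in this document; the helper itself is hypothetical):

```javascript
// The simulated "analog" domain runs at WebAudio's maximum sampling rate.
const WEBAUDIO_MAX_SAMPLERATE = 96000;

// With an integer downsampling factor k, the simulated ADC samples at
// 96000 / k Hz, so the Nyquist frequency of the sampled signal is:
function nyquist(downsamplingFactor) {
  return (WEBAUDIO_MAX_SAMPLERATE / downsamplingFactor) / 2;
}
```

For example, a downsampling factor of 4 simulates a 24 kHz ADC with a 12 kHz Nyquist frequency.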

The ADC process consists of several steps, including antialiasing, sampling,
and quantization. All of these are simulated in our model: antialiasing is
achieved with a windowed-sinc FIR lowpass filter of an order specified by the
user; sampling is approximated by downsampling the input signal by an
integer factor; and quantization is achieved by multiplying the sampled
signal (which ranges from -1.0 to 1.0) by the maximum integer value possible
given the requested bit depth (e.g. 255 for a bit depth of 8 bits), and then
rounding every sample to the nearest integer. The DAC process is simulated
in turn by zero-stuffing and lowpass filtering the sampled and quantized
output of the ADC simulation.
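
The quantization step can be sketched in isolation (a hypothetical `quantize` helper for illustration; the actual simulation code below additionally supports dither and mid-rise quantization):

```javascript
// Minimal sketch of uniform quantization as described above: scale to the
// integer range implied by the bit depth, round, and scale back.
function quantize(sample, bitDepth) {
  const maxInt = Math.pow(2, bitDepth) - 1; // e.g. 255 for 8 bits
  return Math.round(sample * maxInt) / maxInt;
}
```

Every output value is then one of the 2^bitDepth representable levels, and the rounding error is the quantization noise.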

In summary, the continuous-time input is simulated by a 96 kHz discrete-time
signal, the sampled output of the ADC process is simulated by a downsampled
and quantized signal, and the continuous-time reconstruction output by the
DAC is simulated by upsampling the "sampled" signal back to 96 kHz. In our
tests we have found this model to be reasonable; many key concepts, such as
critical sampling, aliasing, and quantization noise, are well represented in
our simulation.
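
The sampling and upsampling halves of this model can be summarized with two toy operations (hypothetical helper names; the simulation code below fuses these steps with quantization and filtering):

```javascript
// Simulated sampling: keep every factor-th sample of the high-rate signal.
function downsample(signal, factor) {
  const length = Math.round(signal.length / factor);
  return Float32Array.from({ length }, (_, n) => signal[n * factor]);
}

// Simulated DAC zero-stuffing: place each sample at every factor-th position
// (scaled to preserve amplitude after lowpass filtering); all other
// positions remain zero.
function zeroStuff(samples, factor) {
  const out = new Float32Array(samples.length * factor);
  samples.forEach((x, n) => { out[n * factor] = x * factor; });
  return out;
}
```

Lowpass filtering the zero-stuffed signal at the Nyquist frequency of the sampled signal then yields the simulated reconstruction.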

For more details, the reader is encouraged to peruse the rest of the source
code in this document. Many comments have been included to aid readers who
are unfamiliar with JavaScript. Questions about the implementation of the
simulation can only be definitively answered by reading the source itself,
but please feel free to contact the project maintainers.

```javascript
*/

// `renderWavesImpl` returns an anonymous function that is bound in the widget
// constructor. This is done in order to separate the implementation of the
// simulation from the other implementation details so that this documentation
// can be more easily accessed.

const soundTimeSeconds = 1.5;
const fadeTimeSeconds = 0.125;

function renderWavesImpl(settings, fft, p) { return (playback = false) => {

// if we are not rendering for playback, we are rendering for simulation
let simulation = !playback;

// select the buffer to render to: playback buffer, or simulation buffer
var original = playback ? settings.original_pb : settings.original;
var reconstructed = playback ? settings.reconstructed_pb : settings.reconstructed;
var stuffed = settings.stuffed;

// calculate harmonics ------------------------------------------------------

// The signal is generated using simple additive synthesis. Because of this,
// the exact frequency content of the signal can be determined a priori based
// on the settings. We generate this information here so that it can be used
// not only by the synthesis process below, but also by several of the graphs
// used to illustrate the frequency domain content of the signal.

// We only calculate the harmonics for the simulation; it is assumed they will
// already have been calculated earlier when rendering for playback.

if (simulation) {
  let harmonic_number = 1;
  let harmonic_amplitude = 1;
  let invert = 1;
  let harmInc = (settings.harmType == "Odd" || settings.harmType == "Even") ? 2 : 1;

  for (let i = 0; i < settings.numHarm; i++) {

    // the amplitude of each harmonic depends on the harmonic slope setting
    if (settings.harmSlope == "lin") harmonic_amplitude = 1 - i/settings.numHarm;
    else if (settings.harmSlope == "1/x") harmonic_amplitude = 1/harmonic_number;
    else if (settings.harmSlope == "1/x2") harmonic_amplitude = 1/harmonic_number/harmonic_number;
    else if (settings.harmSlope == "flat") harmonic_amplitude = 1;
    else if (settings.harmSlope == "log") harmonic_amplitude = Math.exp(-0.1*(harmonic_number-1));

    // In case the harmonic slope is 1/x^2 and the harmonic type is "Odd",
    // inverting every other harmonic generates a nice triangle wave.
    if (settings.harmSlope == "1/x2" && settings.harmType == "Odd") {
      harmonic_amplitude = harmonic_amplitude * invert;
      invert *= -1;
    }

    // the frequency of each partial is a multiple of the fundamental frequency
    settings.harmonicFreqs[i] = harmonic_number*settings.fundFreq;

    // The harmonic amplitude is calculated above according to the harmonic
    // slope setting, taking into account the special case for generating a
    // triangle.
    settings.harmonicAmps[i] = harmonic_amplitude;

    // With harmonic type set to "Even" we want the fundamental and even
    // harmonics. To achieve this, we increment the harmonic number by 1 after
    // the fundamental and by 2 after every other partial.
    if (i == 0 && settings.harmType == "Even") harmonic_number += 1;
    else harmonic_number += harmInc;
  }
}

// render original wave -----------------------------------------------------

// initialize the signal buffer with all zeros (silence)
original.fill(0);

// For the sample at time `n` in the signal buffer `original`,
// generate the sum of all the partials based on the previously calculated
// frequency and amplitude values.
original.forEach( (_, n, arr) => {
  for (let harmonic = 0; harmonic < settings.numHarm; harmonic++) {

    let fundamental_frequency = settings.harmonicFreqs[0];
    let frequency = settings.harmonicFreqs[harmonic];
    let amplitude = settings.harmonicAmps[harmonic];

    // convert phase offset specified in degrees to radians
    let phase_offset = Math.PI / 180 * settings.phase;

    // adjust phase offset so that harmonics are shifted appropriately
    let phase_offset_adjusted = phase_offset * frequency / fundamental_frequency;

    let radian_frequency = 2 * Math.PI * frequency;
    let phase_increment = radian_frequency / WEBAUDIO_MAX_SAMPLERATE;
    let phase = phase_increment * n + phase_offset_adjusted;

    // accumulate the amplitude contribution from the current harmonic
    arr[n] += amplitude * Math.sin( phase );
  }
});

// linearly search for the maximum absolute amplitude (easy but not efficient)
let max = 0;
original.forEach( (x) => { if (Math.abs(x) > max) max = Math.abs(x); } );

// normalize and apply amplitude scaling
original.forEach( (x, n, y) => y[n] = settings.amplitude * x / max );

// apply antialiasing filter if applicable ----------------------------------

// The antialiasing and reconstruction filters are generated using Fili.js
// (https://github.com/markert/fili.js/).
let firCalculator = new Fili.FirCoeffs();
// Fili uses the windowed-sinc method to generate FIR lowpass filters.
// Like real antialiasing and reconstruction filters, the filters used in the
// simulation are not ideal brick-wall filters, but approximations.

// apply antialiasing only if the filter order is set
if (settings.antialiasing > 1) {

  // Specify the filter parameters; Fs = sampling rate, Fc = cutoff frequency.
  // The cutoff for the antialiasing filter is set to the Nyquist frequency
  // of the simulated sampling process. The sampling rate of the "sampled"
  // signal is WEBAUDIO_MAX_SAMPLERATE divided by the downsampling factor;
  // this is divided by 2 to get the Nyquist frequency.
  var filterCoeffs = firCalculator.lowpass(
    { order: settings.antialiasing
    , Fs: WEBAUDIO_MAX_SAMPLERATE
    , Fc: (WEBAUDIO_MAX_SAMPLERATE / settings.downsamplingFactor) / 2
    });

  // generate the filter
  var filter = new Fili.FirFilter(filterCoeffs);

  // apply the filter
  original.forEach( (x, n, y) => y[n] = filter.singleStep(x) );

  // time shift the signal by half the filter order to compensate for the
  // delay introduced by the FIR filter
  original.forEach( (x, i, arr) => arr[i - settings.antialiasing/2] = x );
}

// downsample original wave -------------------------------------------------

// zero-initialize the reconstruction and zero-stuffed buffers
reconstructed.fill(0);
stuffed.fill(0);

// generate new signal buffers for the downsampled signal and quantization
// noise, sized according to the currently set downsampling factor
if (playback) {
  settings.downsampled_pb = new Float32Array(p.round(original.length / settings.downsamplingFactor));
  settings.quantNoise_pb = new Float32Array(p.round(original.length / settings.downsamplingFactor));
} else {
  settings.downsampled = new Float32Array(p.round(original.length / settings.downsamplingFactor));
  settings.quantNoise = new Float32Array(p.round(original.length / settings.downsamplingFactor));
}
var downsampled = playback ? settings.downsampled_pb : settings.downsampled;
var quantNoise = playback ? settings.quantNoise_pb : settings.quantNoise;
var quantNoiseStuffed = settings.quantNoiseStuffed;
quantNoiseStuffed.fill(0);

// calculate the maximum integer value representable with the given bit depth
let maxInt = p.pow(2, settings.bitDepth) - 1;

let stepSize = (settings.quantType == "midTread") ? 2/(maxInt-1) : 2/maxInt;

// Generate the output of the simulated ADC process by "sampling" (actually
// just downsampling) and quantizing with dither. During this process, we
// also load the buffer for the reconstructed signal with the sampled values;
// this allows us to skip an explicit zero-stuffing step later.

downsampled.forEach( (_, n, arr) => {

  // keep only every kth sample, where k is the integer downsampling factor
  let y = original[n * settings.downsamplingFactor];
  y = y > 1.0 ? 1.0 : y < -1.0 ? -1.0 : y; // apply clipping

  // if the bit depth is set to the maximum, we skip quantization and dither
  if (settings.bitDepth == BIT_DEPTH_MAX) {

    // record the sampled output of the ADC process
    arr[n] = y;

    // sparsely fill the reconstruction and zero-stuffed buffers to avoid
    // having to explicitly zero-stuff
    reconstructed[n * settings.downsamplingFactor] = y;
    stuffed[n * settings.downsamplingFactor] = y * settings.downsamplingFactor;
    return;
  }

  // generate dither noise
  let dither = (2 * Math.random() - 1) * settings.dither;

  let quantized;
  // Add the dither signal and quantize. Constrain so we don't clip after dither.
  switch (settings.quantType) {
    case "midTread":
      quantized = stepSize*p.floor(p.constrain((y+dither),-1,0.99)/stepSize + 0.5);
      break;
    case "midRise":
      quantized = stepSize*(p.floor(p.constrain((y+dither),-1,0.99)/stepSize) + 0.5);
      break;
  }

  // record the sampled and quantized output of the ADC process
  arr[n] = quantized;

  // sparsely fill the reconstruction buffer to avoid having to zero-stuff
  reconstructed[n * settings.downsamplingFactor] = quantized;
  stuffed[n * settings.downsamplingFactor] = quantized * settings.downsamplingFactor;

  // record the quantization error
  quantNoise[n] = quantized - y;
  quantNoiseStuffed[n * settings.downsamplingFactor] = quantNoise[n];
});

// render reconstructed wave by lowpass filtering the zero-stuffed array -----

// specify filter parameters; as before, the cutoff is set to the Nyquist
var filterCoeffs = firCalculator.lowpass(
  { order: 200
  , Fs: WEBAUDIO_MAX_SAMPLERATE
  , Fc: (WEBAUDIO_MAX_SAMPLERATE / settings.downsamplingFactor) / 2
  });

// generate the filter
var filter = new Fili.FirFilter(filterCoeffs);

// apply the filter
reconstructed.forEach( (x, n, arr) => {
  let y = filter.singleStep(x);

  // To retain the correct amplitude, we must multiply the output of the
  // filter by the downsampling factor.
  arr[n] = y * settings.downsamplingFactor;
});

// time shift the signal by half the filter order (100 samples) to compensate
// for the delay introduced by the FIR filter
reconstructed.forEach( (x, n, arr) => arr[n - 100] = x );

// render FFTs --------------------------------------------------------------
// TODO: apply windows?

// The FFTs of the signals at the various stages of the process are generated
// using fft.js (https://github.com/indutny/fft.js). The call to
// `realTransform()` performs the FFT, and the call to `completeSpectrum()`
// fills the upper half of the spectrum, which is otherwise not calculated
// since it is a redundant reflection of the lower half of the spectrum.

if (simulation) {
  fft.realTransform(settings.originalFreq, original);
  fft.completeSpectrum(settings.originalFreq);

  fft.realTransform(settings.stuffedFreq, stuffed);
  fft.completeSpectrum(settings.stuffedFreq);

  fft.realTransform(settings.reconstructedFreq, reconstructed);
  fft.completeSpectrum(settings.reconstructedFreq);

  fft.realTransform(settings.quantNoiseFreq, quantNoiseStuffed);
  fft.completeSpectrum(settings.quantNoiseFreq);
}

// fade in and out and suppress clipping distortions ------------------------

// Audio output is windowed to prevent pops. The envelope is a simple linear
// ramp up at the beginning and linear ramp down at the end.

if (playback) {
  // This normalization makes sure the original signal isn't clipped.
  // The output is clipped during the simulation, so this may reduce its peak
  // amplitude a bit, but since the clipping adds distortion, the perceived
  // loudness remains roughly the same as the original signal in my testing.
  let normalize = settings.amplitude > 1.0 ? settings.amplitude : 1.0;

  // Define the fade function.
  let fade = (_, n, arr) => {
    // The Math.min ensures there is a fade even if the fade time is longer
    // than the signal.
    let fadeTimeSamps = Math.min(fadeTimeSeconds * WEBAUDIO_MAX_SAMPLERATE, arr.length / 2);
    if (n < fadeTimeSamps)
      arr[n] = (n / fadeTimeSamps) * arr[n] / normalize;
    else if (n > arr.length - fadeTimeSamps)
      arr[n] = ((arr.length - n) / fadeTimeSamps) * arr[n] / normalize;
    else arr[n] = arr[n] / normalize;
  };

  // Apply the fade function.
  original.forEach(fade);
  reconstructed.forEach(fade);
  quantNoise.forEach(fade);
}

}}
/*
```
*/
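
The two quantizer characteristics used in the code above can also be exercised standalone. This sketch (hypothetical helper name) replaces the widget library's `p.floor`/`p.constrain` with their `Math` equivalents and omits dither:

```javascript
// Standalone sketch of the mid-tread and mid-rise quantizers used above;
// p.floor/p.constrain are replaced with Math equivalents, dither is omitted.
function quantizeSample(y, bitDepth, quantType) {
  const maxInt = Math.pow(2, bitDepth) - 1;
  const stepSize = quantType === "midTread" ? 2 / (maxInt - 1) : 2 / maxInt;
  const clamped = Math.max(-1, Math.min(0.99, y)); // constrain to avoid clipping
  return quantType === "midTread"
    ? stepSize * Math.floor(clamped / stepSize + 0.5)
    : stepSize * (Math.floor(clamped / stepSize) + 0.5);
}
```

A mid-tread quantizer has an output level at exactly zero, while a mid-rise quantizer straddles zero, so its smallest output levels are a half step above and below it.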

tutorials/tutorial2.html

Lines changed: 17 additions & 13 deletions

```diff
@@ -1,4 +1,5 @@
 <!DOCTYPE html>
+<!-- TODO: add the ability to adjust the sampling rate values/frequency rate-->
 <html>
 <head>
 <link rel="stylesheet" type="text/css" href="styles_tutorial.css" media="screen" />
@@ -18,7 +19,7 @@
 <body>
 <center>
 <div class="tab">
-<a href="index.html" class="button_round">&laquo; Prev </a>
+<a href="tutorial1.html" class="button_round">&laquo; Prev </a>
 <a href="../index.html" class="button_round">Home </a>
 <a href="tutorial3.html" class="button_round">Next &raquo;</a>
 </div>
@@ -29,7 +30,7 @@
 </center>
 <div id = "content-wrap" class="title qs">
 <H1>
-Tutorial 2: Sampling
+Tutorial 3: Sampling a Waveform
 </H1>
 <hr>
 </div>
@@ -74,17 +75,20 @@
 <main>
 <script>
 const widgetset = new_widget(
-[
-new impulsePanel()
-],
-[
-new sampleRateSlider()
-, new phaseSlider()
-, new timeZoomSlider()
-]
-,["original"]
-,"Question" //question div id
-,"qs" //questions class
+[ new inputSigPanel()
+,new impulsePanel()
+, new inputPlusSampledPanel_no_binary()
+],
+[ new freqSlider()
+, new numHarmSlider()
+, new sampleRateSlider()
+, new phaseSlider()
+, new timeZoomSlider()
+]
+,["original"]
+,"Question" //question div id
+,"qs" //questions class
+
 );
 </script>
 <br>
```
