UML2 signal processing models

The content or the technology discussed here is HISTORICAL or ARCHIVAL.

I present here, as adapted Unified Modeling Language™ (UML™) structure diagrams, a simplified model of the hardware, data acquisition, and JSyn audio synthesis circuits of an earlier version of DranceWare for Drancing.

A composite JSyn software component that performs real-time audio synthesis with AM or VFO

Now that the acceleration signals are acquired and conditioned, let's use them to modulate some synthesised sound with JSyn audio:

To really understand this audio synthesis one should study the JSyn API and tutorial. A brief explanation follows:

  • A slight smoothing lag is required in the modulating acceleration signal, otherwise sharp "clicks" can result from jumps in the modulation, since the data acquisition loop is fast enough for bumps to be heard as audio spikes!
  • The modulating signal is distributed (fan-out) and conditioned separately (scaled and offset) for VFO and AM. Of course one can apply other conditioning too, such as trimming, but scale and offset are sufficient for a proof of concept.
  • One can choose between "fixed" amplitudes and frequencies or modulated amplitudes and frequencies, to achieve pure AM, AM + VFO, or pure VFO. The idea of this prototype is to demonstrate these pure modes; of course many other syntheses are possible. The "fixed" amplitudes and frequencies can actually be varied by the user as system parameters using UI controls, so they are named here "varAmplitude" and "varFrequency"; they are effectively "fixed" on the timescale of the synthesis engine compared with the slow GUI cycle.
  • The modulating or "fixed" frequency and amplitude are then fed to an oscillator (in this case a simple sine oscillator).
  • A further gain is applied, which achieves the mix into the final bus.
  • The modulated oscillation is panned to stereo bus writers for combination with other stereo syntheses in the final output mix.
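The per-channel chain described in the list above (smoothing, fan-out with scale and offset, AM/VFO sine oscillation, gain, stereo pan) can be sketched in plain Java. This is an illustrative sketch only, not the original JSyn code: all names, the one-pole smoothing coefficient, the scale/offset constants, and the equal-power pan law are assumptions.

```java
// Sketch of one synthesis voice: smoothing -> scale/offset conditioning ->
// AM/VFO sine oscillator -> stereo pan. Not the DranceWare/JSyn original.
public class AccelVoiceSketch {
    double smoothed = 0.0;          // one-pole smoothing filter state
    final double smoothing = 0.05;  // smoothing coefficient (assumed value)
    double phase = 0.0;             // oscillator phase accumulator

    // Smooth the raw acceleration so jumps don't produce audible "clicks".
    double smooth(double accel) {
        smoothed += smoothing * (accel - smoothed);
        return smoothed;
    }

    // Render one mono sample. The smoothed signal is fanned out and
    // conditioned separately (scale + offset) for frequency and amplitude;
    // when a mode is off, the user-adjustable "var" value is used instead.
    double renderSample(double accel, boolean am, boolean vfo,
                        double varFrequency, double varAmplitude,
                        double sampleRate) {
        double m = smooth(accel);
        double freq = vfo ? 220.0 + 440.0 * m : varFrequency; // VFO: scale + offset (assumed constants)
        double amp  = am  ? 0.5 * (1.0 + m)   : varAmplitude; // AM: scale + offset (assumed constants)
        phase += 2.0 * Math.PI * freq / sampleRate;
        return amp * Math.sin(phase);
    }

    // Equal-power pan into a stereo bus: returns {left, right}.
    static double[] pan(double sample, double panPos) { // panPos in [-1, 1]
        double angle = (panPos + 1.0) * Math.PI / 4.0;
        return new double[] { sample * Math.cos(angle), sample * Math.sin(angle) };
    }
}
```

Setting both `am` and `vfo` false leaves a plain oscillator under UI control; enabling one or the other gives the pure AM and pure VFO modes described above.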

With NACC=5 triaxial accelerometers (15 channels) the effect is to synthesise body movements, postures, and gestures into a rich composite sound that promotes strong aural biofeedback!
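Combining the 15 modulated channels into one stereo output amounts to summing each voice into a shared stereo bus. A minimal sketch, assuming the voices are spread evenly across the stereo field with an equal-power pan law (the actual pan assignments in DranceWare are not specified here):

```java
// Sketch of the final mix: spread NACC * 3 = 15 mono voices across the
// stereo field and sum them into one stereo bus. Pan law and the even
// spread are assumptions for illustration.
public class BusMixSketch {
    static double[] mix(double[] monoSamples) {
        int n = monoSamples.length;
        double left = 0.0, right = 0.0;
        for (int i = 0; i < n; i++) {
            // Even spread of pan positions over [-1, 1].
            double panPos = (n == 1) ? 0.0 : -1.0 + 2.0 * i / (n - 1);
            double angle = (panPos + 1.0) * Math.PI / 4.0; // equal-power pan
            left  += monoSamples[i] * Math.cos(angle);
            right += monoSamples[i] * Math.sin(angle);
        }
        // Normalise by the channel count to keep the bus within range.
        return new double[] { left / n, right / n };
    }
}
```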

Groups of three user parameters can be master-slaved to a single UI control to impose a "triaxial accelerometer" structure on the user experience. Alternatively, one can group all X, Y, or Z channels and act on those, or group all channels for global UI control.
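Master-slaving a group of parameters to one UI control is just a fan-out of the control's value to several parameter setters. A hedged sketch, with names that are assumptions rather than the original DranceWare API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleConsumer;

// Sketch of master-slaving: one UI control pushes its value to every slaved
// synthesis parameter (e.g. the three axes of one accelerometer, all X
// channels, or all channels for global control). Illustrative names only.
public class MasterControlSketch {
    private final List<DoubleConsumer> slaves = new ArrayList<>();

    public void addSlave(DoubleConsumer setter) {
        slaves.add(setter);
    }

    // Called from the (slow) GUI cycle; fans the value out to all slaves.
    public void set(double value) {
        for (DoubleConsumer s : slaves) s.accept(value);
    }
}
```

Slaving all 15 channels would simply mean registering all 15 parameter setters on one master control.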
