Messing with Processing: Part 2
8 Jun 2012
A.k.a. what’s more fun than just sound? Video + audio!
In the last post, I spoke about creating visualizations from ambient sound and how they can be used as a live, reactive projection behind a performing musician.
But what if YOU could be the visualization?
Instead of creating abstract shapes and animating them from the sound frequencies, what if you and the sound could be the visualization?
That was my thought when I started messing with the video capture libraries in Processing. I started out with an example from ‘Learning Processing’ (a tutorial website that helps you understand the basics of Processing) called ‘Simple Motion Detection’. The sample program detected any movement in the frame and colored the moving pixels black.
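The core of that ‘Simple Motion Detection’ example is frame differencing: compare each pixel of the current camera frame with the previous one, and wherever the color changed by more than some threshold, call it motion. A minimal sketch of that idea in plain Java (the pixel format and the threshold value here are my assumptions, not the original tutorial code):

```java
// Frame-differencing motion detection on packed 0xRRGGBB pixels.
public class MotionDetect {
    static final int THRESHOLD = 50; // assumed: total per-channel difference that counts as motion

    // Returns an output frame: pixels that moved become black (0x000000),
    // everything else stays white (0xFFFFFF) - as in the tutorial example.
    static int[] detect(int[] prev, int[] curr) {
        int[] out = new int[curr.length];
        for (int i = 0; i < curr.length; i++) {
            int dr = Math.abs(((curr[i] >> 16) & 0xFF) - ((prev[i] >> 16) & 0xFF));
            int dg = Math.abs(((curr[i] >> 8) & 0xFF) - ((prev[i] >> 8) & 0xFF));
            int db = Math.abs((curr[i] & 0xFF) - (prev[i] & 0xFF));
            // If the color changed enough between frames, treat it as movement.
            out[i] = (dr + dg + db > THRESHOLD) ? 0x000000 : 0xFFFFFF;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] prev = {0x808080, 0x808080};
        int[] curr = {0x808080, 0xFFFFFF}; // second pixel changed a lot
        int[] out = detect(prev, curr);
        System.out.println(out[0] == 0xFFFFFF && out[1] == 0x000000); // prints "true"
    }
}
```

In a real Processing sketch this loop would run over `video.pixels` against a saved copy of the previous frame; the per-pixel logic is the same.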
But what’s the fun in just one input (movement) and one color (black)? What I did was change the color of the movement on screen according to how fast someone moved in front of the camera and how the music behind him/her varied. For the sound input, I used the ‘minim’ library, which you can find here (some great examples there).
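One simple way to combine the two inputs is to map them onto a color: let the audio level (e.g. from minim’s `AudioInput.mix.level()`) pick the hue, and the amount of motion drive the brightness. The exact mapping below is my own assumption for illustration, not the sketch from the video:

```java
// Hedged sketch: turning motion speed and audio level into a streak color.
import java.awt.Color;

public class MotionColor {
    // speed: fraction of pixels that moved this frame, 0..1
    // level: current audio level from the analyser, 0..1
    static int streakColor(float speed, float level) {
        float hue = level;                              // louder music shifts the hue
        float brightness = Math.min(1f, 0.3f + speed);  // faster motion -> brighter streaks
        return Color.HSBtoRGB(hue, 1f, brightness);
    }

    public static void main(String[] args) {
        // Quiet music + slow motion vs loud music + fast motion give different colors.
        System.out.println(streakColor(0.1f, 0.2f) != streakColor(0.9f, 0.8f)); // prints "true"
    }
}
```

Inside the motion-detection loop, moving pixels would then be painted with `streakColor(...)` instead of plain black.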
What this creates is a motion + sound visualization that shows your movements as color streaks which change with the music. Doesn’t make sense? Have a look at the video:
But what’s the use of this?
The previous example could be used when a musician performs live. This one, at least in my mind, can be used in two situations: first, when a dancer performs, it could be projected behind him/her on stage; second (and much more fun), it could be put on screens around the dance floor of a club/lounge, where the projections on the walls would change as the music varied and people moved to it. It could create a very interesting and intense media experience. Of course, once branded, it could also be used for all sorts of brand promotion.
Would love to know what you think of that!