RhythmSynthesis Gallery Installation


This Friday was the first day we could begin installing our pieces in the Kellen Gallery at The Sheila C. Johnson Design Center, so I decided to get my project in there early and test it.

A few classmates and I tried it out once I got the amp plugged in. I invited the folks that run the gallery and build most of the displays for the work in Kellen Gallery to test it as well. Watching everyone have such a good time made the long hours over the past few weeks and months all worth it.



Here is a video of a few of the folks who used the piece while I was setting up:

The Gallery show runs from May 7 – 23, 2011 at Parsons The New School for Design (The Sheila C. Johnson Design Center, 2 W 13th Street, Kellen Gallery), and it is open to the public so come on by. The opening party is on the evening of May 8th (7pm), and Symposium talks are on May 14 and 15.

For more information on the Thesis show, timing of the talks, and other related events, head to the MFA DT Thesis site here.

RhythmSynthesis Interface tests


I invited Brett Tieman, J. Nicholas Moy, and Jeremy Gough (with whom I played in the band The Mugs) as well as fellow MFA DT students Haeyoung Kim and Matt Ruby to test the interface of my final project.

I incorporated a number of updates to both the software and hardware, which included:

  • playhead movement tied to the computer clock for more accurate timing (see the sketch after this list)
  • “sliders” along the right and left edges to control speed and volume
  • updated oscillators that generate sounds within openFrameworks application
  • new acrylic top that diffuses light more evenly
  • purposefully designed shapes and colors for acrylic pieces used to make sounds
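As a rough sketch of that first item (the names playheadAngle and measureLengthMs are mine, not the project's actual code), the playhead can be driven by openFrameworks' wall-clock timer so its position no longer depends on frame rate:

```cpp
// Minimal sketch, assuming an ofApp with a playheadAngle member and a
// measureLengthMs value set by the speed slider (hypothetical names).
void ofApp::update(){
    // ofGetElapsedTimeMillis() is wall-clock time since the app started,
    // so the playhead position stays accurate even if the frame rate drops.
    float elapsed = fmodf((float) ofGetElapsedTimeMillis(), measureLengthMs);
    playheadAngle = ofMap(elapsed, 0, measureLengthMs, 0, TWO_PI);
}
```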

The comments centered on the sounds generated by the application, and they reaffirmed my belief that when the sounds have more distinctive characteristics (such as octave range and attack speed), users are able to compose without obvious or deliberate visual feedback.

Below is a video demonstration I did of this updated interface as well as images from the final tests. Headphones are recommended (low notes do not play well on computer speakers):


AV Systems – Music Video

For this week’s assignment, we were asked to make a music video that used code in an interesting way. It’s been a crazy thesis week, so I figured drum and bass music would be an appropriate genre to explore.

I shot the street video by mounting a camera to my bike and riding home from school. I ran blob detection on different aspects of the video, and the alpha levels of the blob-colored visuals are based on the values of FFT ranges. The song is “Urban Shakedown ft. DBO General – Some Justice 95 aka Arsonist (Vocal mix)”.
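For anyone curious about the mechanics, here is a rough sketch of the idea (assumed structure, not the exact project code): blob outlines from an ofxOpenCv contour finder, with fill alpha driven by the amplitude of one FFT band.

```cpp
// Rough sketch: contourFinder is an ofxCvContourFinder member that has
// already been fed a frame of the bike video elsewhere in the app.
void ofApp::draw(){
    int nBands = 32;
    // averaged FFT values for the sound currently playing through ofSoundPlayer
    float *spectrum = ofSoundGetSpectrum(nBands);

    // map one low band to the fill alpha (the real video used several
    // bands for several blob layers)
    float alpha = ofMap(spectrum[2], 0.0f, 0.3f, 0, 255, true);

    ofEnableAlphaBlending();
    ofSetColor(255, 80, 40, alpha);
    for(int i = 0; i < contourFinder.nBlobs; i++){
        contourFinder.blobs[i].draw(0, 0);   // draw each detected blob
    }
}
```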

This video is dedicated to Monday Night Vinyl Club.

RhythmSynthesis Sound Tests

This past weekend I invited fellow designers, artists, musicians, programmers, friends, and family to Parsons to try out the latest hardware and software version of my tangible color music instrument. It had been a few weeks since the last set of tests, and I made a big push to get as many updates as possible into the hardware and software for this weekend's Sound Tests and for final presentations at school.


On the night before the testing, I finished the hardware improvements, which included a new aluminum arm for the camera mount and an enclosure for the camera. I removed the PS3 Eye from its original casing, which reduced both the weight and the space the camera took up.

The main goals for this weekend were to rigorously test the software to see if it was stable enough for long performances and to see how musicians and artists dealt with the interface. I truly feel that the audio is enough feedback for performers to know where the playhead is, but I needed to put the new hardware in front of people to find out. Also, up to this point the majority of the people testing my project were from the MFA Design and Technology program, and I wanted a fresh set of eyes and ears on the project.

Here are a set of images from both days:

SATURDAY


SUNDAY


The results were great. I received a lot of feedback (and some encouragement) that will help me with the next steps of this project. The primary comment dealt with how best to give musicians feedback about what is actually happening and to help them decide how to use the pieces. Some people preferred more audio feedback, such as more drastic changes between the sounds when pieces are rotated or differ in shape. Others preferred more visual feedback so the user would know the location of the rotating playhead.

I put together a brief sample of the performances, and here is the video:

I also put together an extended version that includes all of the performances from the weekend, so if you're still feeling like exploring the results from the test, you can see the long player here.

AV Systems – Granular Synthesis Recorder

For the advanced AvSys homework, we were asked to build an app that uses live audio input to record data for a granular synthesizer.

I spent most of spring break hammering this out, and the current version works but is far from perfect. Here is a video demonstration:

Basically I am taking input buffer data, storing it to dynamic float arrays, and then sending the information to a version of Zach Lieberman’s granular synthesis code that I hacked up quite a bit.
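Here is a minimal sketch of the recording side, assuming the standard openFrameworks audioIn() callback and a growing float vector (the names below are mine, not the actual project code):

```cpp
#include <vector>

std::vector<float> recordedSamples;   // mono buffer the granular synth reads grains from
bool isRecording = false;

void ofApp::audioIn(float *input, int bufferSize, int nChannels){
    if(!isRecording) return;
    for(int i = 0; i < bufferSize; i++){
        // average the channels down to mono before storing
        float sample = 0;
        for(int c = 0; c < nChannels; c++){
            sample += input[i * nChannels + c];
        }
        recordedSamples.push_back(sample / nChannels);
    }
}
```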

I plan on updating this so that the parameters of the synthesizer can be changed after the sounds are recorded, and I’ll do a post (with code) in the coming weeks.

AV Systems – White squares and graphical score

For this week's assignment, we were asked to manipulate a white square using sonic information. Using sonic qualities like pitch and frequency, we were to think about how to “perform” the white square.

We were split up into groups, and Lara Warman, Basak Haznedaroglu, and I came up with 3 scenarios for our white squares, with each of us taking the lead on writing the code for one of them.

My scenario involved a “ski slalom” that uses pitch to direct the white-square skier around the appropriate flags. Higher pitches move the skier to the right, and lower pitches move it to the left. Here is a video demonstration:

Lara coded the pitch competition (which divides the square into two triangles); here is a demonstration of that (on the left). Basak was coding a carnival “bell ringing” competition but didn't get too far with it. She sent along her code, so here is a demonstration of what she sent (on the right).

Code for these projects can be found here.
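As a rough sketch of the slalom mapping (hypothetical names and thresholds, not the linked project code), the idea boils down to mapping detected pitch to the skier's horizontal position:

```cpp
// detectedPitch (in Hz) is assumed to come from a pitch tracker elsewhere in the app;
// skierX, skierY, and skierSpeed are assumed ofApp members.
void ofApp::update(){
    float centerPitch = 440.0f;   // pitch that keeps the skier centered
    float offset = ofMap(detectedPitch, centerPitch - 200, centerPitch + 200,
                         -ofGetWidth() * 0.4f, ofGetWidth() * 0.4f, true);
    skierX = ofGetWidth() * 0.5f + offset;   // higher pitch -> right, lower pitch -> left
    skierY += skierSpeed;                    // the skier drifts steadily down the slope
}
```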

We were also asked to make a graphical score for a piece of music or sound for which traditional music notation isn't necessary, and to think about what visual languages could be used to represent sound.

I chose Terry Riley’s “A Rainbow In A Curved Air” as the musical piece to visually score. I imagined the piece being a spiraling combination of different tones, shapes, and patterns, so I first chose my color palette and created a series of “sound strips” by shredding the selected paper.



I then placed varying amounts of the sound strips on my scanner. I rotated the strips as I added and removed pieces, and eventually added bits of shredded CDs to embody the high-range glitter of sound that Riley adds throughout the piece.

Here is my visual score of Terry Riley’s “A Rainbow In A Curved Air”:

RhythmSynthesis User Testing

This past weekend, I ran a series of user tests with the latest prototype of my thesis project. I reserved a classroom on the 10th floor of 2 W 13th Street (room 1013) and set up the prototype along with a large guitar amp, various effects pedals (delay, distortion, etc), and a drum machine. The group of people who tested the project included fellow students, friends, musicians, and Parsons Faculty.


I included a few updates in the program: a full 12-tone scale matched to specific hue values within the visible light color spectrum, tempo/speed changes, and the use of object area to determine the octave (the larger the piece, the lower the octave; the smaller the piece, the higher the octave). I also updated the light platform to provide a better color for camera detection, since the PS3 Eye is biased a little toward the cooler color side.
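As a rough illustration of that mapping (assumed approach and thresholds, not the exact project code), hue picks the semitone and blob area picks the octave:

```cpp
// Hue values here follow the openFrameworks/OpenCV convention of 0-255.
int hueToSemitone(float hue){
    return ((int) ofMap(hue, 0, 255, 0, 12, true)) % 12;   // 12 equal hue bins, one per semitone
}

// Larger pieces get lower octaves, smaller pieces get higher octaves.
// The area thresholds are arbitrary placeholders.
int areaToOctave(float blobArea){
    return (int) ofMap(blobArea, 200, 20000, 6, 2, true);
}

// Equal-temperament frequency from semitone + octave via the MIDI note number.
float noteToFrequency(int semitone, int octave){
    int midiNote = 12 * (octave + 1) + semitone;
    return 440.0f * powf(2.0f, (midiNote - 69) / 12.0f);
}
```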




The reactions and results were great, and they provided plenty of feedback for taking the next steps with the interface and the software portion of the project. In general, there was a wide array of approaches to creating musical pieces. Some people were methodical, others used the space between the platform and the camera to manipulate the sounds, and others just went crazy.

Here is a video of some of the folks testing this prototype:

Thanks to Ivy, Greg Climber (and his friend), Katherine Moriwaki, Jonah Brucker-Cohen, Matt Ruby, Steven Sclafani, Cara Lynn, Lou Sclafani, Zach Lieberman, George Bixby, Rob Ramirez, Shane Lessa, and Manuel Rueda Iragorri for coming in to test and offering up your valuable feedback. Looking forward to tackling the next set of updates and changes.

The next user testing session will be…
2 W 13th Street, Room 1013 (map)
3/26 (Saturday) 1 – 7pm
3/27 (Sunday) 4 – 10pm

All are welcome, so if you plan on coming, please send me an e-mail so I can make sure you’re all set with security downstairs.

For more information on my thesis project, RhythmSynthesis, you can check here.

Thesis Prototype – Additive Synthesis and Controls

New musical interfaces are necessary to further explore the complexities of rhythm. RhythmSynthesis proposes a new instrument for composition and performance to continue such exploration. Originating as an investigation into the relationships between rhythm and technology, RhythmSynthesis applies color, shape, and sound to demonstrate how our understanding of visual music, computation, and tangible, audio-visual interactions can be applied as considerations in musical expression.

The goal for this prototype was to introduce updated sound generation and controls in the application. Using additive synthesis code I developed with openFrameworks in the course Audio Visual Systems with Zach Lieberman and Zach Gage, I analyze the shape and size of each colored object to determine the sounds that should be made. I also included sliders for changing the rate at which the objects are sensed and played.
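As a generic illustration of additive synthesis (not the course code itself; phase, baseFrequency, and sampleRate are assumed member variables), each note can be built by summing a few sine partials inside the audio callback:

```cpp
void ofApp::audioOut(float *output, int bufferSize, int nChannels){
    int nPartials = 4;
    for(int i = 0; i < bufferSize; i++){
        float sample = 0;
        for(int p = 1; p <= nPartials; p++){
            // each partial is a harmonic of the base frequency, with falling amplitude
            sample += (1.0f / p) * sinf(phase * p);
        }
        phase += TWO_PI * baseFrequency / sampleRate;   // advance the base oscillator
        if(phase > TWO_PI) phase -= TWO_PI;
        sample *= 0.2f;                                 // leave some headroom
        for(int c = 0; c < nChannels; c++){
            output[i * nChannels + c] = sample;
        }
    }
}
```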

Here is a brief demonstration:

You can see other prototypes and relevant write-ups here.

Orbit Player Prototype

In researching rhythm for my thesis, I became fascinated with the front cover of the Wire Magazine book “Undercurrents”. It depicts a record player arm resting on the rings of a tree, so I wanted to make a piece that referenced how the needle of a turntable orbits the sounds, as well as how sounds change over time.

The general idea was to convey that our position (or the position of objects within a space) has a rhythmic relationship to everything else. I understand how far a table might be from me or how close a person walks past me, but I have difficulty fully grasping the relationships that all the pieces have together. This piece illustrates how the positions of pieces relate to each other over time.

The colors of each of the notes depicted in the piece are inspired by I. J. Belomont’s understanding of the scale and each note’s association with color.

Here is a demo of the application:

In this sketch, I have several instruments that age as they are played. Currently they have two states (young and old). In future iterations I hope to continue exploring this idea of aging and rhythm, as well as include the idea of movement within a space. I also want to improve performance and port the program to the iPad.
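As a rough sketch of the aging idea (assumed structure and threshold, not the actual application code), each instrument could track how often it has been played and change timbre once it crosses a threshold:

```cpp
struct OrbitInstrument {
    int timesPlayed = 0;

    bool isOld() const {
        return timesPlayed > 50;   // arbitrary threshold between "young" and "old"
    }

    float frequencyForState(float baseFrequency) const {
        // old instruments sound an octave lower than young ones
        return isOld() ? baseFrequency * 0.5f : baseFrequency;
    }

    void play(){
        timesPlayed++;   // every pass of the orbiting needle ages the instrument
    }
};
```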

If you would like to try it out, you can download the application here. Once you launch it, press “f” (make sure it's lowercase) to make it full screen. If you do test it, it would be awesome to hear your feedback, so please post comments. Remember, this is a working prototype.