
Hello, and welcome to another episode of my thesis appendices*, but make it public forum. Below is a link to a time-lapse video of me recording the live effects and audio elements for a piece called MOOD:2020, made in collaboration with a bunch of the wonderful and lovely people from Little Songs of the Mutilated (run by Justin Ashworth). It's mid-November for me; by the time you read this, the piece will have been released and will be available here with all of the submitted works. Below the video are some notes about the process and the piece: what it is, and why it exists.

 

*She said only slightly jokingly.

 

 

https://www.youtube.com/embed/OCBy0Q2RT7k
Time-lapse; you can thank GoPro for its auto time-lapse camera settings.

 

It's still (unfortunately) 2020. I'm tired. You're tired. We all scream for ice-cream. The background noise of my brain while writing this piece has been the ever-present buzz of uncertainty, vague worry for friends and family, and being sick to death of staring at a computer screen for what's passing as human contact.

 

Let's start off with the elephant in the room of the video (not a joke about weight, although I'm tempted): I committed the cardinal performance sin of turning my back to the camera/audience. I had just realised that if I face the opposite direction from where I calibrate the gloves, all the numbers behave nicely and linearly through the layers of data communication, with none of the weird jumps I'd been trying to mentally work through and eradicate. A much simpler solution than the one I'd been working on. I didn't, however, think to change the camera position. As the kids say: sorry, not sorry. I've also been having an issue where I keep accidentally swiping over to the time-lapse setting when I record on the GoPro.

One last problem you might notice is that I have one sticky flex sensor channel on each hand. Occasionally those channels need a little assistance to return to '0' once they've been flexed, because each one is slightly too stiff in a different spot, causing it to falsely read as flexed. You can see this every now and then as I use the other hand to press down on either the ring finger (LH) or the index finger (RH). I've been putting off fixing this in favour of other tasks, and I keep telling myself that I need to rejig the batteries and switch to 90-degree connectors to the Arduino before I worry about the channels. Yes, I know this is silly.
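(For the technically curious: below is a rough sketch, in Python rather than the actual glove code, of one way a sticky channel could be zeroed out in software while the hardware fix waits its turn. All the names and thresholds are illustrative, not the gloves' real values.)

```python
# A hypothetical software workaround for a "sticky" flex channel: treat
# readings that hover near the resting value as 0, and slowly re-learn
# that resting value so the channel's drift doesn't read as a flex.

class FlexChannel:
    def __init__(self, deadzone=8, relearn=0.01):
        self.rest = 0.0           # learned resting ("unflexed") value
        self.deadzone = deadzone  # readings this close to rest count as 0
        self.relearn = relearn    # how quickly the baseline tracks drift

    def read(self, raw):
        if abs(raw - self.rest) <= self.deadzone:
            # Near rest: report unflexed, and nudge the baseline so a
            # channel that never quite returns to its old zero still reads 0.
            self.rest += self.relearn * (raw - self.rest)
            return 0.0
        return raw - self.rest    # a genuine flex, measured from rest

left_ring = FlexChannel()
print(left_ring.read(5))   # 0.0 -- stuck slightly flexed, folded back to rest
print(left_ring.read(60))  # ~60 -- a real flex passes through
```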

 

MOOD:2020 was written in collaboration with Little Songs of the Mutilated. See the "One more time with Fingers" post to learn more about them (very briefly, it's a sound version of the surrealist game Exquisite Corpse). Any member of the group could supply a (maximum) 5-second audio sample, which the other participants would then use to create new works. Around 130-140 people have played Little Songs of the Mutilated over the years; 42 got samples in on time. I used about 18 of them.

 

I started working through this piece by listening to all of the samples a few times in isolation and then sketching ideas for the ones I liked the most. I then created a table of sounds that I could group together to make an instrument or a loop. I curated four main groups, comprising about 15 samples, and processed them in a variety of ways: binaural spatialisation, time-stretching, EQ-ing, and other effects to meld the sounds into cohesive groups. Each loop was two to four bars long. Lastly, I processed three of the samples for use as "solo" instruments (Miaowy, Piddledeedoo, and Zoom Greetings).

 

I arranged the loops in Ableton as an interactive backing track, fed the solo samples into the Looper effect, and assigned my glove controls through Max/MSP. I then made a series of test recordings where I manipulated the audio live, trying different effects, different settings, and different physical orientations. I made eight recordings that I kept for reference, plus others that I deleted straight away for various reasons (didn't like it, made too many mistakes, something misfired badly, etc.). These are the settings I used in the end:

 

[Figure: MIDI CC routing for MOOD:2020]
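To give a flavour of what one row of a routing table like that could look like in code, here's a hedged little sketch using the mido Python library. The port name, CC number, and sensor range are hypothetical stand-ins, not the real assignments.

```python
# Illustrative only: scale one glove sensor reading into a MIDI CC message.
import mido

def to_cc(value, lo, hi):
    """Clamp a raw sensor reading to [lo, hi] and scale it to MIDI 0-127."""
    value = max(lo, min(hi, value))
    return round((value - lo) * 127 / (hi - lo))

out = mido.open_output('Gloves to Live')             # hypothetical virtual port
raw_x = 412                                          # e.g. an X-axis reading
out.send(mido.Message('control_change', control=20,  # CC 20 is made up
                      value=to_cc(raw_x, 0, 1023)))
```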

 

Most interestingly for me, this piece has made me wish I was working in Max/MSP instead of Ableton. An aspect of my creative process that I am well aware of (and try to exploit) is how making inside a computer, versus making acoustically or on paper, affects the types of sounds I make in a major way. This factored into how I structured my experimentation for my master's thesis. DAWs are like different physical spaces to me, in the way that other people are influenced by being near the sea or in the CBD. In Ableton, I feel compelled towards looping and heavy use of effects; whereas if I compose directly into an engraving program (MuseScore, Sibelius, etc.), the pieces end up sounding mechanical, even when performed on a purely acoustic instrument. In Logic, I tend towards free-time, or by-the-second, compositions. Also, in Logic you cannot reassign MIDI CCs per project; changing them changes the assignments for the entire DAW installation.

 

The overarching plan/score concept I made initially was not what I ended up using. Originally, I wanted to play with glitching vocal speech as the main foreground feature - the kind you hear when the audio in Zoom/Skype/Teams/mobile phones goes off the rails. The original plan with the digitally marred audio was conceptualised in line with my Songs in Isolation project, which documents our new ways of living and working and the globally synchronous social consciousness issues that have arisen (think sourdough and black humour). I wanted to make something that felt like the plodding uncertainty of this year in general. This was inspired by the name of one of the samples provided, Mood20. I laughed and thought: 2020: The Mood. A secondary goal was to straddle the line between full-on experimental music and the fringe areas of "normal" Western music. So I sent this off to a few different people for critique: one who does a lot of experimental music that fits into the "normal" realm, one who is in a shoegaze band (very contemporary music), and one I have worked with in both veins (they gave me a great narrative version of the piece where an underwater diver stomps through the deep sea while all the weird and wonderful creatures flit about through the depths). The feedback, in general, told me that I hadn't gone too far either way, which was satisfying - partly because I sometimes wonder how far my music-barometer has swung to the experimental side over the last two years.

 

I like how this piece works together. I prefer listening to it in headphones: I found that the fast panning at the beginning comes across a tad janky through speakers, whereas it doesn't on the laptop or in headphones (and more people will probably be using headphones to listen to this piece anyway). It's noisy and plodding. Noisy, plodding, heavy, constantly shifting, and a smidge absurd. That is 2020 to me. You're struggling through the mire, constantly hoping for a little bit of light, but every time you think it's getting better, something happens, something goes wrong. Honestly, parts of this year have been so bananas I've wondered if I'm hallucinating.

 

As related to designing for embodied engagement (designing to deepen the humanness of experience when using a computer interface to interact with music), this has been a testing and experimentation piece. Because the design is still new and the controls are slightly arbitrary, there's a steep learning curve to each piece/use - which is a little distracting. When you're playing a guitar or a piano, the components of those instruments do one (maybe three) thing(s). These gloves are a controller - they can be assigned to do whatever you want. This makes them versatile, but I have to stop and search my memory for the assignments frequently. I'm experimenting with which effects should go where and which sonic parameters are the most useful for me to control. In January/February, I thought that reverb would for sure be one of those settings, but I feel like I've had my fill of it for now, and nothing I was doing with the reverb was adding much of what I wanted. I can see use cases, just not in this piece. My space is also too small to capture the context of moving through a room and turning that into a piece: 50cm x 130cm is not a room, it's the kind of size where you're in a metaphorical closet. (Again, heeeey 2020, there you are, we're locked in the house.)

 

I have begun to think of my use of effects and space as an interactive grid. This mental image has been aiding my internal conceptualisation and perception of space and movement as a means of control. With the finger-control-only gloves, my movement was mostly unimpeded, but now that I'm capturing all movement data (in this patch, at least), all movement becomes sonically meaningful. In this patch, the grid-like arrangement of assignments gives me pitch on the X-axis and filter (and panning) on the Y-axis; roll is speed (and panning). Where possible, I kept each MIDI CC controlling only one track at a time, but there was some overlapping of controls, as the sketch below shows.
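As a rough illustration (in Python, with made-up CC numbers rather than the real patch), the grid boils down to something like this:

```python
# A minimal sketch of the movement grid: each axis is normalised to 0-127
# and fanned out to one or more destinations. CC numbers are hypothetical.

GRID = {
    'x':    [('pitch', 20)],                 # X-axis -> pitch
    'y':    [('filter', 21), ('pan', 10)],   # Y-axis -> filter (and panning)
    'roll': [('speed', 22), ('pan', 10)],    # roll -> speed (and panning)
}

def route(axis, norm):
    """norm is the axis position already scaled to 0-127."""
    return [(name, cc, norm) for name, cc in GRID[axis]]

print(route('y', 64))   # [('filter', 21, 64), ('pan', 10, 64)]
```

Fanning one axis out to several destinations is exactly where the overlap creeps in: here, pan listens to both the Y-axis and the roll.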

 

[Figure: Writing down my assignments because I have a trash memory.]

 

A final tidbit - these assignment challenges led me to think that a stick-figure animation, or a dance/theatre movement score, could be an interesting and explicit way of graphically scoring this kind of musical performance.