Assignment 2 – Artist Statement / Reflection

ARTIST STATEMENT – GROUP:

Our group's non-tactile interactive study explores the effects of using technology to replace physical objects. In our project, we used motion detection to trigger sound clips of instruments overlaid on the video input. An instrument or sound effect sat in each corner of the screen – a drum, a cymbal, a "brr brr" sound effect used in hip-hop ad-libs, and a clap. The user triggers each of these sounds by waving their arms in the corresponding corner, producing a mix of sounds unique to each user. We wanted to explore whether replacing physical instruments with intangible digital versions of themselves is actually an improvement. Would the benefits of the technology outweigh its drawbacks, or would the transition cost more than it is worth? This project deals only with something trivial, a drum kit, but what would happen if the same replacement occurred on a more personal level, à la Her (a 2013 film about a man who develops a relationship with an artificially intelligent virtual assistant)? It is an important question to ponder in a world where more and more things are being replaced by artificial intelligence (AI) technology.
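For anyone curious how this corner-to-sound mapping might look outside Max, below is a rough Python/NumPy sketch of the idea rather than our actual patch: given a per-pixel motion image, it reports which corner sounds should fire. The sound names, the threshold and the fake motion values are assumptions for illustration only.

# Hypothetical sketch of the corner-to-sound mapping (not our Max patch).
# Given a per-pixel motion image (e.g. an absolute frame difference),
# report which corner sounds should fire.
import numpy as np

CORNER_SOUNDS = {          # assumed mapping, mirroring our four corners
    "top_left": "drum",
    "top_right": "cymbal",
    "bottom_left": "brr_brr_adlib",
    "bottom_right": "clap",
}

def triggered_sounds(motion, threshold=15.0):
    """Return the sound names whose corner has enough average motion."""
    h, w = motion.shape
    corners = {
        "top_left": motion[: h // 2, : w // 2],
        "top_right": motion[: h // 2, w // 2 :],
        "bottom_left": motion[h // 2 :, : w // 2],
        "bottom_right": motion[h // 2 :, w // 2 :],
    }
    return [CORNER_SOUNDS[name]
            for name, region in corners.items()
            if region.mean() > threshold]

# Example: a fake motion image with activity only in the top-left corner.
motion = np.zeros((480, 640))
motion[:240, :320] = 30.0
print(triggered_sounds(motion))   # -> ['drum']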

ARTIST STATEMENT – SOLO:

I chose to explore the relationship between the human and the motion detector, and between the motion detector and the sound effect it triggers in Max. It is a relatively simple simulation that focuses on the basic elements I will develop further in my next assignment. The idea was to link the motion detector to a sound effect at a specific corner of the screen – in this case, a 'punch' sound. It was inspired by an unfortunate event that is explained in the reflection. The main skill and technique I developed from this project was using the "jit.op @op absdiff" object to implement the motion detection that triggers the sound effect.
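To make that concrete, here is a rough Python/OpenCV sketch of what an absolute-difference motion detector is doing – an illustration of the same idea, not the Max patch itself. The webcam index, the corner choice and the threshold value are assumptions.

# Rough Python/OpenCV illustration of absdiff-based motion detection
# triggering a 'punch' in one corner (the real project is a Max 8 patch).
import cv2

cap = cv2.VideoCapture(0)                      # default webcam, assumed available
ok, prev = cap.read()
if not ok:
    raise SystemExit("no webcam found")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)             # same role as jit.op @op absdiff
    prev = gray

    # Measure motion only in the top-right corner, where the 'punch' lives
    # (corner choice and threshold are assumed values).
    h, w = diff.shape
    corner = diff[: h // 2, w // 2 :]
    if corner.mean() > 20.0:
        print("punch!")                        # here the sound file would be played

    cv2.imshow("difference", diff)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break

cap.release()
cv2.destroyAllWindows()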

REFLECTION:

For our group project, we initially wanted to incorporate the "meme" video trend and its sense of humour that was going viral at the time. The idea was inspired by a viral video by @askaboutaj called "Mood I'm going into 2018 with" (https://www.youtube.com/watch?v=Hw4atwjEff4&feature=youtu.be&fbclid=IwAR0mZLujFczewgPbP5DxLq-ODcDq4qxrPWzTSSNeaqkzZei7G6pO91PUR2E), in which he dodges, punches and kicks graphics of words and objects associated with "bad vibes" that fly across the screen. It was important to us to create something that our users, uni students, could resonate with and have fun with. We gathered as a group outside of class to work on it collaboratively, as there were many elements to take into account. However, after experimenting with the idea, we realised our laptops could not handle the large number of "jit.alphablend" objects, which caused the software to lag and the video output to freeze whenever we attempted to run the program.

We turned to a simplified, slightly different idea that used the same objects as the first. Max 8 was able to handle the smaller files and ran smoothly, making for a better user experience than the more complex idea, which failed to achieve interaction at all. In the future we could work on programming specific movements to trigger specific sound effects, such as jumping to trigger the cymbal. Throughout the process of exploring this feature of Max 8, we discovered that the software can still become unresponsive and cannot yet fully handle the weight of certain files, as it failed to detect particular hand movements. In conclusion, it was important for us to decide to simplify the idea rather than end up with a complex project that was not functional.

As for my solo project, my initial plan was to explore the three-dimensional world with the "jit.world" object and its ability to display animation and communicate with the motion detection feature. I have always admired animators who have the talent to draw and create motion pictures. When I discovered this new skill, I knew I wanted to dive deeper into it: how cool would it be if people who share my disappointment at being horrible at drawing had another way to "animate" characters? Challenges included finding the right file format that could be read and copied to the clipboard, and identifying and connecting the right objects to the selected part of the animation so that its value could be scaled down and detected by the motion detector. However, when I had almost completed the project, I saved and closed the window to see whether it would still work when reopened, and a flood of errors kept interfering with the functions. There were hundreds of error notifications; I tried my best to retrace my steps and fix them, but my knowledge of the software was insufficient to solve the problem. I ended up changing my idea to a more straightforward, less complicated project that I could put together with the limited understanding I have of the software.

I hope Camille finds some humour in this next part: drawing on what I had learned throughout the weeks, I decided to work on a motion-detecting sound effect simulation inspired by the urge to punch my laptop when my intended project failed at the last minute. I used the "jit.op @op absdiff" object to drive the motion detector and the "past" object to trigger the sound when a certain numerical value is passed. It is completely different from my initial idea, but it did spark a revelation: as someone who regularly uses FaceTime, I wondered how cool it would be to incorporate motion-detected sound effects into video calling platforms so we could express how we feel with sound. Punching at my laptop out of frustration when my first idea came crumbling down did not have the same satisfying impact as it did once I added the sound effect; there was a sense of humour in being able to perform an action and have a sound effect express it further.
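To show the trigger logic I mean, here is a tiny Python sketch of that rising-edge idea: a trigger fires only at the moment the motion value climbs past the threshold, then re-arms once the value drops back below it. It is an illustration of the behaviour I relied on, not the Max "past" object itself, and the threshold and motion readings are made up.

# Small sketch of a rising-edge trigger on a motion value
# (illustrative only; in the patch this role is played by 'past').
class RisingEdgeTrigger:
    def __init__(self, threshold):
        self.threshold = threshold
        self.armed = True                 # ready to fire

    def update(self, value):
        """Return True only at the moment 'value' first climbs past the threshold."""
        if self.armed and value > self.threshold:
            self.armed = False            # don't retrigger while the value stays high
            return True
        if value <= self.threshold:
            self.armed = True             # re-arm once the motion dies down
        return False

trigger = RisingEdgeTrigger(threshold=20.0)
motion_levels = [2, 5, 25, 30, 28, 10, 3, 40]   # assumed motion readings per frame
for level in motion_levels:
    if trigger.update(level):
        print(f"motion {level}: play punch sound")
# -> fires once at 25 and once again at 40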

 
