2.1 Individual Work
When I saw the word “non-tactile”, I knew this would be a hard task for me, because it leaves a more limited range for producing our artwork. The good thing is that we learned some Max ideas and techniques in studio at the start of this assignment. Before starting my solo work I was short on ideas, because the only themes I could choose between were audio control and facial control. I therefore watched a large number of Max tutorials on YouTube to stimulate ideas, and finally found a tutorial (shown below) that is fairly relevant.
It shows a funny icon that moves along with a figure in a media piece. In this example the icon is a potato, and the person in the frame is Trump, who is talking intensely. The patch automatically identifies Trump’s face and keeps the potato covering it. I think the idea is funny. Even though there is no interaction with the audience in that example, I could develop it into something interactive. So I replaced the video clip with a live camera feed, which captures the audience in front of the screen and makes the icon follow their faces.
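The actual work is a Max patch, but the core of the face-following idea is just coordinate arithmetic. Below is a minimal Python sketch of that logic; the function name and pixel values are hypothetical, chosen only for illustration, not taken from the patch or the tutorial.

```python
# Hypothetical sketch of the face-follow logic (the real project is a Max
# patch; this only illustrates the coordinate math behind it).

def icon_position(face_box, icon_size):
    """Center an overlay icon on a detected face bounding box.

    face_box: (x, y, w, h) of the detected face, in pixels.
    icon_size: (w, h) of the icon image.
    Returns the top-left (x, y) at which to draw the icon so it covers the face.
    """
    fx, fy, fw, fh = face_box
    iw, ih = icon_size
    cx, cy = fx + fw // 2, fy + fh // 2   # center of the detected face
    return cx - iw // 2, cy - ih // 2     # icon top-left so its center matches

# Example: a 100x120 face detected at (200, 150), with a 160x160 icon
print(icon_position((200, 150, 100, 120), (160, 160)))  # -> (170, 130)
```

Re-running this every camera frame with fresh detector output is what makes the icon appear to stick to the face.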
My technical aim was to build this loop in Max. Although it is not a complex work (though it was for me), it still needs several kinds of functions, such as face identification and image reading. One thing I did not achieve was combining audio and facial control in one project: the idea was that when a person is framed by the camera, the sound plays, and when the person leaves the frame, the sound disappears. That turned out to be more complicated, and I did not find a way to achieve it.
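The unrealized audio-plus-face idea can be described as a simple gate on the sound, driven by the face detector. Here is a minimal Python mock-up of that behavior under my own assumptions; the class and method names are invented for illustration, and in Max this would instead be a gate on the audio signal chain.

```python
# Hypothetical sketch of the unrealized idea: sound plays only while a
# face is in frame. The "audio" is mocked as a boolean playing state.

class FaceGatedAudio:
    def __init__(self):
        self.playing = False

    def update(self, face_in_frame):
        """Call once per camera frame with the face detector's result."""
        if face_in_frame and not self.playing:
            self.playing = True    # person entered the frame: start the sound
        elif not face_in_frame and self.playing:
            self.playing = False   # person left the frame: stop the sound
        return self.playing

gate = FaceGatedAudio()
states = [gate.update(f) for f in [False, True, True, False]]
print(states)  # -> [False, True, True, False]
```

The logic itself is trivial; the hard part in Max is wiring the face-detection output into the audio chain, which is what I could not work out.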
2.2 Group Work
In our group work we also did a lot of research into tutorials. What we wanted to do at the beginning was something similar to an artwork we had seen at the NGV, in which the light changes gradually as people step on an illuminated surface. That piece is tactile, and we originally wanted to use sound instead of touching the surface, but after watching many tutorials we could not find a way to do it, so we eventually changed our minds. We still focused on sound, but on creating different visual effects instead. Below is the tutorial we found.
It explains how to make changes in a visual effect. We will use a similar technique in our work, but drive it with sound: the piece should switch between different effects according to the audience’s sound. In terms of the technical process, we should include at least three different effects, so that the transformations between them stay interesting and the role of sound becomes more obvious. Another objective is the visual quality itself: we want the work to be attractive, so we will keep concentrating on creating suitable effects. The biggest challenge for us is likely to be making sound act as the controller.
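One way to think about "sound as the controller" is to map the loudness of the audience's sound onto an effect index. The following Python sketch illustrates that mapping; the effect names and threshold values are made up for the example (in Max the amplitude would come from an envelope follower on the microphone input, which is an assumption on my part, not something the tutorial specifies).

```python
# Hypothetical sketch: audience loudness (0.0-1.0) selects one of three
# visual effects. Effect names and thresholds are placeholder values.

EFFECTS = ["ripple", "color_shift", "kaleidoscope"]

def pick_effect(amplitude, thresholds=(0.2, 0.6)):
    """Map an amplitude in [0, 1] to an effect by counting thresholds crossed."""
    index = sum(amplitude >= t for t in thresholds)  # 0, 1, or 2
    return EFFECTS[index]

print([pick_effect(a) for a in (0.1, 0.4, 0.9)])
# -> ['ripple', 'color_shift', 'kaleidoscope']
```

Banding by loudness like this gives the audience an obvious cause-and-effect link: speaking quietly, normally, or shouting each produces a visibly different animation.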
2.3 Reflection
Our group first decided to make a 3D animation like the artwork we had seen at the NGV, and after learning the 3D alien model we were confident we could make a similar animation. But during the work we could hardly find suitable 3D figures, and the materials we did find did not match our concept, so we eventually changed our minds. When we found the effect-switching theme, we all thought it was amazing: the varied, flashing animation could easily attract people’s attention.
Basically, our finished work shows different effects changing with sound, and we also added a piece of music. This was a confusing part of the process: we first assumed the effects were controlled both by the microphone input and by the music itself, but we eventually realised the effect moves along with the music because the music’s sound is picked up by the audio recording. The reason we added the music is to guarantee a better visual result: without it there would be unexpected outcomes, such as little or no animation when the audience does not make sound properly or when the computer does not pick up their voices clearly. So the music plays constantly, and when the audience makes a sound they can easily see the change in the animation.
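The design decision described above can be sketched as mixing two amplitudes into one control level: the constant music guarantees a baseline of motion, and audience sound pushes the level higher. This is a minimal Python illustration under my own assumptions; the function name, gain, and amplitude values are invented, not measured from the patch.

```python
# Hypothetical sketch of the mixing idea: one combined level drives the
# animation, so the constant music keeps it moving even in silence.

def drive_level(music_amp, mic_amp, music_gain=0.5):
    """Combine constant music with audience sound into one control level.

    music_amp, mic_amp: amplitudes in [0, 1]. The result is clamped to 1.0.
    """
    return min(1.0, music_gain * music_amp + mic_amp)

quiet = drive_level(music_amp=0.8, mic_amp=0.0)  # music alone keeps motion
loud = drive_level(music_amp=0.8, mic_amp=0.5)   # audience sound adds on top
print(quiet, loud)
```

With the music alone the level never drops to zero, which matches why we never see a completely static screen; audience sound then produces a visible jump above that baseline.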
As for the technical methods, I learned how to read media files downloaded from websites into Max using the “jit.gl” objects; how to change colour by changing data values; and how to change scale and position, such as zooming in and out, through data as well. Another important basic is that downloaded materials must be added to the search path so that Max can read them. Compared with the first assignment, the connections between objects, triggers, and input and output messages are more complicated. I often made mistakes while patching, so we practised many times to get it on the right track.
Overall, I feel fairly satisfied with our work, because I have gained a deeper understanding of media interaction and started exploring a new area of it: the non-tactile. What we achieved in this assignment is a piece that is genuinely interactive, in that the transformation of the animation can easily be seen following changes in the audience’s voice, and the animated effects give it extra appeal. I have also developed my understanding of the role of sound in interaction. However, some of our original objectives were not achieved in the end; we were forced to change our minds for technical as well as conceptual reasons. For example, we could not make the 3D figure interactive, so we changed to 2D animation. In my next assignment I want to explore this further and keep finding out how sound can work in media interaction.