4.6 Reflection

During the making of our work, I found it difficult to match every video in the same pace and the same frame. We filmed in three different locations; in each one I performed the same movement and the camera was set in the same position. In practice it is quite hard to repeat a movement exactly, and the camera position varied slightly each time. As a result, I adjusted the duration of some pieces of video and also edited the frame, trying to match each scene. In the end the scenes appear at the same angle and proceed at the same speed. Compared to project 3, the versions of the music are in the same key, so we can switch between the videos smoothly. We originally decided to use a specific icon as the thumbnail for each video: a tree for the woods, a water drop or wave for the harbor, a building for the city. But we could not place icons in Eko, so in the end we simply used words. It looks plain but is clearly understandable. The main concept of this work is to let the movement play continuously while the scene switches, so we put the figure in the same position in each scene. We achieved this, but in a very limited form: the whole video is only 50 seconds. Furthermore, the figure occupies only a small part of the middle of the frame, so the main sight is the environment, which fills most of the frame; the figure serves merely as the reference object for the transition.
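To make the transition mechanism concrete, here is a minimal sketch of how such a real-time switch can work, written as plain TypeScript for a web page rather than in Eko's actual authoring tools; the scene ids and file names are made up. All scene videos play in sync from the start, and switching only changes which one is visible, so the figure's movement never restarts.

```typescript
// Minimal sketch of the real-time transition (assumed names, not Eko's API).
const scenes = ["woods", "harbor", "city"];
const videos = new Map<string, HTMLVideoElement>();

for (const id of scenes) {
  const v = document.createElement("video");
  v.src = `${id}.mp4`;      // hypothetical file names
  v.muted = true;           // the shared music track carries all the sound
  v.style.display = "none";
  document.body.appendChild(v);
  videos.set(id, v);
}

const music = new Audio("music.mp3"); // hypothetical shared soundtrack

// Start everything together so all scenes stay in temporal synchrony.
// (Browsers require a user gesture first, so call this from a click.)
function startAll(): void {
  void music.play();
  videos.forEach((v) => void v.play());
}

// Switching scenes only toggles visibility; every video keeps playing,
// so the figure continues its movement from the same point in time.
function switchScene(id: string): void {
  videos.forEach((v, key) => {
    v.style.display = key === id ? "block" : "none";
  });
}
```

Keeping every video playing and merely toggling visibility is what makes the cut seamless; pausing and reloading on each switch would break the continuity of the movement.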

In response to the characteristics of online screen media, we have engaged with interactivity, modularity, variability, nonlinearity, and nonfiction content. The interactivity is reflected in the fact that the audience can view different locations depending on their choice. The work is modular and variable because it is a combination of several pieces of media, and each piece functions in relation to multiple other pieces. It is a nonlinear work: there is no storyline forcing the audience to follow a sequence, and they can access any scene by clicking the thumbnails. It also has nonfiction content: the setting is real, the visual content was filmed in reality, and the auditory content comes from musical instruments played in reality as well. In my opinion, our project is entirely web-specific, because it relies on real-time transition, which traditional media such as television cannot achieve.

There are several disadvantages in our work. An obvious one is that the performer (me) makes only small, simple actions so that they are easier to match, since we wanted the work to play continuously. But those small, short actions can seem boring; it would be better to use more interesting ones. In addition, we did not achieve cross-platform functionality. We had decided to leave a link to Facebook at the end for the comment section, but we dropped it because we could not find a way to make it work. Lastly, we still did not separate the music from the videos, though the result looks better than I expected thanks to Sam's beautiful songs. It would be more interactive if the audience could match the music and video clips on their own. These failings arose because we are unfamiliar with Eko; our biggest shortcoming is that we did not spend enough time studying the tool in depth, which left a few bugs in the finished work.

Looking back to the beginning of the semester, we left a couple of questions to explore through the studio experience, and I think we have resolved some of them through the project work, discussions, and research. Firstly, the biggest difference between small-scale online media production and traditional media is that the former is interactive: the viewer can select among different media fragments and react in real time. Traditional media is produced by formal, large-scale outlets, which can neither satisfy every audience nor interact with the audience. Secondly, online content interests people because it has many advantages: it is efficient, variable, selectable, real-time, interactive, diverse, and gapless. For example, we can switch to another media platform immediately just by clicking a hyperlink. To attract an audience through social media, we should define the target audience and provide as much of the information they want to receive as possible. There are thousands of media contents and media forms to choose from on the internet.

From Thinking in Fragments I learned many useful things, not only about the course theme but also about study methods in tutorials and collaboration. I want to thank Hannah and my classmates; they gave me a lot of help, and I have grown a great deal here.

4.4 Development #Week 12

For a further understanding of the influence of auditory and visual media, I did more research on the McGurk effect.

It is interesting that a mismatch between audio and visuals can produce an illusion. What you are seeing might be fake; what you are hearing might not be true either. In this case we can say that our eyes affect what we hear, because the brain attempts to reconcile what it thinks it is hearing with a sound closer to what it sees. The McGurk effect shows that visual information provided by mouth movements can influence and override what a person thinks he or she is hearing (Nierenberg 2017). While editing the video, I actually accelerated some sequences because we needed the movements in each location to match. But you can hardly see the accelerated parts, because the music plays at a constant speed, which "cheats" your eyes.
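As a rough illustration of that trick, the sketch below shows how a slightly long take could be sped up to fit a target duration while the music keeps its original pace. This is a generic browser approximation in TypeScript, assuming a muted video element and a separately playing soundtrack; it is not the actual editing workflow, and the numbers are made up.

```typescript
// Minimal sketch: stretch or squeeze a muted clip to a target duration
// while the soundtrack plays at constant speed (assumed setup, not the
// project's real editing pipeline).
function matchDuration(video: HTMLVideoElement, targetSeconds: number): void {
  // Wait for metadata so video.duration is known, then adjust the rate
  // just enough for the clip to cover the target duration. Because the
  // clip is muted, the steady music masks the speed change.
  video.addEventListener(
    "loadedmetadata",
    () => {
      video.playbackRate = video.duration / targetSeconds;
    },
    { once: true },
  );
}
```

For example, a 55-second take squeezed into a 50-second slot plays at rate 1.1, which is hard to notice while the soundtrack stays steady.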

According to Nierenberg (2017), when people watch media or communicate with others, the brain receives auditory speech and visual speech and puts them together to form new elements; this is called a multisensory illusion. What if we put the same version of the music into different environments? Would the audience have the same feeling in each of them? I think it would work, but it would reduce the sensory experience of each scene, because the media fragments interact with one another. Different music matched to different environments produces a more vivid and convincing scene, so the audience can become more involved.

4.3 Development #Week 11

Getting feedback is the most helpful thing for saving me from hell (even though we had no footage to present yet). Last time we were shown the Bob Dylan "Like a Rolling Stone" project, which inspired our development. We had felt lost in Eko (the tool we are going to build with): we thought we could not separate the sound and video into different channels. This time, another group who are also using Eko told us there might be a way to do it, though we still need to figure it out in the coming days. I also received Hannah's feedback on assignment 3, which reminded me to reflect the features of online media production, modularity and variability, more effectively. Our production basically has three videos, each lasting a minute. In Eko the audience can watch the next video by clicking an arrow button; since we want the work to be more variable and interactive, we will use option buttons (thumbnails) instead of arrows. Currently we have two options: if the music and video can be separated, we will make two groups of thumbnails, one for choosing the music and the other for choosing the video; if they cannot be separated, we will make one group of thumbnails for accessing the three videos. Variability means each piece of media can play in relation to other media fragments; our work reflects this feature because it contains multiple elements within one interface. Modularity means each piece of media can be viewed on its own; our elements function individually, so the work is modular as well. In addition, it is interactive and nonlinear, and it has nonfiction content: the audience can select different videos through the thumbnails, each video can be watched separately because there is no storyline, and the scenes are filmed in reality. The versions of the music are the same song played on different instruments, and we can shift smoothly between them because they share the same key and tempo. To achieve cross-platform functionality, we might add a comment section connected to Facebook, probably by placing a Facebook link in our Eko web interface.
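If the channels can be separated, the two-thumbnail-group option would behave roughly like the sketch below. This is a plain-TypeScript approximation of the idea using standard DOM media elements, not Eko's actual API; the file names and the pickMusic/pickScene helpers are illustrative assumptions.

```typescript
// Sketch: one thumbnail group picks the music version, the other picks
// the scene, and each piece stays modular (assumed file names).
const musicTracks = ["guitar.mp3", "piano.mp3", "cello.mp3"];
const sceneClips = ["woods.mp4", "harbor.mp4", "city.mp4"];

const audio = new Audio();
const video = document.createElement("video");
video.muted = true; // sound always comes from the chosen music track
document.body.appendChild(video);

// Swap in a new source while carrying the playback position over, so the
// audience can re-combine any music version with any scene mid-play.
function swapSource(el: HTMLMediaElement, src: string): void {
  const t = el.currentTime;
  el.src = src;
  el.addEventListener(
    "loadedmetadata",
    () => {
      el.currentTime = t;
      void el.play();
    },
    { once: true },
  );
}

const pickMusic = (src: string) => swapSource(audio, src);
const pickScene = (src: string) => swapSource(video, src);

// e.g. wired to thumbnail clicks:
pickMusic(musicTracks[0]);
pickScene(sceneClips[2]);
```

Because the audio and video elements are independent modules, any music version can be recombined with any scene, which is exactly the extra interactivity we hoped the separation would allow.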

4.2 Development #Week 10

What we developed this week is confirming the theme. Instead of focusing on changes in facial expression, we decided to change the location while keeping the focus on music. We originally had five settings: forest or woods, beach, living room (indoor), a busy area (such as a shopping centre or train station), and a school or basketball playground. We would then film one person repeating the same series of actions in each setting. The action might be drinking coffee, taking a photo, stretching or yawning, making a phone call, or reading; it should be a small movement so that we can repeat it exactly. We need to set the camera in the same position in every location and time the action to match. The aim is to let the audience see the same action continue while the background location switches in real time. And I think it would be more interactive to separate the soundtrack from the silent video and let the audience match either of them (if we can figure out how to achieve it). We want to use helloeko.com to make our project.

This time we are concentrating more on the interoperation between visual and audio representations, so I have done some research on audio-visual relationships. Abel and Hussain's book (2015) reports that a multimodal framework (I think online media production can be characterized by this term) can deliver positive results in noisy speech environments through the use of audio-visual information, depending on environmental conditions. In other words, the environment has a big influence on the transfer of audio and visual information. The book contains theories, experiments, and other useful content. One relevant theory is the McGurk effect, essentially a phenomenon in which a conflict between the sound and the visual movement of the mouth generates a deviation between hearing and vision. One factor in this phenomenon is visual distraction: if the audience concentrates more on the vision, their auditory perception is distracted. Another factor is temporal synchrony: if auditory stimuli are received more readily than visual stimuli, synchrony is reduced and a deviation is produced. Ideally we want to separate the soundtrack from the video so that the audience can explore the relationship between them; I think it must be interesting to compare different sounds applied to different visuals. What's more, the action is continuous throughout the project: the figure continues its action coherently when you switch to another location, which shows the versions of the music in different environments more clearly. The videos stay in temporal synchrony because the soundtrack and the video start at the same time and play at the same pace.

4.1 Development #Week 9

In the first week of the final work, we thought about how to develop the last project, discussed it with other groups, and gathered suggestions. We want to change the mode. In the former project, each version was shown as a separate whole video; if viewers chose another video, they had to watch it from the start. What we want to do is change this to a real-time transition, meaning viewers can change the video at any time and see the other one from that same point. The advantage is that viewers can compare the different emotions more easily, since they see each version at the same point in time while skipping the parts they have already watched. What's more, we might add a comment section at the end through another platform, which Korsakow cannot achieve. We might also change the theme, because making breakfast has many limitations; dancing or other more emotive actions might be more attractive (Sam thought about lighting). Then we started talking to other groups and received some useful advice. The other "music" group decided to change their theme instead of continuing with music. They showed us a good example of a project made in Eko. The soundtrack continuously plays Bob Dylan's "Like a Rolling Stone", and the viewer can change the video channel on screen by clicking the icons. It is interesting because we can see how the same song affects different scenes and produces different effects. The most interesting thing is that it achieves lip-synchronization in every version. From the project we can see how the same music fits different kinds of figures and environments. Below the video there is a section indicating the channel numbers and themes, as if we were watching television and changing channels.

Besides, Sam found another project similar to what we imagined (see the screenshot below).

It shows the view of three elevators; the audience can click any of the three numbers below to choose which elevator to enter and continue a different story. I feel it is very interactive, since we can react based on our choice instead of being a passive audience just watching the video play.