What we developed this week is that we confirmed the theme. Instead of focusing on changes in facial expression, we decided to change locations, still in relation to music. We originally have five settings: forest or woods, beach, living room (indoor), busy areas (such as a shopping centre or train station), and a school or basketball playground. We would then film one person repeating the same series of actions in each setting. The actions might be drinking coffee, taking a photo, stretching or yawning, making a phone call, reading... Each should be a small movement so that we can repeat exactly the same thing easily. We need to set the camera in the same position in every location and plan the actions so the timing matches. The aim is to let the audience watch the same action while switching the background location in real time. I also think it would be more interactive to separate the soundtrack from the silent video and let the audience match either of them (if we can figure out how to achieve it). We want to use helloeko.com to make our project; a rough sketch of how the switching and syncing might work is included at the end of this post.

This time we are more concentrated on the interplay between visual and audio representations, so I have done some research on audio-visual relationships. Abel and Hussain's book (2015) reports that a multimodal framework (I think online media production can be characterised by this term) can produce positive results in noisy speech environments by using audio-visual information, depending on environmental conditions. In other words, the environment has a big influence on how audio and visual information is transferred. The book contains theories, experiments and other useful content. One relevant theory is the McGurk effect: a phenomenon in which a conflict between the sound and the visual movement of the mouth generates a deviation between what is heard and what is seen. One factor in this phenomenon is visual distractors; if the audience concentrates more on the vision, their auditory perception is distracted. Another factor is temporal synchrony; if the audience receives the auditory stimulus more readily than the visual stimulus, the synchrony is reduced and this produces a deviation.

Ideally we want to separate the soundtrack and the video so that the audience can work out the relationship between them. I think it will be interesting to compare how different sounds apply to different visuals. What's more, the action is continuous throughout the project: the figure carries on its action coherently when you switch to another location, which shows more clearly how the versions of the music work in different environments. The videos stay in temporal synchrony because the soundtrack and the video start at the same time and play at the same pace.
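To get a feel for the mechanics before we try it in Eko, here is a minimal sketch of the idea in plain TypeScript for a web page: one continuous soundtrack, several muted videos of the same action filmed in different locations, and a switch function that re-aligns the newly shown video to the soundtrack's current time so the action stays continuous. This is only my own illustration under assumptions, not Eko's actual workflow; the file names, location list and element handling are placeholders for whatever we end up shooting.

// Minimal sketch: one shared soundtrack plus several muted location videos
// of the same action. Switching locations swaps which video is visible and
// seeks it to the soundtrack's current time, so the action carries on
// without a jump. File names are placeholders, not our final assets.

const LOCATIONS = ["forest", "beach", "livingroom", "station", "school"];

const soundtrack = new Audio("soundtrack.mp3"); // the shared music track

// Create one <video> per location, all muted and hidden except the current one.
const videos = new Map<string, HTMLVideoElement>();
for (const name of LOCATIONS) {
  const v = document.createElement("video");
  v.src = `${name}.mp4`;   // same action filmed in this location
  v.muted = true;          // sound comes only from the soundtrack
  v.preload = "auto";
  v.style.display = "none";
  document.body.appendChild(v);
  videos.set(name, v);
}

let current = LOCATIONS[0];

// Start the soundtrack and the first location together so they share a timeline.
export async function start(): Promise<void> {
  const first = videos.get(current)!;
  first.style.display = "block";
  await Promise.all([soundtrack.play(), first.play()]);
}

// Switch to another location: hide the old clip, seek the new clip to the
// soundtrack's current time (temporal synchrony), then show and play it.
export function switchLocation(name: string): void {
  if (name === current || !videos.has(name)) return;
  const oldClip = videos.get(current)!;
  const newClip = videos.get(name)!;
  newClip.currentTime = soundtrack.currentTime; // keep the action continuous
  oldClip.pause();
  oldClip.style.display = "none";
  newClip.style.display = "block";
  void newClip.play();
  current = name;
}

In use, each on-screen button (or Eko choice node) would just call switchLocation("beach"), switchLocation("forest") and so on; because every clip is seeked to the soundtrack's time before it is shown, the figure appears to keep drinking coffee or reading while only the background changes.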