Assignment 4 Reflection and Demo

Our project achieved what our last one attempted: giving the audience control over how they perceive a scene. I am very happy with how these two projects developed, and how the things we learned from assignment 3 positively influenced how we decided to approach assignment 4. With this project, we did a much better job of taking advantage of the possibilities offered to us by online spaces. By using Eko Studio we were able to create a flowing piece of entertainment while also allowing viewers to choose how it unfolded in real time, which was the goal from the beginning of assignment 3.

The footage was filmed very nicely by Trista, who ensured that the framing, timing and positioning within each shot were exactly the same; this consistency was the fundamental element our entire project relied on. The lighting was consistent across all three locations, and they were all tonally neutral, which is what we needed for our goal of making an interpretive piece to work. Something that could be improved if we were to revisit it is making the footage one uninterrupted take instead of using cuts and edits added in post-production. Some parts of the footage had to be sped up or trimmed to ensure that the movements seen in each scene were synchronised. Because of the circumstances and restrictions on the day of filming, getting the timing right was harder than initially thought, but it is definitely something that could be fixed if revisited. I think a project like this would also benefit from having more locations and movements, as well as an extended run time. Filming a short film in this style would be extremely interesting to watch and interact with, and would better demonstrate how modular filmmaking could work. This would obviously be extremely time- and resource-consuming, as it would involve shooting and scoring the same film three times, but it is definitely something I would like to attempt in the future.

The audio aspect of this project is something that I really wanted to develop further based on the feedback from our last assessment. I wanted to write an original piece of music that smoothly transitioned as the viewer changed scenes, and I half succeeded. Because of limitations in the Eko Studio software we were using, we could not find a way to swap both the visuals and the audio; we were limited to visuals only. Luckily, since I ensured that all the songs were synchronised (much like the footage was), I was able to blend them into one cohesive melody that plays over the footage. While this isn't what we wanted our final product to be, it still has a huge effect on how the footage comes across, and it is also interesting to hear how three different modular parts can come together to become one single tune. However, I did edit a demo video showing how the concept could ideally work; it came across very well and got a good reaction from the class. Before we present our project next week I would like to keep looking into ways to make the audio modular as well, because I think it will add a significant amount of freedom for those who interact with it. Another thing I am happy with regarding the audio is how emotionally subjective it is. Different people who have listened to it have said the music makes them feel different ways: happy, sad, peaceful, etc. This is very nice to hear, as it means this project dictates far less how the audience is supposed to feel compared to assignment 3, and since the goal of our project was to create media that viewers could interpret however they want, I think this shows we succeeded in that regard.
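As a rough sketch of what the modular audio could look like if a platform supported it, an equal-power crossfade would keep the combined loudness steady while one synchronised stem hands over to another. This is a hypothetical illustration of the concept, not something Eko Studio provides:

```javascript
// Hypothetical sketch of modular audio switching: as the viewer changes
// scenes, the outgoing stem fades out while the incoming one fades in.
// Using cosine/sine gain curves keeps the total power constant, so the
// mix never dips or spikes mid-transition.
function crossfadeGains(t) {
  // t: transition progress from 0 (old stem only) to 1 (new stem only)
  const clamped = Math.min(1, Math.max(0, t));
  return {
    out: Math.cos((clamped * Math.PI) / 2), // gain for the outgoing stem
    in: Math.sin((clamped * Math.PI) / 2),  // gain for the incoming stem
  };
}
```

Because the two gains always satisfy out² + in² = 1, the blend at any midpoint of the transition carries the same energy as either stem alone, which is why this curve is commonly preferred over a straight linear fade.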

Our project is definitely web specific, and it shows that there is untapped space out there for a site that can support and host the kind of content we want to make. Our project was slightly limited by the platforms we currently have access to, and I would personally like to see a platform made that operates similarly to how our demo video functions. Being able to swap between camera angles is already being implemented on YouTube and live-streaming sites such as Twitch, and if creators were given more options to make content like we intended, we would see some very interesting new projects and films we haven't seen before. Because of the nature of our project it could not exist anywhere but the web; its modularity isn't something that can be achieved effectively any other way, but it is definitely something that could be built into modern sites, and I think giving audiences this small form of interactivity is a good way to keep them engaged without compromising the intentions of the original content.

While there are certainly ways our project could be taken further, I am very happy with what we have been able to come up with, and am excited to further develop the concept on a bigger scale in the future.

https://drive.google.com/file/d/1g7WS3T_V25mt-SX1gFX2_3o82MiDn7s3/view

Assignment 4 Blog Post 4

Something our last project showed us was how this new form of media making can influence the creative process. We executed our previous assignment in a way that was very reactive to what the other members of our group were doing, and assignment 4 is no exception. This time, however, instead of the footage being made in response to the audio, it was the opposite. Music was composed for the project prior to filming, but once the footage was shot I felt the music could be changed to better fit what we were trying to achieve. This is not a luxury found in larger-scale productions, where everything has to be pre-planned and the plan adhered to throughout the creative process. By making our project so modular, we were able to remove, add and change elements as we pleased depending on what felt best at the time, which is a new method of creating that I hadn't experienced until working on these latest assignments.

This way of creating also encourages creativity, as it forced us to think on our feet more and adapt to what other members were doing. Allowing our initial plans to develop resulted, I think, in a final product far more interesting than what was initially intended, and by giving more control to the viewer I think we have made something that strikes a nice balance between being a pre-made piece of entertainment and a form of expression for whoever interacts with it. The question presented to us in week 1, "How is the production for a smaller scale project different from traditional media?", is very relevant to our group, and I think the creative process we've been through in making this project demonstrates the difference perfectly.

Assignment 4 Blog Post 3

Interactivity is fundamental to this project, and we wanted to find a way to allow viewers to have more of an effect on the tone of our piece. This is something we wanted to do in our last project, but the software we were using was not designed for the kind of interactivity we had in mind. While searching for software to help us achieve our goal, we came across a tutorial video (https://www.youtube.com/watch?v=sih8yfFBWqA) that came fairly close to what we were trying to achieve. We ultimately ended up using an updated version of this software, called Eko Studio, for our project. This program allowed us to have three different videos playing at the same time and let viewers click between them. Only one video can be seen at a time, so the software essentially allows participants to make live edits within a video. This is the concept we wanted to show, and I think it could be taken further in the future by creating content designed in a way that allows edits to be made by the viewer without compromising the experience.
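The core idea the software gave us, synchronised takes sharing one playback clock so that switching never breaks continuity, can be sketched in a few lines of JavaScript. The names and structure here are hypothetical illustrations of the concept, not Eko Studio's actual API:

```javascript
// Sketch of the "live edit" concept: three synchronised takes share a
// single playback clock, so swapping which take is visible never resets
// or skips the timeline; continuity between locations is preserved.
class SyncedSwitcher {
  constructor(trackCount, now = () => Date.now() / 1000) {
    this.trackCount = trackCount;
    this.now = now;          // injectable clock (seconds), for testability
    this.active = 0;         // index of the take currently shown
    this.startedAt = now();  // all takes began playing at this moment
  }
  // Seconds into the piece; identical for every take by construction.
  position() {
    return this.now() - this.startedAt;
  }
  // Swap the visible take. Playback position is untouched, mirroring how
  // the synchronised footage lets the viewer cut between locations.
  switchTo(index) {
    if (index < 0 || index >= this.trackCount) {
      throw new RangeError("no such take");
    }
    this.active = index;
    return this.position();
  }
}
```

The design choice worth noting is that the switch only changes which take is displayed; time itself belongs to the shared clock, which is exactly why the identical framing and timing in the footage mattered so much.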

Another good example of this being implemented in a game is the original Assassin's Creed from 2007 (https://www.youtube.com/watch?v=guBvc-RPMnY). The majority of cutscenes in the game gave players the option to change the camera angle, allowing for a cinematic experience tailored to their mood. I think it would be interesting if more projects were filmed with this approach, and if there were a website or software to fully take advantage of it. Our project is a simplified version of this. It is not a narrative piece, but it gives viewers the flexibility of viewing something in a way they see fit. The dynamic music further achieves this, and hopefully every person who views our project will leave with a different experience based on the combination of location and music they chose.

Assignment 4 Blog Post 2

Whereas the last project was made up of eight distinct interpretations of one song, this project is condensed down to three, but played at the same time they form one song. Viewers can then choose which components of the audio they want to hear at any given time, giving them an experience more tailored to them. The visuals of this project follow the same sentiment. We filmed the exact same scene using the exact same camera angles, with the exact same positioning of the main character, in three different locations. The effect of this is continuity between the takes, and the synchronised actions of the character in all three versions mean that the viewer can transition between the various locations without losing any continuity. Achieving this was very important to us when filming, as the viewer's immersion would be interrupted if continuity were broken.

Our last project was also filmed in the same location for all takes. While the subject of our previous videos gave an emotional performance to indicate to the audience what she was feeling, we wanted to make it more ambiguous this time around, and instead of relying on acting to get the mood across, we opted for the locations to convey this instead. Another thing we decided was to "not have a correct answer". Our previous footage was very clearly happy, sad, angry, etc. based on Michelle's performance, and left little up to the imagination. This time we made our footage even more neutral, showing our main character doing simple tasks such as drinking, stretching and answering the phone, which allowed the music and scenery to have more of an effect on the viewer.

The environments and music were also deliberately not made to convey a particular feeling, and were paired in a way that allowed the tone to remain ambiguous. The footage was shot at a flat angle with the subject facing away, meaning there was no visual stimulus for the audience other than the environment itself. The music was composed as a happy chord progression played in a sad way: the notes themselves are ones typically found in happy music, but the expressiveness of how they are played is closer to that of "sad" music. Having these two contrasts creates a conflicting tone in the music, and the idea behind this is that each person will react differently depending on who they are.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4038858/

Assignment 4 Blog Post 1

For our final project, we decided to further develop our previous idea of how aural and visual components can influence the tone of a scene. For this iteration, however, we went in with the goal of creating something a lot more ambiguous, and did not want to dictate the experience viewers would have like we did with our last assignment. We also wanted to further explore the ideas of interactivity and modularity, and create an experience that was unique depending on who was interacting with our project. To do this we decided to try to make all the components of our project modular, and give viewers the choice of transitioning between the different states of the video whenever they saw fit, rather than restricting them to a linear story like we did last time. A big inspiration for this assignment was the dynamic musical scores in video games.

A series of games that showcases this very effectively is the Super Mario franchise, which transitions the tone of its music based on where the player is in the game: if the player goes underwater, the music will reflect this; if they fly up into the sky, the music will adapt, all while maintaining the same melodic characteristics and structure, which makes it sound like one big evolving piece of music. An even better specific example is the 1998 game Banjo-Kazooie, which does the same thing but varies the theme across many more locations; the song stays the same, but the tone changes depending on which area of the level you are in (https://www.youtube.com/watch?v=-JAcswUJAFE). It is a really cool technique that helps the levels in the game feel both unique and interconnected.

This is what we want to achieve for our final product, albeit on a much smaller scale. These games usually implement "loops" into the structures of their songs so that the song won't end while the player is still exploring (Young, p. 8). This might be something we need to consider with our music depending on the length of our finished footage.
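The looping idea can be sketched as a small timing function: play an intro once, then wrap around inside a loop region for as long as the viewer keeps watching. The function and its parameters are a hypothetical illustration, not taken from any particular game engine:

```javascript
// Sketch of game-style music looping: the song has a one-off intro of
// introLen seconds, followed by a looped section of loopLen seconds that
// repeats indefinitely so the music never ends while the viewer lingers.
function loopedPosition(elapsed, introLen, loopLen) {
  // Before the loop point, play straight through the intro.
  if (elapsed < introLen) return elapsed;
  // After that, wrap around inside the looped section forever.
  return introLen + ((elapsed - introLen) % loopLen);
}
```

For example, with a 5-second intro and a 10-second loop, 16 seconds of real time maps back to the 6-second mark of the recording, so the piece can outlast footage of any length without audibly stopping.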

https://etd.ohiolink.edu/rws_etd/document/get/ouhonors1340112710/inline