ASSIGNMENT #5.2 STUDIO REFLECTION

1> From your studio, reflect on an aspect of two other students'/groups' media work on the website in terms of specific insights they produced about a key idea addressed by the studio.

 

Dream Scape

I thought the Dream Scape short film effectively addressed the prompts of the course, namely through its use of a device which can transport the user into another world, improving their life for the better, at least initially. The main character, off to her boring job, is clearly unsatisfied with her life, and this is communicated very well through the narrow, almost portrait-style aspect ratio, as well as the greyscale colour grading of the first act. Once the character acquires the 'Dream Scape' technology, there is a jump-cut shift into fully saturated colour, close up on the character's face, and the aspect ratio widens to 16:9 as the camera zooms out, indicating that she is in a new and wonderful world where she meets her guide, Piper, one which could potentially cure her of her depressing existence. In this way the film shows how technology of this kind could, in theory, help with a specific world problem such as depression in the fullest sense: she has been transported to a completely realistic and happy locale where she is free from these feelings. The film also comes with a cautionary element, however, as she soon finds herself in a black liminal space in her subconscious, where she wakes up alone and scared, before ultimately deciding to return to her real life.

 

The Kebab Van at the End of the Universe

The film's disembodied voice seemed to be the XR element of the piece, egging the Physicist on to escape the dark, drug-fuelled mayhem in which she resides. This responded to the prompt about changing the way we live, and also to the prompt about improving people's quality of life. Although it was not entirely clear what the outcome for the Physicist was, it seems that this disembodied voice was aiming to help her escape her current life and come to terms with the death of her loved ones, thereby improving her quality of life. The film's use of an authoritative voice guiding the Physicist through a rough patch in her life could be something that occurs in the future, and is not dissimilar to my own film's premise in that way. The voice provides the Physicist with the guidance required to improve her own life, and the ending, with her walking into the light, implies that this guidance was at least somewhat successful.

 

2> Choose one other studio from this list (we suggest selecting a studio that you would not normally be interested in). Then describe a key idea that you think the finished media/studio work communicated, with reference to two specific examples (i.e. particular individual/group works).

 

People + Places

 

Intoxicated (Jaden Arendtsz)

This film was about Sylvie, the bass player of the all-female punk group 'Toxic Shock', and her experiences playing in the band, as well as her ongoing legacy in starting one of Australia's first sexual assault support centres. The ethical considerations were addressed very well through the use of on-screen graphics and photographs, and as a result her story was told effectively in her own words, with no additional editorial element apart from the interviewer being audible at one point. This ensured the story was told truthfully and ethically. From a technical point of view, the aforementioned graphics and photographs were very effective in telling the story, as was the inclusion of snippets of the Toxic Shock songs being discussed, immersing the viewer further in the piece.

 

Intermission (Olivia Barnes and Claudia Schenck)

This film focused on Sofia, a dancer who had quit the competitive side of her dancing and now danced only for her own recreation and emotional outlet. It addresses the prompt of the studio through the ethical considerations of interviewing a girl about her dancing talent despite her having decided to give it up. The documentary navigates this well, never pushing its subject too far, instead simply providing an accurate representation of her current mindset in relation to her dancing. Her point of view regarding the expectations of perfectionism and the large amount of pressure placed on children in these dancing competitions was also made clear. Technically, the film allows Sofia to tell her story to camera, but with a large focus on footage of her dancing while she discusses the topic in voiceover. In this way we are able to see her expertise while hearing what she has to say about her childhood and her reasons for no longer dancing competitively.

 

ASSIGNMENT #4 REFLECTION

In what ways do you hope your final work (whether individual or group produced) engages its audience and communicates a key concern of the studio?

In terms of the final short film that my group produced, I hope that it gets the audience thinking about the positive effects that VR/XR could have on the world, rather than the negative effects which are typically the focus when the media discusses these technologies. This was a conscious decision from the start for my two trailers as well, but I feel the short film shows these potential benefits best. This kind of technology is improving at an exponential rate, as I saw from the many advanced programs and technologies we explored throughout the semester, so in my trailers and then in the group short film I felt the freedom to choose topics that were quite 'out there' in terms of present-day realism, but which may become possible in a relatively short amount of time (with the exception of my time travel device). The idea of an AI 'clone' that takes in your current personality and life choices, then filters them through an 'ideal scenario' lens, is something that potentially isn't that far off, though its actual effectiveness would be another matter. We wanted to make sure the technology was presented in a positive light and that any negativity came through the misuse of the technology. We initially planned to have the character over-using the pills for a negative result, but settled on him simply running out, mainly because we did not have enough time in the 7 minutes allotted to show this in enough detail; we used a montage instead to show the clone helping Peter to improve his life as he gradually runs out of the pills.

Imagine you are going to keep working on that media piece (e.g. to screen it somewhere else like a festival, or develop it into a different kind of work, and so on) – what would be the core things you want to improve and extend and why?

I think I would explore the idea of the clone becoming unethical when Peter double-doses the pills, and I would also add scenes where Peter attempts to locate another supply of the pills once he runs out. I was very happy with how the split screen shots turned out, but I would also utilise green screen shots for more variety in the Peter-and-clone shots. This would allow handheld or panning shots rather than just completely still tripod shots, let Peter and his clone pass each other within the same shot or appear to interact with the same object, and make for a more exciting visual element when the clone appears, bursting out from within Peter with glow and motion blur effects, rather than just appearing off-screen, which we felt was the most realistic way of doing things in the time we had for shooting and editing. We avoided this in the interest of keeping the shots as realistic looking as possible, but with more time and/or budget I'm confident we could get green screen shots looking just as good as the stable tripod shots. I would also further develop Peter's struggling love life, perhaps showing him on a date with the clone giving him advice. I feel this would be a great scene comedically, with Peter responding directly to the clone's advice and then a cut to a wide shot where the clone isn't visible and his date is looking perplexed at Peter essentially talking to no-one. Again, we avoided developing this due to time constraints and to minimise the number of characters/actors required.

  • You will present all that you’ve worked on since Week 8 – your pre-production, experiments, images, clips, scenes, tests 
  • This could include the draft edits, sound mixes and colour grades – and of course, the reflection associated with it
  • More scene deconstructions and analysis most welcome.

All above will need to be reflected upon and contextualised considering the studio prompt, brief and aims

  • At least 300 words of the 800 words must be on Collaboration (over the whole semester).  Working individually is just as valid a thing to write about as group work. Appraise how you went with it this semester – its pitfalls, upsides (discuss group work done during weekly activities). 

In the above image we are preparing for the scene where Peter first interacts with his clone. We shot the scene twice, with myself sitting in and saying the alternate lines, to be replaced later in the edit: we overlaid the clone onto the footage of me by putting a feathered opacity mask around the clone, which, due to the lighting changing on the day, gave the clone an 'aura'. We considered colour grading to match the footage so that there would be no noticeable difference, but eventually decided this worked from a story point of view, as the clone is 'hyper-real', so we left it as is.
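For anyone curious what that feathered-mask composite looks like outside of Premiere Pro, here is a minimal sketch of the same idea in Python with OpenCV. The file names, mask position and feather radius are illustrative placeholders, not our actual project settings.

import cv2
import numpy as np

frame_a = cv2.imread("take_a_frame.png")   # placeholder: frame from Peter's take
frame_b = cv2.imread("take_b_frame.png")   # placeholder: frame from the clone's take

h, w = frame_a.shape[:2]

# Hard elliptical mask around where the clone stands in take B (guessed geometry).
mask = np.zeros((h, w), dtype=np.float32)
cv2.ellipse(mask, (int(w * 0.7), int(h * 0.55)), (int(w * 0.15), int(h * 0.4)),
            0, 0, 360, 1.0, -1)

# Feather the edge with a wide Gaussian blur; the soft edge hides small
# lighting mismatches between the two takes (and produces the 'aura' we kept).
mask = cv2.GaussianBlur(mask, (0, 0), 25)[..., None]

composite = (frame_b * mask + frame_a * (1.0 - mask)).astype(np.uint8)
cv2.imwrite("clone_composite.png", composite)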

The location of the above scene, as well as the ones where Peter steals back his bike, suffered from a high level of noise from trains and cars in the background. We combated this by re-recording wild lines, to which we then added convolution reverb, and by using denoise and vocal enhancer plugins to minimise the traffic noise and bring out the dialogue.
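As a rough illustration of what the convolution reverb step is doing under the hood (we used a plugin, not code), here is a sketch in Python with SciPy, assuming mono WAV files at the same sample rate; the file names and dry/wet blend are placeholders.

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, dry = wavfile.read("wild_line.wav")                  # placeholder file
_, ir = wavfile.read("location_impulse_response.wav")      # placeholder file

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)

# Convolving the dry re-recorded line with the room's impulse response
# 'places' it back in the space it was meant to have been recorded in.
wet = fftconvolve(dry, ir)[: len(dry)]

mixed = 0.8 * dry + 0.2 * wet           # dry/wet blend, to taste
peak = np.max(np.abs(mixed))
if peak > 0:
    mixed /= peak                       # normalise to avoid clipping

wavfile.write("wild_line_reverbed.wav", rate, (mixed * 32767).astype(np.int16))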

The above shot of Peter riding the bike down the road was achieved by me driving the car with Jen sitting in the back with the tailgate open. We were all very happy with the result, using it a few times throughout the film and showing it in all its glory for the end credits scene. We felt it was the most filmic shot we got and so wanted to utilise it as much as we could. In hindsight I would have liked to get more of these shots in different locations.

We planned a shot of the clone jumping out of Peter for a grand entrance when we first see the clone, but for shooting and editing simplicity we omitted this. We also felt that the actual reveal, with the clone initially speaking to Peter from off camera, was more engaging than a dramatic special-effects-driven entrance, which would only have been visually engaging rather than driving the story.

Above is a screenshot from Ableton of some of the original score I recorded for the film. I wanted a meandering, aimless score for the intro, reflecting Peter's life at the start (and to a lesser extent the end) of the film; an intense, fast-paced score for the scene where he is running late for the medical centre and for the ending where he is debating whether to stay there; and a very happy-go-lucky, upbeat score for the montage where the clone is helping Peter to improve his life.

Jen and Ryan added several sound effects to the film, such as the typing sounds, the bike lock, the bleep censoring the swear word at the end of the narration, and, most importantly in my opinion, the tape-stop and whirring noise when the clone 'paused' the film to speak directly to the audience, an idea inspired by The Emperor's New Groove (2000).

The colour grading, I feel, was key in giving the short film a professional 'filmic' look. It created cohesion between scenes, especially between those shot on a sunny day and those shot on a cloudy one, allowing us to reduce the stark difference between them somewhat. It also allowed us to visually illustrate the bleakness of Peter's life before the clone, and the happiness he feels once the clone begins to help him, particularly in the montage sequence.

Collaboration

In terms of collaboration, our pre-production initially consisted of Angus and me writing something of a screenplay: not quite a full script, but an outline of the action within each scene along with any accompanying dialogue. Writing together was far better than writing alone, and being able to bounce ideas off each other was extremely beneficial. Through discussion we were able to work out the rest without the need for a full script. We felt this was the best way to conceptualise the film in greater detail, beyond our initial discussions with Jen and Ryan around ideas of perfectionism, a utopian life/existence, and Elon Musk's Neuralink.

It is possible that further storyboarding would have prevented a couple of the errors we made, such as missing a line or two of dialogue while shooting. However, this turned out to be a happy accident, as the missed lines necessitated having the clone narrate the film, which I felt was highly effective in engaging the audience as well as adding more comedy. This narration was not part of the film until quite late in the editing process, but after discussing it together we felt it would really tie the narrative together in addition to these other benefits. This is an example of how the group work really elevated the final product.

By working together, particularly on the shooting, I feel the shots we got were far better than if any one of us had been doing it alone. In the editing process it was great getting a few sets of eyes on the project, reducing 'edit fatigue' and providing perspectives I never would have had on my own, which allowed us to be more ruthless with the editing. We were also able to divide up responsibilities, with Jen and Ryan mainly focusing on audio editing, and Angus and me focusing on the assembly of clips and precise clip editing. I can sometimes find it hard to relinquish control over a project, as it becomes my 'baby', but my group for the short film really gelled, and we were able to fluidly play our roles without stepping on each other's toes.

In terms of in-class collaboration, I was somewhat disappointed with the lack of input from a lot of other class members, with only a select few consistently contributing. Having said that, I still enjoyed and benefited from the discussions and debates that did occur with those who contributed, and found them useful not only for the group short film, but also for my teaser trailer and trailer assignments.

Write one reflection on, or response to, the content of the Week 9 presentation of student work other than your own.

I thought the strength of Cem's presentation of his film idea was that the raw live-action footage had the potential to be quite easy to shoot, with only one actor and one location. I did wonder, though, whether the film would be engaging without any other visible characters in such a confined location, with only disembodied voices accompanying the character and minimal visual action from the actor within the kebab van. The story and the genre of the film both seemed somewhat vague to me, but perhaps this ambiguity was the point. The structure, in which the first part is a drug-induced dream/hallucination before the Physicist walks into the light, was also very open-ended, and reminiscent of the 'it was all a dream' trope that one would generally want to avoid. But the very serious subject matter of the 'real world' second half, relating to the car accident and the death of the Physicist's partner, could become even more engaging if it doubles down on the realism and specificity of the Physicist's situation, starkly contrasted with the ambiguous drug-induced fantasy of the first half.

The CG elements did sound very interesting, but I also wondered how effectively they could be executed without breaking immersion for the audience, as they were very high-concept ideas, such as dinosaurs with tentacles in a prehistoric jungle, outer space, and various other locations, which could be very difficult to render realistically, especially against such serious subject matter. I believe lower-end CG elements are more forgivable in a comedic setting, such as in a film like Mars Attacks! (1996), but in a serious film, if the CG were as unconvincing as the weaker elements of the Star Wars prequels, the immersion would be broken and the viewer would disengage far more easily. If the green screen elements and associated sci-fi locations are done to the point of photo-realism, then they could be very visually effective, helping to 'sell' the story as well, but a huge amount of time would need to be devoted to this, which as a one-man show could be difficult. Then again, Cem appears to have more of a background in this type of work than I do, so it may be easier to achieve than I expect.

Similarly, relying on just one visible actor (as well as the off-screen dark force character) may be asking a lot of that actor, with the potential for the film to become visually boring and for the audience not to find the character believable or feel empathy for them; but again, if the actor can pull it off, it could be very engaging. Using Unreal Engine and Blender for the CG elements/backgrounds could be effective, but I feel it could also be jarring when overlaid with live-action footage: although such imagery looks fairly realistic, personally I don't think it is at a truly 'real' level yet, based on what I've observed. Despite these concerns, I do feel the film has the potential to be very good and emotionally effective if these pitfalls are avoided.

ASSIGNMENT #2 REFLECTION

What you were trying to achieve in terms of critically communicating about extended reality (XR) media and the method in which the editing process was used to attempt this

This time, I began to think about the furthest reaches of what AI/XR could achieve in the future. Initially, I thought of a device that generates 'dialogue options' in real time, with a percentage indicating which one is likely to be best, to ensure you are saying what the other person wants to hear. This was based on Azuma's definition, paraphrased by Hollerer and Schmalstieg (2016), that AR requires 'precise real-time alignment of corresponding virtual and real information'. However, I moved on from this idea as I thought it wouldn't be that exciting visually.

I then imagined that, with the right computing power, some method of time travel could be possible if a corporation managed to create a powerful enough AI, since 'some of the first actual applications motivating the use of AR were industrial in nature' (Hollerer and Schmalstieg). I remember reading that Einstein believed time travel was theoretically possible, but only to the past, so I imagined that the first prototype of such a device would probably only be able to go back a very limited amount of time, a constraint I had not seen in other time travel films, and so my concept was born. The concept also ties in with what Hollerer and Schmalstieg said about 'AR turning into a more general interface paradigm': in this case, a device that can send you back one minute.

As with Assignment 1, I again went for a lighter, more comedic genre, with the main character only using the machine to better his own life, and not for any outright malicious purposes. I wanted the device to follow Jerald's (2015) thought that a VR experience should be 'a collaboration between human and machine where both software and hardware work harmoniously together to provide intuitive communication with the human'; although my concept pushes the boundaries of reality, virtual or otherwise, this still applies. To achieve the time-shift effect, I used a combination of green screen in some scenes and opacity masking in others. I also used green screen to put the news report on the TV in the final scene, as this was far easier than actually playing the video on the TV and gives a cleaner look, since filming TV screens often doesn't get great results.

 How did your preproduction/production/post production process go and what would you do differently/improve next time? Your reflection should also include commentary on what you thought the most and least successful parts of your Film Trailer were, and why so?

During pre-production I also considered a few other shots, such as the character having the foresight to catch someone about to trip, or catching a glass that initially falls, but I just didn't have enough time alongside all the other footage. This time I did have a shot list, which helped immensely, though the shooting was a bit on-the-fly for the scene in the 'office', which meant I didn't get the framing or shot I had in mind.

The production phase was relatively efficient, although I was forced to shoot the green screen overlays first and then the backgrounds, which didn't really match, as the office location was found a bit 'on the fly', leading to the abovementioned awkward framing. I attempted to correct this through zoom and motion control in Premiere Pro.

In terms of post-production, I achieved the time-shift effect in two ways. First, I used green screen to overlay myself on background footage that I then reversed and sped up, to give the illusion that time was going backwards for everyone except the main character. Second, I put an opacity mask around myself and used other footage of the same shot, reversed and sped up, to the same end. This masking method was far more realistic looking, in that the lighting was perfect because I was actually in the scene, unlike the green screen overlay, which required a lot of brightness/contrast and colour correction to fit in. Even the masking method had drawbacks, however, as it was possible to see that the leaves behind me within the opacity mask were not moving fast like the rest of the frame. In hindsight I probably could have shot the office reverse sequence with the masking method too, but it just didn't occur to me at the time.
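To make the second method concrete, here is a minimal frame-by-frame sketch of the masked time-shift composite in Python with OpenCV. In practice this was all done with an opacity mask in Premiere Pro; the paths, mask geometry and frame rate below are placeholder assumptions.

import cv2
import numpy as np

def read_frames(path):
    # Read every frame of a video into a list (fine for short clips).
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

fg = read_frames("foreground_take.mp4")   # placeholder: take with the character
bg = read_frames("background_take.mp4")   # placeholder: same shot, character absent

bg = bg[::-1][::2]                        # reverse, then double speed by dropping frames

h, w = fg[0].shape[:2]

# Feathered mask over where the character stands (placeholder geometry).
mask = np.zeros((h, w), np.float32)
cv2.rectangle(mask, (int(w * 0.4), int(h * 0.2)), (int(w * 0.6), h), 1.0, -1)
mask = cv2.GaussianBlur(mask, (0, 0), 15)[..., None]

out = cv2.VideoWriter("timeshift.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 25.0, (w, h))
for i in range(min(len(fg), len(bg))):
    comp = fg[i] * mask + bg[i] * (1.0 - mask)   # character forward, world backward
    out.write(comp.astype(np.uint8))
out.release()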

I made extensive use of nested sequences, allowing me to apply motion and effects to both the overlays and backgrounds at once, and I used three distinct colour grading adjustment layers: a very cold, dreary one for the opening, until he finds the device; a much more saturated, warm one for the middle, when he is happy with the device, catching the train, getting the girl and so on; and finally a more neutral grade to symbolise the conflict of the news report and the girl finding out about the device. I used both LUTs and manual grading.
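As a toy illustration of what those three adjustment layers were doing, here is a sketch in Python with OpenCV that applies a simple warm/cool shift plus a saturation scale per 'grade'. The values are illustrative guesses, not the settings actually used in Premiere Pro.

import cv2
import numpy as np

# Three 'grades' standing in for the three adjustment layers (guessed values).
GRADES = {
    "cold_open":   {"temp": -15.0, "sat": 0.7},   # dreary opening
    "warm_middle": {"temp": 15.0,  "sat": 1.3},   # happy middle section
    "neutral_end": {"temp": 0.0,   "sat": 1.0},   # news report conflict
}

def grade(frame_bgr, temp, sat):
    # Warm/cool: push red up and blue down (or vice versa for a cold look).
    f = frame_bgr.astype(np.float32)
    f[..., 2] += temp
    f[..., 0] -= temp
    f = np.clip(f, 0, 255).astype(np.uint8)
    # Saturation: scale the S channel in HSV space.
    hsv = cv2.cvtColor(f, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

frame = cv2.imread("frame.png")                 # placeholder frame
cv2.imwrite("frame_graded.png", grade(frame, **GRADES["cold_open"]))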

I initially recorded very sad piano music for the opening scene, which I intended to be ironic, but it just didn't work, so I replaced it with a dejected, descending guitar riff which seemed to fit better. I went with a funky, upbeat style for the middle music, to symbolise the character's happiness and 'boss' feeling, and some quintessential rom-com rock music for when he gets the girl.

I was happy with how the reversal shots turned out, although I would have liked to reshoot the office background scenes in a better location with better lighting, as the actual location was far dimmer than the cyclorama where I filmed the green screen overlays. I think the slowing-down voice effect, which I achieved with the Pitch Bender effect in Adobe Audition, really added to the shot and made it more immersive for the viewer, as they hear what the main character would theoretically hear. I did attempt to use optical flow to slow down the footage as well, but I couldn't get it to look smooth, so I left it at normal speed and froze the image before the fast reverse shot, which didn't matter once the sound effects were added.
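For the curious, the tape-style slow-down that Pitch Bender produces can be approximated by resampling the audio at a gradually decreasing playback rate, so the pitch drops with the speed. Below is a sketch in Python, assuming a mono WAV file; the file names and target rate are placeholders.

import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("dialogue.wav")   # placeholder; assumed mono int16
audio = audio.astype(np.float64)

# Playback speed ramps from normal (1.0) down to a crawl (0.3), like a
# tape machine spinning down; pitch falls along with it.
n_out = int(len(audio) / 0.65)               # rough length at the average speed
speed = np.linspace(1.0, 0.3, n_out)

# Integrate speed to get fractional read positions into the source audio,
# then linearly interpolate samples at those positions.
positions = np.cumsum(speed)
positions = positions[positions < len(audio) - 1]
slowed = np.interp(positions, np.arange(len(audio)), audio)

wavfile.write("dialogue_slowdown.wav", rate, slowed.astype(np.int16))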

 

References:

 

Hollerer, T. and Schmalstieg, D., 2016. Augmented Reality: Principles and Practice. 1st ed. p. 3.

Jerald, J., 2015. The VR Book: Human-Centered Design for Virtual Reality. p. 10.

 

Blog Post 4: https://www.mediafactory.org.au/jack-purnell/2023/04/06/immersive-sandbox-week-4/

 

Blog Post 5: https://www.mediafactory.org.au/jack-purnell/2023/04/06/immersive-sandbox-week-5/

 

Blog Post 6: https://www.mediafactory.org.au/jack-purnell/2023/04/06/immersive-sandbox-week-6/

IMMERSIVE SANDBOX WEEK 6:

In week 6 we looked at various forms of AI generative art. Weavesilk, essentially Microsoft Paint brought hurtling into the 21st century and then supercharged by AI, allows beautiful symmetrical artworks to be created almost effortlessly. It was very satisfying creating these 'artworks', but I did feel a certain lack of control over the process, even though the results were still good.

Chromata, an AI-powered tool which takes a source image and reimagines it, was also very effective; however, it got me thinking about copyright and usage. Who owns these images? The person who created the source image? The person using the AI tool? The AI itself? These questions will no doubt become a bigger issue than they already are, as these AI tools continue to be 'trained' (in my opinion just a term for mechanical plagiarism) on other artists' work, and then create art in an identical style. It begs the question: what will happen if there are no new artists, because all of their work is immediately stolen by AI? It is for this reason that very strict regulations need to be implemented, though this will be difficult, and harder still to keep accurate and relevant.

IMMERSIVE SANDBOX WEEK 5:

For week 5 we looked at projection art, trying MadMapper and TouchDesigner. I had a go with MadMapper at home and created a cool setup: animating the pictures of the ocean on the wall so the water seemed to move, making the light switch flash as though it were an objective in a video game, and projecting a video of the view behind my door onto the door itself, giving the illusion of seeing through it. I have a decently powerful projector that I have used for band-related purposes, so I had a lot of fun messing around with the program. I want to become more adept at this kind of thing to create visuals for my live shows, as I am only beginning to realise the potential uses of projection mapping.

We also created a virtual exhibition space and looked at some online NGV virtual tours. These were quite effective in giving you the feeling of being at the exhibition, though I did find myself wanting to look closer, as I could in real life. In the future, the potential for immersion in these spaces will be mind-boggling.

IMMERSIVE SANDBOX WEEK 4:

For week 4 the focus was on audio. We tried out the Zoom H2n recorders to record some foley sound around campus with the intention of syncing it up with footage of a dragon. The potential for manipulation in Adobe Audition meant that there were a huge number of everyday sounds that would potentially work with the right audio processing.

We also had a look at AR Synth, which is similar to some of the soft synths I have used when recording music, although this was a far more 'tactile' version with actual 3D models of the synths, rather than just a plugin window in a DAW; that was cool to see, as I've never really used hardware synths. I feel it would only be practical with the right equipment, such as hand controllers to manipulate the synths, but it is a great concept nonetheless. Infinite Drummer was also a very cool concept, and I intend to utilise it in future for a range of purposes, having always thought about the potential for sounds from my own everyday life to become part of a percussion track.

ASSIGNMENT #1 REFLECTION:

 

What you were trying to achieve in terms of critically communicating about extended reality (XR) media and the method in which the editing process was used to attempt this?

In thinking about extended reality media and its potential for nefarious use in the not-so-distant future, I decided to make a film referencing the deepfaking of people's faces. Reading Petropoulos' (2018) comments asking governments not to rush into AI policy, because it is such complex and uncharted territory, got me thinking: what if the AI were the policy maker itself? There is real potential for an unchecked AI (such as the very aggressive Bing bot) to go haywire and impersonate the world's leaders via this technology, essentially taking over the world.

Considering the (hopefully!) far-fetched nature of the idea, I decided to go for more of a black comedy or dramedy. I initially planned to use some kind of deepfaking, but soon realised that this was very difficult and time-consuming to do convincingly, so I instead used green screen to create an AI 'sprite' version of myself to insert into pre-shot background footage, such as the shot of me sneaking up on myself, or in front of an image, like the 'prime minister's office' background, which I found on a government website.

How did your preproduction/production/post production process go and what would you do differently/improve next time? Your reflection should also include commentary on what you thought the most and least successful parts of your Film Teaser were, and why so?

In the pre-production phase, in addition to actual deepfaking, I also considered including 3D objects in the shots, such as the 'discipline bots'; however, I don't yet have the expertise to do this convincingly, so I opted not to until doing further research and practice in this area, though I definitely intend to utilise it in future. I did not want to harm the immersion, as this would in turn affect the telepresence of the audience, as Oh, Herrera and Bailenson (2019) showed, and although that study did not demonstrate a knock-on effect on affective valence, I still felt immersion and telepresence were important when trying to 'sell' a far-fetched idea such as this to an audience.

My pre-production process consisted of recording a scratch audio track of myself saying all of the lines to get an idea of timing; I soon found I had to make the dialogue more concise and speed up my delivery to fit everything into the clip. I accompanied this with simple text layers stating the eventual shot that would occupy that space (e.g. 'Pan down to radio CU', 'Prime Minister mid shot with TV graphics') rather than a traditional storyboard, since I was the one setting up the framing of each shot in the production phase. In hindsight, a storyboard would have been handy for some of the trickier shots, such as the double Prime Minister green screen shot, and the shots where I was being filmed by others, as they had to go off my direction rather than referring to an actual storyboard frame.

In 'AI in the Media and Creative Industries' (2019), the idea of 'automatic news extractions and creation' is discussed, and this was another trigger point for my teaser trailer, specifically the radio announcer section: the idea that the media would need to be completely bought and owned by the AI in order for this to occur, and how the remaining human media voices might try to trick the AI, such as the radio announcer saying, in a somewhat sarcastic tone, that they would NEVER accuse the PM of being a deepfake clone, since AI may not understand sarcasm.

In the production phase I was able to get everything filmed fairly efficiently, and did not desperately need to reshoot anything. However, with more time I would have either filmed the backgrounds for the press conference scene first, or simply filmed it at an actual location rather than using green screen. Editing this scene was very difficult because it was shot freehand, and although stabilisation improved things, it also caused some strange perspective issues, which I attempted to counter by tracking the motion of the background to these odd perspective shifts; even so, it still did not look completely realistic on close analysis. I also feel that using the green screen for a 'genuine' purpose, instead of for something intended to look artificial, was a mistake, as it lessens the impact and affective valence of the AI PM shots, which I intentionally made look a bit hyperreal and slightly out of place.

Post-production was a fairly smooth process: creating the TV graphic overlay in Photoshop and keying out the background, then syncing the final shots to the scratch audio/shot list. However, the audio for the outdoor footage at the end of the trailer was very difficult to edit, and I had to do a lot of enhancement, compression, equalisation and even high-pass filtering, as a plane went overhead that I didn't notice at the time. I probably should have just re-recorded these lines, but wanted to avoid that, as it never sounds as good as using the actual location dialogue.

The music utilised 'glitchy' audio elements to represent the AI, and used only synthesisers and drums, in keeping with the technological theme. I would have made it a bit more lighthearted if I were to do it again, though it was tough to keep the 'techy' feel when playing in a major scale. I also intended to emulate some AI-generated music I had heard, as discussed in 'AI in the Media and Creative Industries' (2019), which describes how AI creates music after first being fed a dataset of a particular genre or songwriter, generating anything from fairly convincing results to extremely strange, non-standard melodies.

I was happy with the TV graphic overlay, with the clock hands pointing to the letters A and I in DAILY, though I would have liked animation rather than just static graphics. I was also happy with the double PM shot, and in hindsight would have liked to include more of these, as the moment comes and goes extremely quickly. I thought the radio section was maybe a bit slow to start, but I didn't feel I could cut anything further; I probably could have made the wording more concise, however. I also wanted to add some newspaper headlines, but didn't know how to make a convincing-looking newspaper, short of photoshopping photos of an actual newspaper and then animating the motion of the photo to make it look like a shot.

Oh, C., Herrera, F. and Bailenson, J., 2019. The Effects of Immersion and Real-World Distractions on Virtual Social Interaction. Cyberpsychology, Behavior, and Social Networking, 22(6), pp. 365-372.

Amato, G., Behrmann, M., Bimbot, F., Caramiaux, B., Falchi, F., Garcia, A., Geurts, J., Gibert, J., Gravier, G., Holken, H. and Koenitz, H., 2019. AI in the Media and Creative Industries.

Petropoulos, G., 2018. The Impact of Artificial Intelligence on Employment. In: M. Neufeind, J. O'Reilly and F. Ranft, eds. Work in the Digital Age: Challenges of the Fourth Industrial Revolution. Washington: Rowman & Littlefield Publishers, pp. 119-132.

 

Blog posts:

Week 1:  https://www.mediafactory.org.au/jack-purnell/2023/03/04/immersive-sandbox-week-1/

Week 2: https://www.mediafactory.org.au/jack-purnell/2023/03/17/immersive-sandbox-week-2/

Week 3: https://www.mediafactory.org.au/jack-purnell/2023/03/17/immersive-sandbox-week-3/

 

 

IMMERSIVE SANDBOX WEEK 3:

In week 3 we only had the Wednesday class due to the public holiday, and focused on the Luminar Neo software, which can edit the sky of a photo to transform a sunny day into a cloudy one, or day into night, or any combination. Unfortunately it didn't install in time for me, but watching others use it, I wondered about the possibilities once commercial computers are far more powerful: it could be used for video instead of just photos, and for other elements of the footage beyond the sky.

We also looked at some unstitched and stitched 360-degree footage, and played around with it in Premiere Pro. This type of filming is extremely interesting to me, both for the post-production options in terms of choice of angle, and for the potential for user interactivity, blurring the lines between video games and films and increasing telepresence for things such as live concert films. I intend to do more research on the stitching process, and will rent the cameras to play around with the unstitched footage regardless.

IMMERSIVE SANDBOX WEEK 2:

For week 2 we looked at Unreal Engine's MetaHuman and created some characters. In many ways it was similar to the character creation options in various video games I have played, which makes sense given Unreal Engine's background. I found it amazing how many options there were; however, as the character shuffled about or moved into various poses and facial expressions, the feeling of the uncanny valley was very strong, and I couldn't help feeling like I was invading the privacy of these non-existent 3D sprites. Very strange, but amazing technology.

 

We also filmed some footage with the Sony X70 cameras, which are incredible and make filming so much easier compared to using a phone, even when shooting freehand as we were. We once again filmed footage to be chroma keyed later, and in doing so I was again able to focus on the lighting of the interior green screen shots, and how it would correspond to the outdoor background shots.

IMMERSIVE SANDBOX WEEK 1:

For week 1 we focused on chroma keying in Adobe Premiere Pro, as well as utilising the amazing Polycam, which uses LiDAR scanning to create 3D models from a multitude of photos as the source data. It generated results ranging from quite good, to quite bad, to horrific mutant alien creatures that bore absolutely no resemblance to the subject originally captured.

Despite its flaws, the technology is incredible, and in a more skilled or experienced hand it can generate seriously impressive results, which could then be refined further in 'traditional' 3D modelling programs such as SketchUp or 3ds Max.

In editing the audio of the chroma key exercise, I used the in-camera sound for the room tone, a slowed-down, reverbed and distorted version of the in-camera sound for the giant footsteps, and a BBC sound effect for the cracking of the damaged footpath.

For the video I utilised motion tracking, opacity layers, colour correction, chroma keying, speed changes and masking. Editing this was harder than I expected, as I had to account for the motion in both the background and green screen shots, since these were all shot freehand; but in the end I think it turned out reasonably well, and it could have been improved with more time for editing, as some strange things were happening with the motion tracking.
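Since chroma keying came up repeatedly this semester, here is a bare-bones sketch of the keying step itself in Python with OpenCV (Premiere Pro's keyer is of course far more sophisticated). The HSV thresholds and file names are placeholders that would need tuning per shot.

import cv2
import numpy as np

fg = cv2.imread("greenscreen_frame.png")     # placeholder green screen frame
bg = cv2.imread("background_frame.png")      # placeholder background plate
bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))

# Select green pixels in HSV; the bounds are rough and shot-dependent.
hsv = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)
green = cv2.inRange(hsv, (40, 60, 60), (85, 255, 255))

# Invert to get the subject matte, nibble away green spill at the edge,
# then soften it so the composite edge isn't razor sharp.
matte = cv2.bitwise_not(green)
matte = cv2.erode(matte, np.ones((3, 3), np.uint8))
matte = (cv2.GaussianBlur(matte, (0, 0), 2).astype(np.float32) / 255.0)[..., None]

comp = fg * matte + bg * (1.0 - matte)
cv2.imwrite("keyed_composite.png", comp.astype(np.uint8))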

As I was editing, what the EEAAO directors/creators said about shots not necessarily needing to look realistic, just getting the job done, really resonated with me.