This scene is a direct continuation of my earlier AI mesh experimentation. I used the same animation but migrated the project from Cinema 4D to Blender, continuing to develop the visual identity of the brushstroke style and elemental effects.
2D Lightning Effects
As with the Cinema 4D version, I attached two-dimensional animated lightning to the character’s feet. This maintains visual consistency while allowing further experimentation with the energy-based aesthetic I’m pursuing.
Brushstroke Stylization (Single Layer)
In this version, I did not split layers, and that’s visibly apparent. It’s a very early iteration where I’m starting to play with how much layering and separation actually contribute to the final feel.
Blurry Echo Effect
One of the more experimental ideas I tried here was duplicating the mesh, offsetting it slightly, and lowering the opacity:
This created a motion trail effect or visual blur behind the character.
It helps hint at speed or displacement without relying on traditional motion blur tools.
This will likely be refined in future iterations.
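The duplicate-and-fade idea boils down to placing copies behind the character along its direction of motion and dropping their opacity with distance. A minimal sketch of that logic (the function name and parameters are my own illustration, not tied to any Blender API):

```python
def echo_trail(position, velocity, count=3, spacing=0.15, base_alpha=0.5):
    """Offsets and opacities for trailing mesh duplicates.

    Each echo sits further behind the character (opposite the velocity
    direction) and fades out, hinting at speed without motion blur.
    """
    echoes = []
    for i in range(1, count + 1):
        # Step backwards along the velocity vector for each copy.
        offset = tuple(p - v * spacing * i for p, v in zip(position, velocity))
        # Opacity falls off linearly with distance from the character.
        alpha = base_alpha * (1.0 - i / (count + 1))
        echoes.append((offset, alpha))
    return echoes
```

In practice these offsets and alphas would drive the duplicated meshes’ transforms and material transparency each frame.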
Environment Setup (Sky, Plane & Sphere)
I placed a plane in the background with a sky texture.
Behind that, I added a large sphere to round out the depth of the backdrop.
However, the result was grainy, especially in the background layers, due to either sampling limitations or the shader setup.
I plan to troubleshoot and refine this to reduce noise in future renders.
Reflection
Although this is clearly a rough test, I’m quite happy with the visual direction. The blurry mesh duplication and 2D lightning animations serve as promising leads for stylization. Going forward, I’ll be focusing on refining the brushstroke fidelity, experimenting with opacity layering, and optimizing render clarity—particularly when using painted backdrops or procedural skies.
The video above is a rough reconstruction of my animation progress so far. I’ve retargeted the original skeleton onto a mesh I previously developed for another module within the Experimental Animation unit. This allowed me to consolidate workflows across modules and experiment with character continuity between projects.
Retargeting & Rig Adjustments
I had to shorten the character’s arms slightly to align with the proportions of the imported rig.
The skeleton came from Autodesk Maya and included numerous deform bones, some of which caused unwanted mesh distortion.
To resolve this, I learned how to unbind specific bones, allowing for a cleaner, more intentional deformation across the mesh.
Environment & Composition
I initially considered using a more complex backdrop (like mountains, which featured in earlier iterations), but I felt this overcomplicated the composition.
I opted instead for a minimalist environment: a flat plane with light rock variation, and a simplified skyline in the background. This keeps the focus entirely on the character and their motion.
Camera Logic & Movement
The camera tracks the character dynamically, at first chasing them and then overtaking them as they reach the apex of their attack.
This final camera position, ending in front of the character, is symbolic — it subtly reinforces the character’s dominance over the crystal enemy they just defeated.
Throughout the animation, I focused on maintaining strong camera framing — ensuring the character remains in view for clarity, weight, and visual intent.
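The chase-then-overtake move can be sketched as an eased offset along the character’s line of travel: the camera starts behind and finishes in front by the apex. This is a hypothetical one-axis illustration, not the actual camera rig from the scene:

```python
def camera_x(char_x, t, lead_start=-4.0, lead_end=2.0):
    """Camera position along the track for normalised time t in [0, 1].

    Trails the character early on (negative lead), then eases ahead
    so the shot ends framed in front of them.
    """
    s = min(max(t, 0.0), 1.0)
    ease = s * s * (3.0 - 2.0 * s)  # smoothstep easing
    return char_x + lead_start + (lead_end - lead_start) * ease
```

The smoothstep curve keeps the overtake gradual rather than a sudden snap past the character.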
This version of the scene represents an important turning point in my project: it consolidates character performance, rig integration, and cinematic storytelling into one evolving sequence.
In the blocking stage of my dialogue animation, I began implementing key elements drawn from my reference footage and feedback I received. One of the first considerations was character orientation—I kept the character facing the camera more directly, using a maximum of a ¾ angle. This was to ensure that the head and body didn’t shift too far to the side like they did in my reference, which made facial expressions harder to read. Since the character is speaking to someone just off-camera, it made sense to have the upper body turned toward the viewer, prioritizing clarity over realism.
Initially, I had considered having the character’s body turned away to reflect their perceived superiority, but I quickly realized that in 3D animation, visibility and silhouette are key. I also applied George’s general class feedback: avoid disconnecting facial and body movement. Instead, the head and face should move in relation to the body. I began incorporating subtle upper body movement that matched the character’s expressions and gestures, which George praised for enhancing expressiveness—something I’ll continue refining.
In terms of critique, George pointed out that animating the eyebrows as if they’re connected in a unified curved motion reads much better than moving each brow separately. When he demonstrated this on my animation, I agreed—it cleaned up the motion significantly. I’ll be implementing this across future work.
Another issue was the mouth shapes. I initially had them too flat, which made expressions look stiff. Raising the corners and shaping the mouth more like a semicircle helped improve clarity, even if it strays slightly from realism. I’ll also be focusing on placing the teeth more intentionally going forward.
For the eyes, I was leaving the lids slightly open even when “closed,” which left a pixel of white visible—something that actually disrupted the look more than I expected. I’ve since corrected this to allow the eyelids to fully close and meet in a clean arc. I also had the eyelids slightly overlapping the iris, which gave the character an unintended “drugged” look. While I was trying to portray a regal, detached expression with heavy eyelids, I now understand that I can still lower the lids for that effect without obscuring the iris.
I’ve also started syncing eyebrow movement with the eyelids—when the eyebrows raise, the upper lids should follow, and when they lower, the lids come down too. This adds a more natural skin-pulling effect.
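That brow-to-lid coupling is essentially a clamped linear follow. A hypothetical sketch, assuming rig channels normalised so 0 is relaxed and positive values raise the brow:

```python
def lid_follow(brow_raise, follow_strength=0.4, lid_rest=0.8):
    """Upper-lid openness (0 = shut, 1 = wide open) follows the brow.

    A raised brow pulls the lid up; a lowered brow drags it down,
    mimicking the skin-pulling effect described above.
    """
    lid = lid_rest + follow_strength * brow_raise
    return max(0.0, min(1.0, lid))
```

In a rig this would typically live in a driver on the upper-lid control rather than hand-keyed on every frame.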
Finally, I received positive feedback on how I animated the jaw—there’s a good amount of movement, which adds a lively, stylized touch to the character. George said it reminded him of King Julien from Madagascar, which I take as a compliment—DreamWorks nails personality-driven facial animation.
Overall, this blocking phase has taught me a lot about how small adjustments can make a massive difference in clarity, appeal, and character.
After some trial and error, I’ve finally settled on a reference that really excites me—Monty Python and the Holy Grail. The scene I’ve chosen features King Arthur attempting to assert his authority over a group of peasants who challenge the legitimacy of his rule, delivering a satirical take on anarcho-syndicalist communes. What drew me to this scene was the rich variety of emotion: the character (Arthur) fluctuates between pride, confusion, and incredulity, all while maintaining a comedic undertone.
This reference provides an opportunity to animate nuanced emotional transitions—starting from superiority and indignation, moving through confusion, and landing in moments of visible frustration. I especially like how Arthur keeps his chin raised, conveying condescension, then briefly dips into bewilderment when his authority is questioned. These shifts in posture and expression offer a lot of material to explore through 3D animation.
George provided some helpful critique during our session: one key point was ensuring that the character maintains consistent eye contact with the other person in the scene. I tend to look away at times, which can reduce clarity in the facial performance—especially from a three-quarter angle. Keeping both eyes visible strengthens readability and emotional impact. I was also encouraged to include the little chuckle I do in the reference—it’s subtle, but adds humanity and sells the internal emotional state.
Another suggestion was to slow the timing down slightly. While I already have snappy movements for contrast, adding variety in pacing should give more room for micro-expressions to land and feel intentional. Lastly, I plan to trim the clip to focus on the most essential beats—the peasant’s reply at the end, for example, doesn’t add much to the core performance and can be left out.
Overall, I’m excited to dive into this one—it’s expressive, challenging, and gives me space to push character personality through facial acting and timing.
This reference was originally recorded for the Dialogue Animation module. Although I won’t be using it in my final submission, I still wanted to include it here to document my process and thought development. The purpose of this exercise was to explore the nuances of transitioning between emotional states—both internally (subtle facial shifts) and externally (spoken delivery and gestures).
The reference is based on a scene from Monsters, Inc., specifically the moment when Sully and Mike meet the Abominable Snowman in the Himalayas. What drew me to this clip was the dynamic emotional interplay between the characters: one emotes before speaking, the other speaks before emoting. This layering of emotional beats and verbal rhythm really helped me observe how character intention shifts from moment to moment.
However, while studying this clip was useful, I’ve decided not to use it for my final piece. Although we weren’t explicitly told to avoid referencing existing animations, it makes sense from an academic standpoint to animate original material or live-action references. Using pre-animated content—especially from stylised films—risks undermining the learning objective, which is to interpret and convey raw emotional beats through our own animation work.
Still, I found this study valuable for understanding how emotional expression isn’t just about exaggerated mouth shapes or eye movements—it’s about timing, contrast, anticipation, and subtlety. I’ll carry this insight into the rest of my project, using live-action or self-recorded references instead.
This second reference is taken from Whiplash, specifically a moment when Fletcher—the film’s intense and domineering antagonist—publicly confronts a student with thinly veiled contempt. What drew me to this scene was the emotional tension beneath Fletcher’s delivery: there’s an underlying aggression simmering below the surface, yet his facial expression remains controlled and composed. The subtlety of this interaction made it a compelling study for how rage can manifest through micro-expressions and tone rather than overt gestures.
My intention was to explore this emotional restraint and attempt to reinterpret it in a 2D animated format. However, I ultimately chose not to use this reference for a few reasons. Firstly, the shot involves swearing, which I apparently shouldn’t include in my showreel; censoring it would defeat the purpose, since the character would still visibly be mouthing the words, so I might as well pick a better clip. Secondly, after reviewing the criteria, I realized that the emotional transitions within the clip are minimal. Fletcher moves from calm to contempt without much visible progression, and the shift happens quickly and with limited variation in facial expression.
Additionally, because this moment is already animated cinematically (in the live-action sense), recreating it one-to-one would offer little creative interpretation. The goal of this task is to explore the character’s internal state and show emotional evolution—something better achieved through original or live-action references where I can capture a broader spectrum of emotional change.
While I won’t be using this scene in my final submission, studying it still helped me reflect on the complexity of restrained emotion and the importance of contrast and build-up when animating subtle psychological shifts.
This was the week I completed the spline phase of my animation. I transitioned from blocking plus into spline, and focused on integrating the feedback I’d received in earlier reviews—especially around body mechanics and staging.
One of the main adjustments I made was correcting the pivot point during the backflip. Previously, I had the character rotating around the hips, but after feedback I shifted the rotation to the upper chest, which creates a much more natural arc. It immediately felt more grounded and realistic.
Another change I made was to how the character leans into the jump. Initially, I had them leaning forward going into the run-up, in a sort of exaggerated cartoon curve—similar to what you might see in Tom and Jerry, where a character forms a backwards “C” before launching forward. It was fun and expressive, but ultimately too stylised for the tone I was going for. Now, the character leans back slightly before the jump, which reads better physically and visually.
I also refined the silhouette, especially during the spin. I added a subtle S-curve in the arms to strengthen the overall pose clarity and motion arcs. Another great piece of feedback I received was to have the character lean more into the run. That small adjustment gave the movement more weight and purpose—it feels like they’re genuinely sliding along the ground into a ducking pose rather than floating through it.
There was also discussion about hand positioning during the takeoff—originally, both hands moved back in sync, but I’m now experimenting with slight asynchrony, which makes the action feel more natural and dynamic.
Regarding the jump itself, George recommended that the character should visibly stretch into it, but I explained that in my concept, the character actually disappears from the ground and reappears mid-air. Since they’re a speedster who manipulates lightning, I wanted to show their power through a sort of instantaneous movement. I plan to use a lightning strike or burst effect to visualise that teleport-like motion.
For the final dash, the character currently vanishes and reappears just before the attack. George suggested adding a brief pause or hover or zip around before the final strike. I really liked this idea—it allows the character to occupy more screen space, adds personality, and fits with the powerful, theatrical nature of a lightning-based finisher.
Overall, this week was about refining key movements, tightening up the spline animation, and staying true to the character’s style while making the action more readable and intentional.
In this iteration, I’ve reworked the animation from a previous version—starting with a new design for the crystal and a complete overhaul of the environment. Unlike before, this version introduces actual camera movement and does away with the wireframe look I was previously experimenting with. I found that the wireframe aesthetic ended up being too visually noisy; it added unnecessary detail to background elements and ultimately distracted from the core action. So, I’ve decided to remove it in favour of a cleaner, more focused visual style.
A key change in this version is the introduction of a continuous camera movement that follows the character. Initially, my idea was to use multiple dynamic cuts to convey speed—as if the camera itself was struggling to keep up with the character. However, I learned that for this unit, we’re required to use a single, unbroken camera shot. I actually really appreciate this limitation, as I believe creative restrictions often push better solutions. So I embraced the challenge and began designing the shot around a continuous camera move.
This version is still a preliminary pass. One of my next goals is to make sure the crystal is clearly visible at the start of the animation, to better ground the viewer and give the action more intentionality. Right now, it leaves the frame briefly, and I’d like to avoid that—it breaks cohesion. Keeping it in frame will take more work in terms of timing and choreography, but I think it will make the sequence feel tighter and more purposeful.
Unfortunately, in this Cinema 4D version, the rig is broken, so the character animation isn’t functioning fully. I plan to retarget the motion onto a more stable mesh for the final pass, and I’ll also explore different art styles for the render.
One small detail I like: rather than having the crystal explode on impact, I’ve made it fall to the ground. It feels more muted, which gives other effects—like the environmental disintegration—more space to shine. I also animated the floating earth around the crystal to crumble and fall, visually representing that the crystal has lost its power even if it doesn’t blow up. If everything explodes, nothing stands out—so I wanted to create more contrast by keeping certain effects more subdued.
This section represents a deeper exploration into the kind of animation I want to create, particularly focusing on environmental destruction and deformation. I’m very interested in this area not only because it adds visual impact, but also because I aim to develop these techniques further in my other units. It’s an area I want to become comfortable with, especially as I move toward more stylised, painterly visual styles in future projects.
In these tests, I hand-drew elements like lightning and particle effects to visualise the impact more clearly. Whether I keep these 2D elements in the final piece is still undecided, but I find the idea of blending 2D effects with 3D animation really exciting. It opens up a lot of potential for stylistic flexibility, and it aligns well with my goal of creating a painterly aesthetic that blurs the line between traditional and digital art.
I also spent time experimenting with the crystal and its design. Rather than going with a simple crystal and sphere combo, I tried giving the object more personality—playing with shape, silhouette, and material. I used a wireframe overlay as a way to test ground deformation without overcomplicating the scene, and I’m working within a low-poly style to keep things efficient while still visually engaging. Harsh shadows and angular geometry are intentional; they contribute to a sharp, stylised world that feels cohesive and deliberate.
In terms of movement, I’m continuing to develop the choreography between the crystal, the environment, and the character. For example, before the strike, the ground bends inward slightly, which helps visually link the energy of the character’s attack with the space around them. Without this interaction, the environment would feel static and disconnected.
The starfish move also has a dual purpose: it’s playful and acrobatic, but also acts as a charge-up for the final strike. Lightning effects radiate from the character’s limbs just before the impact, building anticipation and suggesting stored energy. After the attack, the crystal disintegrates, symbolising the character’s victory in a clear and satisfying way.
I’ve decided to go with one large, powerful attack rather than multiple smaller ones. This decision is both stylistic and narrative. The character is nimble and agile, and by contrast, I want the crystal to feel slower and more imposing. This dynamic creates contrast and tension in the scene—if both were small and fast, the action might feel too chaotic or repetitive. By exaggerating the differences between the two, I aim to create more visual and narrative clarity.
During my Advanced Body Mechanics Blocking session, I received some really useful feedback from George. One of the main points was about the character’s arm and leg positioning during the run—he pointed out that they should splay outward slightly, rather than staying too upright and linear. I hadn’t considered this before, but it made a lot of sense in terms of natural movement. It also got me thinking more about how I want this character to come across: they’re quite carefree and driven—an active type of person—so making the movement more dynamic and slightly flailing might actually help bring out that personality.
We also discussed the jump arc. Initially, I had the backflip pivoting from the hips, but George explained that it should actually pivot more from the mid-to-upper chest, which I’ll be adjusting. Another area of debate was the final dash. George suggested that the body angle should be close to parallel with the ground—almost like a vacuum between the character and their destination. I personally lean more toward having it around a 45-degree angle, since I want the feet to be grounded enough to support a forward slide when they land. That way, the motion would transition more naturally into the Superman-style landing I’m aiming for.
There were also notes on the legs trailing during the backflip, which I fully agreed with—it adds more realism, as the legs would naturally lag behind the body’s rotation. Another point was about the arc during the starfish moment. In my blocking, I had the character do a small hop just before the strike, which breaks the single smooth curve of the arc. I agree that simplifying this into one clean arc would be stronger visually, even though animating that curve smoothly is a bit more complex.
Lastly, when the character hits the ground, I originally had one hand already placed on the floor. Looking back, it would probably work better if they landed with their feet first, then let the hands trail and follow through as they slide, making the movement feel more reactive. I’m also really happy with the way the blast propels the character forward—it gives the attack more impact and weight. George and I had some differing opinions on a few of these points, but I plan to test multiple variations and see what works best in context. I’m open to experimentation and refinement as I go.
Here, I’ve constructed a reference by stitching together multiple video clips to help block out the movement I’m aiming for. Since I’m not nearly athletic or nimble enough to physically act these out, I pulled inspiration from online sources.
The first part of the movement is based on the Scout from Team Fortress 2. Even though he’s a stylised character, his exaggerated, high-energy run feels perfect for what I want to capture. It fits well with the character I’m animating, who is a speedster with dominion over lightning.
For the next section, I decided the character will do a ground slide. The closest real-world reference I could find was a skateboard 360 spin—it’s got that same sense of fluid motion and dynamic body movement. From there, the character launches into a backflip, followed by a playful “starfish” pose mid-air. I’m using this to really push the acrobatic side of the character and show that they’re having fun with the situation.
After landing, the character dashes forward and slides into a pose similar to a Superman landing, ending with them raising their hand in preparation for a powerful energy blast.
VCAM
As part of this module, we were introduced to the process of using Unreal Engine’s Virtual Camera (VCam) system to record video within 3D environments, using mobile devices as remote cameras. This technique essentially allows for live cinematography inside a virtual space and can also support motion capture workflows when paired with other tools like Live Link.
To set it up, I first had to enable the necessary motion capture and remote control plugins in Unreal Engine. After that, I established a Live Link connection between my mobile device and the UE project, which allows real-time data (like camera movement) to be streamed into the engine. Remote Session was then configured to allow the mobile device to control the editor viewport. The Take Recorder was used in multi-user mode to capture the camera movement as actual animation data. Lastly, I installed and used the Unreal VCam app on my phone, which turns the device into a handheld virtual camera that can move around and record shots as if filming in a physical space.
During practical sessions, we ran into a few problems—mainly to do with networking. Since the mobile devices and the Unreal Engine project needed to be on the same network, we experienced frequent dropouts and connection issues, which I think might be tied to how restrictive institutional or university networks can be.
Going forward, I plan to test this workflow at home using my own router and devices. I want to better understand the strengths and limitations of this method, especially how it performs with reduced network latency and fewer restrictions. Overall, learning to use VCam gave me new insight into virtual cinematography and how live-action techniques can be applied inside a game engine.
Above is my storyboard for this sequence. The character runs in from the right side of the screen, moving towards a kind of magical crystal. As they approach, they notice something’s wrong—the crystal looks like it’s about to launch an attack through the ground. Reacting quickly, the character jumps into the air and does a flip, then comes crashing down with a lightning-based attack to try and destroy it.
But the crystal dodges at the last second and retaliates. It manipulates the ground into spear-like shapes that shoot up toward the character. Just before the spears hit, the character raises their hand and fires off a massive energy blast that wipes everything out, ending the scene with them coming out on top.
These clips are part of my exploration into the weight and movement of the character. Rather than going for a full flip in the air, I decided to go with a more playful “starfish” motion mid-jump. This was a deliberate choice—I wanted to start giving the character a bit more personality and make the movement feel unique and expressive.
You’ll also see how I’m planning to have the ground deform in response to the action. There’s a close, almost reactive relationship between the character, the crystal, and the environment. I’m trying to build a sense of interaction where all elements feel connected, rather than just separate layers moving around each other.
Here, as you can see, I’m experimenting with deforming geometry in Cinema 4D. The goal was to loosely recreate the 2D animation in 3D, just to help me visualise the scene more clearly before committing to the final animation. Since I’m going to be animating this in a full 3D environment, having a rough version like this gives me a better sense of scale, motion, and placement early on.