STYLE‑FRAME PROTOTYPE

This is a concept-driven look development project focused on exploring how AI-assisted workflows can support early-stage visual decision-making in film, advertising, and branded content.

Exploratory style‑frames and short motion tests for a hybrid live‑action/animation series, achieving a painterly, “Flee‑like” realism that can be regenerated from a single prompt, with no reference image required, which keeps legal ownership clear.

When the U.S.-based AI animation studio Native Foreign invited me to contribute to the visual development of an upcoming animated documentary series, the initial brief was deceptively simple:

“Make it look more like the FLEE reference, a unique look away from anime.
The man should look less Asian and more Russian.”

The main challenge was keeping the characters grounded in realism while every diffusion model kept drifting into anime. The solution was to embed explicit negative cues in the prompt (“no manga, no hard anime eyes”) and iterate until the output stayed consistent across shots.
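The negative-cue tactic can be sketched as a small prompt builder. This is a minimal illustration, not the actual prompts used: the subject text, cue lists, and seed value are hypothetical stand-ins, while `--no` and `--seed` follow Midjourney's real parameter syntax.

```python
def build_prompt(subject: str, style_cues: list[str],
                 negative_cues: list[str], seed: int) -> str:
    """Assemble a Midjourney-style prompt: subject and positive style
    cues first, then an explicit --no list, then a fixed --seed so the
    same look can be regenerated later."""
    positives = ", ".join([subject] + style_cues)
    negatives = "--no " + ", ".join(negative_cues)
    return f"{positives} {negatives} --seed {seed}"

# Hypothetical example values, echoing the brief above.
prompt = build_prompt(
    subject="portrait of a middle-aged Russian man",
    style_cues=["painterly documentary realism", "Flee-inspired flat shading"],
    negative_cues=["manga", "hard anime eyes"],
    seed=1234,  # any fixed seed keeps iterations comparable shot to shot
)
```

Pinning the seed is what makes the iteration loop meaningful: each prompt tweak is compared against the same noise, so drift toward anime is visible as a prompt problem rather than sampling luck.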

Workflow

This exercise shows that a distinct, emotionally grounded look can be locked in by prompt alone, a crucial precedent for studios navigating AI IP constraints. To verify that, I needed to test whether the style could be recreated without any reference images, using only a text prompt. Here is the pipeline of the “reverse process”:

  • Prompt reconstruction with GPT — I asked the model to write a prompt that perfectly described my finished style-frame, keeping the exact Midjourney seed.

  • Midjourney refinement — I iterated on that prompt inside MJ until I arrived at a stable, final version.

  • Photoshop finishing pass — I applied Color Transfer and subtle Adjustment Layers in Adobe Photoshop to polish the frame.
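The finishing pass in the last step nudges the frame's color statistics toward a reference grade. A minimal numpy sketch of that idea, in the spirit of Reinhard-style statistical color transfer: note this is my assumption about what a "Color Transfer" pass amounts to, and it works per RGB channel for brevity, whereas the classic method operates in a decorrelated color space.

```python
import numpy as np

def color_transfer(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift each channel of `source` so its mean/std match `target`,
    the core statistic behind Reinhard-style color transfer."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        scale = t_std / s_std if s_std > 0 else 1.0
        out[..., c] = (src[..., c] - s_mean) * scale + t_mean
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

# Toy usage on random images (the real pass ran on style-frames in Photoshop).
rng = np.random.default_rng(0)
source = rng.integers(50, 200, size=(64, 64, 3)).astype(np.uint8)
target = rng.integers(60, 180, size=(64, 64, 3)).astype(np.uint8)
graded = color_transfer(source, target)
```

After the transfer, the graded frame's per-channel averages sit on the reference grade, which is exactly what a matching pass between regenerated frames and the approved style-frame needs to guarantee.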

Midjourney

Photoshop finishing

Motion tests

Result:
The style can be regenerated end-to-end from text + seed alone, with no image reference required: a clean, studio-friendly IP pipeline.
