[ MODFX ]

pHNK /

Synthetic Narrative Pipeline: Deconstructing the Chrome Monk

// THE COMPONENT PIPELINE

Generative Video is not a 'one-shot' process.

It is a composite of Layers, Performance, and Audio.

Built with Midjourney, Nano Banana, and Suno.

01. COMPOSITING: LAYERS
// USE CASE 01

Component-Based Assembly

Rejecting the 'text-to-video' slot machine, I treated the scene as a composite: generating the 'Stage' (a background plate with custom signage) and the 'Actor' (character poses) separately, which keeps lighting and composition independently controllable per layer.
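The assembly step itself is ordinary alpha compositing. A minimal Pillow sketch of the idea, with programmatic stand-ins where the real pipeline would load Midjourney exports (the filenames and placement are illustrative, not from the actual project):

```python
from PIL import Image, ImageDraw

# Stand-in plates; in the real pipeline these would be generated exports
# (e.g. a hypothetical "stage_plate.png" and "actor_pose_01.png").
stage = Image.new("RGBA", (1280, 720), (20, 24, 40, 255))  # background 'Stage' plate
actor = Image.new("RGBA", (1280, 720), (0, 0, 0, 0))       # transparent 'Actor' layer
ImageDraw.Draw(actor).ellipse((560, 160, 720, 560), fill=(200, 200, 210, 255))

# Alpha-composite the 'Actor' over the 'Stage'. Because the layers are
# separate, either one can be regenerated or relit without touching the other.
frame = Image.alpha_composite(stage, actor)
frame.convert("RGB").save("composite_keyframe.png")
```

The point of the split is exactly this independence: swapping the actor pose is a one-layer change, not a full re-roll of the scene.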

02. ANIMATION: LATENT FLOW
// USE CASE 02

Latent Performance Capture

Using image-to-video interpolation to direct the performance: by feeding specific static keyframes (The Walk, The Look) into the model, I constrained the AI's hallucination to follow strict narrative blocking rather than morphing at random.
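A toy analogy of why keyframes constrain the output. Real image-to-video models interpolate in latent space, not pixel space, but the principle is the same: the start and end frames pin down the trajectory, so the in-betweens can only drift so far. A NumPy sketch (the keyframes are synthetic stand-ins, not the actual renders):

```python
import numpy as np

# Stand-ins for the two anchor keyframes ("The Walk", "The Look").
key_a = np.zeros((72, 128, 3), dtype=np.float32)
key_b = np.full((72, 128, 3), 255.0, dtype=np.float32)

def inbetween(a: np.ndarray, b: np.ndarray, n: int) -> list:
    """Generate n frames moving monotonically from keyframe a to keyframe b."""
    return [a + (b - a) * (i / (n - 1)) for i in range(n)]

frames = inbetween(key_a, key_b, n=12)
# The first and last frames match the keyframes exactly: the 'performance'
# may wander in the middle, but it is anchored at both ends.
```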

03. AUDIO: SYNTHETIC CULTURE
// USE CASE 03

Cultural Research Engine

Authenticity through AI research: I used LLMs to analyze Japanese youth slang and 'Drift' subculture linguistics, generating a culturally grounded prompt for Suno to create a backing track that fit the visual aesthetic.
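The hand-off from research to music prompt is just structured string assembly. A sketch of the pattern, where the research fields and slang terms are illustrative placeholders, not the actual output of the LLM pass:

```python
# Hypothetical research output -> Suno prompt hand-off.
research = {
    "subculture": "Japanese street-drift scene",
    "slang": ["hashiriya", "touge"],  # illustrative terms only
    "mood": "neon-lit, late-night, melancholic",
}

suno_prompt = (
    f"Backing track for a {research['subculture']} video; "
    f"mood: {research['mood']}; "
    f"lyrical texture referencing {', '.join(research['slang'])}."
)
print(suno_prompt)
```

Keeping the research as structured data means the same pass can feed both the music prompt and the visual signage prompts.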

04. POST: THE GLUE
// USE CASE 04

The Unifying Grade

The final cohesion happens in the NLE (Premiere). Heavy color grading and film grain act as the 'visual glue,' merging the disparate AI-generated layers into a single, broadcast-ready frame.
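The actual grade lives in Premiere, but the mechanism is easy to show in code: a shared tone curve plus a single grain field applied across every layer hides the mismatched noise signatures of the separate generations. A NumPy sketch under those assumptions (the curve values and grain strength are arbitrary, not the project's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.uniform(0, 255, size=(72, 128, 3)).astype(np.float32)  # stand-in frame

def unify(frame: np.ndarray) -> np.ndarray:
    """Crude unifying grade: gamma contrast, teal/warm split, shared film grain."""
    f = frame / 255.0
    f = f ** 1.1                                  # slight contrast lift
    f[..., 2] = np.clip(f[..., 2] * 1.08, 0, 1)   # push blues (cool shadows)
    f[..., 0] = np.clip(f[..., 0] * 1.04, 0, 1)   # warm the reds a touch
    grain = rng.normal(0.0, 0.02, size=f.shape)   # one grain field over everything
    return np.clip(f + grain, 0, 1)

graded = unify(frame)
```

Because every composited layer passes through the same curve and receives the same grain, the frame reads as one capture rather than a stack of generations.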

$ phnk --specs
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
PIPELINE TYPE: Composite
WORKFLOW: Hybrid
TECH STACK: MJ+NB+Suno
ROLE: Director
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━