[ACM MM 2024] ReCorD: Reasoning and Correcting Diffusion for HOI Generation

    ¹National Yang Ming Chiao Tung University, ²National Taiwan University

    Teaser

    ReCorD: a training-free method that improves image generation by integrating Latent Diffusion Models with Visual Language Models. This approach enhances the depiction of human-object interactions (HOIs), yielding higher-fidelity images and surpassing SOTA methods in HOI classification score, FID, and Verb CLIP-Score.

    Abstract

    Diffusion models revolutionize image generation by leveraging natural language to guide the creation of multimedia content. Despite significant advancements in such generative models, challenges persist in depicting detailed human-object interactions (HOIs), especially regarding pose and object placement accuracy. We introduce a training-free method named Reasoning and Correcting Diffusion (ReCorD) to address these challenges. Our model couples Latent Diffusion Models with Visual Language Models to refine the generation process, ensuring precise depictions of HOIs. We propose an interaction-aware reasoning module to improve the interpretation of the interaction, along with an interaction correcting module to delicately refine the output image for more precise HOI generation. Through a meticulous process of pose selection and object positioning, ReCorD achieves superior fidelity in generated images while efficiently reducing computational requirements. We conduct comprehensive experiments on three benchmarks to demonstrate significant progress in solving text-to-image generation tasks, showcasing ReCorD's ability to render complex interactions accurately by outperforming existing methods in HOI classification score, FID, and Verb CLIP-Score.

    Generation Process

    Pipeline

    Overall architecture of ReCorD. Given a text prompt, ReCorD comprises three components during inference with the LDM and VLMs: we first leverage Coarse Candidates Generation \(\mathcal{M}_{g}\) to produce coarse candidates. Then, Interaction-aware Reasoning \(\mathcal{M}_{r}\) determines the optimal pose and layout with respect to the input. Finally, Interaction Correcting \(\mathcal{M}_{c}\) adjusts object placements while preserving the chosen pose, enhancing the preliminary images within one generation cycle. A minimal sketch of this flow is given below.
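    To make the three-stage flow concrete, here is a minimal, runnable Python sketch of one generation cycle. The function names (generate_candidates, reason_interaction, correct_interaction) and the Candidate fields are hypothetical stand-ins for \(\mathcal{M}_{g}\), \(\mathcal{M}_{r}\), and \(\mathcal{M}_{c}\), not the released API; a real implementation would call an LDM sampler and a VLM where the stubs are.

    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    Box = Tuple[float, float, float, float]  # normalized (x0, y0, x1, y1)

    @dataclass
    class Candidate:
        image: str              # placeholder for a decoded LDM sample
        pose: str               # coarse description of the human pose
        layout: Dict[str, Box]  # object name -> bounding box

    def generate_candidates(prompt: str, n: int = 4) -> List[Candidate]:
        # M_g: sample n coarse candidates from the LDM (stubbed here).
        return [Candidate(image=f"sample_{i}.png", pose=f"pose_{i}",
                          layout={"sports ball": (0.1 * i, 0.4, 0.1 * i + 0.2, 0.6)})
                for i in range(n)]

    def reason_interaction(prompt: str, candidates: List[Candidate]) -> Candidate:
        # M_r: a VLM would judge which pose/layout best matches the interaction;
        # the score below is a trivial stand-in for that judgement.
        def vlm_score(c: Candidate) -> float:
            x0 = c.layout["sports ball"][0]
            return -abs(0.4 - x0)  # toy heuristic: prefer the object near the person
        return max(candidates, key=vlm_score)

    def correct_interaction(prompt: str, chosen: Candidate) -> Candidate:
        # M_c: adjust the object placement while keeping the selected pose frozen.
        x0, y0, x1, y1 = chosen.layout["sports ball"]
        chosen.layout["sports ball"] = (x0, y0 - 0.1, x1, y1 - 0.1)  # e.g. lift toward the hands
        return chosen

    prompt = "a young man is signing a sports ball"
    final = correct_interaction(prompt, reason_interaction(prompt, generate_candidates(prompt)))
    print(final.pose, final.layout)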

    Qualitative Results

    HICO-DET & V-COCO

    Visual comparison with existing baselines on HICO-DET (a-c) and V-COCO (d-f) using different text prompts, where ReCorD attains better delineation of interactions and renders images matching the text instructions. (a) a young man is signing a sports ball. (b) a woman is carrying a pizza. (c) a boy is chasing a bird. (d) a child is cutting a cake. (e) a toddler is pointing at a laptop. (f) a woman is holding a fork. The bounding boxes shown on the results of L2I models are additional inputs those models require for HOI generation.

    T2I-CompBench

    Visual comparison with existing baselines on T2I-CompBench using various text prompts, where ReCorD achieves improved delineation of interactions and produces images that closely match the text instructions.

    Quantitative Results

    | Method | HICO-DET: \(\mathcal{S}_{\mathrm{CLIP}}\uparrow\) | \(\mathcal{S}_{\mathrm{CLIP}}^{verb}\uparrow\) | PickScore \(\uparrow\) | FID \(\downarrow\) | \(\mathrm{HOI_{Full}}\uparrow\) | \(\mathrm{HOI_{Rare}}\uparrow\) | V-COCO: \(\mathcal{S}_{\mathrm{CLIP}}\uparrow\) | \(\mathcal{S}_{\mathrm{CLIP}}^{verb}\uparrow\) | PickScore \(\uparrow\) | FID \(\downarrow\) | HOI \(\uparrow\) |
    |---|---|---|---|---|---|---|---|---|---|---|---|
    | SD (CVPR'22) | 31.74 | 21.82 | 21.50 | 51.31 | 18.78 | 10.02 | 31.10 | 21.26 | 21.26 | 77.29 | 15.85 |
    | A&E (ACM TOG'23) | 31.63 | 21.72 | 21.33 | 46.41 | 16.57 | 8.62 | 31.21 | 21.11 | 21.11 | 70.74 | 14.52 |
    | LayoutLLM-T2I (ACM MM'23) | 31.63 | 22.02 | 21.01 | 38.94 | 16.98 | 8.06 | 31.65 | 21.62 | 20.88 | 59.35 | 16.64 |
    | BoxDiff (ICCV'23) | 31.42 | 21.69 | 21.22 | 45.88 | 16.33 | 8.67 | 31.06 | 21.27 | 20.96 | 68.67 | 12.34 |
    | InteractDiffusion (CVPR'24) | 28.72 | 21.34 | 20.40 | 29.74 | 21.57 | 10.25 | 28.34 | 20.76 | 20.16 | 49.74 | 15.78 |
    | MultiDiffusion (ICML'23) | 31.64 | 21.81 | 21.67 | 51.51 | 22.46 | 11.15 | 32.53 | 21.31 | 21.81 | 83.27 | 17.96 |
    | SDXL (ICLR'24) | 32.06 | 22.29 | 22.68 | 40.32 | 25.85 | 14.24 | 31.76 | 21.40 | 22.54 | 75.40 | 19.02 |
    | LMD (TMLR'24) | 28.67 | 20.11 | 20.62 | 51.37 | 9.10 | 2.65 | 29.31 | 20.29 | 20.56 | 75.68 | 10.26 |
    | ReCorD (ours) | 31.92 | 22.26 | 21.49 | 37.03 | 22.86 | 12.72 | 31.60 | 21.55 | 21.31 | 58.20 | 20.00 |
    | ReCorD\(^\dagger\) (ours) | 32.40 | 22.65 | 22.54 | 36.72 | 26.33 | 15.39 | 31.94 | 21.84 | 22.22 | 60.74 | 22.48 |

    Comparison between ReCorD and existing baselines in terms of generated image quality scores (\(\mathcal{S}_{\mathrm{CLIP}}\), \(\mathcal{S}_{\mathrm{CLIP}}^{verb}\), PickScore, FID) along with the HOI classification score on HICO-DET and V-COCO; the first six metric columns report HICO-DET, the last five V-COCO. ReCorD\(^\dagger\) denotes using SDXL as the backbone.
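    As a reference for how such image-text similarity scores can be computed, below is a minimal sketch using Hugging Face's CLIP implementation. Scoring the image against the verb phrase alone is our reading of \(\mathcal{S}_{\mathrm{CLIP}}^{verb}\), not necessarily the paper's exact evaluation protocol; the checkpoint choice and the conventional 100x scaling are likewise assumptions.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Standard CLIP backbone; the paper's exact checkpoint is an assumption here.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    @torch.no_grad()
    def clip_score(image: Image.Image, text: str) -> float:
        # Cosine similarity between normalized image and text embeddings, scaled by 100.
        inputs = processor(text=[text], images=image, return_tensors="pt", padding=True)
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        return 100.0 * (img @ txt.T).item()

    image = Image.open("generated.png")                       # a generated sample
    s_clip = clip_score(image, "a young man is signing a sports ball")
    s_clip_verb = clip_score(image, "signing a sports ball")  # verb phrase only
    print(f"S_CLIP = {s_clip:.2f}, S_CLIP^verb = {s_clip_verb:.2f}")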

    | Method | \(\mathcal{S}_{\mathrm{CLIP}}\uparrow\) | \(\mathcal{S}_{\mathrm{CLIP}}^{verb}\uparrow\) | PickScore \(\uparrow\) |
    |---|---|---|---|
    | SD (CVPR'22) | 30.03 | 21.39 | 20.96 |
    | A&E (ACM TOG'23) | 29.59 | 21.65 | 20.33 |
    | LayoutLLM-T2I (ACM MM'23) | 30.35 | 22.13 | 20.36 |
    | MultiDiffusion (ICML'23) | 30.59 | 21.74 | 21.14 |
    | SDXL (ICLR'24) | 30.44 | 21.86 | 21.82 |
    | LMD (TMLR'24) | 27.27 | 20.63 | 19.94 |
    | ReCorD (ours) | 30.14 | 21.94 | 20.83 |
    | ReCorD\(^\dagger\) (ours) | 30.71 | 22.38 | 21.64 |

    Comparison with SOTA methods on T2I-CompBench; ReCorD\(^\dagger\) again denotes using SDXL as the backbone.

    BibTeX

    @inproceedings{jianglin2024record,
      title={ReCorD: Reasoning and Correcting Diffusion for HOI Generation},
      author={Jiang-Lin, Jian-Yu and Huang, Kang-Yang and Lo, Ling and Huang, Yi-Ning and Lin, Terence and Wu, Jhih-Ciang and Shuai, Hong-Han and Cheng, Wen-Huang},
      booktitle={Proceedings of the 32nd ACM International Conference on Multimedia},
      year={2024}
    }