Paper ID: 2176
(Local Web) Supplementary Material:

Personalized Cinemagraphs using
Semantic Understanding and Collaborative Learning



Teaser of the proposed method

Input

Our candidate cinemagraph 1

Our candidate cinemagraph 2


Contents

A. Comparison of our semantic-based cinemagraph with other methods

B. Comparison of our method without and with user editing

C. Additional results for our method

* Supplementary PDF (Implementation Details, Additional Results, etc.)


Notes

- Required video codecs: MPEG-4 (mp42) and H.264/AVC compatible codecs.
- This page consists only of local web pages; no internet access occurs while browsing it.
- Please click on individual videos to see original-size versions.
- Each candidate cinemagraph is generated by our method from an input video clip.
- To keep within the 100 MB supplementary file size limit, we resized all results to a maximum width of 960 pixels and reduced the input videos to less than half of the original sizes used in this work.
- In Contents A and B, when multiple candidate cinemagraphs from our method are displayed, they are ordered by decreasing predicted rating from left to right. For this, after clustering the user representations (see the supplementary PDF), we selected a representative user, namely the user closest to the centroid of the largest cluster.
- Cinemagraphs with a red border involve a marginal amount of user editing.
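The representative-user selection and candidate ordering described in the notes above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `representative_user` and `order_candidates` are hypothetical names, and the plain k-means here stands in for the clustering of user representations detailed in the supplementary PDF.

```python
import numpy as np

def representative_user(user_vecs, k=3, iters=50, seed=0):
    """Cluster user representation vectors with a plain k-means and
    return the index of the user closest to the centroid of the
    largest cluster (illustrative stand-in for the paper's procedure)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(user_vecs, dtype=float)
    # Initialize centroids from k distinct user vectors.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each user to its nearest centroid.
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each non-empty centroid to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    # Pick the largest cluster and its member nearest to the centroid.
    largest = np.bincount(labels, minlength=k).argmax()
    members = np.flatnonzero(labels == largest)
    dists = np.linalg.norm(X[members] - centroids[largest], axis=1)
    return members[dists.argmin()]

def order_candidates(candidates, ratings):
    """Sort candidate cinemagraphs by decreasing predicted rating
    (left-to-right display order in Contents A and B)."""
    order = np.argsort(ratings)[::-1]
    return [candidates[i] for i in order]
```

The representative user's predicted ratings are then used to order the displayed candidates left to right.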


A. Comparison of our semantic-based cinemagraph with other methods

- Please click on individual videos to see original-size versions.

 

□ Comparison with Liao et al. [1] (Automatic method)

The method of Liao et al. is an automatic video loop generation method.

Input

Liao et al. [1]

Ours

 

□ Comparison with Tompkin et al. [2] (Manual method)

The method of Tompkin et al. requires user interaction to generate the result.

Input

Tompkin et al. [2]

Ours

 

□ Comparison with Joshi et al. [3] (Manual method)

The method of Joshi et al. requires user interaction to generate the result.

Input

Joshi et al. [3]

Ours

 

□ Comparison with Yeh [4] (Automatic method)

The method of Yeh is an automatic cinemagraph generation method whose results do not loop.

Input

Yeh [4]

Our candidate cinemagraph 1

Our candidate cinemagraph 2

 


B. Comparison of our method without and with user editing

- The left column shows results without user editing, and the right column shows results with user editing.
- The editing to correct a cinemagraph took less than 19 seconds for the first row, and less than a minute for the second row, which required slightly more complex editing.
- Please click on individual videos to see original-size versions.

 

Without user editing

With user editing

 

 


C. Additional results for our method

- Additional results obtained using our approach.
- For edited results (red border), editing took less than 30 seconds in the worst case.
- Please click on individual videos to see original-size versions.

 

 

Our candidate cinemagraph 1

Our candidate cinemagraph 2

Our candidate cinemagraph 3

 


 

References

[1] Liao et al., “Automated Video Looping with Progressive Dynamism.” ACM Transactions on Graphics (Proc. SIGGRAPH), 2013

[2] Tompkin et al., “Towards Moment Imagery: Automatic Cinemagraphs.” Conference for Visual Media Production (CVMP), 2011

[3] Joshi et al., “Cliplets: Juxtaposing Still and Dynamic Imagery.” ACM Symposium on User Interface Software and Technology (UIST), 2012

[4] Yeh, “Selecting Interesting Image Regions to Automatically Create Cinemagraphs.” IEEE MultiMedia, 2016