FIELD: image processing.
SUBSTANCE: the proposed method of interpolating video frames comprises the steps of: obtaining at least two key frames of a video; detecting areas of repeating structures on one of the key frames; estimating motion between said key frame and the interpolated frame by supplying the key frames and the detected areas of repeating structures to a trained motion estimation neural network, wherein a value of the error function is calculated when training the neural network; and obtaining the interpolated frame by performing motion compensation using said key frame and the obtained motion vectors.
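The abstract does not specify how areas of repeating structures are detected; a common generic approach is autocorrelation analysis of local patches, since a periodic texture produces a strong secondary autocorrelation peak away from zero lag. The sketch below in Python/NumPy is an illustration of that general idea only, not the patented detector; the function name `repeats_score` and the use of a circular (FFT-based) autocorrelation are assumptions.

```python
import numpy as np

def repeats_score(patch):
    """Score how strongly a 2-D patch contains a repeating structure.

    Computes the circular autocorrelation via the FFT. For a periodic
    texture, the autocorrelation at a lag equal to the period approaches
    the zero-lag energy, so the returned score approaches 1.0; for
    unstructured noise it stays close to 0.
    """
    p = patch - patch.mean()
    f = np.fft.rfft2(p)
    ac = np.fft.irfft2(f * np.conj(f), s=patch.shape)  # circular autocorrelation
    ac /= ac.flat[0] + 1e-12   # normalize by the zero-lag energy
    ac.flat[0] = 0.0           # suppress the trivial zero-lag peak
    return float(ac.max())
```

A patch tiled from a small pattern scores near 1.0, while an i.i.d. noise patch of the same size scores much lower, so thresholding this score gives a simple repeating-structure mask.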
EFFECT: improved quality of interpolated frames due to more accurate motion estimation in areas of repeating structures, which are taken into account during frame interpolation.
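The final step of the method, motion compensation of a key frame with the estimated per-pixel motion vectors, can be sketched as a backward warp. This is a minimal nearest-neighbor illustration of motion compensation in general, not the patented implementation; the name `motion_compensate` and the (dy, dx) vector layout are assumptions.

```python
import numpy as np

def motion_compensate(key_frame, motion_vectors):
    """Backward-warp a key frame with a dense motion field.

    key_frame:      (H, W) grayscale image.
    motion_vectors: (H, W, 2) per-pixel (dy, dx) displacements pointing
                    from each pixel of the interpolated frame back into
                    the key frame.
    Returns the (H, W) interpolated frame (nearest-neighbor sampling,
    border pixels clamped to the image edge).
    """
    h, w = key_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + motion_vectors[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + motion_vectors[..., 1]).astype(int), 0, w - 1)
    return key_frame[src_y, src_x]
```

With a zero motion field the key frame is returned unchanged; with a uniform field the whole frame is shifted, which is the degenerate case of global translation.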
29 cl, 14 dwg, 1 tbl
Title | Year | Number
---|---|---
FRC OCCLUSION PROCESSING WITH DEEP LEARNING | 2020 | RU2747965C1
FRAME RATE CONVERSION METHOD SUPPORTING FRAME INTERPOLATION REPLACEMENT WITH MOTION COMPENSATION BY LINEAR FRAME COMBINATION AND DEVICE IMPLEMENTING IT | 2022 | RU2786784C1
SYSTEM FOR GENERATING VIDEO WITH RECONSTRUCTED PHOTOREALISTIC 3D MODEL OF HUMAN, METHODS FOR SETTING UP AND OPERATING THIS SYSTEM | 2024 | RU2834188C1
METHOD AND SYSTEM OF SUPERRESOLUTION BY COMBINED SPARSE APPROXIMATION | 2016 | RU2661537C2
TEXTURED NEURAL AVATARS | 2019 | RU2713695C1
VIDEO QUALITY ASSESSMENT TECHNOLOGY | 2010 | RU2540846C2
METHOD OF CREATING FULL-LENGTH ANIMATED AVATAR OF PERSON FROM ONE IMAGE OF PERSON, COMPUTING DEVICE AND MACHINE-READABLE MEDIUM FOR IMPLEMENTATION THEREOF | 2023 | RU2813485C1
VISUALIZATION OF RECONSTRUCTION OF 3D SCENE USING SEMANTIC REGULARIZATION OF NORMALS TSDF WHEN TRAINING NEURAL NETWORK | 2023 | RU2825722C1
METHOD FOR VISUALIZING A 3D PORTRAIT OF A PERSON WITH ALTERED LIGHTING AND A COMPUTING DEVICE FOR IT | 2021 | RU2757563C1
METHOD FOR SKIPPING REFINEMENT BASED ON SIMILARITY OF INSERTION, WHEN REFINING MOTION VECTOR ON DECODER SIDE BASED ON BILINEAR INTERPOLATION | 2019 | RU2786383C2
Dates
2025-03-11—Published
2024-05-20—Filed