FIELD: computing technology.
SUBSTANCE: the invention relates to modelling realistic clothing worn by people and to realistic 3D modelling of people. In the method for training an overlay network for modelling clothing on a person, wherein the clothing adapts to the pose and body shape of any person, a set of frames of people is provided, wherein each person is dressed in clothing and the frames constitute a video sequence in which each person performs a series of motions; a Skinned Multi-Person Linear (SMPL) mesh is calculated for each frame for the pose and body shape of the person in the frame; for each frame, a clothing mesh is calculated for the pose and body shape of the person in the frame; a source point cloud consisting of the vertices of said SMPL meshes is created for each frame; a randomly initialised d-dimensional code vector is assigned to each person to encode that person's clothing style; the source point clouds and clothing code vectors are supplied to the overlay network, namely: the source point clouds are supplied to the input of the cloud converter neural network of the overlay network, and the clothing code vectors are supplied to the input of the MLP (multilayer perceptron) encoder neural network; the clothing code vector is processed by the MLP encoder neural network, and its output is passed to the cloud converter neural network, which deforms the input point cloud taking into account the output of the MLP encoder neural network and outputs the predicted clothing point cloud for each frame; after processing all the frames from the set of frames of people, a pre-trained overlay network is obtained, namely: the weights of the trained MLP encoder neural network, the weights of the trained cloud converter neural network, and the clothing code vectors encoding the styles of all people; by means of the pre-trained overlay network, a clothing style corresponding to one of the code vectors and one of the point clouds is overlaid on any body shape and any pose selected by the user.
EFFECT: makes it possible to generate images of a person wearing clothing transferred from another person.
6 cl, 8 dwg, 2 tbl
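For illustration, a minimal training and inference sketch of the overlay network described in the SUBSTANCE above is given below, written in PyTorch. The layer sizes, the use of a simple per-point MLP as a stand-in for the cloud converter, the symmetric Chamfer loss, and the joint (auto-decoder style) optimisation of the per-person code vectors are assumptions introduced for this sketch and are not specified by the patent; only the overall data flow (SMPL vertices plus a clothing code vector in, predicted clothing point cloud out) follows the description.

```python
# Minimal sketch of the overlay network: an MLP encoder for the clothing code
# vector and a "cloud converter" that deforms the SMPL point cloud conditioned
# on the encoded style. Architecture details and the loss are assumptions.
import torch
import torch.nn as nn


class StyleMLPEncoder(nn.Module):
    """MLP encoder: maps a d-dimensional clothing code vector to a conditioning vector."""
    def __init__(self, code_dim=8, hidden=128, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim), nn.ReLU(),
        )

    def forward(self, code):                      # (B, code_dim)
        return self.net(code)                     # (B, out_dim)


class CloudConverter(nn.Module):
    """Deforms the source (SMPL) point cloud, conditioned on the encoded clothing style.
    Here: a shared per-point MLP predicting 3D offsets (an illustrative stand-in
    for the patent's cloud converter network)."""
    def __init__(self, cond_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points, cond):              # points: (B, N, 3), cond: (B, cond_dim)
        cond = cond[:, None, :].expand(-1, points.shape[1], -1)
        offsets = self.net(torch.cat([points, cond], dim=-1))
        return points + offsets                   # predicted clothing point cloud (B, N, 3)


def chamfer_loss(pred, target):
    """Symmetric Chamfer distance between point clouds (B, N, 3) and (B, M, 3)."""
    d = torch.cdist(pred, target)                 # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


# Training sketch: the per-person clothing codes are free parameters optimised
# jointly with the two networks, as suggested by the randomly initialised code
# vectors in the description.
num_people, code_dim = 4, 8
codes = nn.Parameter(torch.randn(num_people, code_dim) * 0.01)
encoder, converter = StyleMLPEncoder(code_dim), CloudConverter()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(converter.parameters()) + [codes], lr=1e-3
)

# One synthetic "frame": SMPL vertices and the corresponding clothing-mesh vertices.
person_id = 0
smpl_vertices = torch.randn(1, 6890, 3)           # source point cloud for the frame
clothing_vertices = torch.randn(1, 5000, 3)       # registered clothing mesh for the frame

pred_cloud = converter(smpl_vertices, encoder(codes[person_id:person_id + 1]))
loss = chamfer_loss(pred_cloud, clothing_vertices)
opt.zero_grad()
loss.backward()
opt.step()

# Inference / transfer: drape person 0's learned clothing style onto any other
# body shape and pose by pairing that code vector with a new SMPL point cloud.
with torch.no_grad():
    new_body = torch.randn(1, 6890, 3)            # SMPL vertices for a user-chosen pose/shape
    transferred = converter(new_body, encoder(codes[0:1]))
```

Under these assumptions, clothing transfer reduces to pairing a learned code vector from one person with the SMPL point cloud of another body shape and pose, as in the last lines of the sketch.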
Title | Year | Number
---|---|---
TEXTURED NEURAL AVATARS | 2019 | RU2713695C1
NEURAL NETWORK TRANSFER OF THE FACIAL EXPRESSION AND POSITION OF THE HEAD USING HIDDEN POSITION DESCRIPTORS | 2020 | RU2755396C1
METHOD AND SYSTEM FOR REMOTE CLOTHING SELECTION | 2020 | RU2805003C2
NEURAL DOT GRAPHIC | 2019 | RU2729166C1
NEURAL-NETWORK RENDERING OF THREE-DIMENSIONAL HUMAN AVATARS | 2021 | RU2775825C1
METHOD FOR 3D RECONSTRUCTION OF A HUMAN HEAD TO OBTAIN A RENDER IMAGE OF A PERSON | 2022 | RU2786362C1
VISUALIZATION OF RECONSTRUCTION OF 3D SCENE USING SEMANTIC REGULARIZATION OF NORMALS TSDF WHEN TRAINING NEURAL NETWORK | 2023 | RU2825722C1
METHOD OF CREATING FULL-LENGTH ANIMATED AVATAR OF PERSON FROM ONE IMAGE OF PERSON, COMPUTING DEVICE AND MACHINE-READABLE MEDIUM FOR IMPLEMENTATION THEREOF | 2023 | RU2813485C1
METHOD FOR VISUALIZING A 3D PORTRAIT OF A PERSON WITH ALTERED LIGHTING AND A COMPUTING DEVICE FOR IT | 2021 | RU2757563C1
RAPID TWO-LAYER NEURAL NETWORK SYNTHESIS OF REALISTIC IMAGES OF A NEURAL AVATAR BASED ON A SINGLE IMAGE | 2020 | RU2764144C1
Dates
Filed: 2021-07-30
Published: 2022-07-27