Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/2700
Full metadata record
DC Field | Value | Language
dc.contributor.author | Lo, Anh Duc | -
dc.contributor.author | Ma, Thi Chau | -
dc.date.accessioned | 2023-09-25T08:07:21Z | -
dc.date.available | 2023-09-25T08:07:21Z | -
dc.date.issued | 2023-06 | -
dc.identifier.isbn | 978-604-80-8083-9 | -
dc.identifier.uri | http://elib.vku.udn.vn/handle/123456789/2700 | -
dc.description | Proceedings of the 12th Conference on Information Technology and Its Applications (CITA 2023); pp. 53-64 | vi_VN
dc.description.abstract | Reconstructing 3D hair structures from a single image is a highly promising research direction. However, current methods focus only on real-world images, while user-oriented applications require more freedom in input data. Existing methods for building 3D hair from sketch images use synthetic data for training, which encounters the domain gap issue. We experiment with building a dataset directly from hand-drawn sketches and propose a model trained on it. As a result, without using an intermediate oriented map representation, the model is still able to learn how to reconstruct hair at a satisfactory level. This opens up a new direction for this problem. | vi_VN
dc.language.iso | en | vi_VN
dc.publisher | Vietnam-Korea University of Information and Communication Technology | vi_VN
dc.relation.ispartofseries | CITA; | -
dc.subject | 3D hair | vi_VN
dc.subject | Sketch | vi_VN
dc.subject | Single image | vi_VN
dc.title | Three-Dimensional Hair Structure Reconstruction from a Single Sketch Image without Intermediate Representation | vi_VN
dc.type | Working Paper | vi_VN
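The abstract above describes a model that maps a single hand-drawn sketch directly to a 3D hair structure, without first predicting an intermediate oriented map. As a rough, hypothetical illustration of that kind of direct sketch-to-strand regression (not the authors' implementation), the following minimal PyTorch sketch assumes a 256x256 single-channel sketch input and a fixed parameterization of 1,000 strands with 32 sample points each; all layer sizes, names, and dimensions are assumptions made only for the example.

    # Hypothetical sketch-to-strands regressor (illustrative only; not the paper's code).
    # Assumptions: 256x256 single-channel sketch input, a fixed budget of 1,000 strands
    # with 32 sample points each, and a plain CNN encoder followed by an MLP head.
    import torch
    import torch.nn as nn

    NUM_STRANDS, POINTS_PER_STRAND = 1000, 32  # assumed output parameterization

    class SketchToHair(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: consumes the raw sketch directly (no intermediate orientation map)
            # and compresses it into a global feature vector.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
                nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 32 -> 16
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),                  # -> (B, 256)
            )
            # Decoder: regresses 3D coordinates for every sample point of every strand.
            self.decoder = nn.Sequential(
                nn.Linear(256, 1024), nn.ReLU(),
                nn.Linear(1024, NUM_STRANDS * POINTS_PER_STRAND * 3),
            )

        def forward(self, sketch):  # sketch: (B, 1, 256, 256)
            feat = self.encoder(sketch)
            pts = self.decoder(feat)
            return pts.view(-1, NUM_STRANDS, POINTS_PER_STRAND, 3)

    # Toy forward pass: one hand-drawn sketch in, a set of 3D strand polylines out.
    model = SketchToHair()
    strands = model(torch.zeros(1, 1, 256, 256))
    print(strands.shape)  # torch.Size([1, 1000, 32, 3])

This snippet only shows the shape of the input-to-output mapping; a real system would pair such a regressor with geometric losses on the predicted strands and train it on a sketch dataset like the one the abstract describes.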
Appears in Collections:CITA 2023 (National)

Files in This Item: restricted access (sign in to read).
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.