Please use this identifier to cite or link to this item:
https://elib.vku.udn.vn/handle/123456789/3993
Title: | Multi-task Learning Model for Detecting and Filtering Internet Violent Images for Children |
Other Titles: | Mô hình đa nhiệm nhận diện và lọc hình ảnh bạo lực cho trẻ em |
Authors: | Le, Kim Hoang Trung; Nguyen, Van Thanh Vinh; Phan, Le Viet Hung; Nguyen, Huu Nhat Minh |
Keywords: | Multi-task learning; Violence detection |
Issue Date: | Jun-2024 |
Publisher: | Journal of Information & Communications |
Abstract: | The Internet has become an essential means of daily information access, but exposing children to inappropriate content can impair their early development. Existing content filtering methods have limitations in accurately and efficiently detecting the diverse range of inappropriate internet content. In this paper, we propose a multi-task learning model for detecting and filtering violent images to provide safer online experiences. The multi-task model is built on a pre-trained lightweight base model, MobileNetV2, so that it can be integrated into web browser extensions. Training a single task to detect violent images can raise false alarms on landscape or object images that contain no humans, so we develop two jointly learned tasks: detecting humans and detecting violent images simultaneously. Our experiments demonstrate that the proposed multi-task approach with a binary rule achieves 98.5% accuracy, outperforming the single-task violence detection model by a margin. The multi-task model is then integrated into a web extension to detect and filter out violent images, protecting children from harmful content. |
Description: | Research, Development and Application on Information and Communication Technology; Vol 2024, No 1; pp: 18-24 |
URI: | https://ictmag.vn/cntt-tt/article/view/1261/559 ; https://elib.vku.udn.vn/handle/123456789/3993 |
ISSN: | 1859-3526 |
Appears in Collections: | NĂM 2024 |
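The abstract above describes a shared lightweight backbone with two jointly trained binary heads (human present, violence present) combined at inference time by a binary rule. The sketch below is a minimal illustration of that setup, not the authors' released code: the framework choice (TensorFlow/Keras), head sizes, layer names, thresholds, and the AND-style interpretation of the binary rule are assumptions made for illustration.

```python
# Minimal sketch of a two-head multi-task classifier on a MobileNetV2 backbone.
# Assumptions (not from the paper): TensorFlow/Keras, 224x224 inputs, sigmoid heads,
# equal loss weighting, and an AND-style binary rule at inference time.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2

def build_multitask_model(input_shape=(224, 224, 3)):
    # Pre-trained lightweight backbone, suitable for browser-extension deployment.
    backbone = MobileNetV2(input_shape=input_shape, include_top=False,
                           weights="imagenet", pooling="avg")
    x = backbone.output
    # Task 1: does the image contain a human?
    human_out = layers.Dense(1, activation="sigmoid", name="human")(x)
    # Task 2: does the image depict violence?
    violence_out = layers.Dense(1, activation="sigmoid", name="violence")(x)
    model = Model(inputs=backbone.input, outputs=[human_out, violence_out])
    model.compile(optimizer="adam",
                  loss={"human": "binary_crossentropy",
                        "violence": "binary_crossentropy"},
                  metrics={"human": "accuracy", "violence": "accuracy"})
    return model

def is_violent(model, image_batch, threshold=0.5):
    # Binary rule (assumed AND): flag an image only when a human is present
    # AND the scene is predicted violent, which suppresses false alarms on
    # landscape/object images containing no people.
    human_prob, violence_prob = model.predict(image_batch, verbose=0)
    return (human_prob > threshold) & (violence_prob > threshold)
```

In this sketch the two heads share all backbone features, so the human-detection task regularizes the violence-detection task; the filtering decision itself is deferred to the post-hoc rule rather than a single classifier output.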