Proceedings of the 12th Scientific Conference - Information Technology and Its Applications in Various Fields (CITA 2023)

Tran Quy Nam and Phi Cong Huy


  To sum up, we implement a traditional Convolutional Neural Network with 3 convolutional layers for feature extraction. Classification is first tested with a Softmax output trained with categorical cross-entropy. We then replace the Softmax head with five other classifiers: XGBoost, SVC, Decision Tree, AdaBoost, and Multi-layer Perceptron (MLP). The test results of the CNN-Softmax, CNN-XGBoost, CNN-SVC, CNN-Decision Tree, CNN-AdaBoost, and CNN-MLP models are shown in Table 1 below.


                       Table 1. Accuracy of the models

                                              Accuracy (%)
         Model
                                     Train        Valid        Test

         CNN-Softmax                 71.00        66.00        69.00
         CNN-XGBoost                 99.98        73.68        72.75
         CNN-SVC                     75.78        71.18        71.74
         CNN-Decision Tree           99.98        66.47        65.94
         CNN-AdaBoost                55.58        51.91        52.46
         CNN-MLP                     87.52        71.03        70.72

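As a concrete illustration, the simple CNN described above can be sketched as follows. This is a minimal sketch: the input size (128x128 RGB), filter counts, and number of weather classes (4) are assumptions for illustration, not values taken from the paper.

```python
# Minimal sketch of a 3-convolutional-layer CNN with a Softmax head
# trained with categorical cross-entropy (Keras). Input shape, filter
# counts, and NUM_CLASSES are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 4  # assumed number of weather categories

def build_cnn(input_shape=(128, 128, 3), num_classes=NUM_CLASSES):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),   # conv layer 1
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),   # conv layer 2
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),  # conv layer 3
        layers.MaxPooling2D(),
        layers.Flatten(),                          # flattened feature vector
        layers.Dense(num_classes, activation="softmax"),  # Softmax head
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
# Forward pass on dummy data: each row of `probs` sums to 1 (softmax).
probs = model.predict(np.random.rand(2, 128, 128, 3), verbose=0)
```

Note that no batch normalization layers are used, consistent with the design choice stated below of keeping the feature extractor as fast and simple as possible.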

  The accuracy may appear quite low (at best 72.75%). However, the research question of this study is not absolute accuracy; it is to determine which kind of hybrid deep learning model performs better than the other hybrid models on the weather image classification problem. We therefore implement a very simple CNN with only 3 convolutional layers for feature extraction, which lowers accuracy but runs faster. Our hypothesis concerns the relative performance of the models in classification, not their absolute accuracy. In addition, the dataset was somewhat imbalanced, which also lowers accuracy. If higher accuracy were the goal, we would instead apply transfer learning with a deep model pre-trained on millions of images from the ImageNet database. Because the aim of this study is to investigate which hybrid model performs better on the weather image classification problem, the CNN used for feature extraction is kept simple: only 3 convolutional layers and no batch normalization layers, in order to speed up the experiments.
  Table 1 above shows that CNN-XGBoost performs best among the compared models on the weather image classification problem. Its accuracy on the test set is 72.75%, higher than that of the other models evaluated on the same test set of weather images.
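The hybrid scheme compared in Table 1 replaces the Softmax head with a conventional classifier trained on the CNN's extracted features. A minimal sketch of that second stage is below, using the scikit-learn classifiers named above; random vectors stand in for the CNN feature vectors, and all shapes and class counts are illustrative assumptions.

```python
# Sketch of the hybrid second stage: fit classical classifiers on
# (stand-in) CNN feature vectors. Feature dimension, sample counts,
# and the 4 classes are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 128))   # stand-in for CNN feature vectors
y_train = rng.integers(0, 4, size=200)  # 4 assumed weather classes
X_test = rng.normal(size=(50, 128))

classifiers = {
    "SVC": SVC(),
    "DecisionTree": DecisionTreeClassifier(),
    "AdaBoost": AdaBoostClassifier(),
    "MLP": MLPClassifier(max_iter=300),
}
preds = {}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)           # train head on frozen CNN features
    preds[name] = clf.predict(X_test)   # per-model predictions on test set
```

The XGBoost variant drops into the same loop via `xgboost.XGBClassifier()`; it is omitted here only to keep the sketch to scikit-learn.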



                     5      Conclusion


In this study, five hybrid models and a simple CNN model, six models in total, were employed on the same weather image classification problem. We




                     ISBN: 978-604-80-8083-9                                                  CITA 2023