Kim Mi-Hye Laboratory
School of Computer Software, Daegu Catholic University
Professor Mi-Hye Kim


The Kim Mi-Hye Laboratory focuses on artificial intelligence systems and applications, and on knowledge management and information retrieval. Its work spans domain-specific document management, semantic search, web-based knowledge services, and problem-solving intelligent systems, together with convergence research that spreads practical digital competency through computational thinking and software education.

Key Research Areas
Artificial Intelligence Systems and Applications
Selected Publications (3)
1. Article | Gold open access · Citations: 0 · 2025
Joint Encryption and Optimization for 6G MEC-Enabled IoT Networks
Manzoor Ahmed, Wali Ullah Khan, Fatma S. Alrayes, Yahia Said, Ali M. Al-Sharafi, Mi-Hye Kim, Khongorzul Dashdondov, Inam Ullah
IF 3.6
IEEE Access
With the advent of advancements in future sixth-generation (6G) communication systems, Internet of Things (IoT) devices, characterized by their limited computational and communication capacities, have become integral in our lives. These devices are deployed extensively to gather vast amounts of data in real-time applications. However, their restricted battery life and computational resources present significant challenges in meeting the requirements of advanced communication systems. Mobile Edge Computing (MEC) has emerged as a promising solution to these challenges within the IoT realm in recent years. Despite its potential, securing MEC infrastructure in the context of IoT remains an open task. This study explores the operational dynamics of a secured IoT-enabled MEC infrastructure, focusing on providing real-time, on-demand, secure computational resources to low-powered IoT devices. It outlines a joint optimization problem to maximize computational throughput, minimize device energy consumption, reduce computational latency, and mitigate security overhead. An optimization algorithm is introduced to address these challenges by jointly allocating resources, thereby optimizing throughput, conserving energy, and meeting latency benchmarks through dynamic system adaptation. The effectiveness of the proposed model and algorithm is demonstrated through comparisons with relevant benchmark schemes, highlighting its efficiency in various scenarios. This work showcases the potential of advancements in encryption to deliver scalable security solutions with reduced resource consumption as the number of devices increases.
https://doi.org/10.1109/access.2025.3565415
Keywords: Encryption, Internet of Things, Computer network, Computer security, Computer science, Engineering
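The joint optimization the abstract describes (trading off throughput, device energy, latency, and encryption overhead) can be sketched as a toy weighted-sum offloading decision. This is an illustrative sketch, not the authors' algorithm; every parameter below (CPU frequencies, channel rate, energy coefficient, encryption cost per bit, the weights) is an assumed placeholder.

```python
# Toy sketch of a joint offloading decision for MEC-enabled IoT devices:
# per device, pick local execution vs. encrypted offloading to the edge so
# that a weighted sum of latency, energy, and security (encryption) overhead
# is minimized. All numeric parameters are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles the task requires
    data_bits: float   # bits to upload if offloaded

def local_cost(t: Task, f_local=1e9, kappa=1e-27):
    """(latency, energy, security-overhead) for on-device execution."""
    latency = t.cycles / f_local
    energy = kappa * f_local**2 * t.cycles   # standard kappa*f^2 energy model
    return latency, energy, 0.0              # nothing transmitted, no encryption

def offload_cost(t: Task, rate_bps=5e6, f_edge=10e9,
                 enc_cycles_per_bit=2.0, tx_power=0.1):
    """(latency, energy, security-overhead) for encrypted edge offloading."""
    enc_time = enc_cycles_per_bit * t.data_bits / 1e9  # encrypt on device
    tx_time = t.data_bits / rate_bps                   # upload over the channel
    latency = enc_time + tx_time + t.cycles / f_edge   # edge computes the task
    energy = tx_power * tx_time
    return latency, energy, enc_time

def decide(tasks, w_lat=1.0, w_en=1.0, w_sec=0.5):
    """Greedy per-device rule: choose the mode with lower weighted cost."""
    plan = []
    for t in tasks:
        c_loc = sum(w * v for w, v in zip((w_lat, w_en, w_sec), local_cost(t)))
        c_off = sum(w * v for w, v in zip((w_lat, w_en, w_sec), offload_cost(t)))
        plan.append("offload" if c_off < c_loc else "local")
    return plan

tasks = [Task(cycles=5e8, data_bits=2e6), Task(cycles=1e7, data_bits=8e6)]
print(decide(tasks))  # prints "['offload', 'local']"
```

The compute-heavy task is worth encrypting and offloading, while the small task with a large payload stays local; the paper's algorithm solves this trade-off jointly across devices rather than greedily per device.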
2. Article | Gold open access · Citations: 0 · 2025
Time Series Data Visualization Method Using DTW Distance 2D Vector Space Visualization
Khongorzul Dashdondov, Yong-Ki Kim, Mi-Hye Kim
IF 3.6
IEEE Access
Speech recognition and time-series data processing are crucial for applications such as human–computer interaction, assistive technologies, and biometric authentication. However, traditional methods often struggle with noisy data, speaker variability, and high-dimensional feature spaces, which limit their accuracy and interpretability. This study proposes a novel framework that integrates Dynamic Time Warping (DTW) with Multidimensional Scaling (MDS) to improve the visualization and analysis of speech time-series data. The framework consists of four stages: data preparation, preprocessing, DTW distance calculation, and two-dimensional (2D) vector space mapping. Lip regions were extracted from video frames and represented using raw grayscale images, lip-shaped approximations, and hybrid features. DTW is applied to measure temporal similarity, followed by MDS to project the data into a lower-dimensional space for clearer feature distribution and more efficient Cluster Validity Index (CVI) computation. The experimental results show that the proposed approach enhances recognition performance. Among the features tested, V_Δgray achieved the highest speaker-dependent recognition rate of 94.96% (±0.0364 standard deviation), whereas V_shape yielded the best speaker-independent recognition rate of 50.91% (±0.2596 standard deviation). Additionally, syllable- and word-level analyses further confirmed the robustness of V_shape. In conclusion, the DTW–MDS framework improves class separability and interpretability and offers a reliable and efficient method for time-series speech analyses. These findings have significant implications for mobile and wearable speech recognition systems.
https://doi.org/10.1109/access.2025.3631317
Keywords: Dynamic time warping, Pattern recognition, Robustness, Visualization, Feature vector, Interpretability, Feature extraction, Biometrics, Word error rate, Data visualization
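The DTW stage of the framework above can be sketched in a few lines. This is an illustrative sketch only: the paper applies DTW to lip-region image features, while the toy 1-D series here are made up, and the resulting pairwise distance matrix is what the paper's fourth stage would hand to MDS for the 2D embedding.

```python
# Minimal sketch of the DTW step in a DTW–MDS pipeline (toy 1-D series).
def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

series = [[0, 1, 2, 3, 2, 1],          # a pattern
          [0, 0, 1, 2, 3, 2, 1, 1],    # the same pattern, time-warped
          [5, 5, 5, 5]]                # an unrelated flat series

# Pairwise distance matrix -- the input MDS would project into 2D.
dist = [[dtw(x, y) for y in series] for x in series]
print(dist[0][1], dist[0][2])  # prints "0.0 21.0"
```

DTW assigns zero distance to the warped copy of the pattern, which is exactly why it suits speech series whose timing varies across speakers.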
3. Article | Gold open access · Citations: 7 · 2023
NDAMA: A Novel Deep Autoencoder and Multivariate Analysis Approach for IoT-Based Methane Gas Leakage Detection
Khongorzul Dashdondov, Mi-Hye Kim, Kyuri Jo
IF 3.6
IEEE Access
Natural gas is widely used for domestic and industrial purposes, and whether it is being leaked into the air cannot be directly known. The current problem is that gas leakage is not only economically harmful but also detrimental to health. Therefore, much research has been done on gas damage and leakage risks, but research on predicting gas leakages is just beginning. In this study, we propose a method based on deep learning to predict gas leakage from environmental data. Our proposed method has successfully improved the performance of machine learning classification algorithms by efficiently preparing training data using a deep autoencoder model. The proposed method was evaluated on an open dataset containing natural gas and environmental information and compared with extreme gradient boost (XGBoost), K-nearest neighbors (KNN), decision tree (DT), random forest (RF), and naive Bayes (NB) algorithms. The proposed method is evaluated using accuracy, F1-score, mean square error (MSE), mean intersection over union (mIoU), and area under the ROC curve (AUC). The presented method in this study outperformed all compared methods. Moreover, the deep autoencoder and ordinal encoder-based XGBoost (DA-MA-XGBoost) showed the best performance by giving 99.51% accuracy, an F1-score of 99.53%, an MSE of 0.003, mIoU of 99.40 and an AUC of 99.62%.
https://doi.org/10.1109/access.2023.3340240
Keywords: Autoencoder, Mean squared error, Decision tree, Artificial intelligence, Leakage detection, Random forest, Deep learning, Support vector machine, Pattern recognition, Computer science
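The "ordinal encoder" part of the DA-MA-XGBoost pipeline named above can be sketched as follows. This is an assumed, minimal illustration of ordinal encoding for categorical sensor fields; the field names and readings are invented, and the deep-autoencoder and XGBoost stages of the actual pipeline are deliberately omitted.

```python
# Sketch of ordinal encoding as a preprocessing step before an
# autoencoder/classifier pipeline (illustrative data and field names).
def fit_ordinal_encoder(rows, column):
    """Map each distinct category in `column` to an integer code."""
    cats = sorted({r[column] for r in rows})
    return {c: i for i, c in enumerate(cats)}

def transform(rows, column, mapping, unknown=-1):
    """Replace the categorical field with its code; unseen values get -1."""
    return [mapping.get(r[column], unknown) for r in rows]

readings = [
    {"location": "kitchen", "ppm": 120},
    {"location": "garage",  "ppm": 900},
    {"location": "kitchen", "ppm": 150},
]
enc = fit_ordinal_encoder(readings, "location")
print(transform(readings, "location", enc))  # garage→0, kitchen→1: [1, 0, 1]
```

Encoding categoricals as integers lets the downstream autoencoder and tree ensemble consume a uniform numeric feature matrix, which is the data-preparation role the abstract attributes to this stage.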