Major Papers (3)
*As of 2026, Impact Factors are shown only for papers published within the last 6 years.
1. Article | Citations: 27 · 2024
Isogeometric 3D optimal designs of functionally graded triply periodic minimal surface plates
Huy Tang, Nam V. Nguyen, H. Nguyen‐Xuan, Jaehong Lee
International Journal of Mechanical Sciences, IF 9.4 (2024)
https://doi.org/10.1016/j.ijmecsci.2024.109406
Topics: Isogeometric analysis, Surface (topology), Structural engineering, Minimal surface, Materials science, Mathematics, Geometry, Finite element method, Engineering
2. Article | Citations: 33 · 2024
A hierarchically normalized physics-informed neural network for solving differential equations: Application for solid mechanics problems
Thang Le-Duc, Seunghye Lee, H. Nguyen‐Xuan, Jaehong Lee
Engineering Applications of Artificial Intelligence, IF 8.0 (2024)
https://doi.org/10.1016/j.engappai.2024.108400
Topics: Computer science, Partial differential equation, Artificial neural network, Perceptron, Heuristic, Convergence (mathematics), Process (computing), Function (mathematics), Applied mathematics, Stability (learning theory)
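Paper 2 applies a physics-informed neural network (PINN) to differential equations in solid mechanics. For reference, the sketch below shows the generic PINN ingredients such methods build on: a network approximating the solution, an autodiff-computed residual of the governing equation, and an initial-condition penalty. It is a minimal illustration for a toy ODE (u' = -u, u(0) = 1), not the paper's hierarchically normalized formulation; the architecture, learning rate, and equal loss weighting are assumptions.

```python
# Minimal generic PINN sketch (assumptions: toy ODE u' = -u, u(0) = 1,
# small MLP, Adam, unit loss weights). NOT the paper's hierarchically
# normalized method; it only illustrates the baseline PINN loss structure.
import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Collocation points in the domain, plus the boundary point x = 0.
x = torch.linspace(0.0, 2.0, 64).reshape(-1, 1).requires_grad_(True)
x0 = torch.zeros(1, 1)

for step in range(2000):
    u = net(x)
    # du/dx via automatic differentiation: the core PINN ingredient.
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                             create_graph=True)[0]
    residual = du + u                        # enforces u' = -u in the interior
    loss_pde = (residual ** 2).mean()
    loss_ic = ((net(x0) - 1.0) ** 2).mean()  # enforces u(0) = 1
    loss = loss_pde + loss_ic  # plain sum; balancing such terms is exactly
                               # what normalization schemes like the paper's target
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare against the exact solution u(1) = exp(-1).
print(net(torch.tensor([[1.0]])).item(), "vs exp(-1) =", math.exp(-1))
```

Summing the residual and boundary terms with equal weights is the simplest choice; when the terms differ by orders of magnitude, training stalls, which is the failure mode that normalization strategies address.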
3. Article | Citations: 16 · 2022
Strengthening Gradient Descent by Sequential Motion Optimization for Deep Neural Networks
Thang Le-Duc, Quoc Hung Nguyen, Jaehong Lee, H. Nguyen‐Xuan
IEEE Transactions on Evolutionary Computation, IF 14.3 (2022)
In this article, we explore the advantages of heuristic mechanisms and devise a new optimization framework named sequential motion optimization (SMO) to strengthen gradient-based methods. The key idea of SMO is inspired by a movement mechanism in a recent metaheuristic method called balancing composite motion optimization (BCMO). Specifically, SMO establishes a sequential motion chain of two gradient-guided individuals, a leader and a follower, to enhance the effectiveness of parameter updates in each iteration. A surrogate gradient model with low computational cost is theoretically established to estimate the gradient of the follower from that of the leader through the chain rule during the training process. Experimental results on training quality for both fully connected multilayer perceptrons (MLPs) and convolutional neural networks (CNNs) on three popular benchmark datasets (MNIST, Fashion-MNIST, and CIFAR-10) demonstrate the superior performance of the proposed framework compared with vanilla stochastic gradient descent (SGD) implemented via the backpropagation (BP) algorithm. Although this study introduces only vanilla gradient descent (GD) as the main gradient-guided factor in SMO for deep neural network (DNN) training, the framework has great potential to be combined with other gradient-based variants to improve its effectiveness and to solve other large-scale optimization problems in practice. (A minimal leader-follower sketch follows this entry.)
https://doi.org/10.1109/tevc.2022.3171052
Topics: MNIST database, Backpropagation, Computer science, Gradient descent, Stochastic gradient descent, Artificial neural network, Artificial intelligence, Convolutional neural network, Benchmark (computing), Heuristic
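The abstract describes SMO's core mechanic: a leader and a follower take chained gradient-guided moves, with the follower's gradient estimated cheaply from the leader's rather than recomputed. Below is a minimal sketch of that leader-follower chain on a toy quadratic. The step sizes, the test function, and the choice to reuse the leader's gradient as the surrogate are illustrative assumptions; the paper derives its surrogate through the chain rule, which this toy code does not reproduce.

```python
# Illustrative leader-follower chain in the spirit of SMO (assumptions:
# toy quadratic objective, fixed step sizes, gradient reuse as surrogate).
# NOT the paper's actual update rule.
import numpy as np

A = np.diag([1.0, 10.0])  # ill-conditioned toy quadratic (assumption)

def f(x):
    return 0.5 * x @ A @ x

def grad_f(x):
    return A @ x

def smo_like_step(x, lr_leader=0.05, lr_follower=0.05):
    """One sequential motion: the leader moves by the true gradient; the
    follower moves from the leader's position using a cheap surrogate
    gradient instead of a fresh evaluation."""
    g = grad_f(x)                  # one true gradient evaluation per step
    leader = x - lr_leader * g     # leader's gradient-guided move
    g_surrogate = g                # stand-in for grad_f(leader); the paper
                                   # builds this estimate via the chain rule
    follower = leader - lr_follower * g_surrogate
    return follower                # follower becomes the next iterate

x = np.array([3.0, 2.0])
for step in range(200):
    x = smo_like_step(x)
print("final x:", x, "f(x):", f(x))
```

The appeal of the chain is that each iteration makes two parameter moves for a single true gradient evaluation; how accurately the follower's gradient can be estimated from the leader's is precisely what the paper's surrogate gradient model addresses.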