While deep learning has driven recent advances in mathematical problem classification, many existing approaches primarily emphasize textual descriptions and fail to fully capture the mathematical content of equations. To address this limitation, this study proposes a hybrid model that integrates a Long Short-Term Memory (LSTM) network with Bidirectional Encoder Representations from Transformers (BERT) to classify equations into topic domains. In the proposed method, LaTeX-formatted equations are transformed into bigram token sequences for structural analysis by the LSTM encoder, while the surrounding textual context is processed by the BERT encoder to capture semantic meaning. The two encoded representations are concatenated and passed through a feed-forward neural network for final classification. Experiments on a high school mathematics dataset demonstrate that this approach outperforms standalone LSTM and BERT models, achieving 91.96% accuracy, 92.11% precision, 91.97% recall, and an F1-score of 0.92, confirming the effectiveness of jointly leveraging structural and contextual information for mathematical problem classification.
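The following is a minimal sketch of the two-branch architecture described above, assuming PyTorch and HuggingFace Transformers. The embedding size, LSTM hidden size, classifier width, and the "bert-base-uncased" checkpoint are illustrative placeholders rather than hyperparameters reported in the paper.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class HybridEquationClassifier(nn.Module):
    def __init__(self, bigram_vocab_size: int, num_classes: int,
                 embed_dim: int = 128, lstm_hidden: int = 256):
        super().__init__()
        # Structural branch: embed bigram tokens of the LaTeX equation
        # and encode the sequence with an LSTM.
        self.embedding = nn.Embedding(bigram_vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True)
        # Contextual branch: encode the surrounding problem text with BERT
        # (checkpoint choice is an assumption, not from the paper).
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Feed-forward classifier over the concatenated representations.
        fused_dim = lstm_hidden + self.bert.config.hidden_size
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, num_classes),
        )

    def forward(self, bigram_ids, input_ids, attention_mask):
        # Final LSTM hidden state summarizes the equation's token structure.
        _, (h_n, _) = self.lstm(self.embedding(bigram_ids))
        eq_repr = h_n[-1]                                  # (batch, lstm_hidden)
        # [CLS]-position output summarizes the textual context.
        bert_out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        ctx_repr = bert_out.last_hidden_state[:, 0]        # (batch, 768)
        # Concatenate structural and contextual features, then classify.
        return self.classifier(torch.cat([eq_repr, ctx_repr], dim=-1))
```

Concatenation is the simplest late-fusion strategy consistent with the abstract's description; it lets the feed-forward head weight structural and contextual evidence jointly without forcing the two encoders to share a representation space.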