Air pollution poses major risks to public health, driving the adoption of low-cost sensor (LCS) networks for fine-grained, real-time monitoring. However, the variable accuracy of LCS data relative to reference instruments necessitates robust quality control (QC) frameworks. Over the past decade, machine learning (ML) has emerged as a powerful tool to calibrate sensors, detect anomalies, and mitigate drift in large-scale deployments. This survey reviews advances in three methodological categories: traditional ML models, deep learning architectures, and hybrid or unsupervised methods. We also examine spatiotemporal QC frameworks that exploit redundancies across time and space, as well as real-time implementations based on edge–cloud architectures. Applications include personal exposure monitoring, integration with atmospheric simulations, and support for policy decision-making. Despite these achievements, several challenges remain. Traditional models are lightweight but often fail to generalize across contexts, while deep learning models achieve higher accuracy but demand large datasets and remain difficult to interpret. Spatiotemporal approaches improve robustness but face scalability constraints, and real-time systems must balance computational efficiency against accuracy. Broader adoption will also require clear standards, reliable uncertainty quantification, and sustained trust in corrected data. In summary, ML-based QC shows strong potential but remains constrained by data quality, transferability, and governance gaps. Future work should integrate physical knowledge with ML, leverage federated learning for scalability, and establish regulatory benchmarks. Addressing these challenges will enable ML-driven QC to deliver reliable, high-resolution data that directly support science-based policy and public health.
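To make the first methodological category concrete, the sketch below illustrates one common form of LCS calibration with a traditional ML model: a random forest that maps a sensor's raw PM2.5 signal and meteorological covariates to reference-equivalent values, followed by a residual-based anomaly flag. The synthetic data, the feature set (raw PM2.5, temperature, relative humidity), and the 3-sigma threshold are illustrative assumptions, not a specific method endorsed by the survey.

```python
# Minimal sketch of a "traditional ML" QC step: calibrating raw
# low-cost-sensor PM2.5 readings against a co-located reference monitor,
# then flagging outlying residuals for review. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000

# Synthetic co-location dataset: reference PM2.5 plus meteorology.
ref_pm25 = rng.gamma(shape=2.0, scale=10.0, size=n)   # "true" concentrations
temp = rng.normal(20.0, 8.0, size=n)                  # deg C
rh = rng.uniform(20.0, 95.0, size=n)                  # percent
# Raw LCS signal: multiplicative bias, humidity-driven error, noise.
raw_pm25 = 1.3 * ref_pm25 + 0.15 * (rh - 50.0) + rng.normal(0.0, 2.0, size=n)

X = np.column_stack([raw_pm25, temp, rh])
X_train, X_test, y_train, y_test = train_test_split(
    X, ref_pm25, test_size=0.25, random_state=0
)

# Calibration model: raw signal + meteorology -> reference-equivalent PM2.5.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAE raw vs. reference:        {mean_absolute_error(y_test, X_test[:, 0]):.2f}")
print(f"MAE calibrated vs. reference: {mean_absolute_error(y_test, pred):.2f}")

# Simple anomaly flag: test residuals beyond 3 sigma of the training
# residuals are marked for review rather than silently corrected.
resid = y_train - model.predict(X_train)
threshold = 3.0 * resid.std()
flags = np.abs(y_test - pred) > threshold
print(f"Flagged {flags.sum()} of {len(flags)} test samples as anomalous")
```

In practice, the same pattern extends to drift mitigation by retraining or re-weighting the calibration model on rolling co-location windows, which is one reason the survey treats calibration, anomaly detection, and drift correction as facets of a single QC pipeline.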