Abstract

We propose a Buckley–James (BJ) Boost Q-learning framework for estimating optimal dynamic treatment regimes from right-censored survival outcomes in longitudinal randomized clinical trials, motivated by the clinical need to support patient-specific treatment decisions when follow-up is incomplete and covariate effects may be nonlinear. The method combines accelerated failure time (AFT) modelling with iterative boosting using flexible base learners, including componentwise least squares and regression trees, within a counterfactual Q-learning framework. By modelling conditional survival time directly, BJ Boost Q-learning avoids the proportional hazards assumption, yields clinically interpretable contrasts on the time scale, and enables estimation of stage-specific Q-functions and individualized decision rules under standard potential-outcomes assumptions. In contrast to Cox-based Q-learning, which relies on hazard modelling and can be sensitive to non-proportional hazards and model misspecification, our approach provides a robust and flexible alternative for regime learning. Simulation studies and analyses of the ACTG 175 HIV trial and the two-stage CALGB 8923 leukaemia trial show that BJ Boost Q-learning improves treatment decision accuracy and produces more stable within-participant counterfactual contrasts, particularly in multistage settings where estimation error and bias can compound across stages.
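To make the estimation step concrete, the sketch below illustrates one way a single-stage Buckley–James boosting fit might look: censored log survival times are imputed from the Kaplan–Meier estimator of the AFT residuals, the imputed responses are refit by L2 tree boosting, and the two steps are alternated until the fit stabilizes. This is a minimal, illustrative sketch under assumed conventions (log-time outcome `y`, 0/1 event indicator `delta`, scikit-learn regression trees as base learners, no tie handling in the residual Kaplan–Meier step), not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def km_residual_weights(e, delta):
    """Kaplan-Meier jump masses for the distribution of AFT residuals."""
    order = np.argsort(e, kind="stable")
    e_s, d_s = e[order], delta[order]
    n = len(e)
    at_risk = n - np.arange(n)                        # subjects still at risk
    surv = np.cumprod(1.0 - d_s / at_risk)            # KM survival after each residual
    surv_before = np.concatenate(([1.0], surv[:-1]))
    jumps = surv_before - surv                        # mass at event residuals (0 if censored)
    return e_s, d_s, jumps

def bj_impute(y, delta, fhat):
    """Buckley-James step: replace each censored log-time by its conditional
    expectation given survival past the observed time, under the current fit."""
    e = y - fhat
    e_s, d_s, jumps = km_residual_weights(e, delta)
    y_star = y.astype(float).copy()
    for i in np.where(delta == 0)[0]:
        tail = (e_s > e[i]) & (d_s == 1)
        mass = jumps[tail].sum()
        if mass > 0:                                  # largest residual censored: keep as-is
            y_star[i] = fhat[i] + (jumps[tail] * e_s[tail]).sum() / mass
    return y_star

def bj_boost(X, y, delta, n_outer=5, n_boost=200, lr=0.1, max_depth=2):
    """Alternate BJ imputation with L2 tree boosting on the imputed responses."""
    fhat = np.full(len(y), y[delta == 1].mean())      # crude initial fit
    for _ in range(n_outer):
        y_star = bj_impute(y, delta, fhat)
        offset = y_star.mean()
        fhat = np.full(len(y), offset)
        trees = []
        for _ in range(n_boost):
            tree = DecisionTreeRegressor(max_depth=max_depth)
            tree.fit(X, y_star - fhat)                # fit residuals of the current fit
            fhat += lr * tree.predict(X)
            trees.append(tree)
    predict = lambda Xnew: offset + lr * sum(t.predict(Xnew) for t in trees)
    return predict, fhat
```

In a multistage regime, the returned `predict` function would supply counterfactual predictions for each candidate treatment, and the stage-specific Q-functions would typically be fit backward from the final stage, with optimal later-stage predictions feeding the pseudo-outcomes of earlier stages, following the usual Q-learning recursion.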