GPS spoofing remains a critical threat to autonomous vehicles. Machine-learning-based detection systems, particularly support vector machines (SVMs), achieve high accuracy against conventional spoofing attacks. However, their robustness against intelligent adversaries remains largely unexplored. In this work, we reveal a critical vulnerability in an SVM-based GPS spoofing detection model by analyzing its decision boundary. Exploiting this weakness, we introduce novel evasion strategies that craft adversarial GPS signals to evade the SVM detector: a data location shift attack, a similarity-based noise attack, and their combination. Extensive simulations in the CARLA environment demonstrate that a modest positional shift reduces detection accuracy from 99.9% to 20.4%, whereas perturbations shaped to resemble genuine GPS noise remain largely undetected while gradually degrading performance. Beyond a critical threshold, a nonlinear cancellation effect emerges between noise similarity and positional shift, underscoring a fundamental detectability-impact trade-off. To our knowledge, these findings represent the first demonstration of such an evasion attack against SVM-based GPS spoofing defenses, highlighting the need to improve the adversarial robustness of machine-learning-based spoofing detection in vehicular systems.
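As a rough illustration only (not taken from the paper), the two perturbation families named above could be sketched as follows; the local ENU coordinate frame, the shift magnitude, and the way genuine-noise statistics are estimated are all assumptions made for this sketch.

```python
import numpy as np

def location_shift_attack(gps_trace, shift_m=(5.0, 0.0)):
    """Illustrative data location shift attack (assumed form): add a constant
    positional offset, in metres in a local east/north frame, to every fix."""
    return gps_trace + np.asarray(shift_m)

def similarity_based_noise_attack(gps_trace, genuine_noise_samples, scale=1.0, rng=None):
    """Illustrative similarity-based noise attack (assumed form): perturb each
    fix with noise drawn to match the empirical mean and covariance of genuine
    GPS noise, so the spoofed trace resembles authentic receiver jitter."""
    rng = np.random.default_rng() if rng is None else rng
    mu = genuine_noise_samples.mean(axis=0)
    cov = np.cov(genuine_noise_samples, rowvar=False)
    noise = rng.multivariate_normal(mu, scale * cov, size=len(gps_trace))
    return gps_trace + noise

# Hypothetical usage: combine both perturbations on a 2-D (east, north) trace.
if __name__ == "__main__":
    trace = np.zeros((100, 2))                                   # placeholder genuine trace
    genuine_noise = np.random.default_rng(0).normal(0.0, 0.5, size=(500, 2))
    shifted = location_shift_attack(trace, shift_m=(5.0, 0.0))
    spoofed = similarity_based_noise_attack(shifted, genuine_noise)
```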