In recent years, research has focused on developing intrusion detection systems (IDS) for in-vehicle networks to defend against external cyberattacks on automobiles. While machine learning (ML) techniques have shown promise in detecting known attacks, their vulnerability to adversarial examples remains a significant challenge. In this study, we introduce a Generative Adversarial Network (GAN)-based method for crafting adversarial attacks capable of bypassing ML-based IDSs in in-vehicle networks. Our approach involves preprocessing an automotive hacking dataset, training a GAN-based model, and evaluating the generated attacks using accuracy metrics. The results demonstrate that the generated adversarial attacks reduce the detection accuracy of several IDSs to below 50%, underscoring the importance of accounting for adversarial examples when designing and evaluating ML-based IDSs for in-vehicle networks. Additionally, t-SNE visualization shows that the method produces genuinely new adversarial attacks, highlighting the need for continued research to strengthen the security of in-vehicle systems.
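The evaluation idea described above, measuring how far adversarially perturbed attack traffic depresses an IDS's detection accuracy, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the 1-D toy feature distributions, the threshold-based stand-in IDS, and the fixed shift standing in for the GAN generator's output; the actual method trains a GAN over preprocessed CAN-bus features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D feature data standing in for preprocessed automotive
# hacking records (illustrative assumption, not the real dataset).
normal = rng.normal(0.0, 1.0, 1000)
attack = rng.normal(4.0, 1.0, 1000)

def ids_predict(x, threshold=2.0):
    """Hypothetical IDS: flag a record as an attack above a threshold."""
    return x > threshold

# Baseline: detection accuracy on the original attack traffic.
baseline_acc = ids_predict(attack).mean()

# Stand-in for the GAN generator's output: attack features nudged
# toward the normal-traffic distribution so they evade detection.
adversarial = attack - 3.0 + rng.normal(0.0, 0.5, 1000)
adv_acc = ids_predict(adversarial).mean()

print(f"baseline detection accuracy:    {baseline_acc:.2f}")
print(f"adversarial detection accuracy: {adv_acc:.2f}")
```

The drop from near-perfect detection to below 50% mirrors the accuracy degradation reported in the study, although here the "generator" is just a fixed shift rather than a trained GAN.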