This article examines differential privacy in federated learning. While recent studies actively explore this topic in conventional network environments, few address it in vehicular networks. In particular, this research investigates the trade-off between model accuracy and the level of data protection. We apply local differential privacy with adaptive clipping to two federated learning models running on vehicular networks and conduct experiments under two mobility scenarios. Results show that the privacy-enhanced federated learning models degrade accuracy by 2.96%–42.97%, indicating that performance in a vehicular network is sensitive to vehicle mobility, the learning model, and the privacy solution. Based on these initial results, we raise research questions for open discussion with the audience.
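To make the privacy mechanism concrete, the sketch below illustrates one possible form of local differential privacy with adaptive clipping applied to a single vehicle's model update before it is sent to the federated-learning server. This is a minimal illustration, not the implementation used in the experiments; the function name, the quantile-based clipping rule, and all parameter values are assumptions chosen for clarity.

```python
# Minimal sketch (not the paper's implementation): local DP with adaptive
# clipping on one client's flattened model update. Names and parameters are
# illustrative assumptions.
import numpy as np

def ldp_update(update, norm_history, quantile=0.5, noise_multiplier=1.0):
    """Clip a client's update to an adaptive bound and add Gaussian noise."""
    # Adaptive clipping bound: a quantile of recently observed update norms.
    clip_bound = np.quantile(norm_history, quantile) if norm_history else 1.0

    # Clip the update so its L2 norm does not exceed the bound.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_bound / (norm + 1e-12))

    # Local DP: Gaussian noise scaled to the clipping bound is added on the
    # vehicle, before the update leaves the client.
    noisy = clipped + np.random.normal(0.0, noise_multiplier * clip_bound,
                                       size=update.shape)

    # Record the pre-clipping norm so the bound can adapt over rounds.
    norm_history.append(norm)
    return noisy

# Example: one round for a single vehicle's update.
history = []
update = np.random.randn(1000) * 0.1
private_update = ldp_update(update, history)
```

In this sketch the clipping bound tracks a quantile of past update norms, so the amount of added noise adapts to how large client updates actually are in a given round; larger noise multipliers give stronger privacy at the cost of the accuracy degradation discussed above.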