The increasing integration of Artificial Intelligence (AI) into team-based decision environments necessitates an examination of trust formation that extends beyond the individual user. This context introduces complexities stemming from interpersonal dynamics, heterogeneous individual trust levels, and collective decision processes. This study investigates how group decision-making processes alter AI trust dynamics compared with individual settings. Using structural equation modeling (SEM), we aimed to develop a theoretical framework capturing these multilayered trust dynamics. We administered Körber’s Trust in Automation (TiA) scale before and after the task. A total of 51 participants, organized into 16 teams, performed a collaborative decision-making exercise (the NASA moon survival task) using ChatGPT. Group trust formation patterns diverged significantly from those observed in individual contexts. Attitudinal trust, strongly influenced by collective perceptions of AI performance and reliability, was the primary predictor of overall group trust, outweighing behavioral trust (actual usage). Factors such as understanding/predictability showed no significant influence in group settings. Group-level dynamics fundamentally reshape AI trust formation, challenging individual-centric views. Practical implications include the need for trust-building strategies focused on collective perceptions and experiences. The findings underscore the need for new theoretical models and group-specific trust measurement tools.