The rapid advancement of Large Language Models (LLMs) and Large Vision Models (LVMs) has achieved breakthrough performance but has also created significant deployment challenges due to their massive scale, often exceeding 100 billion parameters. Parameter-Efficient Fine-tuning (PEFT) methods, particularly Low-Rank Adaptation (LoRA), address this issue by approximating weight updates as products of low-rank matrices, dramatically reducing the number of trainable parameters while maintaining competitive performance. However, LoRA's effectiveness is fundamentally constrained by the limited effective rank of its update matrices, which restricts its ability to capture the rich, high-dimensional changes required for complex adaptation tasks. We propose GroupLoRA, which overcomes this limitation through a divide-and-conquer strategy that partitions weight matrices into g groups, each equipped with an independent LoRA module. GroupLoRA introduces two key innovations: (1) an Inter-Group Bridge module that enables information exchange between groups through learnable interaction matrices, and (2) Learnable Scaling Factors that dynamically adjust each group's contribution to the final output. This design enables group specialization while maintaining cross-group coordination, maximizing model expressiveness within a limited parameter budget. Extensive experiments on LLMs and LVMs demonstrate that GroupLoRA consistently achieves higher performance with fewer parameters than conventional LoRA. Our approach successfully applies group-wise processing principles to parameter-efficient fine-tuning, offering a practical solution for efficient large-model adaptation across diverse tasks.