In this study, we propose PerAct-L, an enhanced version of the PerAct robot manipulation model built on the Perceiver IO architecture and designed specifically to reduce its computational complexity. PerAct-L incorporates a linear low-rank self-attention mechanism that projects the Key and Value matrices into a fixed low-dimensional space of size k, reducing the time complexity of the attention operation from O(N²) to O(Nk) and lowering both computational requirements and memory usage. Experiments conducted in the RLBench environment show that PerAct-L improves the success rate by 13.6% over the original PerAct model on the complex "Put Item in Drawer" task, and achieves a performance gain of 12.8% in a previously unseen environment with an expanded rotation range. Further analysis of the latent-vector activation distribution after cross-attention reveals that PerAct-L distributes information more evenly across the latents, suggesting that the computational optimization also contributes to representational diversity and robustness in manipulation. These findings indicate that low-rank attention mechanisms can not only improve real-time performance and adaptability but also enhance expressivity, demonstrating the structural scalability and practical potential of PerAct-L.
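To make the mechanism concrete, the following is a minimal sketch of Linformer-style low-rank self-attention of the kind described above: the Key and Value sequences are projected from length N down to a fixed size k, so the attention map has shape (N × k) rather than (N × N). All class and parameter names here are illustrative assumptions, not identifiers from the PerAct-L codebase.

```python
import torch
import torch.nn as nn


class LowRankSelfAttention(nn.Module):
    """Sketch of low-rank self-attention (Linformer-style).

    Keys and values are projected along the sequence axis from length
    seq_len to a fixed size k, so the attention map is (N x k) instead
    of (N x N), giving O(N*k) time and memory rather than O(N^2).
    Names are hypothetical; this is not the PerAct-L implementation.
    """

    def __init__(self, dim: int, seq_len: int, k: int = 64):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # Fixed learned projections over the sequence dimension: N -> k.
        self.proj_k = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)
        self.proj_v = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, dim), with N equal to seq_len used at construction.
        q = self.to_q(x)                                             # (b, N, d)
        # Compress K and V along the sequence axis to length k.
        k = torch.einsum('bnd,nk->bkd', self.to_k(x), self.proj_k)   # (b, k, d)
        v = torch.einsum('bnd,nk->bkd', self.to_v(x), self.proj_v)   # (b, k, d)
        # Attention map is (N x k), not (N x N): cost scales as O(N*k).
        attn = (q @ k.transpose(-1, -2) * self.scale).softmax(dim=-1)
        return self.to_out(attn @ v)                                 # (b, N, d)


if __name__ == "__main__":
    layer = LowRankSelfAttention(dim=128, seq_len=2048, k=64)
    out = layer(torch.randn(2, 2048, 128))
    print(out.shape)  # torch.Size([2, 2048, 128])
```

Because k is fixed regardless of N, both the attention matrix and the compressed K/V tensors stay small as the input sequence grows, which is the source of the compute and memory savings claimed above.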