This article introduces a novel feature-based visual simultaneous localization and mapping (v-SLAM) method, termed finite-plane (FP)-SLAM, that uses an RGB-D camera. Specifically, the problem addressed in this article is to extract sparse features effectively from RGB-D images and to build a graph of poses and sparse features as accurately as possible, in real time on a CPU. Motivated by surfel-point SLAM (SP-SLAM), FP-SLAM extracts points and surfels from RGB-D images as sparse features and jointly optimizes a graph of poses and features. In contrast to SP-SLAM, however, FP-SLAM introduces two new residuals that improve the accuracy of the graph optimization. These residuals exploit the surfel-surfel (SS) and point-surfel (PS) relationships, thereby capturing interfeature relations. Combined with the residuals inherited from SP-SLAM, a total of four residuals are employed to jointly optimize points, surfels, and camera poses. In addition, to expedite processing, the Jacobian of the error function is implemented directly in the optimization instead of relying on an off-the-shelf derivative module. Finally, the effectiveness of the proposed FP-SLAM is validated on benchmark datasets by comparing it against previous methods in terms of localization accuracy.
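To make the point-surfel (PS) idea concrete, the following is a minimal sketch, not the paper's implementation: a point-to-plane residual with its analytic Jacobian, verified against finite differences. All function names and values here are hypothetical illustrations; the residual form is the standard point-to-plane distance, with the surfel represented by a normal `n` and a point `q` on its plane.

```python
# Hypothetical sketch of a point-to-surfel (point-to-plane) residual:
#   r = n . (p - q),
# where n is the surfel normal and q is a point on the surfel's plane.
# Writing the Jacobian dr/dp = n^T by hand (rather than calling an
# autodiff/derivative module) mirrors the speed-up idea in the abstract.

def residual(p, n, q):
    # Signed distance of point p from the surfel plane (n assumed unit-norm).
    return sum(ni * (pi - qi) for ni, pi, qi in zip(n, p, q))

def analytic_jacobian(n):
    # Derivative of r with respect to p is simply the normal n.
    return list(n)

def numeric_jacobian(p, n, q, eps=1e-6):
    # Central finite differences, used only to check the analytic form.
    J = []
    for i in range(len(p)):
        pp = list(p); pp[i] += eps
        pm = list(p); pm[i] -= eps
        J.append((residual(pp, n, q) - residual(pm, n, q)) / (2.0 * eps))
    return J
```

Because the residual is linear in `p`, the analytic and finite-difference Jacobians agree to numerical precision; in a full solver, this hand-derived Jacobian would feed directly into the normal equations of the joint graph optimization.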