Federated Learning (FL) has emerged as a method for training models on private data through distributed learning of shared parameters; however, it incurs high communication overhead and remains exposed to attacks on the model parameters. To minimize the communication overhead of federated learning while preserving its accuracy and security, we combine gradient quantization, clipping, and Huffman coding. Our system first reduces the parameters through quantization and clipping, then encodes them with Huffman coding, which further improves the compression ratio as well as the security of the transmitted parameters. Preliminary results show that the scheme can significantly reduce the volume of transmitted data while preserving accuracy and security.
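
To make the three-stage pipeline concrete, the following is a minimal Python sketch, not the system's actual implementation: gradients are clipped to a fixed range, uniformly quantized to a small number of integer levels, and the resulting symbols are Huffman-encoded before transmission. All names and parameters here (clip_and_quantize, huffman_code, clip_val, num_bits) are illustrative assumptions, not taken from the paper.

# Illustrative sketch of the clip -> quantize -> Huffman-encode pipeline.
# Names and parameter choices (clip_val, num_bits) are assumptions.
import heapq
from collections import Counter

import numpy as np


def clip_and_quantize(grads: np.ndarray, clip_val: float = 1.0,
                      num_bits: int = 4) -> np.ndarray:
    """Clip gradients to [-clip_val, clip_val], then map them onto
    2**num_bits uniform integer levels."""
    clipped = np.clip(grads, -clip_val, clip_val)
    levels = 2 ** num_bits - 1
    # Scale [-clip_val, clip_val] -> [0, levels] and round to integers.
    return np.round((clipped + clip_val) / (2 * clip_val) * levels).astype(int)


def huffman_code(symbols) -> dict:
    """Build a Huffman code table (symbol -> bit string) from frequencies."""
    freq = Counter(symbols)
    # Heap entries: [weight, unique tiebreaker, [(symbol, code), ...]].
    heap = [[w, i, [(s, "")]] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        return {heap[0][2][0][0]: "0"}
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        # Prepend a bit to every code in each merged subtree.
        lo[2] = [(s, "0" + c) for s, c in lo[2]]
        hi[2] = [(s, "1" + c) for s, c in hi[2]]
        heapq.heappush(heap, [lo[0] + hi[0], counter, lo[2] + hi[2]])
        counter += 1
    return dict(heap[0][2])


# Example: compress a synthetic gradient vector before transmission.
rng = np.random.default_rng(0)
grads = rng.normal(scale=0.5, size=10_000)
q = clip_and_quantize(grads).tolist()
table = huffman_code(q)
bitstream = "".join(table[s] for s in q)
print(f"raw: {len(q) * 32} bits, quantized + Huffman: {len(bitstream)} bits")

Because clipped, quantized gradients concentrate on a few levels around zero, Huffman coding assigns them short codewords, which is where the additional compression beyond plain fixed-width quantization comes from.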