Tunable analog activation functions are essential for energy-efficient artificial intelligence (AI) hardware. Two transistor designs are presented: the sigmoid-like activation function transistor (SA-transistor) and the Gaussian-like activation function transistor (GA-transistor), which implement analog sigmoid and Gaussian functions using a screen gate structure. In the SA-transistor, adjusting the screen gate voltage (V<sub>Screen-G</sub>) enables precise control of the sigmoid slope and saturation level. In the GA-transistor, the amplitude and standard deviation of the Gaussian response are tunable through the same mechanism. These transistors therefore allow precise and continuous tuning of analog activation parameters such as slope, amplitude, and width at the device level, enabling hardware-optimized neural computations tailored to specific tasks or datasets. Applied to real-world tasks, the SA-transistor improved lung magnetic resonance imaging (MRI) classification accuracy from 77% to 84%, and the GA-transistor raised the time-series forecasting coefficient of determination (R<sup>2</sup>) from 0.82 to 0.93. Furthermore, these devices were assembled into a hardware-based multilayer perceptron (MLP) that demonstrated robust inference on the IRIS dataset with 96.7% overall accuracy. This system-level validation shows that analog activation transistors can directly support neuromorphic accelerators without digital post-processing, reducing circuit complexity and power consumption while maintaining high classification fidelity.
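To make the tunable parameters concrete, the two activation shapes can be sketched in software as parameterized transfer functions. This is a minimal illustrative model, not the device physics: the function names and the mapping from a single "screen-gate" knob to slope, saturation, amplitude, and width are assumptions for illustration.

```python
import math

def sigmoid_activation(v_in, amplitude, slope, v_mid=0.0):
    """Sigmoid-like transfer curve. `amplitude` plays the role of the
    saturation level and `slope` the transition steepness, analogous to
    the SA-transistor's screen-gate control (hypothetical mapping)."""
    return amplitude / (1.0 + math.exp(-slope * (v_in - v_mid)))

def gaussian_activation(v_in, amplitude, sigma, v_peak=0.0):
    """Gaussian-like transfer curve. `amplitude` and `sigma` (standard
    deviation) stand in for the GA-transistor's tunable response."""
    return amplitude * math.exp(-((v_in - v_peak) ** 2) / (2.0 * sigma ** 2))

# Sweeping a "screen-gate" parameter reshapes the curve rather than just
# rescaling it: a larger slope sharpens the sigmoid transition, while a
# larger sigma widens the Gaussian bell.
print(sigmoid_activation(0.0, amplitude=1.0, slope=4.0))   # midpoint gives half the saturation level
print(gaussian_activation(0.0, amplitude=1.0, sigma=0.5))  # peak gives the full amplitude
```

In this picture, device-level tuning corresponds to adjusting `slope`/`amplitude` (SA-transistor) or `amplitude`/`sigma` (GA-transistor) continuously, which is what lets the activation shape be matched to a task without digital post-processing.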