This research presents FastOthelloNet, a self-play reinforcement learning framework for the game of Othello that generalizes across diverse board constraints, including variable board sizes, blocked cells, and limits on total inference time. FastOthelloNet combines a lightweight convolutional input architecture with Monte Carlo Tree Search (MCTS) for efficient planning. Unlike AlphaZero, which assumes a fixed board structure, or MuZero, which explicitly models latent dynamics, FastOthelloNet is trained directly on a randomized Othello environment with dynamic constraints, eliminating the need for a complex world model.
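To make the randomized-environment idea concrete, the following is a minimal sketch of how a training episode's board constraints might be sampled. The function name, the candidate board sizes, the blocked-cell budget, and the cell encodings are all illustrative assumptions, not the paper's actual training distribution.

```python
import random

def sample_constraints(rng, sizes=(6, 8, 10), max_blocked=4):
    """Sample a randomized Othello board configuration.

    Hypothetical sketch: the sizes, blocked-cell budget, and
    encodings (0 = empty, 1/-1 = players, 2 = blocked) are
    assumptions for illustration only.
    """
    n = rng.choice(sizes)                    # variable board size
    board = [[0] * n for _ in range(n)]
    mid = n // 2
    # Standard Othello opening: two pieces per player in the center.
    board[mid - 1][mid - 1] = board[mid][mid] = 1      # player 1
    board[mid - 1][mid] = board[mid][mid - 1] = -1     # player 2
    # Block a few random non-center cells to vary the topology.
    center = {(mid - 1, mid - 1), (mid - 1, mid),
              (mid, mid - 1), (mid, mid)}
    candidates = [(r, c) for r in range(n) for c in range(n)
                  if (r, c) not in center]
    for r, c in rng.sample(candidates, rng.randint(0, max_blocked)):
        board[r][c] = 2
    return n, board

rng = random.Random(0)
n, board = sample_constraints(rng)
```

Training on boards drawn from such a distribution, rather than a fixed 8x8 grid, is what lets a single policy generalize across constraint settings.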