Radial basis function (RBF) neural networks are used in a variety of applications such as pattern recognition, nonlinear identification, control, and time series prediction. In this paper, a feedback analysis of the learning algorithm of radial basis function neural networks is presented. It studies the robustness of the learning algorithm in the presence of uncertainties that may arise from noisy perturbations at the input or from modeling mismatch. The learning scheme is first associated with a feedback structure, and the stability of that feedback structure is then analyzed via the small gain theorem. The analysis suggests bounds on the learning rate that guarantee the learning algorithm behaves as a robust nonlinear filter, as well as optimal choices of the learning rate for faster convergence.
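To make the setting concrete, the following is a minimal sketch of the kind of learning algorithm the abstract refers to: a Gaussian RBF network whose output-layer weights are adapted online by a gradient (LMS-type) rule under noisy inputs. The step-size condition used here, mu below 2/||phi(x)||^2, is the classical small-gain-style stability bound for such gradient adaptation; it is an illustrative assumption, not necessarily the exact bound derived in the paper, and all names, centers, and widths are chosen for the example.

```python
import numpy as np

# Illustrative sketch only: a single-layer Gaussian RBF network trained with a
# gradient (LMS-type) weight update. The step-size choice below stays safely
# under the classical small-gain-style bound mu < 2 / ||phi(x)||^2; the
# paper's exact bounds may differ.

rng = np.random.default_rng(0)

centers = np.linspace(-1.0, 1.0, 9)   # fixed RBF centers (assumed)
width = 0.3                           # common Gaussian width (assumed)
w = np.zeros(centers.size)            # output-layer weights

def phi(x):
    """Gaussian RBF activations for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def target(x):
    """Unknown mapping to be identified (here: a smooth test function)."""
    return np.sin(np.pi * x)

# Online training from noisy samples: the additive noise plays the role of
# the input/modeling uncertainty discussed in the robustness analysis.
for _ in range(5000):
    x = rng.uniform(-1.0, 1.0)
    g = phi(x)
    e = target(x) + 0.01 * rng.standard_normal() - w @ g  # a priori error
    mu = 1.0 / (g @ g)        # step size below the 2/||g||^2 stability bound
    w += mu * e * g           # LMS-type update of the linear output layer

# After training, the network should approximate the target on [-1, 1].
xs = np.linspace(-1.0, 1.0, 50)
preds = np.array([w @ phi(x) for x in xs])
max_err = np.max(np.abs(preds - np.sin(np.pi * xs)))
```

With the step size held under the bound, the a priori error sequence stays bounded despite the measurement noise, which is the filtering behavior the analysis is meant to guarantee; larger step sizes closer to the bound trade robustness for convergence speed.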