Sinusoidal neural networks have been shown effective as implicit neural representations (INRs) of low-dimensional signals, due to their smoothness and high representation capacity. However, their initialization and training remain empirical tasks that lack a deeper understanding to guide the learning process. To fill this gap, our work introduces a theoretical framework that explains the representation capacity of sinusoidal networks and offers robust control mechanisms for initialization and training. Our analysis is based on a novel amplitude-phase expansion of the sinusoidal multilayer perceptron (MLP), showing how its layer compositions produce a large number of new frequencies expressed as integer combinations of the input frequencies. This relationship can be directly used to initialize the input neurons, as a form of spectral sampling, and to bound the network’s spectrum during training. Our method, referred to as TUNER (TUNing sinusoidal nEtwoRks), greatly improves the stability and convergence of sinusoidal INR training, leading to detailed reconstructions, while preventing overfitting.
To train a sinusoidal MLP (gray model, top-left), we employ two techniques derived from Theorems 1 and 2. First, we initialize the input frequencies ω (green, bottom-left) with a dense distribution of low frequencies (red square) and a sparse distribution of higher frequencies (green grid).
This initialization gives flexibility to learn the remaining signal frequencies, which are simply integer combinations of ω (the yellow nodes on the right), a consequence of the amplitude-phase expansion given by Theorem 1.
Note that this initialization resembles frequency sampling, since training generates the new frequencies around ω.
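To make the "integer combinations of ω" claim concrete, the following is a sketch of how such an amplitude-phase expansion can arise. Assuming a hidden neuron of the form sin(b + Σⱼ aⱼ sin θⱼ) with θⱼ = ωⱼx + φⱼ (this notation is ours, not necessarily the paper's), the classical Jacobi–Anger identity gives

```latex
\sin\Big(b + \sum_{j=1}^{n} a_j \sin\theta_j\Big)
\;=\;
\sum_{k \in \mathbb{Z}^n} \Big(\prod_{j=1}^{n} J_{k_j}(a_j)\Big)
\sin\Big(b + \sum_{j=1}^{n} k_j \theta_j\Big),
\qquad \theta_j = \omega_j x + \varphi_j,
```

where the Jₖ are Bessel functions of the first kind. Each term oscillates at the frequency k · ω, an integer combination of the input frequencies, and its amplitude Πⱼ J_{kⱼ}(aⱼ) depends only on the hidden weights aⱼ; since |Jₖ(a)| decays rapidly once |k| exceeds |a|, small hidden weights keep the high-order combinations negligible.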
Second, we bound the coefficients of the hidden layer weights (blue nodes) to ensure that the MLP remains within a specified bandlimit. This approach is effective because the amplitude-phase expansion (shown on the right) of each hidden neuron (purple nodes) indicates that the amplitudes of the generated frequencies have an upper bound depending only on the hidden weights (blue, bottom-right).
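The two techniques can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' TUNER implementation: the split sizes, the ℓ1-norm bound on hidden rows, and the helper names are all assumptions chosen to mirror the caption (dense low frequencies plus a sparse high-frequency grid; hidden weights clipped so the generated amplitudes stay bounded).

```python
import numpy as np

rng = np.random.default_rng(0)

def init_input_frequencies(n_low, n_high, low_max, high_max):
    """Dense low-frequency band plus a sparse, coarse grid of higher frequencies
    (hypothetical version of the initialization in the figure)."""
    low = rng.uniform(-low_max, low_max, size=n_low)       # dense low band
    step = (2 * high_max) / n_high
    high = np.arange(-high_max, high_max, step) + step / 2  # sparse coarse grid
    return np.concatenate([low, high])

def clip_hidden_weights(W, c):
    """Rescale each row of W so its l1-norm is at most c. Since the amplitudes of
    generated frequencies depend only on the hidden weights, this keeps the
    network's spectrum within a bandlimit controlled by c (assumed bound)."""
    norms = np.abs(W).sum(axis=1, keepdims=True)
    scale = np.minimum(1.0, c / np.maximum(norms, 1e-12))
    return W * scale

# Example: 16 dense low frequencies + 8 sparse high ones, then clip a hidden layer.
omega = init_input_frequencies(n_low=16, n_high=8, low_max=4.0, high_max=32.0)
W = rng.normal(size=(10, omega.size))
W = clip_hidden_weights(W, c=3.0)
```

In a training loop, the clipping step would be applied as a projection after each optimizer update, so the bandlimit holds throughout training rather than only at initialization.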
Tuning the Frequencies: Robust Training for Sinusoidal Neural Networks
Tiago Novello, Diana Aldana, Andre Araujo, and Luiz Velho
Please send feedback and questions to Tiago Novello.
@InProceedings{Novello_2025_CVPR,
author = {Novello, Tiago and Aldana, Diana and Araujo, Andre and Velho, Luiz},
title = {Tuning the Frequencies: Robust Training for Sinusoidal Neural Networks},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages = {3071-3080}
}