There are many ways to make neural networks more robust to small changes in data and architecture. One is data augmentation, which enlarges the effective training set by applying label-preserving transformations to existing examples, such as adding noise, flipping images, or other perturbations. Training on these varied copies reduces overfitting and makes the model less sensitive to small changes in the input.
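As a minimal sketch, the two transformations mentioned above (flipping and additive noise) might look like this in NumPy; the image shape, noise scale, and the `augment` helper are illustrative choices, not part of any particular library:

```python
import numpy as np

def augment(image, rng, noise_std=0.05):
    """Return a randomly flipped, noise-perturbed copy of an image.

    Assumes `image` is a float array in [0, 1] with shape (H, W, C).
    """
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1, :]  # horizontal flip half the time
    out = out + rng.normal(0.0, noise_std, size=out.shape)  # additive Gaussian noise
    return np.clip(out, 0.0, 1.0)  # keep pixel values in range

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))  # stand-in for a real training image
augmented = augment(image, rng)
```

In practice each training example would be re-augmented every epoch, so the model rarely sees the exact same input twice.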
Another approach is pre-training: the model is first trained on a large dataset and then fine-tuned on the smaller, task-specific one. Because the model has already learned general-purpose features from the large corpus, it needs less task-specific data and is less prone to overfitting.
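A common way this looks in code, sketched here with PyTorch and torchvision as one possible stack (the 10-class head and the choice to freeze the backbone are illustrative assumptions):

```python
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 whose weights were pre-trained on ImageNet (the large dataset).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained backbone so fine-tuning only adjusts the new head.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier layer to match the smaller target task
# (10 classes here, as a placeholder); this new layer is trained from scratch.
model.fc = nn.Linear(model.fc.in_features, 10)
```

A variant is to unfreeze the whole network and fine-tune it with a small learning rate, which often works better when the target dataset is not too small.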
The architecture itself can also be made more robust. One way is dropout, which randomly zeroes out units in the network during training so that the model cannot rely on any single unit. This forces it to learn redundant representations, making it tolerant of changes to the network's structure and helping to prevent overfitting.
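For instance, a small classifier with dropout between its layers might be defined like this in PyTorch (layer sizes are arbitrary examples):

```python
import torch.nn as nn

# During training, nn.Dropout zeroes each activation with probability p=0.5
# and rescales the survivors by 1/(1-p); calling model.eval() disables it
# at inference time so predictions are deterministic.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)
```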
In short, data augmentation, pre-training, and dropout are all effective, widely used methods for making neural networks more robust to small changes in data and architecture.