|Title:||A Case Study on Sample Complexity, Topology, and Interpolation in Neural Networks|
|Authors:||Humphries, Jonathan Owen|
|Contributors:||Altenberg, Lee (advisor); Computer Science (department)|
|Publisher:||University of Hawai'i at Manoa|
|Abstract:||The general heuristic for determining the sample size to use when training artificial neural networks on real-world data sets is “more is better”. Similarly, the heuristic for selecting the number of neurons in the hidden layer of a neural network has been “more is better”. However, increased sample complexity and topology raise costs in the form of longer training times and additional computing power. This study uses the completely known and relatively simple problem of numeric addition as its learning task. It aims to add to the existing body of knowledge on the double descent curve, sample complexity, and topology through a detailed analysis. Though we were unable to identify the exact sample complexity of numeric addition given the available hardware, we were able to identify hyper-parameters for continuing this line of research. We also found that, given a large enough sample size, the training and testing errors become correlated and negligible early in training. Finally, we identified an important learning difference between the PyTorch neural-network framework and a framework coded from scratch.|
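The learning task the abstract describes can be illustrated with a minimal sketch: a one-hidden-layer network trained to add two numbers. All details below (data range, hidden width, learning rate, step count) are illustrative assumptions, not the thesis's actual experimental setup, and NumPy stands in for both the PyTorch and from-scratch frameworks compared in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical version of the numeric-addition task: inputs are pairs
# drawn from [0, 1), and the target is their sum.
X = rng.random((256, 2))
y = X.sum(axis=1, keepdims=True)

# Tiny MLP, 2 -> 16 (ReLU) -> 1, trained by full-batch gradient descent
# on mean squared error (assumed hyper-parameters).
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(5000):
    h = np.maximum(0.0, X @ W1 + b1)        # hidden-layer activations
    err = (h @ W2 + b2) - y                 # prediction error
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)             # backprop through ReLU
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Held-out pairs: the trained network should add them accurately,
# mirroring the abstract's observation that with enough data the
# training and testing errors track each other and become small.
Xt = rng.random((64, 2))
pred = np.maximum(0.0, Xt @ W1 + b1) @ W2 + b2
mse = float(np.mean((pred - Xt.sum(axis=1, keepdims=True)) ** 2))
print(mse)
```

Because addition is a linear function of its inputs, even this small network fits it closely; the thesis's interest is in how the required sample size and hidden-layer width scale for such a fully known target.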
|Rights:||All UHM dissertations and theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission from the copyright owner.|
|Appears in Collections:||M.S. - Computer Science|