|Academic Supervisors||Steven Gunn, Seb Stein|
|Pure Link||Active Project|
Recent work in the field has shown that neural networks defer much of their class separation to later layers, with separation increasing gradually through the network. Many classification problems contain easier samples that do not need the full representational capacity of a neural network. Hence, there may be a benefit to models that make immediate gains in class separability, maximising the use of the earlier layers of a network and allowing early exiting more frequently.
To that end, this work introduces the idea of progressive intelligence, whereby a machine learning model approaches the inference process incrementally. These models make fast initial gains in accuracy during a forward pass, which are then refined depending on performance requirements and resource constraints. Branched models act as the focal point of the work, which looks to optimise them for progressively intelligent deployments. This is investigated using linear separability, class separation, and centred kernel alignment throughout the intermediate layers of the networks.
It is found that branches improve intermediate representations predominantly in class separability, by up to ~30% in earlier layers, whilst having little effect on later-layer separability. By varying branch weighting, position, training objective, and training schedule, an optimal training regime is found for producing progressively intelligent models: equal weighting on equidistant branches, with a finetuning period at the end of training to improve final-layer performance. When deployed as such, this work suggests classification branches can maximise gains in generalisability throughout the network without hindering final-layer performance, presenting an initial step towards progressively intelligent systems.
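The mechanics behind the abstract can be illustrated with a minimal sketch of a branched (early-exit) network: intermediate branch classifiers attach to backbone stages, inference exits at the first branch whose confidence clears a threshold, and training combines the per-branch losses with equal weights, as the abstract reports is optimal. All weights, dimensions, and the confidence threshold below are hypothetical stand-ins for illustration, not the actual models studied in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical 3-stage backbone; each stage is a random linear map
# followed by a branch classifier. Illustrative, untrained weights.
DIM, CLASSES, STAGES = 8, 4, 3
stages = [rng.normal(size=(DIM, DIM)) for _ in range(STAGES)]
branches = [rng.normal(size=(DIM, CLASSES)) for _ in range(STAGES)]

def early_exit_predict(x, threshold=0.9):
    """Run the backbone stage by stage; exit at the first branch whose
    top softmax probability clears `threshold` (the usual early-exit rule)."""
    h = x
    for i, (W, B) in enumerate(zip(stages, branches)):
        h = np.tanh(h @ W)   # backbone stage
        p = softmax(h @ B)   # branch classifier on the intermediate features
        if p.max() >= threshold or i == STAGES - 1:
            return i, int(p.argmax())

def combined_branch_loss(losses, weights=None):
    """Weighted sum of per-branch losses; defaults to the equal
    weighting the abstract identifies as the optimal training regime."""
    if weights is None:
        weights = [1.0 / len(losses)] * len(losses)
    return sum(w * l for w, l in zip(weights, losses))

exit_idx, label = early_exit_predict(rng.normal(size=DIM), threshold=0.5)
```

Easier inputs produce confident early branches and exit sooner, spending less compute, which is the resource-adaptive behaviour the work targets.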