What is the major difference between boosting and neural networks? In boosting, we combine several weak learners to improve the performance of the final model: the errors of the earlier models are used to build a stronger one.
Similarly, in a neural network we use the cost function and backpropagation/gradient descent to adjust the weights of the synapses until the cost function is close to a minimum and we get accurate predictions.
In both methods, conceptually, we use feedback from the initial output to reduce the error to an acceptable level.
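To make the parallel I have in mind concrete, here is a rough sketch of the two feedback loops (plain NumPy, illustrative toy code, not any library's actual API): boosting adds a new weak learner fit to the current residuals, while gradient descent keeps one fixed model and nudges its weights along the cost gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.1, size=200)

# Boosting: each round fits a very weak learner (a randomly placed
# decision stump) to the residual errors of the ensemble so far.
def boost(X, y, n_rounds=200, lr=0.3):
    x = X[:, 0]
    pred = np.zeros_like(y)
    for _ in range(n_rounds):
        residual = y - pred                  # feedback: current errors
        t = rng.uniform(-1, 1)               # random split point
        left = x <= t
        lval = residual[left].mean() if left.any() else 0.0
        rval = residual[~left].mean() if (~left).any() else 0.0
        pred += lr * np.where(left, lval, rval)  # ensemble grows additively
    return pred

# Gradient descent: one fixed model (a line w*x + b); feedback is the
# gradient of the mean-squared-error cost, used to adjust the same weights.
def descend(X, y, n_steps=500, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(n_steps):
        err = w * X[:, 0] + b - y            # feedback: current errors
        w -= lr * 2 * (err * X[:, 0]).mean() # d(MSE)/dw
        b -= lr * 2 * err.mean()             # d(MSE)/db
    return w, b

boost_mse = np.mean((boost(X, y) - y) ** 2)
w, b = descend(X, y)   # should recover roughly w = 2, b = 1
```

Both loops reuse the current errors at every step, which is exactly the similarity I am asking about, yet one grows a new model each round and the other updates a single model in place.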
What is the difference, other than the architecture?