Discussion of article "Data Science and Machine Learning — Neural Network (Part 02): Feed forward NN Architectures Design"

 

New article Data Science and Machine Learning — Neural Network (Part 02): Feed forward NN Architectures Design has been published:

There are a few minor things to cover on the feed-forward neural network before we are through, the design being one of them. Let's see how we can build a neural network whose design is flexible in the number of inputs, the number of hidden layers, and the number of nodes in each layer.

We all know that hard-coded models fall flat when you want to optimize for new parameters; the whole procedure is time-consuming and painful. (It's not worth it.)

If you take a closer look at the operations behind a neural network, you'll notice that each input gets multiplied by the weight assigned to it, and the result is then added to the bias. This can be handled well by matrix operations.

neural network matrix multiplication

Author: Omega J Msigwa

 
What does this mean:
int hlnodes[3] = {4,6,1};

4 inputs, 1 hidden layer with 6 neurons, and one output?


You don't explain the most important thing well: how to declare the architecture of the model.

How many hidden layers can I use?

How do I define how many neurons each hidden layer has?
Example: I want a network with 8 inputs,
3 hidden layers with 16, 8, and 4 neurons,
and 2 outputs.
Is that possible?