This is definitely bad:

```
// Calculate output for the input layer
for (int i = 0; i < layers[0].Count; i++)
    layers[0][i].Output = sigmoid(layers[0][i].Input);
```

Never transform input values before feeding them into the first layer. While it may work for small inputs around 0, your whole network loses a great deal of adaptivity. There is also unnecessary redundancy here:

```
// Add bias
layers[i][j].Input += layers[i][j].Bias * weights[i][j, 0];
// layers[i][j].Input += layers[i][j].Bias;
```

Remove layers[i][j].Bias altogether and use the trainable weights[i][j, 0] only.
Also, "+=" instead of "=" scares me: do you have nonzero values left over from a
previous run and rely on them? Clearer code would be


```
layers[i][j].Input = weights[i][j, 0];
```

As for the learning algorithm, backpropagation is not the best method for setting neuron weights. There are more powerful algorithms, but no single approach is best for all cases. Try playing with the learning constants, start from several different sets of initial weights, and then choose the best result.

After reading some articles about neural networks (back-propagation), I tried to write a simple neural network myself.

I decided on an XOR neural network. My problem is with training: if I use only one example (say 1,1,0 as input1, input2, targetOutput), then after roughly 500 training passes the network answers 0.05. But if I try more than one example (say 2 different ones, or all 4 possibilities), the network tends toward 0.5 as its output :( I searched Google for my mistake with no results, so I'll give as much detail as I can to help find what's wrong:

- I've tried networks of 2,2,1 and 2,4,1 (input layer, hidden layer, output layer).

- The input to every neuron is computed by:

```
double input = 0.0;
for (int n = 0; n < layers.Count; n++)
    input += layers[n].Output * weights[n];
```

where ‘i’ is the index of the current layer and ‘weights’ holds all the weights from the previous layer.
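This weighted-sum step can be sketched in Python (hypothetical structure, not the original C#: `weights[j][n]` connects previous-layer neuron n to current-layer neuron j):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward_layer(prev_outputs, weights, biases):
    """Compute the outputs of one layer from the previous layer's outputs."""
    outputs = []
    for j, bias in enumerate(biases):
        # Weighted sum over all neurons of the previous layer, plus bias.
        net = bias + sum(prev_outputs[n] * weights[j][n]
                         for n in range(len(prev_outputs)))
        outputs.append(sigmoid(net))
    return outputs

# Example: 2 inputs feeding a 2-neuron hidden layer.
hidden = forward_layer([1.0, 0.0],
                       weights=[[0.5, -0.5], [0.3, 0.8]],
                       biases=[0.0, 0.0])
```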

- The error for the last layer (the output layer) is defined by:

```
value * (1 - value) * (targetvalue - value);
```

where ‘value’ is the neuron's output and ‘targetvalue’ is the target output for the current neuron.
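As a sketch, that output-layer delta in Python:

```python
def output_delta(value, target):
    """Delta for a sigmoid output neuron: y * (1 - y) * (t - y)."""
    return value * (1.0 - value) * (target - value)

# If the neuron outputs 0.5 but the target is 0, the delta is negative,
# pushing the weights down.
d = output_delta(0.5, 0.0)   # 0.5 * 0.5 * -0.5 = -0.125

# Note: at a saturated output (value near 0 or 1) the y*(1-y) factor
# vanishes, so the delta goes to 0 even when the answer is wrong.
```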

- The error for the other neurons is defined by:

```
foreach neural in the nextlayer
    sum += neural.value * currentneural.weights[neural];

myerror = myoutput * (1 - myoutput) * sum;
```
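For reference, in textbook backpropagation the sum propagates the next layer's *deltas* (errors), not its output values, weighted by the connecting weights; a minimal sketch with hypothetical names:

```python
def hidden_delta(output, next_deltas, next_weights):
    """Delta for a hidden sigmoid neuron.

    next_deltas[k]  -- delta (error) of neuron k in the next layer
    next_weights[k] -- weight from this neuron to neuron k
    """
    # Sum the next layer's errors, weighted by the connecting weights.
    s = sum(d * w for d, w in zip(next_deltas, next_weights))
    return output * (1.0 - output) * s
```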

- All the weights in the network are adapted by this formula (for the weight from neural to neural2):

```
weight += LearnRate * neural.myvalue * neural2.error;
```

where LearnRate is the network's learning rate (set to 0.25 in my network).

- The bias weight for each neuron is defined by:

```
bias += LearnRate * neural.myerror * neural.Bias;
```

where Bias is a constant value of 1.
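Both update rules can be sketched together in Python (hypothetical names; the bias weight behaves like an ordinary weight attached to a constant input of 1):

```python
LEARN_RATE = 0.25  # the learning rate from the question

def update_weight(weight, source_output, target_delta):
    """Weight update: w += eta * x * delta of the downstream neuron."""
    return weight + LEARN_RATE * source_output * target_delta

def update_bias_weight(bias_weight, target_delta, bias_input=1.0):
    """The bias is just a weight on a constant input of 1."""
    return bias_weight + LEARN_RATE * bias_input * target_delta
```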

That's pretty much all the detail I can give. As I said, the output tends toward 0.5 with different training examples :(
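For reference, the pieces above can be assembled into a minimal runnable sketch (Python, an assumed 2-4-1 structure rather than the original C# project) that learns XOR on all four examples with per-example backpropagation, restarting from a few random initializations and keeping the best run:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The four XOR examples: (inputs, target).
DATA = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

def train_xor(seed, hidden=4, eta=0.5, epochs=8000):
    """Train a 2-<hidden>-1 network on all four XOR examples."""
    rng = random.Random(seed)
    w = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
    wb = [rng.uniform(-1, 1) for _ in range(hidden)]   # hidden bias weights
    v = [rng.uniform(-1, 1) for _ in range(hidden)]    # hidden -> output
    vb = rng.uniform(-1, 1)                            # output bias weight

    def forward(x):
        h = [sigmoid(wb[j] + w[j][0] * x[0] + w[j][1] * x[1])
             for j in range(hidden)]
        y = sigmoid(vb + sum(v[j] * h[j] for j in range(hidden)))
        return h, y

    for _ in range(epochs):
        for x, t in DATA:                # per-example backpropagation
            h, y = forward(x)
            dy = y * (1.0 - y) * (t - y)                # output delta
            dh = [h[j] * (1.0 - h[j]) * dy * v[j]       # hidden deltas
                  for j in range(hidden)]
            for j in range(hidden):
                v[j] += eta * h[j] * dy
                wb[j] += eta * dh[j]
                w[j][0] += eta * dh[j] * x[0]
                w[j][1] += eta * dh[j] * x[1]
            vb += eta * dy

    error = sum((t - forward(x)[1]) ** 2 for x, t in DATA)
    preds = [forward(x)[1] for x, _ in DATA]
    return error, preds

# Restart from a few random initializations and keep the best run.
best_error, best_preds = min(train_xor(seed) for seed in range(5))
```

Note that the backward pass here sums the next layer's deltas (errors) weighted by the connecting weights, exactly as in the textbook formula.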

I've uploaded my project here:

http://www.multiupload.com/G68E57N4BM

I'm really stuck; I don't know where my mistake is after checking the code again and again :(

Thank you very much for your help :)