Tuesday, October 1, 2013

Nene. A15, Neural Networks

We've seen computers do all sorts of things. Just in our journey through AP 186, we've even coded a few interesting things ourselves: we've improved the contrast of a very dark image and saved a wedding picture, removed the lines from a picture of a crater on the moon, and even taught the computer to read and play musical notes on a staff. However, this final activity before the project could well be the most interesting, at least in concept. We will be teaching the computer to learn.

I have to admit that I haven't completely grasped the whole concept of Neural Networks, but here is a summary based on how I understand it. We give the code sets of characteristics, each belonging to one of a few objects, and tell the code which object each set belongs to. For example, say you have a bunch of fruits. We put in the characteristics of a banana and tell the computer that this is a banana. This is the learning phase. Then a few more lines process this input: the code assigns weights to each input so that it can determine the proper output. Finally, we have the testing phase, wherein we input certain characteristics again and hope that the code outputs the proper fruit.
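
Just to make the learning-then-testing idea concrete, here is a bare-bones sketch of a single sigmoid neuron in Python. This is my own illustration and not the code we were given for the activity; the "learning" is nothing more than nudging the weights a little toward the desired answer on every pass.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(inputs, targets, rate=2.5, cycles=400, seed=0):
    # Learning phase: nudge the weights toward the desired answer,
    # a little at a time, on every pass over the training examples.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=inputs.shape[1])   # one weight per input characteristic
    b = 0.0                                # bias term
    for _ in range(cycles):
        for x, t in zip(inputs, targets):
            y = sigmoid(x @ w + b)
            step = rate * (t - y) * y * (1 - y)   # bigger error, bigger correction
            w += step * x
            b += step
    return w, b

def test(inputs, w, b):
    # Testing phase: run (possibly new) inputs through the learned weights.
    return sigmoid(inputs @ w + b)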

We were given the neural network code of an AND logic gate. The learning inputs are:
[0 0] [0 1] [1 0] and [1 1]

and the expected output used for the learning is:
[0 0 0 1]

That is, the expected output is 1 only for the fourth learning input, where both digits are 1.

Using the learning inputs as the testing inputs, we get
0.0054392    0.0304290    0.0291350    0.9501488
which can be rounded off to [0 0 0 1].
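
For concreteness, feeding the four AND patterns through the single-neuron sketch above behaves the same way. The exact numbers depend on the random starting weights, the learning rate, and the number of cycles (400 here is just my own pick, not necessarily the activity's setting).

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)

w, b = train(X, t, rate=2.5, cycles=400)
print(test(X, w, b))   # four values that round off to [0 0 0 1]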

Little by little, we can play with this code just to see how well this computer learned how to learn.
Let's try mixing up the input.
[1 1] [0 0] [0 1]  [1 1]

The computer shows that this is no problem for it, giving us a result of
0.9501488    0.0054392    0.0291350    0.9501488
or [1 0 0 1]
as expected.
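
This makes sense: each test input is scored on its own by the learned weights, so the order we feed them in doesn't matter. That is all the snippet below checks, reusing the w and b trained on the AND data above.

X_mixed = np.array([[1, 1], [0, 0], [0, 1], [1, 1]], dtype=float)
print(test(X_mixed, w, b))   # each row is scored independently, so reordering
                             # the inputs just reorders the outputs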

Commercial break.
I just noticed that this is probably the least graphic activity so far, with no figures at all. Don't worry though; I'll be using Neural Networks for my final project, which is very graphic...


Now, we try something a bit more difficult.
How about an XOR logic gate? It outputs 1 only if exactly one digit is 1. You got that?
If not, learn it at the same time as the neural network code does: by example.
0 0 = 0
0 1 = 1
1 0 = 1
1 1 = 0

The result is 1 if and only if exactly one digit is 1.
Get it now?
The computer already has,
and it gives us this output:
0.0473429    0.4593762    0.9103764    0.4669348

which is right, right?
wrong.
well, sort of.

The expected output is already given above.
However, the computer didn't get it completely right.
I relate this to people guessing.
It's as if the computer is only about 50% sure about the second and the last inputs.

A short chat with Chester gives me valuable insight. He suggests changing the learning rate and/or the number of training cycles, and says that the learning rate changes how sensitively the neural network adjusts its weights.

Indeed, changing the learning rate from 2.5 to 5 gives us
0.0304071    0.9671456    0.9653771    0.0425667

which rounds off to [0 1 1 0], the result we were expecting.
Right? Right.
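
The code we were given handles the network layers for us, but here is a rough stand-alone sketch of what it has to do for XOR. Since XOR is not linearly separable, a single neuron like the one sketched earlier cannot learn it, so this version adds one hidden layer and exposes the learning rate as the knob Chester pointed me to. Again, this is my own illustration, not the actual activity code (it reuses sigmoid from the first sketch, and the hidden size and cycle count are just my picks).

def train_xor(X, t, hidden=2, rate=5.0, cycles=5000, seed=1):
    # One hidden layer of sigmoid units, trained by plain back-propagation.
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(size=hidden)
    b2 = 0.0
    for _ in range(cycles):
        for x, target in zip(X, t):
            h = sigmoid(x @ W1 + b1)             # hidden layer
            y = sigmoid(h @ W2 + b2)             # output
            d_out = (target - y) * y * (1 - y)   # output error signal
            d_hid = d_out * W2 * h * (1 - h)     # error pushed back to the hidden layer
            W2 += rate * d_out * h
            b2 += rate * d_out
            W1 += rate * np.outer(x, d_hid)
            b1 += rate * d_hid
    return W1, b1, W2, b2

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 0], dtype=float)
W1, b1, W2, b2 = train_xor(X, t, rate=5.0)
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2))   # values near [0 1 1 0]
# A net this small can still get stuck near 0.5 for some inputs, much like the
# result above; a higher learning rate, more cycles, or a different seed helps.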


Moving on to using neural networks for the pattern recognition activity. This is where they come in handy. The input characteristics I use now are the R, G, and B values of the bills. As the expected output, I input 0 for 1000-peso bills and 1 for 500-peso bills. Of course, we can easily work out the output of basic logic gates ourselves, but identifying a bill just by its RGB values is another matter, and the neural network is able to do it to some extent.
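
Schematically, the setup looks like the snippet below, again using the earlier single-neuron sketch rather than the actual activity code. The RGB triples here are placeholders and not my measured values; the real features come from the cropped bill images of the earlier pattern recognition activity, scaled to [0, 1] for this sketch.

rgb_1000 = np.array([[0.45, 0.55, 0.65],
                     [0.43, 0.53, 0.63]])   # hypothetical 1000-peso samples
rgb_500  = np.array([[0.75, 0.65, 0.40],
                     [0.73, 0.63, 0.42]])   # hypothetical 500-peso samples

X_train = np.vstack([rgb_1000, rgb_500])
t_train = np.array([0, 0, 1, 1], dtype=float)   # 0 = 1000 peso, 1 = 500 peso

w, b = train(X_train, t_train, rate=10, cycles=400)
print(test(X_train, w, b))   # outputs near 0 point to 1000s, near 1 to 500s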

At a learning rate of 10 and 400 learning cycles, the output is
0.0000802    0.0233898    0.0057143    0.5679243    0.2478101    0.0000542    0.0067900    0.9166035    0.9996348    0.9857239    0.9996027    0.9979951    0.9994319    0.7868495

which is a good approximation of the expected 0 0 0 0 0 0 0 1 1 1 1 1 1 1,
since the first 7 test inputs were 1000-peso bills and the rest were 500-peso bills.

There were a few discrepancies though, such as the fourth test input, where the computer is pretty much undecided about the classification (and even leans slightly the wrong way). There were also a couple of less unsure answers, the fifth and the last inputs, although their values are still clearly closer to the right answer.

In summary,
1/14 was 50-50
2/14 were 70-80% correct
while the rest, 11/14 were spot on.

Errors could come from the fact that some of the test values are near the values of the other class. That is, some 1000-peso bills have RGB values similar to those of the 500-peso bills.

I give myself a 9.

