Woodridge Tech Talk

Our ongoing series of bi-weekly technical presentations

Neural Networks

Chad provides a basic introduction to and overview of neural networks.

  1.
    Neural Networks
    Chad Eatman – July 21, 2015

  2.

    What is a Neural Network?
    – A network of connected Artificial Neurons
    (objects roughly based on biological neurons)
    – It can “learn” to do simple or complex tasks;
    particularly useful when we don’t know exactly
    how to program an algorithm/function by hand

  3.

    The Perceptron
    – Basic Model
    – Inputs to Neuron: x1, x2, … , xi
    – Weights of Each Input to that
    Neuron: w1, w2, … , wi
    – Output = StepFunction(Sum(xi * wi))
    – Can add a bias, b, to the sum
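The perceptron above can be sketched in a few lines of Python. The step function and the AND-gate weights/bias below are illustrative assumptions, not values from the talk:

```python
# A minimal perceptron: output = StepFunction(Sum(xi * wi) + b).

def step(z):
    """Step activation: 1 if the weighted sum is non-negative, else 0."""
    return 1 if z >= 0 else 0

def perceptron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through the step function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return step(z)

# Example: weights and a bias that happen to realize a two-input AND gate.
weights = [1.0, 1.0]
bias = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(x, weights, bias))
```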

  4.

    A Simple Network
    – Inputs: Activated by something
    external to network
    – Hidden Layer: Not seen
    directly; helps map input
    combination to outputs
    – Outputs: The result of putting
    inputs into the network
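The input → hidden → output flow can be sketched as a forward pass through layers of step-function neurons. The layer sizes, weights, and biases here are made-up illustrative values:

```python
# Forward pass through a tiny 2-input, 2-hidden, 1-output network.

def step(z):
    return 1 if z >= 0 else 0

def layer(inputs, weights, biases):
    """Each row of `weights` feeds one neuron in the layer."""
    return [step(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

inputs = [1, 0]                                                   # external activation
hidden = layer(inputs, [[1.0, 1.0], [-1.0, -1.0]], [-0.5, 0.5])   # hidden layer
output = layer(hidden, [[1.0, 1.0]], [-1.5])                      # output layer
print(output)
```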

  5.

    – Perceptrons can be used to model any digital circuit
    – …but we don’t know what the biases and weights
    should be, and small changes in them either have no
    effect on a neuron’s output or a large one
    – …and we can already model digital circuits with actual
    digital circuits, so this is not very impressive
    So what we want is a neuron whose output behaves
    similarly to a step function, but allows us to make small
    changes and produce output values other than 1 and 0.
    How about a Sigmoid function for the output?

  6.

    The Sigmoid Neuron
    – With a Sigmoid Neuron, we use the same
    principles as the Perceptron, but now:
    Output = Sigmoid( Sum(xi*wi)+b )
    – Now a small change in xi, wi, or b produces
    a small change in the output, and we don’t
    have to stick to binary logic
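The sigmoid neuron can be sketched the same way as the perceptron, with the smooth sigmoid replacing the hard step. The input and weight values below are illustrative assumptions:

```python
import math

def sigmoid(z):
    """Smooth squashing function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_neuron(inputs, weights, bias):
    """Output = Sigmoid(Sum(xi * wi) + b)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

out = sigmoid_neuron([1.0, 0.5], [0.4, -0.2], 0.1)      # smooth value in (0, 1)
nudged = sigmoid_neuron([1.0, 0.5], [0.41, -0.2], 0.1)  # tiny weight change
print(out, nudged)  # the two outputs differ only slightly
```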

  7.

    How Does a Neural Network Learn?
    We have to create a method to find the ideal weights and
    biases. Two common approaches:
    – Backpropagation
    – Genetic Algorithms

  8.

    Backpropagation
    – Define a Cost function, which is minimized
    when the network performs better
    – Run the network, as is, through a few tests
    – Use some calculus to figure out how to
    adjust the biases and weights for the output
    – Using information about how we changed
    the layer to the right, change weights and
    biases moving to the left
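These steps can be sketched with gradient descent on a single sigmoid neuron (no hidden layer, so only the output step of backpropagation): define a cost, run the network on a few tests, and use calculus to adjust the weights and bias. The training data (an OR gate) and learning rate are illustrative assumptions, not values from the talk:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR-gate tests
w = [0.0, 0.0]
b = 0.0
rate = 1.0

for _ in range(2000):
    for x, target in data:
        a = sigmoid(x[0] * w[0] + x[1] * w[1] + b)   # run the network as is
        # Cost C = (a - target)^2 / 2, so dC/dz = (a - target) * a * (1 - a)
        delta = (a - target) * a * (1 - a)
        w[0] -= rate * delta * x[0]                   # move each weight and
        w[1] -= rate * delta * x[1]                   # the bias downhill on
        b -= rate * delta                             # the cost surface
```

In a multi-layer network, the same delta computed at the output layer is reused, layer by layer, to adjust the weights and biases moving to the left.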

  9.

    Example Using Backpropagation
    – Red Ball wants to chase Blue Ball (controlled by the mouse)
    – Red Ball knows whether the blue ball is above, below, to the left, or to the
    right of it
    – Red Ball knows where it should move, but its network starts out not
    knowing how to control its movement (up, down, left, right)
    – The red ball’s actual movement is compared to how it should move, and
    backpropagation is used to change the weights and biases

  10.

    Example Using Backpropagation
    – Neurons drawn as circles, with the color
    representing the activation, red(0) -> blue(1)
    – The color of each path represents the weight
    of the connection, red(0) -> blue(1)
    – The graph shows the weights of the paths
    into the top-most neuron of the hidden layer
    over time
    – See the Example Here

  11.

    Genetic Algorithms
    – Use a “genetic sequence” which contains all the information of the weights
    and biases in a network
    – Make a ton of these sequences, randomly
    – Test each sequence with a network, and score them based upon how well
    they perform the desired task
    – Take the sequences that perform the best, “breed” them to create several
    children, and go back to testing
    – Over several generations of this process, the sequences should get closer
    and closer to the desired functionality
    – Example: https://www.youtube.com/watch?v=qv6UVOQ0F44 (this uses a
    more advanced technique called NEAT, but it’s still using genetic
    algorithms)
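The steps above (make random sequences, score them, breed the best) can be sketched as a toy genetic algorithm. The target vector, population size, and mutation settings are illustrative assumptions, not part of the talk:

```python
import random

random.seed(0)
TARGET = [0.5, -1.0, 2.0]  # pretend these are the ideal weights/biases

def fitness(genes):
    """Higher is better: negative squared distance to the target."""
    return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

def breed(a, b):
    """Child takes each gene from a random parent, plus a small mutation."""
    return [random.choice(pair) + random.gauss(0, 0.05) for pair in zip(a, b)]

# Make a ton of random sequences...
population = [[random.uniform(-3, 3) for _ in TARGET] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)   # ...score each one...
    parents = population[:10]                    # ...keep the best...
    population = parents + [breed(random.choice(parents), random.choice(parents))
                            for _ in range(40)]  # ...and breed children

best = max(population, key=fitness)
print(best)  # should drift toward TARGET over the generations
```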

  12.

    Deep Learning
    – Uses multiple hidden layers which feed into
    each other
    – Concept: data fed into the network can be
    understood as an interaction between multiple
    factors at various levels of abstraction
    – Can be supervised or unsupervised
    – Example – Google DeepDream:
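The "multiple hidden layers which feed into each other" idea can be sketched as a stacked forward pass. The layer sizes and random weights here are illustrative, and real deep-learning systems train these weights rather than leaving them random:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense(inputs, weights, biases):
    """One fully connected layer of sigmoid neurons."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def random_layer(n_in, n_out):
    """Random (untrained) weights and biases for one layer."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [random.uniform(-1, 1) for _ in range(n_out)])

# 3 inputs -> hidden(4) -> hidden(4) -> 2 outputs: each hidden layer
# feeds the next, re-describing the input at a higher level of abstraction.
layers = [random_layer(3, 4), random_layer(4, 4), random_layer(4, 2)]
activation = [0.2, 0.7, 0.1]
for weights, biases in layers:
    activation = dense(activation, weights, biases)
print(activation)
```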

  13.

    Sources and Useful Links
    – http://neuralnetworksanddeeplearning.com/in
    – http://nn.cs.utexas.edu/downloads/papers/st
    – https://en.wikipedia.org/wiki/Deep_learning
    – http://www.ai-junkie.com/ga/intro/gat1.html
    And visit our website: https://woodridgesoftware.com

Categories: Tech Talk