Vanilla JavaScript Neural Network: Simple XOR Implementation
This challenge asks you to build a basic feedforward neural network from scratch using only vanilla JavaScript. The goal is to implement a network capable of learning the XOR (exclusive OR) function, demonstrating fundamental neural network concepts such as weighted sums, activation functions, and backpropagation. This exercise will solidify your understanding of how neural networks operate at a low level.
Problem Description
You are tasked with creating a neural network that can accurately predict the output of the XOR function. The XOR function takes two boolean inputs and returns true if exactly one of the inputs is true, and false otherwise. Your neural network should consist of:
- Two input nodes: Representing the two boolean inputs to the XOR function.
- Two hidden nodes: These nodes will perform intermediate calculations.
- One output node: Representing the final prediction of the XOR function.
- Weights: Randomly initialized weights on every connection between layers (see the initialization sketch after this list).
- Bias: A bias term for each hidden and output node.
- Sigmoid Activation Function: Used to introduce non-linearity.
- Backpropagation Algorithm: To adjust weights and biases based on the error.
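One way to hold this architecture in plain JavaScript is a small object of arrays. A minimal sketch, with the names and the [-1, 1) initialization range chosen purely for illustration:

```javascript
// Draw initial weights from [-1, 1) so the two hidden nodes start
// asymmetrically; identical initial weights would learn identically.
const rand = () => Math.random() * 2 - 1;

const network = {
  // weightsInputHidden[i][j]: weight from input node i to hidden node j
  weightsInputHidden: [[rand(), rand()], [rand(), rand()]],
  // weightsHiddenOutput[j]: weight from hidden node j to the output node
  weightsHiddenOutput: [rand(), rand()],
  biasHidden: [rand(), rand()],
  biasOutput: rand(),
};
```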
The network should be trained using the following training data:
- Input 1: [0, 0], Output: 0
- Input 2: [0, 1], Output: 1
- Input 3: [1, 0], Output: 1
- Input 4: [1, 1], Output: 0
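In code, this table maps naturally onto two parallel arrays (the names here are illustrative):

```javascript
const inputs = [[0, 0], [0, 1], [1, 0], [1, 1]];
const targets = [0, 1, 1, 0];
```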
Your implementation should include functions for:
- sigmoid(x): Calculates the sigmoid of a given value.
- sigmoidDerivative(x): Calculates the derivative of the sigmoid function.
- feedForward(inputs): Propagates the inputs through the network and returns the output.
- train(inputs, targets, learningRate): Trains the network using the provided inputs, target outputs, and learning rate. This function should iterate through the training data multiple times (epochs) to improve accuracy.
- predict(inputs): Uses the trained network to predict the output for a given input.
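A hedged sketch of the first three helpers, built on the network object from the initialization sketch above. Here the network is passed explicitly rather than held in a class, so the signatures differ slightly from the list; a class-based version works equally well. Note the common convention (assumed here) that sigmoidDerivative receives the already-activated output y = sigmoid(x):

```javascript
// Sigmoid squashes any real value into the open interval (0, 1).
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// Derivative expressed in terms of the sigmoid's output y = sigmoid(x):
// d/dx sigmoid(x) = y * (1 - y). Passing the activated value avoids
// recomputing the exponential during backpropagation.
function sigmoidDerivative(y) {
  return y * (1 - y);
}

// Forward pass through the 2-2-1 network object sketched earlier.
// Returns the hidden activations too, since backpropagation needs them.
function feedForward(net, input) {
  const hidden = net.biasHidden.map((b, j) =>
    sigmoid(
      input[0] * net.weightsInputHidden[0][j] +
      input[1] * net.weightsInputHidden[1][j] +
      b
    )
  );
  const output = sigmoid(
    hidden[0] * net.weightsHiddenOutput[0] +
    hidden[1] * net.weightsHiddenOutput[1] +
    net.biasOutput
  );
  return { hidden, output };
}
```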
Examples
Example 1:
Input: inputs = [[0, 0], [0, 1], [1, 0], [1, 1]], targets = [0, 1, 1, 0], learningRate = 0.1, epochs = 10000
Output: (After training) Network should predict outputs close to [0, 1, 1, 0] for the given inputs.
Explanation: The network learns the XOR pattern through repeated adjustments of weights and biases using backpropagation.
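Assuming the initialization and helper sketches above, plus a train function like the backpropagation sketch at the end of the Notes section, Example 1 could be exercised like this:

```javascript
train(network, inputs, targets, 0.1, 10000);

for (const input of inputs) {
  const { output } = feedForward(network, input);
  console.log(`${JSON.stringify(input)} -> ${output.toFixed(3)}`);
}
// Typical (not guaranteed) result: values near 0, 1, 1, 0.
```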
Example 2:
Input: inputs = [[0, 0], [0, 1]], targets = [0, 1], learningRate = 0.5, epochs = 5000
Output: (After training) Network should predict outputs close to [0, 1] for the given inputs.
Explanation: Training on a subset of the XOR data still lets the network fit those pairs; this two-example subset is in fact linearly separable, so it is easier to learn than the full XOR pattern.
Example 3: (Edge Case - High Learning Rate)
Input: inputs = [[0, 0], [0, 1], [1, 0], [1, 1]], targets = [0, 1, 1, 0], learningRate = 10, epochs = 1000
Output: Network may not converge or may oscillate due to the high learning rate.
Explanation: A learning rate that is too high can cause the training process to become unstable and prevent the network from learning effectively.
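One way to catch this failure mode is to log the mean squared error as training proceeds; an error that plateaus high, grows, or oscillates suggests the learning rate is too large. A sketch of such a check, meant to sit inside the epoch loop of whatever train implementation you write:

```javascript
// Inside the epoch loop: measure mean squared error over the training set.
let sumSquaredError = 0;
for (let s = 0; s < inputs.length; s++) {
  const { output } = feedForward(network, inputs[s]);
  sumSquaredError += (output - targets[s]) ** 2;
}
if (epoch % 100 === 0) {
  console.log(`epoch ${epoch}: MSE = ${(sumSquaredError / inputs.length).toFixed(4)}`);
}
```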
Constraints
- Input Format: Inputs should be arrays of numbers (representing boolean values as 0 or 1). Targets should be arrays of numbers (0 or 1).
- Learning Rate: The learning rate should be a positive number.
- Epochs: The number of epochs should be a positive integer.
- Network Architecture: The network must have 2 input nodes, 2 hidden nodes, and 1 output node.
- Accuracy: After training, the network should achieve an accuracy of at least 90% on the training data (one way to check this is sketched after this list).
- No external libraries: You are restricted to using only vanilla JavaScript.
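To check the accuracy constraint, one option (an assumption, since the problem does not fix a decision rule) is to threshold the continuous output at 0.5 and count matches:

```javascript
// Fraction of training examples where the thresholded prediction matches
// the target. The 0.5 cutoff is a convention, not a requirement.
function accuracy(net, inputs, targets) {
  let correct = 0;
  for (let s = 0; s < inputs.length; s++) {
    const predicted = feedForward(net, inputs[s]).output >= 0.5 ? 1 : 0;
    if (predicted === targets[s]) correct++;
  }
  return correct / inputs.length;
}

console.log(`accuracy: ${(accuracy(network, inputs, targets) * 100).toFixed(0)}%`);
```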
Notes
- Start by initializing the weights and biases randomly.
- The sigmoid function is crucial for introducing non-linearity; without it, the layers collapse into a single linear model, which cannot represent XOR (XOR is not linearly separable).
- Backpropagation involves calculating the error gradient and updating the weights and biases accordingly (a worked sketch follows these notes).
- Experiment with different learning rates and numbers of epochs to find optimal training parameters.
- The sigmoid derivative, written in terms of the sigmoid's output as y * (1 - y), involves no division, so no epsilon guard is needed; the more common pitfall is passing the raw pre-activation value where the already-activated output is expected.
- Debugging can be challenging; print intermediate values (e.g., outputs of each layer) to understand the network's behavior.
- The XOR problem is a classic example for demonstrating the power of neural networks. Good luck!
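To make the backpropagation note above concrete, here is one possible train loop built on the sketches from earlier sections. The gradient formulas follow from the squared-error loss and the chain rule, but the loop structure and names are illustrative, not prescribed:

```javascript
// One possible train loop: stochastic gradient descent, one example at a
// time, for the given number of epochs. Uses the network object,
// feedForward, and sigmoidDerivative sketched earlier.
function train(net, inputs, targets, learningRate, epochs) {
  for (let epoch = 0; epoch < epochs; epoch++) {
    for (let s = 0; s < inputs.length; s++) {
      const input = inputs[s];
      const { hidden, output } = feedForward(net, input);

      // Output delta: derivative of squared error times sigmoid'(output).
      const outputDelta = (output - targets[s]) * sigmoidDerivative(output);

      // Hidden deltas: push the output delta back through the
      // hidden-to-output weights (computed before those weights change).
      const hiddenDeltas = hidden.map((h, j) =>
        outputDelta * net.weightsHiddenOutput[j] * sigmoidDerivative(h)
      );

      // Gradient-descent updates: subtract learningRate times gradient.
      for (let j = 0; j < 2; j++) {
        net.weightsHiddenOutput[j] -= learningRate * outputDelta * hidden[j];
        net.biasHidden[j] -= learningRate * hiddenDeltas[j];
        for (let i = 0; i < 2; i++) {
          net.weightsInputHidden[i][j] -= learningRate * hiddenDeltas[j] * input[i];
        }
      }
      net.biasOutput -= learningRate * outputDelta;
    }
  }
}
```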