Let's formulate a fairly simple problem: detecting sarcasm.

Let's look at a couple of scathing reviews of products sold online.
Intuitively, if a review uses positive language but carries a low rating, then it is "probably" sarcastic.
"I was tired of being approached by beautiful girls. After I bought this jacket, problem solved."

"I purchased two of these. I was pleasantly surprised to notice that when I turn them on and point them at the wall, they can pass through the Star Gate."
The wording of these phrases suggests a positive sentiment ("problem solved", "pleasantly surprised"), but the ratings are low.
This is a hint of sarcasm.
Now we suspect that there is some relationship between sentiment, rating, and sarcasm.
Let's assign them scores:
- Sentiment: +1 if positive, 0 if neutral, -1 if negative.
- Rating: the star rating given by the reviewer (here from 0.5 to 5).
- Sarcasm: 1 if the review is sarcastic, 0 otherwise.
Then, by analyzing a certain number of reviews, we pull out a list of data points.
(Sentiment, Rating, Sarcasm)
(1, 0.5, 1)
(1, 1, 1)
(1, 5, 0)
(-1, 4, 1)
(-1, 1, 0)
and so on ...
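For illustration, these data points might be held in code like this (a minimal sketch; the variable name training_data is mine, not from the original post):

```python
# Each tuple is (Sentiment, Rating, Sarcasm): the first two values are the
# inputs, the last is the label we want the network to predict.
training_data = [
    (1, 0.5, 1),
    (1, 1, 1),
    (1, 5, 0),
    (-1, 4, 1),
    (-1, 1, 0),
    # ... and so on
]
```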
To extract this relationship, we have to work on the Sentiment and Rating values in order to produce a value for Sarcasm.
We will use several levels (layers) to get from the input to the output.
Let's look at the first example (1, 0.5, 1):
Each line (connection) in the network has a certain weight.
We will use these weights to calculate the values in the circles of the hidden layer and of the output layer (which we hope will be 1).
Initially, we assign the weights at random.
In this way we get our first, "stupid" neural network.
For each circle (or neuron) in the hidden and output layers, we multiply each input by its corresponding weight and sum the results.
Hidden Layer Neuron 1 = (1 * 0.2) + (0.5 * 0.4) = 0.4
Hidden Layer Neuron 2 = (1 * 0.3) + (0.5 * 0.6) = 0.6
Hidden Layer Neuron 3 = (1 * 0.4) + (0.5 * 0.7) = 0.75
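Here is that computation as a quick sketch (the weight values are the ones from the example above; the variable names are my own):

```python
inputs = [1, 0.5]  # (Sentiment, Rating) for the first example

# Two weights (one per input line) feeding each of the three hidden neurons.
hidden_weights = [
    [0.2, 0.4],  # into hidden neuron 1
    [0.3, 0.6],  # into hidden neuron 2
    [0.4, 0.7],  # into hidden neuron 3
]

# Multiply each input by its weight and sum the results.
hidden = [sum(i * w for i, w in zip(inputs, ws)) for ws in hidden_weights]
print(hidden)  # [0.4, 0.6, 0.75]
```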
In addition, we want the output (Sarcasm) to be a number between 0 and 1 (other values would not make sense).
We do this by using a "magic" function on the output layer that squashes any number into a number between 0 and 1.
Any function we apply at the neuron level is called an activation function, and in this case we will use the sigmoid function on the output layer.
Final Layer = (0.4 * 0.3) + (0.6 * 0.4) + (0.75 * 0.5) = 0.735
Output = sigmoid(0.735) ≈ 0.676
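Continuing the sketch, the output step looks like this (again with the example weights):

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1 / (1 + math.exp(-x))

hidden = [0.4, 0.6, 0.75]         # values computed in the hidden layer
output_weights = [0.3, 0.4, 0.5]  # weights from the hidden neurons to the output

final = sum(h * w for h, w in zip(hidden, output_weights))
print(round(final, 3))           # 0.735
print(round(sigmoid(final), 3))  # 0.676
```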
We then slightly adjust the weights assigned to the connections to move the output toward the correct value.
We do this using a method called backpropagation, explained in this blog.
We repeat this step about a thousand times over our training data, adjusting our weights each time, with the aim of arriving at the ideal weights for predicting Sarcasm given Sentiment and Rating.
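Putting all the steps together, a toy version of this training loop might look like the sketch below. It assumes sigmoid activations on the hidden layer as well (the post only applies one at the output), bias terms, and plain gradient-descent backpropagation; whether it converges within a thousand passes depends on the random starting weights and the learning rate. All names here are mine.

```python
import math
import random

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# The (Sentiment, Rating, Sarcasm) data points from above.
training_data = [(1, 0.5, 1), (1, 1, 1), (1, 5, 0), (-1, 4, 1), (-1, 1, 0)]

# Random ("stupid") starting weights for a 2-input, 3-hidden, 1-output network.
random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]  # input -> hidden
b1 = [random.uniform(-1, 1) for _ in range(3)]                      # hidden biases
w2 = [random.uniform(-1, 1) for _ in range(3)]                      # hidden -> output
b2 = random.uniform(-1, 1)                                          # output bias
lr = 0.5  # learning rate: how far each weight moves per correction

def forward(x):
    hidden = [sigmoid(sum(xi * wi for xi, wi in zip(x, ws)) + b)
              for ws, b in zip(w1, b1)]
    output = sigmoid(sum(h * w for h, w in zip(hidden, w2)) + b2)
    return hidden, output

for _ in range(1000):  # "repeat this step a thousand times"
    for s, r, y in training_data:
        hidden, out = forward([s, r])
        # Backpropagation: nudge every weight against its error gradient.
        delta_out = (out - y) * out * (1 - out)  # error times sigmoid slope
        b2 -= lr * delta_out
        for j in range(3):
            delta_h = delta_out * w2[j] * hidden[j] * (1 - hidden[j])
            w2[j] -= lr * delta_out * hidden[j]
            b1[j] -= lr * delta_h
            for k, xk in enumerate([s, r]):
                w1[j][k] -= lr * delta_h * xk

# After training, the first example should score much closer to 1.
_, prediction = forward([1, 0.5])
print(prediction)
```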
That's it: most neural network applications are nothing more than variations of what we have seen above, differing in:
- Input and output structure.
- The number of hidden layers and neurons.
- The training process.
- The activation functions.
Comment and share if you found this handy!

From Hackerstribe