The information entropy of a random variable is the expected value of its self-information.
In information theory, self-information or surprisal is a measure of the information content associated with an event in a probability space or with the value of a discrete random variable.
By definition, the amount of self-information contained in a probabilistic event depends only on the probability of that event: the smaller its probability, the larger the self-information associated with receiving the information that the event indeed occurred.
As a quick illustration, the information content associated with an outcome of 4 heads (or any specific outcome) in 4 consecutive tosses of a fair coin would be 4 bits (probability 1/16), and the information content associated with getting a result other than the one specified would be about 0.09 bits (probability 15/16).
— Wikipedia on Self-information
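
The numbers in the quoted example follow from the standard definition of self-information, I(x) = -log2 p(x), measured in bits when the logarithm is base 2. A minimal Python sketch to check them (the function name self_information is illustrative, not from the quoted source):

import math

def self_information(p, base=2):
    # Self-information (surprisal) of an event with probability p,
    # in bits by default (logarithm base 2).
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log(p, base)

print(self_information(1 / 16))   # 4 heads in 4 fair tosses: 4.0 bits
print(self_information(15 / 16))  # any other outcome: ~0.093 bits

Base 2 gives the answer in bits; using the natural logarithm instead would give the same quantity in nats.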
2015.12.31 Thursday ACHK