**Behavioral finance FAQ / Glossary (Bayes, Bayesian)**

This is a separate page of the B section of the Glossary

Dates of related message(s) in the Behavioral-Finance group (*):

Year/month, d: developed / discussed,

i: incidental

Bayes, Bayesian probability, learning

00/9i,10i - 01/2i - 02/2i,5i,8i,9i - 03/3i - 04/8i - 08/9d

+ see model, probabilities, uncertainty, weak signals, (reaction to) information, adjustment + probabilities site link

When there is fog ahead, the trick is to test the track step by step until you are more certain of the road.

Definition: Bayesian probabilities (or conditional probabilities) are probabilities that

* are based on initial subjective (or at least tentative) hypotheses,

* are later adjusted when new information (or even... a lack of events) confirms or contradicts them.

They usually apply to a set of "mutually exclusive" hypotheses / probabilities / assumptions (it will be fully red, or blue, or blue and red, or another color).

They are flexible, adaptable forecasting tools.

They start with "quantified intuitions" (and sometimes objective probabilities), and the Bayesian process is to test and adjust them in real time as additional information becomes known.

Why be subjective?

Why so often start with impressions, or with assumptions drawn from your hat?

Simply because they are all that is available! And because there is my brain under my hat!


In many cases, objective probabilities are not available or not fully relevant. This uncertainty (see that word) happens:

When events and situations are new (unprecedented, or with only rare precedents).

All the more when human factors are at play (in economics and finance, for example).

Then, only subjective probabilities, tentative assumptions, partial knowledge, testable hypotheses and educated guesses can be applied.

Applying subjective / partially subjective probabilities to models that were built to use fully objective probabilities apparently strays from rationality. How could half-baked guesses feed fully scientific / predictive "asset pricing models"?

Actually, such tentative models, whatever their shortcomings, at least help test the water. And their subjectivity decreases at each adjustment step that follows such tests.

Bayesian probabilities differ

from standard probabilities

Past frequencies vs. future hypotheses. Standard probabilities are usually based on data frequencies in the past, not on assumptions and hypotheses, however bright the statistics.

Also, in random time-distributions, the next event is usually assumed to be independent of prior ones.

On the contrary, the Bayesian law is fully future / scenario oriented, not just driven by past data (if they exist).

For example, it does not fully trust "back testing", although that procedure can help make the first estimate... It also considers the next probability, when a new event occurs (or new data appear), as "conditioned" (thus the "conditional probabilities" appellation), of course on that new fact but also on the previous estimates.

=> It simply adjusts that prior estimate.

It can also consider that this next probability is conditioned on the occurrence of some event. => If that probable event does not occur, the probability also changes.

How the "Bayes law" (*) expresses this

(*) from Thomas Bayes, an 18th century mathematician.

The probability -p- of some proposition, after receiving information that some proposition -q- has occurred, is called its "conditional probability". It is written as Pr(p|q) and reads as the new probability of -p- given that -q- has occurred.
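As a minimal numeric sketch of this law (the numbers below are hypothetical, not from the text): the revised probability Pr(p|q) combines the prior Pr(p) with how likely -q- was under each hypothesis.

```python
# Bayes' law: Pr(p|q) = Pr(q|p) * Pr(p) / Pr(q)
# Hypothetical numbers for illustration only.
prior_p = 0.01                    # Pr(p): initial (prior) probability that p is true
likelihood_q_given_p = 0.9        # Pr(q|p): chance of observing q if p is true
likelihood_q_given_not_p = 0.05   # Pr(q|not p): chance of observing q if p is false

# Total probability of observing q (the denominator: sum over all hypotheses)
pr_q = likelihood_q_given_p * prior_p + likelihood_q_given_not_p * (1 - prior_p)

# Revised ("posterior") probability of p, given that q has occurred
posterior = likelihood_q_given_p * prior_p / pr_q
print(round(posterior, 3))  # the prior 0.01 is revised upward to about 0.154
```

Note that the observation raises the estimate well above the prior, yet nowhere near certainty: the adjustment is proportionate to how diagnostic the new fact is.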

Example: choose your Excalibur! Suppose that only one among three swords (or spades, if you have no use for swords): Excalibur A, Excalibur B or Excalibur C, can make you King Arthur (or a good gardener). You have to choose one of them.

That gives you a 1/3 probability, no? Knowing that, you decide to play, and you choose A. It does not give you the crown (or the garden), sorry. But it gives you the right to choose between B and C. Your probability becomes 1/2.

And you discover that in fact the hidden original probability was above 1/3.
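A small simulation (a sketch, not from the original text) of the sword example: the first pick wins about 1/3 of the time; given that it failed, the second pick wins about 1/2 of the time; and a player who gets both tries wins about 2/3 of the time overall, which is the "hidden" probability above 1/3.

```python
import random

random.seed(0)
trials = 100_000
first_wins = second_wins = first_fails = 0

for _ in range(trials):
    winner = random.choice("ABC")   # the one sword that makes you king
    if winner == "A":               # you always pick A first
        first_wins += 1
    else:
        first_fails += 1
        # A failed: now choose between B and C at random
        if random.choice("BC") == winner:
            second_wins += 1

print(first_wins / trials)                  # ~ 1/3 : the prior for the first pick
print(second_wins / first_fails)            # ~ 1/2 : conditional on A failing
print((first_wins + second_wins) / trials)  # ~ 2/3 : the overall "hidden" odds
```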

How to determine the initial hypotheses

and on what basis to adjust them.

Educated guess, then looking for more clues.

1) Bayesian probabilities are, at the start, assumptions / hypotheses / priors about the odds that one (*) proposition will prove right or wrong.

Whenever possible, those priors are not pure fantasies. They might be objective probabilities, but ones insufficient to cover the specificities of the actual situation. They are often based on experience, on some signals and flimsy coincidences, or on deductions, inferences and reasoning. But they might also be purely intuitive / subjective (a degree of belief).

2) Each new finding or event (or lack of finding or event) is compared to the previous probabilities of occurrence, so as to adjust them. This helps to deduce new probabilities:

* about the real truth (in static situations),

* or about the final outcome (in dynamic situations).

In practice, every time new information or a new finding is available, probabilities are adjusted (see adjustment) into "revised" probabilities. This is done by taking into account those initial probabilities as well as any new relevant information:

All new / additional facts or events (and their frequency) that might reinforce, complement or change the assumptions.

Also, all non-occurrences (**) of events (and their frequency) that would confirm or invalidate a scenario.

Or the "residual possibilities" after an expected event took place (***), or failed to take place. This changes the probability numerator of a predicted outcome, but also the denominator, the sum of all possible outcomes.

(*) Actually, there might be one or several propositions, depending on the scenarios that can be imagined, every one with its odds.

(**) Among other specificities, and in some applications, Bayesian statistics help to calculate, from the number of times a probable event has *not* occurred, the probability that it will occur in the future. This is called "inverse probability".

(***) Your cat just died? Sorry, but at least it will not die again!

Those revised probabilities are less based on subjective beliefs than the initial ones, as objective facts are taken into account.

Bayesian probabilities should not be confused with fuzzy logic (see that phrase), although there is some similarity in their iterative processes. Also, there is some relation with computer techniques such as genetic algorithms or soft computing.
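A hedged sketch of the "non-occurrence" updating mentioned above, assuming a Beta prior on the event's per-period probability (a standard Bayesian device, chosen here for illustration, not taken from the original text): each period in which the event fails to occur, the estimate of its probability is revised downward.

```python
# Beta-Bernoulli updating: start with a Beta(a, b) prior on the
# per-period probability of an event, then observe non-occurrences.
a, b = 1.0, 1.0   # Beta(1, 1): a "know nothing" uniform prior (mean 0.5)

estimates = []
for period in range(5):
    b += 1                         # one more period without the event
    estimates.append(a / (a + b))  # posterior mean after that non-occurrence

print([round(e, 3) for e in estimates])  # steadily shrinking estimates
```

Each silent period counts as evidence, so the "inverse probability" of a future occurrence keeps falling, without ever reaching zero.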

Is it just about waiting for occurrences?

Or about anticipating them?

Are your plans B, C, D, etc. ready?

We see that a Bayesian probability is an adaptation approach that leads to changing the probabilities of p when q occurs (or does not occur). But adaptation, however needed, is not anticipation. To make decisions, there is a need not only to adapt to what happens, but also to anticipate what could happen.

The anticipation part in the Bayesian "philosophy" is to start with "priors" and to have an idea about what could confirm or change them. This incites, whenever possible, to prepare beforehand a range of various key scenarios (see that word) and to give them tentative probabilities. Every key scenario anticipates a possible bifurcation (see that word) of the chain of events that can dramatically change the foreseeable situation.

In what cases to use Bayesian probabilities?

In the case of a misty past and of a foggy future.

This mental process, also called Bayesian learning or Bayesian inference, suits situations of ambiguity / uncertainty (see that word and its difference with "risk"), in which probabilities cannot be scientifically quantified, as past data do not exist or are not reliable. Here are several cases:

When historical data about the frequency of events are unavailable, too scarce, or not trustworthy.

When situations are new, thus when past data are either inexistent, insufficient or not relevant. It would be dangerous to use data on past situations that are not really similar.

That covers all cases where the future might not be fully identical to the past we know. Those various situations are frequent in economics and finance, as those human activities are highly prone to uncertainty. In such cases, Bayesian probabilities help to adjust economic and financial expectations and decisions.

Are people Bayesian?

Is Joe a good adjuster?

"Adjustment" (of probabilities) is the key in Bayesian thinking. This rings a bell, as the word is also used for the behavioral finance "underreaction - adjustment - overreaction" chain (see those words). When a new event takes place, investors and the market often:

Underreact: they stay anchored on the old situation and are not aware of the new odds. This delays the market price adjustment.

Overreact: they look only at the new event without checking whether or not it fits the previous hypotheses they made on the basis of the old situation.

This shows that underreactions / overreactions are also under-adjustments / over-adjustments of prior beliefs, and are signs that investors are not always fully Bayesian (and also not fully rational).

But use the stuff carefully.

Beware of "false positives". All this does not mean that Bayesian thinking is fully rational and effective. As seen above, it often starts with subjective assumptions, hoping they will become progressively more objective when tested against events and adjusted accordingly.

The risk is that some biased assumptions might get (wrongly) confirmed:

By a scarce random coincidence (see small numbers, representativeness),

Or by selective exposure (see that phrase).

Then the biased mental anchoring gets solidified (see "confirmatory bias"). As a common example, this is sometimes what creates superstition.

(*) To find those messages: reach that BF group and, once there, 1) click "messages", 2) enter your query in "search archives".

BF polls: Members of the Behavioral Finance Group, please vote on the glossary quality at

This page last update: 10/08/15
