Noiseless coding theorem
How can an information source be compressed, i.e. encoded with as few code symbols as possible?
The source S emits M symbols:
S = (a1, a2, ..., aM)
Each symbol ai occurs with probability pi:
pi = P(ai)
Each symbol is encoded into a codeword ci:
K(ai) = ci
li is the length of the codeword for ai, and L = Σ pi li is the average code length:
li = |K(ai)|
The self-information of an event E with probability P(E) is I(E) = -log2 P(E), and the entropy of the source is H(S) = Σ pi I(ai) = -Σ pi log2(pi).
∴ H(S) ≤ L < H(S) + 1
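As a quick check of the bound, here is a minimal sketch (assuming a made-up distribution p = (0.5, 0.25, 0.125, 0.125), not one from the notes): each symbol gets the Shannon code length li = ceil(-log2 pi), these lengths satisfy the Kraft inequality so a prefix code with them exists, and the resulting average length L lands between H(S) and H(S) + 1.

```python
import math

# Hypothetical source distribution, assumed only for illustration.
p = [0.5, 0.25, 0.125, 0.125]

# Entropy H(S) = -sum pi log2(pi)
H = -sum(pi * math.log2(pi) for pi in p)

# Shannon code lengths li = ceil(-log2 pi); they satisfy the Kraft
# inequality, so an instantaneous (prefix) code with these lengths exists.
l = [math.ceil(-math.log2(pi)) for pi in p]

# Average code length L = sum pi * li
L = sum(pi * li for pi, li in zip(p, l))

print(f"H(S) = {H:.3f}, L = {L:.3f}")
assert H <= L < H + 1  # the noiseless coding theorem bound
```

For this distribution the probabilities are exact powers of 1/2, so L = H(S) = 1.75; with less regular probabilities L is strictly larger but still below H(S) + 1.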
Ex.: this bound is for the first extension of the source, i.e. symbols are encoded one at a time.
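To see why the first-extension case matters, the sketch below (again with an assumed two-symbol source, P(a1) = 0.9, P(a2) = 0.1) applies the same Shannon-length construction to the n-th extension of the source, i.e. to blocks of n symbols. The per-symbol average length Ln/n then obeys H(S) ≤ Ln/n < H(S) + 1/n, so it approaches the entropy as n grows.

```python
import math
from itertools import product

# Hypothetical memoryless binary source, assumed only for illustration.
p = {"a1": 0.9, "a2": 0.1}
H = -sum(pi * math.log2(pi) for pi in p.values())

def per_symbol_length(n):
    """Average Shannon code length per source symbol for the n-th extension."""
    total = 0.0
    for block in product(p, repeat=n):          # every block of n source symbols
        q = math.prod(p[s] for s in block)      # block probability (memoryless source)
        total += q * math.ceil(-math.log2(q))   # Shannon length for that block
    return total / n

print(f"H(S) = {H:.3f}")
for n in (1, 2, 4):
    print(f"n = {n}: L_n/n = {per_symbol_length(n):.3f}  (< H(S) + 1/n = {H + 1/n:.3f})")
```

The printed per-symbol lengths decrease toward H(S) ≈ 0.469 as n increases, which is the content of the theorem for source extensions.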