
Arikan, in his paper on polar coding (arXiv link or IEEE link), explains how you can force the channel to polarize by applying a linear transformation (the generator matrix). He calls this a combining and splitting process, in which he creates synthetic channels whose capacities are either near perfect or near useless.

But I understand that this is happening so that you can find the best positions in your block to place information bits to be coded. So what does that have to do with the actual channel?

AlexTP
p.storm
  • Can you please provide a link to the paper you are referencing? – Hilmar Nov 08 '20 at 16:31
  • https://ieeexplore.ieee.org/document/5075875 – p.storm Nov 08 '20 at 16:36
  • "so what does that have to do with the actual channel" is a bit broad. It's basically "explain Polar Codes to me", to be honest. – Marcus Müller Nov 08 '20 at 16:40
  • It's "I don't understand why he creates 'synthetic subchannels' when he doesn't have access to the channel during the coding process". Why does he call them channel indices when all I understand he does is choose the positions of the information bits? – p.storm Nov 08 '20 at 16:43
  • the choosing comes at the very, very end of the encoder construction, not at the beginning. – Marcus Müller Nov 08 '20 at 17:29
  • @p.storm isn't the channel described by the transition $W(y\mid x)$ in Section I? – AlexTP Nov 08 '20 at 18:14
  • Yes it does, but it is related to the polarization process. What I don't understand is how the permutation/choice of information bit positions is related to some "channels" that don't actually exist. – p.storm Nov 08 '20 at 18:14
  • @p.storm No, the transition $W(.)$ is a characteristic of the channel itself. – AlexTP Nov 08 '20 at 18:41

1 Answer


Polar codes exploit the channel polarization phenomenon, but it helps to separate channel polarization from polar coding for a moment.

In channel polarization, the bits that you need to transmit, i.e. $\mathbf{u}=[u_1,u_2,\dots,u_N]$, are transformed into $\mathbf{x}=[x_1,x_2,\dots,x_N]$. Each transformed bit $x_i$ is transmitted over the actual channel $W(y|x)$. From the viewpoint of the transformed (coded) bits, nothing special happens: each coded bit $x_i$ goes through the same channel and is received as $y_i$. However, viewed from the perspective of all the user (uncoded) bits together, $\mathbf{u}$ goes through a vector or block channel and is received as $\mathbf{y}=[y_1,y_2,\dots,y_N]$. This is what is referred to as channel combining. The combined channel, $W_N$, is then split into $N$ virtual/synthetic/artificial channels, one for each input bit $u_i$. Now this is where the magic happens, i.e. in the way these virtual channels are defined. The virtual channel of each bit $u_i$ has the following outputs:
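As a concrete sketch of the combining step (my own illustration, not code from the paper): the transform is $\mathbf{x} = \mathbf{u} G_N$ over GF(2), where $G_N$ is the $n$-fold Kronecker power of Arikan's $2\times 2$ kernel. The function name is hypothetical, and I omit the bit-reversal permutation for simplicity:

```python
import numpy as np

def polar_transform(u):
    """Map input bits u to coded bits x = u @ G_N (mod 2), where G_N is
    the n-fold Kronecker power of the kernel F = [[1, 0], [1, 1]].
    (Bit-reversal permutation omitted; N = len(u) must be a power of 2.)"""
    N = len(u)
    F = np.array([[1, 0], [1, 1]], dtype=int)
    G = np.array([[1]], dtype=int)
    while G.shape[0] < N:
        G = np.kron(G, F)  # build G_N = F^{\otimes n}
    return (np.array(u) @ G) % 2

# Example: combine N = 4 input bits into 4 coded bits.
x = polar_transform([1, 0, 1, 1])  # -> [1, 1, 0, 1]
```

A nice property: $G_N$ is its own inverse over GF(2), so applying the transform twice recovers the original bits.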

  1. The output of the combined channel, $\mathbf{y}$. Nothing special about this, as it is what we get from the actual channel $W(y|x)$ as the received codeword.
  2. The exact values of $u_1,\dots, u_{i-1}$. This is the special part, and it implicitly assumes successive cancellation decoding. If the true values of $u_1,\dots, u_{i-1}$ are available, the virtual channels $W_N^{(i)}(\mathbf{y},u_1,\dots, u_{i-1}\mid u_i)$ exhibit polarized behavior.
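You can watch the polarization happen numerically. For a binary erasure channel (BEC) with erasure probability $\epsilon$, the Bhattacharyya parameter $Z$ of each virtual channel evolves exactly under the split: the "bad" channel gets $Z^- = 2Z - Z^2$ and the "good" channel gets $Z^+ = Z^2$. A minimal sketch (function name is my own):

```python
def bec_bhattacharyya(eps, n):
    """Bhattacharyya parameters of the N = 2**n synthetic channels
    W_N^(i) built from a BEC(eps), starting from Z(W) = eps.
    Z near 0 means a near-perfect channel, Z near 1 a near-useless one."""
    Z = [eps]
    for _ in range(n):
        nxt = []
        for z in Z:
            nxt.append(2 * z - z * z)  # W^- : the degraded (worse) channel
            nxt.append(z * z)          # W^+ : the upgraded (better) channel
        Z = nxt
    return Z

# With eps = 0.5 and N = 8, the values already spread toward 0 and 1:
Z8 = bec_bhattacharyya(0.5, 3)
# min(Z8) = 0.00390625 (near perfect), max(Z8) = 0.99609375 (near useless)
```

As $n$ grows, almost every entry goes to 0 or 1: that is the polarization the answer describes, computed without ever touching the physical channel.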

Notice that nothing happened to the actual channel $W(y|x)$ through which the coded bits travel. It is like performing an MRI with a contrast dye. You can do the imaging without the dye, but if you inject the dye into your blood (channel combining) and then do the imaging (channel splitting), you get better results.

Now we come to polar coding. In the code-construction stage, we use channel polarization to figure out which bits to freeze. Then, when the code is used for error correction, the encoder performs the channel combining, and the use of a successive cancellation decoder actually performs the channel splitting.
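The code-construction step can be sketched as follows (again my own illustration, with a hypothetical function name): given a reliability measure such as the Bhattacharyya parameter for each synthetic channel, put the $K$ information bits on the most reliable indices and freeze the rest to known values (usually 0):

```python
def choose_info_positions(Z, K):
    """Indices of the K synthetic channels with the smallest
    Bhattacharyya parameter Z (i.e. the most reliable ones).
    These carry information bits; the remaining N - K indices
    are frozen to values known to both encoder and decoder."""
    order = sorted(range(len(Z)), key=lambda i: Z[i])
    return sorted(order[:K])

# Example with N = 8 reliabilities (e.g. from a BEC(0.5) construction)
# and K = 4 information bits:
Z = [0.996, 0.879, 0.809, 0.316, 0.684, 0.191, 0.121, 0.004]
info = choose_info_positions(Z, 4)  # -> [3, 5, 6, 7]
```

This is the "choice of information bit positions" the question asks about: the positions are just the indices of the synthetic channels that polarized toward being near perfect.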

Shahji