Commit 91c313e

Merge pull request #40 from kirlf/master
New in channel codes README
2 parents: bd09e47 + 39fd1a0


commpy/channelcoding/README.md

Lines changed: 123 additions & 3 deletions
@@ -7,11 +7,11 @@ The main idea of the channel codes can be formulated as the following theses:
- **redundant bits** are added for *error detection* and *error correction*;
- some special algorithms (<u>coding schemes</u>) are used for this.

<img src="https://raw.githubusercontent.com/kirlf/CSP/master/FEC/assets/FECmainidea1.png" width="800" />
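To make the idea of redundancy concrete, here is a minimal illustrative sketch (not part of the original README) using a rate-1/3 repetition code: every message bit is sent three times, so a single flipped bit in each group can be corrected by majority voting.

```python
import numpy as np

message = np.array([1, 0, 1, 1])       # k = 4 information bits
coded = np.repeat(message, 3)          # n = 12 transmitted bits (code rate 1/3)

received = coded.copy()
received[4] ^= 1                       # the channel flips one bit

decoded = (received.reshape(-1, 3).sum(axis=1) > 1).astype(int)  # majority vote per group
print(np.array_equal(decoded, message))  # True: the single error is corrected
```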

How "far apart" a certain coding scheme places its code words from each other determines how strongly it protects the signal from noise [1, p.23].

<img src="https://habrastorage.org/webt/n7/o4/bs/n7o4bsf7_htlv10gsatc-yojbrq.png" width="800" />

In the case of binary codes, the distance between code words is measured by the **Hamming distance** (the number of positions in which they differ), and the minimum distance over all pairs of code words is usually denoted **dmin**:
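As a small numerical illustration (not part of the original README), the pairwise Hamming distances of a toy code can be computed directly, and **dmin** is the smallest of them:

```python
import numpy as np
from itertools import combinations

# a toy code: each row is one code word
codebook = np.array([[0, 0, 0, 0, 0],
                     [1, 1, 1, 0, 0],
                     [0, 0, 1, 1, 1],
                     [1, 1, 0, 1, 1]])

d_min = min(np.sum(a != b) for a, b in combinations(codebook, 2))
print(d_min)  # 3 for this toy code
```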
@@ -39,10 +39,130 @@ Redundancy of the channel coding schemes influences (decreases) bit rate. Actual
To change the code rate (k/n) of a block code, the dimensions of the generator matrix can be changed:

![blockcoderate](https://raw.githubusercontent.com/kirlf/CSP/master/FEC/assets/coderateblock.png)
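For example (an illustrative sketch, not part of the original README), the classic (7, 4) Hamming code has a 4 x 7 generator matrix, so k = 4 message bits produce n = 7 code bits and the code rate is 4/7; choosing a generator matrix of different dimensions changes this ratio:

```python
import numpy as np

# systematic generator matrix of the (7, 4) Hamming code: G = [I_4 | P]
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

k, n = G.shape                       # k = 4, n = 7, i.e. code rate 4/7
message = np.array([1, 0, 1, 1])
codeword = message @ G % 2           # block encoding over GF(2)
print(k / n, codeword)
```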

To change the code rate of a continuous code, e.g. a convolutional code, the **puncturing** procedure is frequently used:

![punct](https://raw.githubusercontent.com/kirlf/CSP/master/FEC/assets/punct.png)
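A minimal sketch of the effect of puncturing (illustrative only, not part of the original README): some of the bits produced by a rate-1/2 encoder are deleted according to a fixed periodic mask, which raises the effective code rate without touching the encoder itself.

```python
import numpy as np

K = 8                                      # number of message bits
coded = np.random.randint(0, 2, 2 * K)     # stand-in for the output of a rate-1/2 encoder

mask = np.tile([1, 1, 1, 0], 2 * K // 4)   # puncturing pattern: drop every 4th coded bit
punctured = coded[mask == 1]

print(K / len(coded), K / len(punctured))  # rate before (1/2) and after (2/3) puncturing
```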

## Example

Let us consider an implementation of **convolutional codes** as an example:

<img src="https://habrastorage.org/webt/v3/v5/w2/v3v5w2gbwk34nzk_2qt25baoebq.png" width="500"/>

*Main modeling routines: random message generation, channel encoding, baseband modulation, additive noise (e.g. AWGN), baseband demodulation, channel decoding, BER calculation.*

```python
import numpy as np
import commpy.channelcoding.convcode as cc
import commpy.modulation as modulation

def BER_calc(a, b):
    num_ber = np.sum(np.abs(a - b))   # number of bit errors
    ber = np.mean(np.abs(a - b))      # bit-error ratio
    return int(num_ber), ber

N = 100000  # number of message bits per frame
message_bits = np.random.randint(0, 2, N)  # message

M = 4  # modulation order (QPSK)
k = np.log2(M)  # number of bits per modulation symbol
modem = modulation.PSKModem(M)  # M-PSK modem initialization
```
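Continuing directly from the snippet above, a quick sanity check of the helper (not part of the original README):

```python
errors, ber = BER_calc(np.array([0, 1, 1, 0, 1]), np.array([0, 1, 0, 0, 1]))
print(errors, ber)  # 1 error out of 5 bits, i.e. BER = 0.2
```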

The [following](https://en.wikipedia.org/wiki/File:Conv_code_177_133.png) convolutional code will be used:

![](https://upload.wikimedia.org/wikipedia/commons/thumb/b/b3/Conv_code_177_133.png/800px-Conv_code_177_133.png)

*Shift register for the (7, [171, 133]) convolutional code polynomials.*
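The octal generator values used below expand to the binary tap patterns of the two generator branches (a short illustrative snippet, not from the original README):

```python
print(format(0o171, '07b'))  # '1111001': binary taps of generator 171 (octal)
print(format(0o133, '07b'))  # '1011011': binary taps of generator 133 (octal)
```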

Convolutional encoder parameters:

```python
rate = 1/2  # code rate
L = 7  # constraint length
m = np.array([L-1])  # number of delay elements
generator_matrix = np.array([[0o171, 0o133]])  # generator branches
trellis = cc.Trellis(m, generator_matrix)  # Trellis structure
```

Viterbi decoder parameters:

```python
tb_depth = 5*(m.sum() + 1)  # traceback depth (a common rule of thumb: about 5 constraint lengths)
```

Two options of the Viterbi decoder will be tested:

- *hard* (hard inputs)
- *unquantized* (soft inputs)

Additionally, the uncoded case will be considered.
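To show what the two decoder inputs look like, here is a small sketch (not part of the original README): with `demod_type='hard'` the demodulator returns bits for the hard-decision decoder, while `demod_type='soft'` returns real-valued soft outputs for the *unquantized* decoder.

```python
import numpy as np
import commpy.modulation as modulation

qpsk = modulation.PSKModem(4)
tx_bits = np.array([0, 1, 1, 0])
symbols = qpsk.modulate(tx_bits)

print(qpsk.demodulate(symbols, demod_type='hard'))                  # hard decisions (bits)
print(qpsk.demodulate(symbols, demod_type='soft', noise_var=0.5))   # soft values, one per bit
```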

Simulation loop:

```python
EbNo = 5  # energy per bit to noise power spectral density ratio (in dB)
snrdB = EbNo + 10*np.log10(k*rate)  # signal-to-noise ratio (in dB)
noiseVar = 10**(-snrdB/10)  # noise variance (power)

N_c = 10  # number of trials

BER_soft = np.empty((N_c,))
BER_hard = np.empty((N_c,))
BER_uncoded = np.empty((N_c,))

for cntr in range(N_c):

    message_bits = np.random.randint(0, 2, N)  # message
    coded_bits = cc.conv_encode(message_bits, trellis)  # encoding

    modulated = modem.modulate(coded_bits)  # modulation
    modulated_uncoded = modem.modulate(message_bits)  # modulation (uncoded case)

    Es = np.mean(np.abs(modulated)**2)  # symbol energy
    No = Es/((10**(EbNo/10))*np.log2(M))  # noise spectral density

    noisy = modulated + np.sqrt(No/2)*\
        (np.random.randn(modulated.shape[0])+\
         1j*np.random.randn(modulated.shape[0]))  # AWGN

    noisy_uncoded = modulated_uncoded + np.sqrt(No/2)*\
        (np.random.randn(modulated_uncoded.shape[0])+\
         1j*np.random.randn(modulated_uncoded.shape[0]))  # AWGN (uncoded case)

    demodulated_soft = modem.demodulate(noisy, demod_type='soft', noise_var=noiseVar)  # demodulation (soft output)
    demodulated_hard = modem.demodulate(noisy, demod_type='hard')  # demodulation (hard output)
    demodulated_uncoded = modem.demodulate(noisy_uncoded, demod_type='hard')  # demodulation (uncoded case)

    decoded_soft = cc.viterbi_decode(demodulated_soft, trellis, tb_depth, decoding_type='unquantized')  # decoding (soft decision)
    decoded_hard = cc.viterbi_decode(demodulated_hard, trellis, tb_depth, decoding_type='hard')  # decoding (hard decision)

    NumErr, BER_soft[cntr] = BER_calc(message_bits, decoded_soft[:-(L-1)])  # bit-error ratio (soft decision)
    NumErr, BER_hard[cntr] = BER_calc(message_bits, decoded_hard[:-(L-1)])  # bit-error ratio (hard decision)
    NumErr, BER_uncoded[cntr] = BER_calc(message_bits, demodulated_uncoded)  # bit-error ratio (uncoded case)

mean_BER_soft = np.mean(BER_soft)  # averaged bit-error ratio (soft decision)
mean_BER_hard = np.mean(BER_hard)  # averaged bit-error ratio (hard decision)
mean_BER_uncoded = np.mean(BER_uncoded)  # averaged bit-error ratio (uncoded case)

print("Soft decision:\n"+str(mean_BER_soft)+"\n")
print("Hard decision:\n"+str(mean_BER_hard)+"\n")
print("Uncoded message:\n"+str(mean_BER_uncoded)+"\n")
```

Outputs:

```python
>>> Soft decision:
>>> 0.0
>>>
>>> Hard decision:
>>> 3.0000000000000004e-05
>>>
>>> Uncoded message:
>>> 0.008782
```

### Reference

[1] Moon, Todd K. *Error Correction Coding: Mathematical Methods and Algorithms*. John Wiley & Sons, 2005.
