
The training size for the SVM detector is another parameter that the designer needs to control. In general, for any machine learning algorithm, the training set should be as large as possible to improve prediction on unseen test data. The tradeoff in this application is the increased time required to produce and collect the training data. Figures 5 and 6, respectively, show the SVM demodulator's error performance and the number of SVs required on the same system as stated above for different training sizes. With the C parameter fixed at 2 and a training size of about 200, the performance of the SVM detector reaches its limit, where a further increase in the number of SVs does not improve its accuracy.

Figure 5: Correct rate of the SVM model with linear kernel for different training sizes, n = 5.
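A minimal sketch of this training-size study is given below. It assumes a synthetic ±1 signal model and an illustrative noise level in place of the paper's SIF output, keeps C fixed at 2 with a linear kernel, and reports the correct rate and number of SVs for several training sizes; the data generator and SNR value are assumptions, not the paper's setup.

```python
# Sketch: effect of training-set size on a linear-kernel SVM detector (C = 2).
# Assumption: synthetic +/-1 symbol templates with n = 5 noisy samples per symbol
# stand in for the paper's SIF output; the noise level is illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_features = 5                      # n = 5 filter-output samples used as features
noise_std = 1.8                     # illustrative noise level, not the paper's SNR

def make_data(n_symbols):
    """Generate labelled noisy symbol waveforms (hypothetical signal model)."""
    labels = rng.integers(0, 2, n_symbols)
    clean = np.outer(2 * labels - 1, np.ones(n_features))   # +/-1 templates
    return clean + noise_std * rng.standard_normal(clean.shape), labels

X_test, y_test = make_data(5000)
for train_size in (50, 100, 200, 400):
    X_tr, y_tr = make_data(train_size)
    clf = SVC(kernel="linear", C=2).fit(X_tr, y_tr)
    print(f"train={train_size:4d}  SVs={clf.n_support_.sum():3d}  "
          f"correct rate={clf.score(X_test, y_test):.3f}")
```

With this kind of sweep, the correct rate typically saturates beyond a certain training size while the SV count keeps growing, which mirrors the behaviour reported around a training size of 200.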

Figure 6: Number of support vectors from the SVM model for different training sizes, n = 5.

4.3. Posterior Probability Estimates

In order to reduce the complexity of the SVM analyzed in Section 2, we select only a few samples from the filter output as the features for training and testing (i.e., n = 5 in this case). We depict the probabilities obtained at the SVM output at SNR = −9 dB in Figure 7. The signal in Figure 7 is submerged in noise, so the optimal performance cannot be achieved with a conventional threshold decision. Yet the probability output by the SVM demodulator is accurate while the source symbol sequence [0, 0, 0, 0, 1, 1, 0, 1, 0, 1] is transmitted, and the noise from the part of the waveform that carries no information about symbol "1" is almost entirely removed.
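The sketch below shows one way to obtain posterior probability estimates P(y = 1 | x) from the SVM output. It uses scikit-learn's probability=True option (Platt scaling) as a stand-in for the paper's PPE procedure, and reuses the same hypothetical ±1 signal model as the previous sketch; the noise level and data generator are assumptions.

```python
# Sketch: Platt-scaled posterior estimates P(y = 1 | x) from the SVM detector.
# Assumptions: hypothetical +/-1 signal model (not the paper's SIF output);
# probability=True stands in for the paper's posterior probability estimator.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_features, noise_std = 5, 1.8          # n = 5 features; illustrative noise level

def make_waveforms(symbols):
    clean = np.outer(2 * np.asarray(symbols) - 1, np.ones(n_features))
    return clean + noise_std * rng.standard_normal(clean.shape)

y_tr = rng.integers(0, 2, 200)          # training size of about 200, as in the text
clf = SVC(kernel="linear", C=2, probability=True).fit(make_waveforms(y_tr), y_tr)

# Source sequence from the text: [0,0,0,0,1,1,0,1,0,1]
tx = np.array([0, 0, 0, 0, 1, 1, 0, 1, 0, 1])
p1 = clf.predict_proba(make_waveforms(tx))[:, list(clf.classes_).index(1)]
print("P(y=1|x):", np.round(p1, 3))
```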

Figure 7: The waveform of the SIF output and the posterior probability output obtained by the SVM at SNR = −9 dB.

To understand the difference in PPEs, we have plotted the curves for the SVM and the MHY in Figures 8(a) and 8(b), respectively, at SNR = −5 dB. We depict the estimated probabilities P(y = 1 | x) when a source symbol sequence of all ones is transmitted. We can appreciate that the SVM PPEs are closer to "1" and less spread; most of the demodulation output values lie between 0.9 and 1. Thereby, the SVM estimates are closer to the true posterior probability, which explains its improved performance with respect to the MHY when we measure the BER after the LDPC decoder.

Figure 8: The posterior probability P(y = 1 | x) obtained by the SVM and the MHY method, in (a) and (b), respectively, where source symbols of all ones are transmitted.

In the previous subsection, we showed how the demodulator based on an SIF and an SVM classifier performs when comparing performance at a low BER. In this section, we focus on the performance after the sequence has been corrected by an LDPC decoder.
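Before the LDPC decoder can use the posterior estimates, they must be converted to soft inputs. A minimal sketch of that conversion is given below; the LLR convention LLR = ln(P(y = 0 | x) / P(y = 1 | x)) and the clipping value are assumptions, and the actual decoder call is omitted because it depends on the LDPC implementation used.

```python
# Sketch: turning posterior estimates P(y = 1 | x) into log-likelihood ratios
# (LLRs) for a soft-decision LDPC decoder. The sign convention and clip value
# are assumptions; the belief-propagation decoder itself is not shown.
import numpy as np

def posteriors_to_llrs(p1, clip=1e-6):
    """Map P(y = 1 | x) to LLRs, clipping to avoid log(0)."""
    p1 = np.clip(np.asarray(p1, dtype=float), clip, 1.0 - clip)
    return np.log((1.0 - p1) / p1)

# Posteriors clustered near 1 (as the SVM PPEs are) give large-magnitude
# negative LLRs, i.e. confident soft inputs for the LDPC decoder.
print(posteriors_to_llrs([0.95, 0.99, 0.60]))
```

Posteriors that are closer to the true values and less spread translate into better-calibrated LLRs, which is why the SVM estimates yield a lower BER after LDPC decoding than the MHY estimates.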
