An Overview of Signal Classification
You can’t see them, but there are signals everywhere. A signal is an observable change in a quantity that carries information, and it can carry information about virtually anything, from audio to video to text. The ability to classify signals is an important capability with many applications. For example, EMG sensors measure the electrical activity produced when nerves stimulate a muscle. If EMG sensors are attached to the wrist, the electrical signals can be classified as individual finger movements, which can then be used to control a robotic hand.
Because there are so many different kinds of signals, we need ways to differentiate between them and extract the information they carry. This is a topic my mentor asked me to explore as part of my summer internship at GSI Technology, specifically in the context of radio signal classification.
The existence of radio waves was proven in 1888 by the German physicist Heinrich Hertz. Only six years after the discovery, Guglielmo Marconi began developing the first wireless radio telegraph, kicking off a revolution in wireless communication. The importance of radio waves to the way our modern world functions cannot be overstated.
In an age of mass wireless communication, the need for fast and accurate electromagnetic signal processing has never been greater.
Many different parties share the RF spectrum, and a key technique for spectrum monitoring and management is signal classification: we need to quickly differentiate and identify signals right off the antenna.
In a typical RF setting, a device may need to quickly ascertain the type of signal it is receiving. Consider the image above: these are just a few of the many possible signals that a machine may need to differentiate.
It turns out that state-of-the-art machine learning can be used for this type of classification. But first, let’s take a look at the traditional approach to signal processing.
Jean-Baptiste Joseph Fourier (21 March 1768–16 May 1830) was a French mathematician and physicist best known for the Fourier series and the Fourier transform, which form the foundation of modern signal analysis.
The basic idea behind Fourier transforms is simple and best described by an analogy: if a painter mixes several different colors together, find the original recipe of colors that created the mix.
Instead of mixed paint, however, we want to find the recipe that makes up a given signal reading. The Fourier Transform reverse engineers a given signal in order to find all the different frequencies that comprise it.
Consider the image above, an example from the domain of sound. When you combine the yellow and purple frequencies together, the result is the green signal at the top. The Fourier transform is a mathematical function that can be used to show the different frequency components of a continuous signal.
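For readers who want the math, one common convention writes the Fourier transform of a signal f(t) as:

\hat{f}(\xi) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \xi t}\, dt

Here \xi is frequency in Hz, and the magnitude of \hat{f}(\xi) tells us how strongly that frequency is present in the signal.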
To illustrate how the Fourier transform works, let’s consider a simple example of two sinusoidal functions: f(t) = sin(2t) and g(t) = sin(3t).
Imagine if you were just given the signal on the right; how could you decompose it into its original two components shown on the left?
A Fourier transform will help us do this. It translates signals in the time domain to the frequency domain. So after applying the transform to the signal on the right you would get something like this:
Now let’s look at how to do all of this in Python.
Here we generate two sinusoids, sin(2t) and sin(3t).
import numpy as np
import matplotlib.pyplot as plt

# frequency, in cycles per second, or Hertz
f1 = 2
f2 = 3

# sampling rate, or number of measurements per second
sample_rate = 100
seconds = 10
intervals = seconds * sample_rate

# create signal time series data
t = np.linspace(0, seconds, intervals)
sin1 = np.sin(f1 * 2 * np.pi * t)
sin2 = np.sin(f2 * 2 * np.pi * t)
Then we plot the two sinusoids …
# plot the two sinusoids
plt.plot(t, sin1)
plt.plot(t, sin2)
plt.xlim(0, 2)
plt.title('sin(2t) and sin(3t)')
plt.xticks([], [])
plt.yticks([], [])
plt.show()
Here we add sin(2t) and sin(3t) together to generate our combined signal:
# generate the combined signal
signal = []
for i in range(len(sin1)):
    signal.append(sin1[i] + sin2[i])

# plot the combined signal
plt.plot(t, signal)
plt.xlim(0, 2)
plt.title('Combined Signal')
plt.xticks([], [])
plt.yticks([], [])
plt.show()
And we plot the result:
Now we will apply a fast Fourier transform (FFT) to our signal above using a built-in function from the SciPy Python library:
from scipy import fftpack

# compute the FFT and take its magnitude
signal_fft = fftpack.fft(signal)
signal_fft = np.abs(signal_fft)
freqs = fftpack.fftfreq(len(signal)) * sample_rate

# plot the FFT
plt.style.use(['dark_background'])
plt.plot(freqs, signal_fft)
plt.xlim(0, 5)
plt.yticks([], [])
plt.show()
And by plotting out the result we get this:
Notice how all the values of the FFT are close to zero except around frequencies 2 and 3, i.e., the frequencies corresponding to sin(2t) and sin(3t). We have successfully isolated the frequency components that make up the signal: the Fourier transform takes a signal from the time domain to the frequency domain.
Some of you might notice that the diagram above looks slightly different from the one in Figure 1. This is because Figure 1 shows the continuous Fourier transform, while the plot above comes from an FFT, which is much faster to compute but is discrete and approximate.
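For reference, the FFT is simply a fast algorithm for computing the discrete Fourier transform, which for N samples x_0, \ldots, x_{N-1} is:

X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}, \qquad k = 0, 1, \ldots, N-1

Each X_k measures how much of the k-th discrete frequency is present in the sampled signal.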
This is intended to be a very brief introduction to the Fourier transform. If you want to learn more on this topic I recommend watching 3Blue1Brown’s video: But What is the Fourier Transform.
Fourier analysis has been the dominant mathematical technique for processing, deconstructing, and ultimately classifying signals. But more recently, there has been increasing interest in using deep neural networks to accomplish these tasks.
Lately I have been experimenting with convolutional neural networks to classify radio signals from a small dataset with 11 classes. I used the simple CNN architecture shown below:
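To give a feel for what such a model looks like in code, here is a minimal sketch of a small CNN classifier in Keras. The framework, input shape (2 × 128 I/Q samples), and layer sizes are assumptions for illustration, not the exact architecture shown above.

from tensorflow.keras import layers, models

num_classes = 11           # 11 signal classes, as in the dataset described
input_shape = (2, 128, 1)  # assumed: 2 rows (I and Q) x 128 time samples

# a rough sketch of a small convolutional classifier (illustrative only)
model = models.Sequential([
    layers.Input(shape=input_shape),
    layers.Conv2D(64, (1, 3), padding='same', activation='relu'),
    layers.Dropout(0.5),
    layers.Conv2D(16, (2, 3), padding='valid', activation='relu'),
    layers.Dropout(0.5),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation='softmax'),
])

# assumes one-hot encoded labels
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()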
The convolutional classifier reached up to 73% test accuracy.
Deep residual networks (ResNets) have demonstrated state-of-the-art results in image and audio processing and show promise for signal classification. I will try using a deep residual network on a much larger dataset of radio signals with even more classes. In my next blog, we will look at an interesting paper that experiments with using residual neural networks for signal classification.
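To picture what makes a network “residual”, here is a minimal sketch of a 1-D residual block in Keras. The layer sizes, input shape, and block count are illustrative assumptions, not the architecture from that paper.

from tensorflow.keras import layers, models

def residual_block(x, filters=32, kernel_size=3):
    # One residual unit: two 1-D convolutions plus a skip connection.
    # Assumes the input already has `filters` channels so shapes match for the addition.
    shortcut = x
    y = layers.Conv1D(filters, kernel_size, padding='same', activation='relu')(x)
    y = layers.Conv1D(filters, kernel_size, padding='same')(y)
    y = layers.Add()([shortcut, y])   # skip connection: add the input back in
    return layers.Activation('relu')(y)

# Example: stack a few residual blocks on a 128-sample, 2-channel (I/Q) input
inputs = layers.Input(shape=(128, 2))
x = layers.Conv1D(32, 3, padding='same', activation='relu')(inputs)  # project to 32 channels
for _ in range(3):
    x = residual_block(x, filters=32)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(11, activation='softmax')(x)
model = models.Model(inputs, outputs)

The key design choice is the skip connection: because each block only has to learn a correction to its input, much deeper networks can be trained without the signal degrading as it passes through the layers.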