Friday, March 15, 2019

How to use AI (Artificial Intelligence) to identify radio signals using an RTL-SDR dongle and Linux (Ubuntu)

Identifying Radio stations

I was wondering whether there is a good framework for identifying RF signals, as I wanted to add signal-identification capabilities to my SDRs.

I was thinking of a way to recognize satellite signals and then automatically apply the necessary demodulators and decoders for the specific satellite.

I was looking for an AI deep-learning library able to identify RF radio signals. There are countless deep learning frameworks available today.

Using Python 3 and an RTL-SDR dongle, it should be possible to scan a frequency range and try to identify a satellite.
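As a sketch of how such a scan could be planned, here is a small Python helper of my own (hypothetical, not part of any project) that computes the centre frequencies needed to step an RTL-SDR across a band. With pyrtlsdr you would then tune to each centre (`sdr.center_freq = f`) and call `sdr.read_samples()` to capture IQ data for classification.

```python
# Sketch: plan the centre frequencies for stepping an RTL-SDR across a band.
# The step equals the tuner bandwidth at a 2.4 MS/s sample rate.
# scan_plan() is a hypothetical helper, not part of cnn-rtlsdr.

def scan_plan(start_hz, stop_hz, step_hz=2_400_000):
    """Return the list of centre frequencies needed to cover a band."""
    freqs = []
    f = start_hz + step_hz // 2          # first centre sits half a step in
    while f - step_hz // 2 < stop_hz:
        freqs.append(f)
        f += step_hz
    return freqs

# Cover the FM broadcast band (88 to 108 MHz) in 2.4 MHz slices
centres = scan_plan(88_000_000, 108_000_000)
print(len(centres), centres[0], centres[-1])
```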

Here is a graph of the most widely used deep learning frameworks.
[Image: Deep Learning Frameworks]

I found an open-source project called cnn-rtlsdr, available on GitHub here: https://github.com/randaller/cnn-rtlsdr

This framework uses Keras and TensorFlow to learn and recognize RF signals.

So how does it work?

You first take a clean RF signal, digitize it, and let the framework learn its signature. The more examples of a specific signal you let the AI framework learn from, the more accurately it will be able to recognize that RF signal.
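To make the idea concrete, here is a minimal NumPy sketch (my own illustration, not cnn-rtlsdr's actual preprocessing) that turns a block of digitized IQ samples into a fixed-size 2-D "signature" a CNN could train on:

```python
import numpy as np

# Sketch: convert complex IQ samples into a crude magnitude spectrogram.
# iq_to_signature() is a hypothetical helper for illustration only.

def iq_to_signature(iq, n_fft=64):
    """Split IQ into n_fft-long frames and take |FFT| of each frame."""
    n_frames = len(iq) // n_fft
    frames = iq[: n_frames * n_fft].reshape(n_frames, n_fft)
    spec = np.abs(np.fft.fft(frames, axis=1))
    return spec / spec.max()             # normalise to 0..1 for training

# Synthetic "clean" signal: a tone at 1/8 of the sample rate
t = np.arange(4096)
iq = np.exp(2j * np.pi * t / 8).astype(np.complex64)
sig = iq_to_signature(iq)
print(sig.shape)
```

The tone shows up as a bright column in the signature, which is the kind of stable pattern a CNN can learn to associate with a signal class.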



Here is my installation procedure to get it working on my Ubuntu 18.10 laptop.

Installation Procedure.

Let's check whether you have Python 2 or 3; you need version 3.
python -V
sudo apt-get install git
git clone https://github.com/randaller/cnn-rtlsdr.git
cd cnn-rtlsdr



sudo apt-get update

sudo apt-get install python3-pip
sudo apt-get install rtl-sdr

sudo apt-get install build-essential libssl-dev libffi-dev python3-dev

sudo pip3 install --upgrade pip

sudo pip3 install tensorflow
sudo pip3 install pyrtlsdr

sudo pip3 install scipy

[remove dongle]
sudo rmmod dvb_usb_rtl28xxu rtl2832
[insert dongle]
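If you don't want to rmmod the DVB-T driver after every reboot, you can blacklist it permanently instead (standard Debian/Ubuntu modprobe convention; the file name is my own choice):

```shell
# Stop the kernel claiming the dongle as a DVB-T receiver at boot,
# so the rmmod step above is no longer needed.
echo 'blacklist dvb_usb_rtl28xxu' | sudo tee /etc/modprobe.d/blacklist-rtlsdr.conf
```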


Installing rtl-sdr and calibrating the frequency offset.

We use the kal utility to calibrate your dongle's frequency offset against the GSM network.
Installing Kal
sudo apt-get install automake
sudo apt-get install libtool
sudo apt-get install libfftw3-dev
sudo apt-get install librtlsdr-dev
sudo apt-get install libusb-1.0-0-dev

git clone https://github.com/steve-m/kalibrate-rtl.git
cd kalibrate-rtl/
./bootstrap
./configure
make
sudo make install


In South Africa we can use the GSM900 band.
Let's run kal:
kal -s GSM900
Found 1 device(s):
  0:  Generic RTL2832U OEM

Using device 0: Generic RTL2832U OEM
Found Rafael Micro R820T tuner
Exact sample rate is: 270833.002142 Hz
[R82XX] PLL not locked!
kal: Scanning for GSM-900 base stations.
GSM-900:
    chan: 40 (943.0MHz - 736Hz)    power: 25909.17
    chan: 47 (944.4MHz - 817Hz)    power: 28430.99
    chan: 63 (947.6MHz - 128Hz)    power: 29010.57
    chan: 69 (948.8MHz - 597Hz)    power: 32479.73

We now select the strongest station (channel 69) to measure the average frequency offset:
kal -c 69
Found 1 device(s):
  0:  Generic RTL2832U OEM

Using device 0: Generic RTL2832U OEM
Found Rafael Micro R820T tuner
Exact sample rate is: 270833.002142 Hz
[R82XX] PLL not locked!
kal: Calculating clock frequency offset.
Using GSM-900 channel 69 (948.8MHz)
average        [min, max]    (range, stddev)
- 413Hz        [-460, -354]    (106, 30.402500)
overruns: 0
not found: 0
average absolute error: 0.435 ppm
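Kal reports the offset in Hz at the channel frequency, while SDR software usually wants a correction in parts per million (ppm). The conversion is simply offset / frequency x 1e6, which reproduces the figure in the log above:

```python
# Convert kal's measured offset (Hz) at a known channel frequency
# into the ppm correction value that SDR software expects.

def offset_ppm(offset_hz, channel_hz):
    return offset_hz / channel_hz * 1e6

# GSM-900 channel 69 is 948.8 MHz; kal measured an average offset of 413 Hz
ppm = offset_ppm(413, 948.8e6)
print(round(ppm, 3))    # matches kal's "average absolute error: 0.435 ppm"
```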



We now test whether we can identify any signals using the default pre-trained test data.

Final test

The default script scans the normal FM broadcast band, 88 to 108 MHz.
It labels the radio stations as "tv", which is fine here, since the bundled test data was trained under the "tv" label.

sudo python3 predict_scan.py
Found Rafael Micro R820T tuner
[R82XX] PLL not locked!
88.400 MHz - tv 99.98%
89.600 MHz - tv 99.91%
91.500 MHz - tv 99.99%
92.700 MHz - tv 99.93%
94.700 MHz - tv 99.13%
95.900 MHz - tv 98.04%
98.000 MHz - tv 100.00%
99.200 MHz - tv 99.95%
99.600 MHz - tv 81.13%
101.500 MHz - tv 99.91%
102.700 MHz - tv 100.00%
105.100 MHz - tv 100.00%
106.300 MHz - tv 99.56%
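If you want to act on these detections automatically (for example, to start a demodulator), the scanner's output is easy to parse. Here is a hypothetical helper of my own; the line format is taken from the output above:

```python
import re

# Parse predict_scan.py style output lines ("88.400 MHz - tv 99.98%")
# into (frequency_mhz, label, confidence) tuples, keeping only
# detections above a confidence threshold.

LINE = re.compile(r'([\d.]+) MHz - (\w+) ([\d.]+)%')

def parse_scan(text, min_confidence=90.0):
    hits = []
    for m in LINE.finditer(text):
        freq, label, conf = float(m.group(1)), m.group(2), float(m.group(3))
        if conf >= min_confidence:
            hits.append((freq, label, conf))
    return hits

sample = """88.400 MHz - tv 99.98%
99.600 MHz - tv 81.13%"""
print(parse_scan(sample))   # the 81.13% line is filtered out
```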



We now need to train on the different RF signals so we can identify them.
The best way to do this is with an RTL dongle and your signal of interest.

Learning from an existing RF signal database.

1) "wfm" Wideband FM
2) "tv" TV signal
3) "gsm" GSM signal
4) "tetra" TETRA
5) "dmr" DMR
6) "other"

Link to the database: https://drive.google.com/file/d/1PuhzXkk6AVwXPPKjtFUCpQVsqOOlszu8/view
Some RF signals have already been learned by other users, so you don't need to train on the common signals yourself; just import the learned database.

Unzip the file in the cnn-rtlsdr directory.
Then run the following command to train on the RF signals.
Training takes about 80 seconds per sample, so go and have a coffee or a beer :-)
Make sure your RTL-SDR dongle is connected, as the script will run a test at the end of the training procedure.
python3 train_keras.py
You will need a lot of memory for this to run, so close all unnecessary applications, otherwise you will get an out-of-memory error.
When the training is complete, the script tests the signals against the new database using the RTL-SDR dongle.


Let's learn a signal of our own that is not yet in the database.
I want to learn a satellite telemetry signal.

Learning my own unique signal.



python3 train_keras.py
Using TensorFlow backend.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1062: calling reduce_prod (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2550: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1123: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
Train on 64972 samples, validate on 27844 samples
Epoch 1/50
64972/64972 [==============================] - 70s - loss: 0.3469 - acc: 0.8527 - val_loss: 0.0716 - val_acc: 0.9836
Epoch 2/50
64972/64972 [==============================] - 72s - loss: 0.0575 - acc: 0.9839 - val_loss: 0.0731 - val_acc: 0.9791


...
64972/64972 [==============================] - 79s - loss: 0.0016 - acc: 0.9995 - val_loss: 0.0069 - val_acc: 0.9984
Epoch 49/50
64972/64972 [==============================] - 80s - loss: 7.5126e-04 - acc: 0.9998 - val_loss: 0.0093 - val_acc: 0.9981
Epoch 50/50
64972/64972 [==============================] - 78s - loss: 0.0065 - acc: 0.9983 - val_loss: 0.0357 - val_acc: 0.9923

Found Rafael Micro R820T tuner
[R82XX] PLL not locked!
92.9 wfm 99.9636411667
49.25 other 99.8086333275
95.0 other 99.9997735023
104.0 other 99.9999880791
422.6 other 99.9927401543
100.5 other 99.9997496605
120.0 other 100.0
106.3 wfm 100.0
942.2 other 99.999666214
107.8 other 100.0
Validation: 30.0





Friday, March 8, 2019

The New Coral USB Accelerator module adds Edge TPU co-processor to your system for AI development. Ideal for MobileNet v2 (100+ fps) development.

Coral USB Accelerator dongle

The Coral USB Accelerator dongle is a USB device that adds an Edge TPU co-processor to your Linux development system. It connects via USB 3.0 and performs accelerated ML inferencing.

The onboard Edge TPU is a small ASIC designed by Google that provides high performance ML inferencing with a low power cost.
The unit can execute state-of-the-art mobile vision models such as MobileNet v2 at 100+ fps, in a power efficient manner.

What can I do with this Unit?

You can execute your TensorFlow Lite models on the device.

During the beta period


Currently, the Edge TPU compiler requires that your model use one of the following architectures:

    MobileNet V1/V2:
    224x224 max input size; 1.0 max depth multiplier
    MobileNet SSD V1/V2:
    320x320 max input size; 1.0 max depth multiplier
    Inception V1/V2:
    224x224 fixed input size
    Inception V3/V4:
    299x299 fixed input size

All models must be quantized TensorFlow Lite models (.tflite files) smaller than 100 MB.
These restrictions are expected to be removed after the beta period.
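As a quick sanity check before deploying, the basic constraints can be encoded in a small helper (my own hypothetical example, not part of the Edge TPU tooling; the architecture-specific input-size limits are not checked here):

```python
# Hypothetical pre-flight check for the beta-period constraints:
# a TensorFlow Lite model file (.tflite) smaller than 100 MB.

MAX_BYTES = 100 * 1024 * 1024

def edge_tpu_candidate(filename, size_bytes):
    """Return True if the file meets the basic Edge TPU beta constraints."""
    return filename.endswith('.tflite') and size_bytes < MAX_BYTES

print(edge_tpu_candidate('mobilenet_v2_quant.tflite', 4_200_000))
```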
The first-generation Edge TPU is capable of executing deep feed-forward neural networks (DFF) such as convolutional neural networks (CNN), making it ideal for a variety of vision-based ML applications.

Example Models available.
  • Object recognition.
  • Insect recognition.
  • Plant recognition.
  • Bird recognition.
  • Face recognition.
  • ...

Can the Edge TPU perform accelerated ML training?

Sort of. The Edge TPU is not capable of backward propagation, which is required to perform traditional training on a model. However, using a technique described in Low-Shot Learning with Imprinted Weights, you can perform accelerated transfer-learning on the Edge TPU by embedding new vectors into the weights of the last fully-connected layer on a specially-built and pre-trained convolutional neural network (CNN).
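The imprinting idea itself is simple to sketch with NumPy (a toy illustration of the technique from the paper, not the Edge TPU API): the L2-normalised embedding of one example becomes the new class's weight row, and classification is then a cosine-similarity argmax.

```python
import numpy as np

# Toy sketch of weight imprinting: add a new class by installing a
# normalised example embedding as a row of the final layer's weights.

def imprint(weights, embedding):
    """Append a new class row built from one example's embedding."""
    row = embedding / np.linalg.norm(embedding)
    return np.vstack([weights, row])

def classify(weights, embedding):
    """Cosine-similarity argmax over class rows (rows are unit vectors)."""
    scores = weights @ (embedding / np.linalg.norm(embedding))
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 8))              # 3 existing classes, 8-dim embeddings
w = w / np.linalg.norm(w, axis=1, keepdims=True)
new_example = rng.normal(size=8)
w2 = imprint(w, new_example)             # now 4 classes
print(classify(w2, new_example))         # the imprinted class (index 3) wins
```

Because no gradients are needed, this is why the Edge TPU can add classes without performing backward propagation.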
USB Accelerator dongle

What do you need to use the USB Accelerator?

Any Linux computer with a USB port (preferably a USB 3.0 port):
  • Debian 6.0 or higher, or any derivative thereof (such as Ubuntu 10.0+)
  • System architecture of either x86_64 or ARM64 with ARMv8 instruction set.

Physical size.

It has a very small footprint, as can be seen in the diagram below.




Now you can have your Artificial Intelligence (AI) engine (TensorFlow Lite) on your laptop or as a standalone instance.

Here is a description of the standalone option.
I was experimenting with Artificial Intelligence (AI) for radio voice recognition and an RF signal identification system, and I always had to run my applications on the TensorFlow Lite engine in Google's remote cloud for development and testing, due to the expensive hardware required. I can't wait for this hardware to become available here in South Africa, as it seems to be available only in the USA for now. :-(
The part that interested me the most was the Pulse Width Modulation (PWM) with a maximum frequency of 66 MHz.
My main interest is to use object recognition for the identification of radio signals, and voice recognition for automated radio control.

Coral Development hardware option 1.

It has a footprint similar to a Raspberry Pi.
This unit has all the hardware interfaces for embedded AI applications.
If you want a standalone AI system, this should work well.
There are some examples of how to use the TensorFlow Lite implementation.
Coral Standalone Dev Board
  • GPIO header pinout. (40Pin Header)
  • Universal Asynchronous Receiver-Transmitter (UART). Programmable baud rates up to 4 Mbps.
  • Synchronous Audio Interface (SAI)
  • Inter-Integrated Circuit (I2C)
  • Serial Peripheral Interface (SPI)
  • Pulse Width Modulation (PWM), frequency up to 66 MHz.
  • Serial console port. Terminal port for local access.
  • HDMI port. This is a full-size HDMI 2.0a port.
  • USB 3.0 ports. There are three USB 3.0 ports. 
  • Ethernet port. Supports 10/100/1000 Mbps.
  • Bluetooth 4.1.
  • MicroSD slot.
  • Audio Connections. 4-pin stereo terminal, 3.5mm audio jack, microphone (x2)   
  • MIPI-DSI display connector. Resolution up to 1920x1080 at 60Hz.
  • MIPI-CSI2 camera connector pinout. 24-pin flex cable connector.
  • Power. The Coral Dev Board must be powered with 5 V at 2 to 3 A via the USB Type-C power port.

Boot mode.

This board can boot in three different modes.

  1. Serial download.
  2. eMMC (8 GB).
  3. SD card.

Operating system.

Supports Mendel Linux (a derivative of Debian).

Software models.

There are several pre-compiled examples.
  • Object recognition.
  • Plant recognition.
  • Bird recognition.
  • Human face recognition.
  • Object location detection.
Uncompiled examples:
  • Gesture recognition.
  • Speech recognition.