How TinyML can transform IoT applications across industries

TechAhead Team

October 30, 2020

Imagine a scenario where you can increase the speed of your IoT solutions without burning a hole in your customer’s pocket.

The one question buzzing through your brain is this – how?

Well, by using devices that embed TinyML.

But what exactly is TinyML?

If you don’t know what TinyML is, or how TinyML can transform your IoT applications, here is the complete lowdown for your benefit.

Tiny machine learning (TinyML) is an emerging discipline that aims to miniaturize machine learning algorithms to a size where they can be embedded in IoT devices. It is also referred to as edge AI or embedded AI because it provides AI capabilities to embedded devices.

Doesn’t the concept sound oxymoronic?

Because the principle behind IoT devices is to keep them small and energy efficient.

Machine learning algorithms, on the other hand, have been growing exponentially in size and complexity due to the increase in computational power of graphics processing units (GPUs), the availability of high volumes of data via the Internet, and the advent of big data.

And then again, edge computing is already making IoT faster. So, do we really need TinyML? Because every new technology brings with it some overheads as well.

Good point, so let’s dive straight in.

Why do we need TinyML?

But before that, we need to ask, why do we need machine learning embedded into IoT devices? The IoT ecosystem seems to be working fine as of now.

Well, not exactly.

It has a few problems of its own, which need to be resolved before the world can become a connected place in the true sense.

Let’s look at what those IoT problems are.

Privacy and security of data

As you know, the primary function of IoT devices is to collect relevant data. The data collected is sent to servers on the cloud for further processing, as per business requirements. But the moment data is transmitted over a network, it opens the proverbial can of worms in the form of privacy and security issues. Hacking of data in transit or on the cloud can compromise users as well as service providers. According to IBM's Cost of a Data Breach Report 2020, the average cost of a data breach is pegged at USD 3.86 million. The US is the most expensive at USD 8.64 million, while for India the figure is USD 1.9 million.

What if the devices did not need to send data over the network but could process it themselves?

Carbon footprints

The humongous size of machine learning algorithms means they need huge amounts of energy for training. The GPT-3 model released in May 2020 boasts a network architecture containing a staggering 175 billion parameters, more than double the number of neurons in the human brain. This model cost around 10 million dollars to train and used approximately 3 GWh of electricity.

In 2019, a group of researchers at the University of Massachusetts estimated that training a single deep learning model can generate up to 626,155 pounds of CO2 emissions, approximately the same as the carbon footprint of five cars over their lifetimes.

Shouldn’t scientists and businesses be looking for ways to reduce this carbon footprint?

A big yes, if they want sustainable models of technological growth.

Latency

Do you know the very first thing that happens when you say "Hey Alexa"?

The device checks for Internet connection.

No, I am not joking.

When a user gives an instruction to an IoT device, that instruction is transmitted to the server. The server executes the instruction and sends information back to the device. For instance, if you say "Hey Alexa, play some music," the instruction is sent to the server, which responds with the music you then hear from your speakers.

The time lag between an IoT device sending data and then receiving data in response to it is called latency. If the network is slow, the device has a high latency.

And that is undesirable.

After all, the concept of IoT is to provide user engagement at speed.

How can TinyML improve IoT applications?

The latest technological advancements in the field of Artificial Intelligence position TinyML as the bridge between edge computing and smart devices, promising to make them even faster. As processing time decreases, the energy and other resources consumed decrease as well.

And there is more to it than meets the eye. TinyML can help reduce financial, environmental and security burdens associated with ML.

Let us explore in detail.

More energy efficient

Development in machine learning has been associated with larger, more complex neural networks and faster processing speeds, all of which increase the carbon footprint of AI technologies. Of late, the focus of a group of scientists and researchers has moved to decreasing the carbon footprint of machine learning without compromising on accuracy or reliability. This can be done by modifying the whole machine learning stack – from hardware to software.

Hardware changes, however, take months, if not years, of research, which is time consuming.

But designing innovative software is quicker.

And this is where TinyML steps in – shrinking ML algorithms to decrease the carbon footprint.

This shrinking can happen without compromising on the algorithm’s reliability as we will see later, when we discuss how TinyML works.

Reduced latency

As you already know, latency is the time lag between an IoT device sending and receiving data. If the device has its own processing power, it will not need to send every instruction to the cloud for processing and execution. As the device's dependence on network speed decreases, the overall latency decreases as well.

We all know customers always want more of good stuff.

And who are we to deny them if technology makes it possible.

Improved data privacy and security

Data is at its most vulnerable when it is travelling on a network. Imparting processing and analysis power to IoT devices through TinyML decreases this to and fro between the IoT device and cloud applications. This reduces data vulnerability automatically.

It is much easier and cheaper to secure an IoT device than the whole network. This helps improve data privacy and security issues.

Collect only required data

Data analytics projects tend to generate humongous amounts of data, most of which is redundant. Intelligent IoT devices can be programmed to collect data only when a triggering event happens.

Take the example of a CCTV camera installed outside the gates of an 11-storey building.

If the device keeps collecting footage 24 hours a day, can you imagine the amount of storage space required on the server? Even if the footage is automatically deleted after a week or so.

It would be a better idea to record footage only when a triggering event is detected.

So, how does the system work?

An always-on object recognition system runs alongside the video camera. The moment an object is detected, the CCTV camera starts recording.

Makes more sense, right? Because the amount of data generated will be reduced drastically, decreasing the resources required to store and process it.
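The trigger logic above can be sketched in a few lines of plain Python. This is a toy illustration under simplifying assumptions, not a real object recognition pipeline: frames are flat lists of pixel intensities, and the names `detect_event`, `frames_to_record` and `THRESHOLD` are hypothetical, invented for this example.

```python
# Toy sketch of event-triggered recording: a frame is kept only when it
# differs enough from the previous frame, so idle footage is discarded.

THRESHOLD = 10.0  # mean per-pixel change that counts as "something happened"

def detect_event(prev_frame, frame, threshold=THRESHOLD):
    """Return True when the frame differs enough from the previous one."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)
    return diff > threshold

def frames_to_record(frames):
    """Keep only the frames captured when a change is detected."""
    recorded = []
    prev = frames[0]
    for frame in frames[1:]:
        if detect_event(prev, frame):
            recorded.append(frame)
        prev = frame
    return recorded

idle = [0] * 16                       # empty scene
busy = [50] * 16                      # an object enters the scene
stream = [idle, idle, busy, busy, idle]
print(len(frames_to_record(stream)))  # only the frames around the event survive
```

Out of a five-frame stream, only the two frames where the scene changed are kept, which is exactly the storage saving the CCTV example describes.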

Ready to get an overview of how TinyML works?

How does TinyML work?

As you already know, training the machine learning model is critical to its success. There can be no compromise at this stage. After the training phase is over, ML is converted to TinyML in these four steps:

  1. Compression or distillation – After the model has been trained, it can be compressed in a way that does not alter its accuracy. The two most popular post-training compression techniques are pruning and knowledge distillation.
    1. Pruning removes those nodes in the algorithm that have minimum to no utility for providing output with accuracy and reliability. Usually neurons with lower weights are removed. The network then needs to be retrained on the pruned architecture before deployment.
    2. In the knowledge distillation technique, information in the large model that has been trained is transferred to a smaller network model with fewer parameters. This automatically compresses the knowledge representation and hence the size so that it can be used on devices with lesser memory.
  2. Quantization – The compressed model is quantized to make it compatible with the architecture of the device in which it is embedded. Most desktops and laptops use 32-bit or 64-bit representations for mathematical calculations, whereas most microcontrollers used for TinyML use 8-bit arithmetic for performance optimization. Quantization takes care of this discrepancy between the model and the embedded device.
  3. Encoding – This optional step compresses the model further using Huffman encoding principles.
  4. Compilation – Finally, the model is compiled into a format that can be interpreted and executed by neural network interpreters like TF Lite and TF Lite Micro. It is then converted into C or C++, the languages used by most microcontrollers for efficient memory usage.
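The first two steps above can be sketched in plain Python. This is a toy illustration, not any library's actual API: magnitude pruning zeroes out the smallest weights, and quantization maps the remaining 32-bit floats onto 8-bit integers with a shared scale factor. The function names and the flat weight list are assumptions made for the example; real toolchains such as TensorFlow Lite operate on whole model graphs.

```python
# Toy illustration of two TinyML conversion steps: magnitude pruning
# and int8 quantization of a flat list of float weights.

def prune(weights, fraction=0.5):
    """Zero out the given fraction of weights with the smallest magnitude."""
    k = int(len(weights) * fraction)
    cutoff = sorted(abs(w) for w in weights)[k - 1] if k else 0.0
    return [0.0 if abs(w) <= cutoff else w for w in weights]

def quantize(weights):
    """Map float weights onto int8 values with a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate floats, as the on-device interpreter would."""
    return [v * scale for v in q]

weights = [0.9, -0.05, 0.4, 0.01, -0.8, 0.002]
pruned = prune(weights, fraction=0.5)   # the three smallest weights become 0.0
q, scale = quantize(pruned)             # 8-bit integers plus one float scale
restored = dequantize(q, scale)         # close to pruned, far cheaper to store
print(pruned)
print(q)
```

Storing one byte per weight instead of four, plus a single scale factor, is what makes the 8-bit arithmetic of microcontrollers workable, at the cost of a small rounding error visible in `restored`.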

But not everything is idyllic when it comes to putting theory into practice.

There are challenges in implementation that researchers are working day and night to overcome.

Challenges in integrating TinyML with IoT

A major challenge in integrating TinyML with IoT can be attributed to the engineering and technological barriers to developing microcontrollers – the hardware where TinyML resides.

Microcontrollers must meet these two criteria:

  1. Large enough to run the operating system with its libraries, neural network interpreters, neural weights and architecture, and store intermediate results during processing.
  2. Small enough to consume energy in the range of mWs so that the embedded device can run using coin batteries.

The good news is that these challenges do not stop TinyML from being used in IoT applications.

These are endeavours to improve it further; TinyML can be, and is being, used effectively in its current avatar.

TinyML use cases

Around 30 billion microcontroller units were shipped in 2019. The boost in the microcontroller industry has been attributed to the growing demand for TinyML in IoT devices. TinyML is revolutionizing multiple industries, from retail and manufacturing to healthcare and fitness.

Manufacturing – TinyML can be used to monitor equipment in real-time and send out alerts when preventive maintenance is necessary. This would help reduce downtime owing to equipment failure.

Retail – Inventory management is the single-largest pain point in the retail industry. TinyML can be used to monitor shelves in stores, update stock levels automatically and send alerts as soon as item quantities fall to pre-defined levels. Thus, TinyML can be effective in preventing out-of-stock situations.

Mobility — TinyML sensors installed at strategic traffic locations can help route traffic effectively, reducing congestion, improving emergency response times and decreasing pollution. They can also be used to improve passenger safety.

Agriculture — TinyML can be embedded in livestock wearables to monitor vital parameters like heart rate, body temperature and blood pressure, which can help in predicting illnesses.

Healthcare — TinyML can be used to monitor critical as well as palliative care patients in real time, sending out alerts when emergency action is required.

Summary

Tiny machine learning, or TinyML, is a miniaturized version of machine learning algorithms that can be embedded in IoT devices. TinyML algorithms are capable of performing the same actions as a typical machine learning algorithm, but at a smaller size and complexity. A typical IoT ecosystem has three main problems – privacy and security of data, humongous carbon footprints and device latency.

TinyML can help improve IoT applications by making them energy efficient, reducing latency, collecting only required data and improving data privacy and security. TinyML equips the embedded device with processing power so the data need not travel to and from the cloud for its functioning. This reduces device latency as well as vulnerability to data hacking while traveling on the network.

A machine learning algorithm is converted to TinyML in these steps – compression or distillation, quantization, encoding (optional) and compilation. At the end of the process, the TinyML model can run on a microcontroller alongside the operating system and neural network interpreter, storing neural weights and intermediate results during processing, while consuming energy in the range of milliwatts. TinyML is revolutionizing multiple industries from retail and manufacturing to healthcare and fitness.


Written by

TechAhead Team

TechAhead is a leading software development company with a global presence. With an exceptionally organised and talented team, we at TechAhead are ready to accept any challenge you give us and are equipped to cater to all your development needs.
