Justifying connected things – business models

Why do so many Internet of Things (IoT) ventures fail? 

According to data from McKinsey, about 75% of IoT-based businesses don’t make it off the ground. That’s a striking failure rate, especially considering all the hype the technology has received over the past decade. 

Is it because the scope for object connectivity is more limited than we first thought, or is it that companies are jumping into the market with business models that are unsuitable for IoT and don’t maximise the opportunities it offers? 

This latter suggestion wouldn’t be unheard of amongst early adopters of any new technology — but it’s a particular hurdle in the IoT space, which represents such a significant shift away from the status quo. Below, we’ll take a look at why this ‘new tech, old business model’ approach is stymieing IoT development and explore some solutions that move away from it. 


What are IoT companies doing wrong currently? 

One of the major pain points IoT companies come up against is trying to impose a hardware-based business model onto a technology that centres around connectivity and service provision.

This hardware-centric approach focuses on the traditional design-build-sell process, much as manufacturing always has until recently.

The issue with this is that for an IoT business to succeed, it needs to provide continuous value for consumers. The product itself is just the start — companies need to plan for networks on which their products operate as well as service platforms which collect and manage data. The margins here are less clear-cut and, given how young a technology IoT really is, there’s not a huge pool of knowledge on how to do this. 

In other words, it’s easy enough for IoT companies to build something that works, but much more difficult to predict, forecast and follow through with making it profitable. 

Why do connected business models work best for IoT companies?

IoT works at its best when it provides continuous value for customers — and this is achieved through the platforms and networks that allow people to process the data your physical product provides. 

In this sense, the connectivity of the product is as important as the product itself. Business models that centre connectivity, rather than being explicitly product-focused, are more likely to support the ongoing costs associated with IoT technology.

In terms of what this looks like on a practical level, connected business plans should consider the following.

Subscription-based payments

One simple way to set up your business for the ongoing costs you’ll face as an IoT company is to use a subscription-based model for payments. 

The good news is that companies are increasingly used to paying for key technologies on a subscription basis, thanks to the recent SaaS boom. Key to IoT companies’ success is transferring this concept onto what people view as a physical product. 
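
As a toy illustration of why this matters financially, the sketch below (all figures hypothetical) compares the revenue from a one-off hardware sale with a lower up-front price plus a recurring service fee over a device’s lifetime:

```python
# Hypothetical figures only: comparing a one-off hardware sale with a
# hardware-plus-subscription model over a device's service life.

def one_off_revenue(unit_price: float) -> float:
    """Revenue from a single hardware sale."""
    return unit_price

def subscription_revenue(unit_price: float, monthly_fee: float, months: int) -> float:
    """Hardware sale plus a recurring service fee over the device lifetime."""
    return unit_price + monthly_fee * months

# Example: a sensor sold outright at £200, versus £120 up front
# with a £10/month data-platform subscription over three years.
hardware_only = one_off_revenue(200.0)
connected = subscription_revenue(120.0, 10.0, months=36)
print(hardware_only)  # 200.0
print(connected)      # 480.0
```

The subscription model also keeps revenue aligned with the ongoing network and platform costs described above, rather than front-loading it into the sale.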

Facilitation of pilot projects

As well as being relatively new as a technology, IoT solutions can take a little while to realise a positive ROI — the nature of the tech means that you’re in it for the long game. 

This means that helping customers set up pilot projects should be central to all connected business plans. Accompanying customers on the first stage of their journey, with advice on getting the most out of the data they produce, could help along an investment they’d otherwise be hesitant to make. 

Circular, not linear

One of IoT’s biggest appeal points is that it can help companies shift to what we call the ‘circular economy’. This reimagines the traditional product lifecycle (buy, use, dispose, replace) into something much more sustainable. 

The circular economy centres reuse, maintenance and recycling to create a system in which products last longer, less waste is produced and fewer raw materials are needed. As more companies look to a sustainable future, connected IoT products, with their ability to self-regulate and their potential to assist with key maintenance tasks, look set to play an increasingly important role.

Look towards the future with your IoT business plan and consider how your product could fit into such a system. 

What connected IoT business models are out there and how do they work? 

The good news is that despite these teething problems, IoT looks set to stick around. 

We can predict this because, despite the issues experienced by many ventures at the moment, there are a number of proven connected business models out there right now which companies use very successfully. 

The most successful of these business models include: 

  • Compliance monitoring: compliance is a huge expense for manufacturers (each year US manufacturers spend an estimated $192 billion on it). Using a connected device to monitor key compliance metrics like emissions is cheaper and more efficient than having someone come to check every quarter — and more transparent too.  
  • Predictive maintenance: we all understand the delays, frustration and losses that broken equipment can cause, even on a small scale. Devices that monitor activity levels, stress and other key metrics can issue automated alerts when they start to underperform, so that supplier technicians can fix the issue before it becomes any bigger. 
  • Remote diagnostics: smart sensors can now automate condition monitoring and optimisation. Examples include warehouse temperatures for perishable goods and soil conditions for plants. 
  • Asset tracking: microcontrollers connected to mobile internet can track and monitor an asset from anywhere on Earth. This is an unprecedented level of transparency, and is particularly useful for supply chains looking to reduce loss and theft and improve fleet efficiency and demand forecasting. 
  • Automatic fulfillment: smart devices can be programmed to reorder certain products automatically when they run out. See the Amazon Dash button and the ‘smart’ appliances (such as fridges and dishwashers) currently making waves in the consumer space. 
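
Several of these models boil down to acting on device telemetry. As a minimal sketch of the predictive-maintenance idea, assuming made-up device names and thresholds:

```python
# A minimal predictive-maintenance sketch. Device names, metrics, and
# thresholds are illustrative, not taken from any particular product.
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str
    vibration_mm_s: float   # vibration velocity reported by the sensor
    temperature_c: float

def needs_service(r: Reading, vib_limit: float = 7.1, temp_limit: float = 80.0) -> bool:
    """Flag a device for a technician visit before it fails outright."""
    return r.vibration_mm_s > vib_limit or r.temperature_c > temp_limit

readings = [
    Reading("pump-01", vibration_mm_s=3.2, temperature_c=61.0),
    Reading("pump-02", vibration_mm_s=9.4, temperature_c=75.0),  # excessive vibration
]
alerts = [r.device_id for r in readings if needs_service(r)]
print(alerts)  # ['pump-02']
```

In a real deployment the alert would trigger a work order for the supplier’s technicians, which is exactly the recurring service revenue these models depend on.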

A few final thoughts…

Estimates suggest that the IoT has a potential economic impact of between $3.9 trillion and $11.1 trillion by 2025. To realise that potential, many IoT ventures will need to rethink how they structure their businesses. 

IoT products are physical products, so to many it would seem entirely natural to treat them as you would other consumer or commercial electronics. Yet because of the nature of these devices and how central the ‘service’ side of them is to customer success, a hardware-based business model will yield limited results in the long term.  

Instead, companies should look to centre connectivity in their IoT business models. Ultimately, the appeal of IoT technology is the continuous value it delivers to customers, and business models that capitalise on this will bring the most success. 

What Does The Current Gartner Hype Curve Tell Us?

For the IT industry, the Gartner Hype Curve provides a graphical representation of the maturity and adoption of various technologies and applications. The analysis accompanying Gartner Hype Cycles also gives an indication of how relevant these solutions may be to resolving real-life business problems, and of the opportunities they offer businesses for expanding or improving their operations and gaining a competitive edge.

As with all kinds of analysis however, it’s important for anyone reading and interpreting the data to appreciate the underlying principles guiding the research, and to understand the full implications of all the observations that the analysis brings to light.

Gartner Hype Cycle Methodology

According to Gartner, Inc., the methodology that the research firm uses in preparing their Gartner Hype Cycle, “gives you a view of how a technology or application will evolve over time, providing a sound source of insight to manage its deployment within the context of your specific business goals.”

Each Hype Cycle looks in depth at five key phases in the life cycle of a particular technology or application:

  1. Innovation Trigger: This is a breakthrough or discovery that gains public attention and media coverage. Often at this ideas stage, no usable products or viable business models are available. Much of the hype comes from proof of concept (PoC) evidence, and the potential implications of the new technology.
  2. Peak of Inflated Expectations: As a result of all the early publicity, a number of success stories often appear at this point — but tales of failure may be just as common. Some companies will take action on the basis of these early successes, but many will not.
  3. Trough of Disillusionment: During this phase of the cycle, interest in the new technology declines and cynicism begins to set in, as experiments and implementations fail to deliver the promised results. However, if surviving providers of the new technology manage to improve their products and services to the satisfaction of early adopters, investment in the development may continue.
  4. Slope of Enlightenment: As time goes on, second and third-generation products or services appear from the technology providers. More instances of how the new technology can benefit the enterprise begin to emerge, and observers now have a better understanding of how it works. As a result, more enterprises begin to invest and fund pilot schemes. However, more conservative elements still remain on the fence.
  5. Plateau of Productivity: At the final stage of Hype Cycle maturity, mainstream adoption of the new technology starts to take off. For businesses looking to invest or implement development projects of their own, there’s a clearer understanding of the criteria for assessing provider viability.

A typical Gartner Hype Curve might look like this:


(Image source: Gartner)

What The Current Gartner Hype Curve Suggests

The current Gartner Hype Curve considers five technology trends which are “revolutionising how customers experience digital”, and should provide food for thought for businesses making their strategic plans for 2020 and beyond.

1. Multiexperience

Observers in retail and other industries whose consumers take a multi-platform approach to interacting with brands will already be familiar with what Gartner calls “multiexperience.” It’s a blanket term for the various devices and apps that people use on their many digital journeys. This typically involves a combination of interaction modes and touch points, ranging from web and mobile apps, through natural-language-based chat and voice interfaces, to gestures used in 3D or virtual environments.

For businesses wishing to keep pace with this trend, their in-house development teams or external contractors should master mobile app design, development, and architecture. These teams should create mobile apps with modalities based on specific touch points, while engineering a consistent and unified user experience (UX) across web, mobile, wearable devices, conversational interfaces, and immersive experiences.

2. Machines Without Interfaces

So-called “interfaceless” machines are becoming more widespread, as manufacturers in various sectors phase out on-board instrument panels in favour of apps that run on the operator’s mobile device. Device control is enhanced by the large, high-resolution screens now common on mobile devices, while control software is easier to design thanks to the availability of configurable APIs (application programming interfaces).

3. Agent Interfaces

As interface design evolves across a range of industries, interfaces incorporating Artificial Intelligence (AI) are enabling developers to predict what users intend to do, on the basis of information gleaned from past interactions.

Conversational UIs (or chatbots) are an example of these intelligent agent interfaces, which have the potential to greatly influence how enterprises interact with their consumers, offer services, and provide tools to their employees.

4. Facial Recognition Payment Systems

Pioneered and gaining popularity in China, facial recognition payment systems use QR codes and the scanning and recognition capabilities of mobile device cameras and sensors to bypass traditional cash- and card-based mechanisms.

Though the technology requires a high degree of confidence and trust in the payment service provider, these systems are gaining adoption outside of China. Apple’s Face ID with Apple Pay is one example.

5. Inclusive Design

As diversity becomes a key issue both in and outside the workplace, designers must give consideration to all potential users of their products and services. By taking into account the needs of all possible communities, inclusive design can serve the broadest possible population of users. To ensure this, the data sources used in design efforts must reflect all potential user segments, and avoid data sets that are too narrow or non-inclusive.

Should We Take This At Face Value?

Gartner, Inc. places the emphasis on Chief Information Officers (CIOs) as the business leaders who most need to understand how digital experiences are developed and delivered. The research firm’s clients use the Gartner Hype Curve and its implications as the basis for understanding the promise of an emerging technology within the context of their particular industry, and each individual enterprise’s appetite for risk.

Early adopters need to weigh the balance of a potentially risky investment in largely untested technology against the success that could emerge from getting ahead of the rest of the market.

Executives with a more modest approach to risk-taking will generally insist on a sound cost / benefit analysis of new technologies or methods, before making any financial commitments.

In the case of technologies and services with too many unanswered questions concerning their commercial viability, it might be better to adopt a more conservative stance and wait until others in your sector have been able to deliver tangible value.

Industry analyst Elaine Burke proposes an additional phase to the Gartner Hype Cycle after the plateau of productivity, to reflect the practical reality of when everyday technology becomes a source of everyday frustration.

Burke argues that a ‘Morass of Malfunction’ should be included, to account for the stage in a technology’s maturity when a disconnect occurs between user expectations and the technology provider’s development plan. A typical example would be waiting for a website to load new elements while you scroll: just as you tap on the thing you were looking for, the whole layout jumps, and you’re instantly transported somewhere you didn’t want to go.

By including some concession to the usability issues of a technology after it gains widespread acceptance, the Gartner Hype Curve could give a more complete picture of its life cycle.

Do We Want Yet Another App For Home Automation?

With less hassle in getting everyday tasks done, improved efficiency, and the lure of reduced energy costs, home automation has much to recommend it. Yet with the growing popularity of smart homes and the resulting expansion of the Internet of Things or IoT that supports it, there has also come a proliferation of systems for managing gadgets in the home, and app after app for managing those systems.

A Growing App Ecosystem

The domestic services made possible through smart technology go beyond simple energy consumption and household management, to include a range of applications from assisted living, through security and remote monitoring, to the control and management of home appliances and devices.

(Image source: Statista.com)

Globally, revenue in the smart home market currently amounts to US$84,637 million, according to Statista.com. This figure is expected to grow at a compound annual growth rate (CAGR) of 18.2% over the next three years, resulting in a market volume of US$139,808 million by 2023. According to research by PreciseSecurity.com, global smart home market revenue is expected to reach $158 billion within the next four years.
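
Those two Statista figures are consistent with each other, as a quick compound-growth check shows:

```python
# Sanity check: US$84,637m growing at an 18.2% CAGR for three years
# should land close to the quoted US$139,808m projection.

def project(value: float, cagr: float, years: int) -> float:
    """Compound a value forward at a constant annual growth rate."""
    return value * (1 + cagr) ** years

projected = project(84_637, 0.182, 3)
print(round(projected))  # ≈ 139,770 — within rounding of the quoted 139,808
```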

Smart appliances generate the most significant share of the overall market income. Global consumers are expected to spend $21.5 billion this year on devices which they can connect to smartphones or tablets for better control, convenience, and information. This market segment is expected to increase to $39.6 billion by 2024.

Germany and the UK are both predicted to produce smart home incomes of around $4.8 billion this year. Sellhousefast.uk surveyed 1,462 UK households to discover the smart home products they intend to own in 2020, and found that a smart thermostat (71%) and a smart doorbell (66%) are the products that UK buyers most expect to own this year.

Market analysis by eMarketer suggests that Amazon, Google, Apple, and Samsung have created some of the largest IoT and smart-home platforms in the West, while Baidu, Alibaba and Xiaomi are among the top providers in China. Hardware manufacturers, security and telecommunications providers, utilities, software firms, and startups are also jockeying for position in the space.

(Image source: eMarketer.com)

Home Automation — Or A Confusion Of Options?

Though the apps and interfaces used in governing these various smart technologies may share some common ground, different manufacturers may favour different app and UI designs, different types of network media, and different communications protocols.

The problem of interoperability between smart home systems vexes consumer electronics dealers and customers alike. While connected devices from a single vendor will work fine with each other, interoperability between vendors can become problematic — especially if the vendors don’t use similar technology or employ a shared protocol standard. An example is the popular Lutron Caseta, which is sold at large home improvement chains and uses a proprietary wireless protocol that is similar — but not identical — to the competing Zigbee and Z-Wave standards employed by other home automation vendors such as GE, Jasco, and Philips Hue.

If you mix home automation vendors in your household, there’s a real danger that each one will end up confined to its own network and controlling app, making the issue of coordinating the various appliances and systems a real headache.

The main problem with buying several smart home devices from several different manufacturers is that you’ll have to install, and learn to navigate, several different apps in order to use those devices properly. A better option would be a single screen that shows the status of all your smart home devices, with a unified set of controls that let you make changes or adjustments across the board. 

Unifying The Smart Home On A Single Platform

Providing interoperability for smart home appliances and systems can be achieved in two ways: implementing universal communications protocols, or having a central hub or gateway application that connects various components and acts as an interpreter between the different smart home devices or sub-networks.

Various attempts have already been made at creating a universal set of standards for the communications protocols governing home automation devices. In 2000, three main European standards — the EIB (European Installation Bus), the EHS Protocol (European Home Systems) and BatiBus — were combined into a standardised communications protocol called KNX (Konnex), which currently counts over 270 manufacturers from 33 countries amongst its membership. Other initiatives include communications protocols such as UPnP (Universal Plug and Play), BACnet (the Building Automation and Control Network), and the DLNA (Digital Living Network Alliance).

In essence, a central network hub or gateway should act like a home network router, capable of connecting multiple home computers, smart appliances, and peripheral devices (e.g., printers) to one another, and to the internet. Given the unpredictable nature of smart technology evolution, gateways must be able to continually search for, recognise, and adapt to new devices added to a network. Platforms such as the Open Services Gateway Initiative (OSGi) and Jini can adapt to the addition and removal of devices without requiring manual installation, upgrading or resetting.
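
The hub idea above can be sketched as a toy device registry that lets devices join and leave the network at runtime. The device names and protocols here are made up for illustration; a real hub would also translate commands between protocols:

```python
# A toy smart-home hub: devices register and deregister dynamically,
# and the hub exposes one unified view of everything it can see.
class SmartHomeHub:
    def __init__(self):
        self.devices = {}  # device_id -> protocol name

    def register(self, device_id: str, protocol: str) -> None:
        """Called when a new device announces itself on the network."""
        self.devices[device_id] = protocol

    def deregister(self, device_id: str) -> None:
        """Called when a device disappears from the network."""
        self.devices.pop(device_id, None)

    def status(self) -> dict:
        """A single unified view of all currently connected devices."""
        return dict(self.devices)

hub = SmartHomeHub()
hub.register("thermostat-1", "zigbee")
hub.register("doorbell-1", "z-wave")
hub.deregister("doorbell-1")
print(hub.status())  # {'thermostat-1': 'zigbee'}
```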

Some Examples

For existing homes that already include a variety of automation systems and smart devices, acquired over time and from different manufacturers, more custom-made integration solutions may be required. Even with new purchases, it’s usually necessary to install the manufacturer’s own control software first, to set up the device and get it running, before configuring a hub or any single-app home automation software.

So, for example, with Google Home you would need to open the Home app, tap the add button, tap “Set Up Device,” and then the “Works with Google” option. From the displayed list of manufacturers, you then have to find the right one and follow the steps of the linking process.

Similarly with Alexa, you must first open the Alexa app, tap the hamburger menu at the top left of the screen, and then choose “Add Device.” The displayed headings (“Light”, “Plug”, etc.) give options for the category of smart device. Having chosen, you then select the device manufacturer and follow the prompts to link your accounts.

For Apple based systems, compatible devices can be governed via HomeKit and the Home app. Tapping “Add Accessory” enables you to use the camera on your iPhone or iPad to scan the QR code on the smart device’s box. You can then follow the prompts to name the device and add it to a particular room.

Especially valuable in a home automation context is the ability to automate a group of tasks that you habitually perform in sequence, by setting up routines or scenes — pre-programmed actions that the system performs on your behalf (e.g., locking all the doors before turning off the lights at bedtime).
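
A routine of this kind is essentially an ordered list of device commands. A minimal sketch, with placeholder device names rather than any real hub API:

```python
# A "goodnight" routine: one trigger runs a fixed sequence of actions.
# Device names and commands are placeholders for illustration only.
goodnight_routine = [
    ("front_door", "lock"),
    ("back_door", "lock"),
    ("living_room_lights", "off"),
    ("thermostat", "set_night_mode"),
]

def run_routine(routine):
    """Execute each step in order; here we just log what would be sent."""
    log = []
    for device, command in routine:
        log.append(f"{device}: {command}")  # a real hub would send the command
    return log

for line in run_routine(goodnight_routine):
    print(line)
```

The value of the single-platform approach is that one routine can span devices from different vendors, which separate per-vendor apps cannot do.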

If you’re unsure how to use a smart hub to link different devices or set up your routines, check the manufacturer’s website for specific instructions. Note also that, even if you do use a single app to coordinate your home automation, it’s wise to retain the original apps that came with each device, as this software is often necessary for delivering updates and security patches.

Final Thoughts

As the home automation market continues to expand and the rollout of new 5G technologies fuels smart home evolution, adoption is set to grow steadily in the coming years. Along with recent IoT investments by Google, Apple, Amazon, and Alibaba, the market is consolidating, with more and more products becoming cross-compatible.

So in answer to the question “Do we want yet another app for home automation?” the likely response for the foreseeable future is “Not really, so long as a central network hub or single app controller is available.”

The Usage Of 5G IoT Radio Network Services

With the coming of 5G IoT radio network services, we are embarking on an overhaul of the global communications infrastructure — effectively replacing one wireless architecture created this century with another that aims to lower energy consumption and maintenance costs.

The principal stakeholders of 5G wireless technology — telecommunications providers, transmission equipment makers, antenna manufacturers, and server manufacturers — are all looking to deliver on the promise that, once all of 5G’s components are fully deployed and operational, cables and wires will become a thing of the past, when it comes to delivering communications, entertainment, network connectivity, and a host of other services.

In this article, we’ll be looking at how close they are to making good on this promise, by examining some of the existing and proposed use cases for 5G IoT radio network services.

What Is 5G?

5G is the fifth generation of mobile network technology. Each generation has its own defining characteristics such as frequency bands, advances in transmission technology, and bit rates.

The first generation (1G) was introduced in the early 1980s. It was never an official standard; several attempts were made to standardise wireless cellular transmission, but none became global. 2G/GSM launched in the early 1990s, at around the same time as much of the world was adopting CDMA. The global standards community finally came together in the 3rd Generation Partnership Project (3GPP), and 3G appeared in the early 2000s. 4G was standardised in 2012.

At maximum performance, 4G networks have a theoretical download speed of 1 gigabit per second. 5G networks start at 10 gigabits per second, with a theoretical maximum of 20 gigabits per second or beyond. 5G also offers lower latency, or network lag, which essentially means less time for information to travel through the system.

In addition to raw speed and stability, 5G offers a form of segmentation known as network slicing. This allows multiple virtual networks to be created on top of a shared physical infrastructure. A slice can span multiple parts of the network, such as the core network, transport layer, or access network, and each network slice forms an end-to-end virtual network with both compute and storage functionality.
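
One way to picture network slicing is as a set of QoS profiles sharing one physical network. The slice names and figures below are illustrative targets, not standardised values:

```python
# Illustrative only: network slices modelled as QoS profiles carved out
# of shared physical infrastructure.
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    name: str
    max_bandwidth_mbps: float
    target_latency_ms: float

slices = [
    NetworkSlice("video-streaming", max_bandwidth_mbps=1000, target_latency_ms=20),
    NetworkSlice("massive-iot",     max_bandwidth_mbps=0.1,  target_latency_ms=10),
    NetworkSlice("vehicle-control", max_bandwidth_mbps=10,   target_latency_ms=1),
]

# Each slice behaves as an isolated end-to-end virtual network, so the
# operator can give each class of traffic its own guarantees.
lowest_latency = min(slices, key=lambda s: s.target_latency_ms)
print(lowest_latency.name)  # vehicle-control
```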

What Are 5G IoT Radio Network Services?

Internet of Things or IoT devices and platforms use a variety of wireless technologies, including short-range technology of the unlicensed spectrum, such as Wi-Fi, Bluetooth, or ZigBee, and technologies from the licensed spectrum, such as GSM and LTE. The licensed technologies offer a number of benefits for IoT devices, including enhanced provisioning, device management, and service enablement.

The emerging licensed technology of 5G IoT radio network services provides a range of opportunities to the IoT which are not available with 4G or other technologies. These include the ability to support a massive number of static or mobile IoT devices with a diverse range of speed, bandwidth, and quality-of-service requirements.

Much of the global 5G plan involves multiple, simultaneous antennas, some of which use a spectrum that telecommunications providers agree to share with each other. Other parts of the deployment will include portions of the unlicensed spectrum that telecommunications regulators will keep open for everyone at all times. For this reason, some of the 5G technologies include systems that will enable transmitters and receivers to arbitrate access to unused channels in the unlicensed spectrum.

Most of these 5G IoT radio network services can be grouped under three main categories: enhanced mobile broadband (eMBB), massive IoT (also known as massive Machine Type Communications or mMTC), and critical communications.

Enhanced Mobile Broadband (eMBB)

Enhanced mobile broadband (eMBB) under 5G will have the capacity to support large volumes of data traffic and large numbers of users, including IoT devices. Some estimates put this capacity at a minimum of 100GB per month per customer, greatly expanding the consumer IoT market by delivering high-speed, low-latency, reliable, and secure connections. In addition, the cost of data transmission per bit is set to decrease, making the prospect of “unlimited” data bundles finally feasible.

Enhanced mobile broadband will support the delivery of high definition video at the consumer level (e.g., for TV and gaming), and immersive communications, such as video calls and augmented reality and virtual reality (AR and VR). Some predictions for 5G latency put it as low as 1 millisecond between a device and its base station, increasing the prospects for fingertip control over remote assets (the so-called “tactile internet”), and high definition video conferencing. It will also facilitate data transfer for smart city services, including IoT video cameras for surveillance.

eMBB is intended to service more densely populated metropolitan areas with download speeds approaching 1 Gbps (gigabit per second) indoors, and 300 Mbps (megabits per second) outdoors. This will require the installation of extremely high-frequency millimetre-wave (mmWave) antennas throughout the landscape, potentially numbering in the hundreds or even thousands.

For more rural and suburban areas, enhanced mobile broadband is looking to replace the 4G LTE system, with a new network of lower-power omnidirectional antennas that provide a 50 Mbps download service.

Massive IoT (mMTC)

Massive Machine Type Communications (mMTC) allows machine-to-machine (M2M) and Internet of Things (IoT) applications to operate without imposing burdens on other classes of service. 3GPP Narrowband IoT (NB-IoT) and Long-Term Evolution for Machine-Type Communications (LTE-M) are existing technologies which are integral to the new breed of 5G-era fast broadband communications.

These 4G technologies are expected to continue under full support in 5G networks for the foreseeable future. They are currently providing mobile IoT solutions for smart cities, smart logistics, and smart metering. As 5G evolves, they will be used to access multimedia content, stream augmented reality and 3D video, and to cater for critical communications like factory automation, and smart power grids.

mMTC maintains service levels by implementing a compartmentalised service tier for devices that require a download bandwidth as low as 100 Kbps, but with latency that’s kept low at around 10 milliseconds.

Critical Communications

For critical communications requirements where bandwidth matters less than speed, Ultra Reliable and Low Latency Communications (URLLC) technology can provide an end-to-end latency of 1 millisecond or less. This level of service would enable autonomous or self-driving vehicles, where decision and reaction times have to be near instantaneous. For enterprises, the extreme reliability and low latency of 5G will allow for smart energy grids, enhanced factory automation, and other demanding applications with rigorous requirements.
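
The three service categories can be thought of as a rough decision rule over an application’s requirements. A toy selector, with thresholds loosely based on the figures quoted in this article (the one-million-devices-per-km² density is the commonly cited mMTC target):

```python
# A toy selector mapping an application's requirements onto the three 5G
# service classes discussed above. Thresholds are indicative only.

def service_class(latency_budget_ms: float, devices_per_km2: int) -> str:
    """Pick the 5G service category that best fits the requirement."""
    if latency_budget_ms <= 1:
        return "URLLC"   # critical communications: speed over bandwidth
    if devices_per_km2 > 100_000:
        return "mMTC"    # massive IoT: huge device counts, modest data rates
    return "eMBB"        # enhanced mobile broadband: high-throughput traffic

print(service_class(1, 50))            # URLLC  (autonomous vehicle control)
print(service_class(10, 1_000_000))    # mMTC   (smart metering)
print(service_class(20, 500))          # eMBB   (HD video streaming)
```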

URLLC actually has the potential to make 5G competitive with satellite, which opens up the possibility of using 5G IoT radio network services as an alternative to GPS for geographical location.

A supplementary set of 5G standards called “Release 16”, scheduled for the end of 2019, includes specifications for Vehicle-to-Everything (V2X) communications. This technology incorporates low-latency links between moving vehicles (especially those with autonomous driving systems) and cloud data centres, enabling much of the control and maintenance software for moving vehicles to operate from within static data centres staffed by human personnel.

Use Cases For 5G IoT Radio Network Services

Industry analysts reckon that the initial costs of 5G infrastructure improvement could be astronomical. To recoup these costs, telecommunications companies and other stakeholders in the ecosystem will need to offer new classes of service to new customer segments.

A number of use cases currently exist or are on the horizon.

Cloud And Edge Computing

The wireless technology of 5G IoT radio network services offers the potential for distributing cloud computing services much closer to users than most of the data centres operated by major players like Amazon, Google, or Microsoft. For critical, high-intensity workloads, this could make 5G service providers viable competitors as cloud providers.

Similarly, by bringing processing power closer to the consumer and minimising the latency caused by distance, 5G becomes a vehicle for edge computing environments, where data handling has to occur as close to devices and applications as possible. With latency reductions of a sufficient magnitude, applications that currently require desktop systems or laptops could be relocated to smaller and more mobile devices with significantly less on-board processing power.

Automotive Industry

The high bandwidth connectivity of 5G can provide a seamless and high quality of service for vehicle navigation, infotainment, and other services in standard and autonomous vehicles. Low latency and high bandwidth connections can support vehicle platoons, which improve fuel efficiency and reduce the number of drivers required on the road.

Near-zero latency is enabling the development of driverless or autonomous vehicle technology, while network slicing provides road and infrastructure managers with a greater degree of flexibility, and the option to allocate network slices to specific functions.

(Image source: GSMA)

While it’s unlikely that there will be mass adoption of fully autonomous vehicles on public roads for some years to come, connected and smart vehicles are becoming increasingly popular. For instance, 75% of cars shipped in Australia in 2020 are likely to have some form of connectivity.

Media And Content Delivery

The high bandwidth and low latency of 5G enable the high volume transmission of high definition video in real time. This makes both video conferencing and streaming entertainment faster and more engaging for the participants. These activities also become more versatile, as 5G can support live broadcasting using smartphones, and interactive and immersive VR experiences.

5G Fixed Wireless Access (FWA) systems allow home broadband services to be set up quickly and cost-effectively in rural and other areas that don’t have access to fixed line home broadband. FWA can deliver speeds similar to fibre-based services, at a considerably lower cost (around 74%) than wired connections.

Coupled with edge computing, the low latency and high bandwidth of 5G can enhance the cloud gaming experience, with the edge processing of large volumes of data reducing the need for more powerful AR / VR headsets. Similar enhancements using augmented and virtual reality are enabling organisations in the retail sector to create memorable customer experiences.


Manufacturing

5G IoT radio network services allow the connection of large numbers of devices in a secure and cost-efficient manner, while low latency connectivity enables the virtual control of machines. Fewer processing units are therefore required on the factory floor, while telemetry or information exchange can occur between a large number of interconnected devices in real time.

As with the automotive industry, network slicing allows manufacturers to allocate network slices to specific functions, and a combination of cloud computing, eMBB, and mMTC can facilitate the transmission of real time information at high resolutions.

Health Care

Cables and wires in operating theatres could be replaced by the low latency and secure wireless connections made possible through 5G. For hospital administration, data analytics across medical records will improve efficiency, while AR and VR delivered via low latency and high bandwidth 5G can aid in diagnosis and the training of medical staff. Remote real-time diagnostics can also be enhanced by delivering high quality video over 5G.

In future, 5G IoT radio network services may even power robots for dispensing pharmaceuticals, supporting diagnostics, and performing certain types of surgery.

Smart Cities

5G IoT networks have the potential to aid in city management, for example, through the deployment of city-wide air quality monitors and alert systems for health and safety hazards. They also make possible the mass digitalisation of some public services, and the use of connected vehicles for police and emergency services, linked to traffic lights.

Network slicing will allow city managers to provide higher security and reliability for mission-critical services.

Smart Utilities

Improved edge computing will enable utilities providers to better scale their number of connected devices, and deploy platforms and analytics capable of handling the increased data volumes in real-time.

5G wireless could provide a flexible and cost-effective alternative to last mile fibre, and assist the longer term management of complex virtual energy production plants.

Looking Ahead

While 5G wireless will do away with much of the cabling architecture of current cities, the platform’s requirements for short-range infrastructure — numerous small, low power base stations containing the transmitters and receivers — will create a new and characteristic form of landscape.

The 5G mobile cellular networks in use today are evolving from existing 4G networks, which will continue to serve many functions. Moving forward, 5G IoT radio network services providers will need to ensure that their networks support both current and future use case requirements.

What’s New With BLE5? And How Does It Compare To BLE4?

Since its introduction in 1998, Bluetooth wireless has carved a niche as one of the principal technologies enabling users to connect phones or other portable equipment together. Heralding the next phase in the evolution of this technology is Bluetooth 5.0 or BLE5, the latest version of the platform, and a Low Energy (LE) variant that brings significant advantages over its predecessor, BLE4.

Drawing on updated forecasts from ABI Research and insights from several other analyst firms, the Bluetooth® Market Update 2020 examines the growth and health of the Bluetooth SIG member community, trends and forecasts for each of the key Bluetooth wireless solution areas, and predictions, trends, and opportunities in Bluetooth vertical markets.

(Image source: Bluetooth.com)

According to this year’s report, annual Bluetooth enabled device shipments will exceed six billion by 2024, with Low Energy technologies contributing to much of this activity. In fact, Bluetooth Low Energy (LE) technology is setting the new market standard, with a Compound Annual Growth Rate (CAGR) of 26%.

(Image source: Bluetooth.com)

By 2024, 35% of annual Bluetooth shipments will be LE single-mode devices, and with the recent release of LE Audio, forecasts indicate that Bluetooth LE single-mode device shipments are set to triple over the next five years.

Within the Bluetooth LE market, BLE5 is making its mark on the Bluetooth Beacon and Internet of Things (IoT) sectors, creating new opportunities in areas such as Smart Building, Smart Industry, Smart Homes, and Smart Cities using mesh connections.

Some Bluetooth Basics

Before considering how BLE5 compares to what’s come previously, we’ll give you a basic understanding of the technology involved, and how it has evolved to its current level.

Bluetooth is both a high speed, low powered wireless technology and a specification (IEEE 802.15.1) for the use of low power radio communications that can link phones, computers and other network devices over short distances without wires.

Links are established via low cost transceivers embedded within Bluetooth-compatible devices. The technology typically operates on the frequency band of 2.45GHz, and early versions could support up to 721Kbps of data transfer, along with three voice channels. This frequency band has been set aside through international agreement for the use of industrial, scientific, and medical devices.

Standard Bluetooth links can connect up to eight devices simultaneously, with each device offering a unique 48 bit address based on the IEEE 802 standard. Connections may be point to point or from a single point to multiple points.

A Bluetooth Network consists of a Personal Area Network or piconet, which contains a minimum of two to a maximum of eight Bluetooth peer devices — usually in the form of a single “master” and up to seven “slaves.”

(Image source: Elprocus.com)

The master device initiates communication with other devices, and governs the communications link and data traffic between itself and the slave devices associated with it. A slave device may only begin its transmissions in a time slot immediately following the one in which it was addressed by the master, or in a time slot explicitly reserved for its use.
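This master-driven slot discipline can be sketched in a few lines of code. The model below is a deliberate simplification for illustration (real Baseband scheduling is considerably more involved), assuming a strict round-robin poll in which the master transmits in even slots and the slave addressed there replies in the odd slot that follows:

```python
# Toy model of piconet slot scheduling: the master transmits in even slots,
# and the slave addressed there replies in the odd slot that follows.
def piconet_schedule(slaves, num_slots):
    """Return (slot, transmission) pairs for a round-robin poll."""
    schedule = []
    for slot in range(num_slots):
        addressed = slaves[(slot // 2) % len(slaves)]
        if slot % 2 == 0:
            schedule.append((slot, f"master -> {addressed}"))  # master polls
        else:
            schedule.append((slot, f"{addressed} -> master"))  # slave replies
    return schedule

for entry in piconet_schedule(["slave-1", "slave-2"], 6):
    print(entry)
```

Note how a slave never speaks unprompted: its only transmission opportunity is the slot immediately after the master addresses it, which is exactly the rule described above.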

How Bluetooth Has Evolved

In 1998, the technology companies Ericsson, IBM, Intel, Nokia, and Toshiba formed the Bluetooth Special Interest Group (SIG), which published the first version of the platform in 1999. This first version could achieve a data transfer rate of 1Mbps. Version 2.0+EDR raised the data speed to 3Mbps, while version 3.0+HS stepped it up to 24Mbps.

(Image source: Amar InfoTech)

Which brings us to versions 4 and 5.

How BLE5 Compares To BLE4

Versions 1 to 3 of the platform operated via classic Bluetooth radio, which consumes a relatively large amount of energy. Bluetooth Low Energy technology, or BLE, was created to reduce the power consumption of Bluetooth peripherals. It was introduced with Bluetooth 4.0 and continued to improve through the BLE4 series, whose last version was 4.2.

Design and performance-wise, BLE5 has the edge over BLE4, in a number of different aspects.

1. Speed

BLE5’s new high-speed PHY achieves a raw data rate of 2Mbps, twice the 1Mbps of BLE4. Once protocol overheads such as addressing are accounted for, this translates into a net data rate of about 1.4Mbps. While this isn’t fast enough to stream video, it does permit audio streaming.
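The gap between the raw 2Mbps rate and the roughly 1.4Mbps net figure comes from packet framing and inter-frame spacing. The sketch below is a back-of-the-envelope model only (it ignores connection-event scheduling and retransmissions); the packet-format constants follow the Bluetooth core specification:

```python
# Rough estimate of BLE5 net throughput on the 2M PHY.
PHY_RATE_BPS = 2_000_000          # BLE5 2M PHY raw bit rate
PAYLOAD_BYTES = 251               # maximum data payload per packet
OVERHEAD_BYTES = 2 + 4 + 2 + 3    # preamble + access address + header + CRC
IFS_SECONDS = 150e-6              # inter-frame space between packets

def net_throughput_bps():
    data_packet = (PAYLOAD_BYTES + OVERHEAD_BYTES) * 8 / PHY_RATE_BPS
    empty_ack = OVERHEAD_BYTES * 8 / PHY_RATE_BPS   # empty acknowledgement
    cycle = data_packet + IFS_SECONDS + empty_ack + IFS_SECONDS
    return PAYLOAD_BYTES * 8 / cycle

print(round(net_throughput_bps() / 1e6, 2), "Mbps")  # about 1.4 Mbps
```

Even this simple model lands close to the quoted figure, because framing and spacing overheads dominate once the radio itself is fast.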

2. Range

The range of BLE5 is up to four times that of Bluetooth 4.2. A BLE4 solution can reach a maximum range of about 50m outdoors, so with Bluetooth 5.0 something in the vicinity of 200m is possible, though some researchers suggest that BLE5 connections can be maintained at up to 300 metres (985 feet). These figures are for outdoor connections.

Indoors, Bluetooth 5 actively operates within a radius of 40 metres. Compare this with the 10m indoor radius of BLE4, and it’s clear that BLE5 has the advantage when it comes to using wireless headphones some distance away from your phone, for example, or for connecting devices throughout a house, as opposed to within a single room.
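These range multiples follow from simple link-budget arithmetic. The sketch below assumes free-space propagation at 2.4GHz and illustrative receiver sensitivities (-90dBm for a BLE4-class radio, roughly 12dB better for BLE5’s long-range coded PHY); real indoor ranges are far lower because of walls and interference:

```python
import math

FREQ_HZ = 2.4e9   # the 2.4 GHz ISM band Bluetooth uses
C = 3e8           # speed of light, m/s

def max_range_m(tx_power_dbm, rx_sensitivity_dbm):
    """Distance at which free-space path loss uses up the link budget."""
    budget_db = tx_power_dbm - rx_sensitivity_dbm
    # FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
    fixed_loss = 20 * math.log10(FREQ_HZ) + 20 * math.log10(4 * math.pi / C)
    return 10 ** ((budget_db - fixed_loss) / 20)

base = max_range_m(0, -90)      # assumed BLE4-class receiver
coded = max_range_m(0, -102)    # ~12 dB better with the coded PHY
print(round(coded / base, 1))   # 4.0 -- a 12 dB gain quadruples range
```

The useful takeaway is the ratio, not the absolute distances: every 12dB of extra link budget quadruples free-space range, which is why a sensitivity improvement alone can account for the “4x” claim.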

3. Broadcast Capability

Bluetooth 5 supports data packets eight times bigger than the previous version, with a message capacity of about 255 bytes (BLE4’s message capacity is about 31 bytes). This gives BLE5 considerably more space for its actual data load, and with more data bits in each packet, the net data throughput is also increased.
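The message capacities quoted here refer to the advertising payload, which is framed as a sequence of AD structures, each prefixed with a length byte and an AD type byte. A minimal sketch (the device name is a made-up example):

```python
# Each advertising field is an AD structure: [length, AD type, data...].
def ad_structure(ad_type: int, data: bytes) -> bytes:
    return bytes([len(data) + 1, ad_type]) + data

FLAGS = 0x01                 # standard AD type numbers
COMPLETE_LOCAL_NAME = 0x09

payload = (ad_structure(FLAGS, b"\x06")
           + ad_structure(COMPLETE_LOCAL_NAME, b"sensor-42"))
print(len(payload))          # 14 bytes
print(len(payload) <= 31)    # True: fits legacy (BLE4) advertising
```

A beacon broadcasting a name, a UUID, and some sensor data quickly exhausts the legacy 31-byte budget; BLE5’s extended advertising raises the ceiling to around 255 bytes.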

Largely because of the increased range, speed, and message capacity of BLE5, Bluetooth 5 Beacon has been growing in popularity.

4. Compatibility

In terms of compatibility, BLE4 devices work best with other version 4 devices, and cannot take advantage of features introduced in Bluetooth 5. BLE5 is backwards compatible with all versions of Bluetooth back to 4.0, with the limitation that not all Bluetooth 5 features will be available when communicating with older devices.

5. Power Consumption

While both BLE5 and BLE4 are part of the Bluetooth Low Energy ecosystem, BLE5 has been designed to consume less power than its predecessor. So Bluetooth 5 devices can be left running for longer periods, without putting too much stress on their batteries.

Historically, this has been a particular problem with smart watches and devices with smaller form factors, like IoT sensors. With the redesigned power consumption system in Bluetooth 5, most such devices will see an increase in battery life.

6. Resiliency

BLE5 was developed with the consideration that important processes involving Bluetooth most often occur in an overloaded environment, which negatively affects its operation. Compared to Bluetooth 4.2, BLE5 works much more reliably in overloaded environments.

7. Security

In April 2017, security researchers discovered several exploits in Bluetooth software (collectively called “BlueBorne”) affecting various platforms, including Microsoft Windows, Linux, Apple iOS, and Google’s Android. Some of these exploits could permit an attacker to connect to devices or systems without authentication, and to effectively hijack a complete device.

BLE5 has addressed much of this vulnerability, with bit-level security and authentication controls using a 128-bit key.

What This Means For Practical Applications Of BLE5

With its low power consumption, inexpensive hardware, and small form factors, BLE5 provides scope for a wide range of applications.

In previous iterations, Bluetooth Low Energy technology was largely used for storage, beacons, and other low-power devices, but came with some serious limitations. For instance, wireless headphones were unable to exchange messages under BLE4.

With Bluetooth 5.0, all audio devices can share data via Bluetooth Classic, and Bluetooth Low Energy is now more applicable for wearable devices, smart IoT devices, fitness monitoring equipment, and battery-powered accessories such as wireless keyboards.

BLE5 also includes a feature which makes it possible to recreate the sound on two connected devices (headphones, speakers, televisions, etc.) at the same time. Connected to a common “command centre”, each device can independently choose its information transfer priority — greater transfer speed, or increased distance over which devices can interact.

Bluetooth 5 also allows for serial connections between devices. So for components of the IoT, each device can connect to a neighbouring element, rather than having to seek out a distant command centre. This has positive implications for the scaling of larger IoT deployments.

At the domestic level, Bluetooth mesh networking is playing a key role in automating the smart homes of tomorrow. Major home automation platforms such as Alibaba and Xiaomi are developing Bluetooth mesh networks to meet a growing demand for device networks in the home.

Mesh networking is also providing a foundation for commercial lighting control systems supported by innovators like Osram, Murata, Zumtobel, and Delta Electronics. These systems employ Bluetooth mesh networking to create large-scale device networks that can act as the central nervous system of a building. Applications span the retail, tourism, and enterprise sectors, and can even help organisations establish a platform that enables advanced building services, such as asset tracking.

At the consumer level, Bluetooth LE Audio under BLE5 now has enhanced performance, which has enabled support for hearing aids, and introduced Audio Sharing. This platform enhancement enables the transmission of multiple, independent, synchronised audio streams, providing a standardised approach for developers to build high-quality, and truly wireless ear buds. And the new Broadcast Audio feature enables a source device to broadcast an audio stream to an unlimited number of audio sink devices, opening up new opportunities for innovation.

So the evolution of Bluetooth from BLE4 to BLE5 sees performance improvements that go beyond increased data rates, wider range, and more broadcast capacity. And applications for now and the future may include IoT, smartphones, Bluetooth beacons, and numerous other devices.

Mikel Wassenius

Mikel is an analytical and curious person who loves learning new things and solving a good puzzle, which is probably why the problem-solving nature of software development has been such a good fit for him. 

As a member of the Vinnter team, his work has coincided with his primary areas of interest within development: cloud solutions and cloud-based backend work with high-level languages. Mikel currently has experience in Java, C# and Python, but for him the fun and challenging part of an assignment is always solving the customer requirement with an elegant solution. The language chosen for the job is of secondary importance.

For Mikel, the most rewarding part of being a consultant at Vinnter is being a member of an agile team driving towards the same goal. Being challenged with new technologies and problems to solve is another driving force.

Kristofer Månsson

Kristofer is an outgoing person who brings a positive attitude to the group and will not quit until the job is done. Lately he has gradually shifted his interests from the technical aspects of software development towards business and project management, with a stronger emphasis on leadership in advanced technical projects.

He is an experienced development manager across most aspects of system development. His long experience developing software systems, in both the front end and the back end, gives him a unique profile when managing software development teams.

He has experience from roles as CTO, Project Manager, Team Leader, Scrum Master, Advisor, Frontend Developer and System Developer.


Johan Lövgren

Johan is an extremely dynamic and energetic person who brings drive and competence to the team. An entrepreneur for several years, he has shown he is capable of delivering great results in record time and on a small budget. His work has been successfully released to the market and has resulted in well-known products.

His broad competence ranges from design aspects to electronics development and embedded coding. This is a result of his personality as well as his MSc degree from Chalmers University of Technology.

The Evolution of Micro Processing Units (MPUs)

Microprocessors or micro processing units (MPUs) are nothing short of amazing. By integrating the complete computation engine onto one electronic component, the computing power that once required a room full of equipment can now be fabricated on a single chip, usually about the size of a fingernail, though it can be much smaller still.

Serving as the central processing unit (CPU) in computers, microprocessors contain thousands of electronic components and use a collection of machine instructions to perform mathematical operations and move data from one memory location to another. They contain an address bus that sends addresses to memory, read (RD) and write (WR) lines to tell the memory whether it wants to set or get the address location, and a data bus that sends and receives data to and from memory. Micro processing units also include a clock line that enables a clock pulse to sequence the processor, and a reset line that resets the program counter and restarts execution.

(Basic micro processing unit. Image source: computer.howstuffworks.com)
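The fetch-execute cycle described above can be modelled in a few lines. The machine below is entirely hypothetical, a three-instruction toy for illustration: the operand plays the role of the address bus, reads and writes stand in for the RD and WR lines, and the program counter sequences execution the way the clock line would:

```python
# Hypothetical three-instruction machine, for illustration only.
LOAD, ADD, STORE, HALT = 0, 1, 2, 3

def run(memory):
    pc, acc = 0, 0                      # program counter and accumulator
    while pc < len(memory):
        opcode, operand = memory[pc]    # fetch: address bus selects the cell
        if opcode == HALT:
            break
        if opcode == LOAD:
            acc = memory[operand][1]    # a "read" (RD) from memory
        elif opcode == ADD:
            acc += memory[operand][1]
        elif opcode == STORE:
            memory[operand] = (0, acc)  # a "write" (WR) to memory
        pc += 1                         # the clock pulse sequences execution
    return memory

# program: load cell 4, add cell 5, store the result in cell 6, halt
mem = [(LOAD, 4), (ADD, 5), (STORE, 6), (HALT, 0), (0, 20), (0, 22), (0, 0)]
print(run(mem)[6][1])  # 42
```

Real microprocessors do the same dance in silicon, billions of times per second: fetch an instruction at the program counter’s address, decode it, read or write memory, and advance.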

The microprocessor is at the very core of every computer, be it a PC, laptop, server, or mobile device, serving as the instrument’s “brain”. They are also found in home devices, such as TVs, DVD players, microwaves, ovens, washing machines, stereo systems, alarm clocks, and home lighting systems. Industrial items contain micro processing units, too, including cars, boats, planes, manufacturing equipment and machinery, gasoline pumps, credit card processing units, traffic control devices, elevators, and security systems. In fact, pretty much everything we do today depends on microprocessors – and they are, of course, a fundamental component of Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices which are becoming more and more prevalent in homes and crucial to businesses all over the globe.

It’s safe to say that these tiny pieces of equipment have had – and will continue to have – an enormous influence technologically, economically, and culturally. But where did micro processing units first originate, and what can we expect from them in the future?

A Brief History of Micro Processing Units

The very first commercially available micro processing unit was the Intel 4004, released by Intel Corporation way back in 1971. The 4004 was not very powerful, however, and not very fast – all it could do was add and subtract, and only 4 bits at a time. Even so, it delivered the same computing power as the first electronic computer built in 1946 – which filled an entire room – so it was still impressive (revolutionary, in fact) that everything was on one tiny chip. Engineers could purchase the Intel 4004 and then customize it with software to perform various simple functions in a wide variety of electronic devices.

(The Intel 4004. Image source: intel.co.uk)

The following year, Intel released the 8008, soon followed by the Intel 8080 in 1974 – both 8-bit microprocessors. The 8080 was commercially popular, and could represent signed numbers ranging from -128 to +127 – an improvement over the 4004’s -8 to +7 range, though still not particularly powerful, and so the 8080 was only used for control applications. Other micro processing units, such as the 6800 from Motorola and the z-80 from Zilog were also popular at this time.
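The signed ranges quoted here fall directly out of two’s-complement arithmetic: an n-bit word represents integers from -2^(n-1) to 2^(n-1)-1.

```python
# An n-bit two's-complement word spans -2**(n-1) .. 2**(n-1) - 1.
def signed_range(bits):
    return -2 ** (bits - 1), 2 ** (bits - 1) - 1

print(signed_range(4))   # (-8, 7): the 4004's 4-bit range
print(signed_range(8))   # (-128, 127): the 8080's 8-bit range
print(signed_range(16))  # (-32768, 32767): the 16-bit generation
```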

The third generation of 16-bit micro processing units came between 1979 and 1980, and included the 8088, 80186 and 80286 from Intel, and the Motorola 68000 and 68010. These microprocessors were four times faster than their second-generation predecessors.

(Table of various microprocessors Intel has introduced over the years. Image source: computer.howstuffworks.com)

The fourth generation of 32-bit microprocessors was developed between 1981 and 1995. With a 32-bit word size, these processors became very popular indeed as the CPU in computers. In 1993, following a court ruling two years earlier which prevented Intel from trademarking “386” as the name of its then most powerful processor, the company released the 80586 under the name Intel Pentium, opening a new era in consumer microprocessor marketing. No longer were processors referred to solely by numbers; instead they carried a brand name, and the trademarked “Pentium” soon became something of a status symbol amongst computer owners.

The fifth generation arrived in 1995 with high-performance and high-speed 64-bit processors. As well as new versions of Pentium, over the years these have included Celeron, Dual, and Quad-core processors from Intel, and many more from other developers including Motorola, IBM, Hewlett Packard and AMD. See Computer Hope’s “Computer Processor History” for an extended list of computer processors over the years, or the Wikipedia entry “Microprocessor Chronology”.

The Future of Microprocessors

As time and technology advance, microprocessors get increasingly powerful. Today, nearly all processors are multi-core, which improves performance while reducing power consumption. A multi-core processor works in much the same way as two or more single microprocessors, but because it uses only one socket within the system, there is a much faster connection between the cores and the rest of the system. Intel remains the strongest competitor in the microprocessor market today, followed by AMD.

Micro processing units have also gotten smaller and smaller over the years. In the 1960s, Intel co-founder Gordon Moore made an interesting observation: every twelve months, engineers were able to double the number of transistors on a square inch of silicon. This held true for about ten years; then, in 1975, Moore revised his forecast for the next decade to a doubling every 24 months, which indeed proved to be more or less accurate until around 2012.

(Image source: wikipedia.org)
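Moore’s observation is simple compound doubling. As an illustration (starting from the 4004’s roughly 2,300 transistors and using the revised 24-month doubling period):

```python
# Compound doubling: transistor count after a given number of years.
def transistors_after(start_count, years, doubling_period_years=2):
    return start_count * 2 ** (years / doubling_period_years)

# From the Intel 4004's ~2,300 transistors in 1971, doubling every 2 years:
print(round(transistors_after(2300, 40)))  # roughly 2.4 billion by 2011
```

Forty years of doubling every two years is twenty doublings, a factor of about a million, which is broadly how a 2,300-transistor chip became a multi-billion-transistor one.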

However, we’re now starting to reach the physical limits of how small transistors can get. Until recently, the industry standard was 14 nanometres (1nm = one-billionth of a metre). Then came Apple’s A12 Bionic processor – which powers the iPhone XR, XS, and XS Max – measuring in at 7nm. Since then, IBM has been experimenting with 5nm chips, and researchers at MIT and the University of Colorado have developed transistors that measure a record-setting 2.5nm wide.

However, Moore’s Law cannot continue ad infinitum, for a number of reasons. For starters, we must consider the threshold voltage, i.e. the voltage at which the transistor allows current to pass through. The problem lies in the imperfection of a transistor as a switch: it leaks a small amount of current when turned off. The situation worsens as the density of transistors on a chip increases, and this takes a heavy toll on transistor performance, which can only be maintained by increasing the threshold voltage. As such, though transistor density may be increased, it would provide comparatively little improvement in speed and energy consumption, meaning the operations performed on new chips would take more or less the same time as on the chips used today – unless a better architecture is implemented to solve the problem.

Due to such limitations placed on microprocessors today, researchers are exploring new solutions and new materials in place of silicon – such as gallium oxide, hafnium diselenide, and graphene – to keep building the performance of microprocessors.

Single-Board Microcomputers

As central processing units shrink, so too do computers themselves. Over the last 60 or so years, the computer has evolved from a machine that filled an entire room to a device that can fit neatly in your pocket. And just as electronics have shrunk, so too has the price.

Today, for just a handful of dollars consumers can purchase single-board microcomputers – about the size of a credit card – with some pretty impressive communicative options, multimedia capabilities and processing power. One of the machines at the vanguard of this low-cost, high-power, small-size computing revolution is the Raspberry Pi, launched by the Raspberry Pi Foundation in 2012 as a $35 board to promote teaching of basic computer science in schools and developing countries. 

Raspberry Pi Family Photo. (Image source: opensource.com)

The original Raspberry Pi – the Pi 1 Model B – had a single-core 700MHz CPU with 256MB RAM. There have been several iterations and variations since that initial release, however, with the latest model – the Pi 4 Model B – boasting a quad-core 1.5GHz CPU with up to 4GB RAM. All models can be transformed into fully-working computers with a little modest tinkering plus your own keyboard, mouse and monitor – users have even had success using a Raspberry Pi as a desktop PC for regular office work, including web browsing, word processing, spreadsheets, emailing, and photo editing. All for under $55.

(Image source: raspberrypi.org)

Of course, Raspberry Pi has competitors – most notably Arduino, a company that produces single-board microcontrollers (using a variety of microprocessors) that can be used to design and build devices that interact with the real world. Both Raspberry Pi and Arduino devices are widely used in development and prototyping projects, particularly for IoT devices. They are, however, much in the domain of the hobbyist – craftsmen and women trying their hand at creating useful everyday tools such as remote controls for garage doors and thermometer devices, as well as more fun projects like gaming consoles, robots and drones. There are also various other proprietary development boards available from companies such as ST Microelectronics and Texas Instruments.

While all of these development kits are good for prototyping, they are less suitable for mass production. Here at Vinnter, we use both Raspberry Pi and Arduino devices in development projects where embedded systems need to be adopted. The challenge, however, comes when we move from the prototype to an industrialized product development project for two main reasons – cost and size.

Though Raspberry Pi and Arduino units are relatively cheap for the consumer market, up to $35 a throw is simply not viable when planning for 10,000 or even 100,000 units to be produced for inclusion in other products. The other problem that comes from a development kit like Arduino or Raspberry Pi is the large size. True, these devices are impressively small for what they are – but because they include additional functions and features (such as USB connectors, ethernet connectors, HDMI connectors, etc.) that most likely won’t be required for the product being developed, they are simply too big and impractical for most real-world applications. For example, if you were developing a smart watch, then a credit-card-sized device isn’t practical at all. In addition, unnecessary functions increase power consumption – which is an important consideration, especially for battery-powered products.

Final Thoughts

Microprocessors have come a long way since the humble Intel 4004. Now capable of controlling everything from small devices such as watches and mobile phones to large computers and even satellites, and offering low-cost availability, small size, low power consumption, high versatility and high reliability, the microprocessor is one of the most important and significant inventions powering our modern world.

When developing microprocessor-based commercial products or new technology products for businesses, however, it is essential to design a custom microprocessor board that’s fit for purpose. Though many very capable individuals at organizations can have much success prototyping new products using single-board microcomputer devices such as the Raspberry Pi, when it comes to large-scale production, the project will eventually have to be migrated to a production design. As such, companies currently prototyping new products will find working with third parties that have the resources and expertise to take their microprocessor-based developments and ideas to the next stage invaluable.

Vinnter serves as an enabler for developing new business and service strategies for traditional industries, as well as fresh start-ups. We help companies stay competitive through embedded software development, communications and connectivity, hardware design, cloud services and secure IoT platforms. Our skilled and experienced teams of developers, engineers and business consultants will help you redefine your organization for the digital age, creating new, highly-secure connected products and digital services that meet the evolving demands of your customers. Get in touch to find out more.

The Value of Robotic Automation Lies in Reskilling the Workforce

The first robot entered the workplace in 1959, at an automotive die casting plant in Trenton, New Jersey. Since then, many other industries – such as electrical/electronics, rubber and plastics, pharma, cosmetics, food and beverage, and metal and machinery – have accelerated their adoption of robotic automation. By 2017, there were over two million operational robots across the world, with the number projected to almost double to 3.8 million units by 2021. Today, global robot density (the number of robot units per 10,000 employees) in manufacturing industries stands at 74, up from 66 units in 2015.

(Image source: engineering.com)
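Robot density, as defined above, is simply robot units normalised per 10,000 employees. The figures in the sketch below are illustrative rather than taken from any report:

```python
# Robot density: robot units per 10,000 employees.
def robot_density(robots, employees):
    return robots / employees * 10_000

# e.g. 74,000 robots across a 10-million-strong manufacturing workforce
print(robot_density(74_000, 10_000_000))  # 74.0
```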

The Economic Impact of Robots

A June 2019 report from Oxford Economics, How Robots Change The World, estimates that a 1% increase in the stock of robots could boost output per worker by 0.1% across the entire workforce. The study also projects a 5.3% increase in global GDP, equivalent to about $5 trillion, if robot installations were increased to 30% above the baseline forecast for 2030.

There have been several studies over the years that have established the positive impact of robotic automation on productivity, competitiveness and economic growth. There are also well-reasoned arguments about how robotic automation enables businesses to reshore jobs, increases demand for higher-skilled workers, addresses rising labor scarcity, and creates new job opportunities that do not even exist today.

The Social Impact of Robots

But all that opportunity is not without its challenges. For instance, the Oxford Economics study found that, on average, each newly installed robot displaces 1.6 manufacturing workers. This means that up to 20 million manufacturing jobs could be at risk of displacement by 2030. 

It is also necessary to acknowledge that robots are no longer a purely manufacturing phenomenon. Though the automotive sector pioneered and continues to pursue the deployment of robots, today, many other manufacturing sectors, including electrical/electronics, rubber and plastics, pharmaceutical and cosmetics, food and beverage, metal and machinery etc., are investing heavily in robotic automation. And the same is true outside of manufacturing, with retail and ecommerce, sales and marketing, customer service, IT and cybersecurity, and many more sectors and segments besides all deploying robotic automation and artificial intelligence (AI) software to enhance business intelligence and customer experiences. The market for professional services robots is also expected to grow at an average rate of 20-25% between 2018 and 2020. The entire field of robotics is advancing much faster today thanks to falling sensor prices, open source development, rapid prototyping and digital era constructs such as Robotics-as-a-Service and AI. 

The Long and the Short of It Is…

(Image source: mckinsey.com)

… The robots are coming in practically every industry. But should we really all be fearful for our jobs? That is certainly the widely held view. According to McKinsey, there is widespread belief around the world that robots and computers will do much of the work currently done by humans within the next 50 years. 

Research from the World Economic Forum (WEF) agrees that millions of jobs are likely to be displaced by automation, but we have less to fear from robots than many seem to think – at least in the short term. Though the Swiss think tank predicts that robots will displace 75 million jobs globally by 2022, it also expects 133 million new ones to be created – a net gain of 58 million jobs. 

The report notes four specific technological advances as the key drivers of change over the coming years. These are: ubiquitous high-speed mobile internet, artificial intelligence, widespread adoption of big data analytics, and cloud technology. “By 2022, according to the stated investment intentions of companies surveyed for this report, 85% of respondents are likely or very likely to have expanded their adoption of user and entity big data analytics,” write the report’s authors. “Similarly, large proportions of companies are likely or very likely to have expanded their adoption of technologies such as the Internet of Things and app- and web-enabled markets, and to make extensive use of cloud computing. Machine learning and augmented and virtual reality are poised to likewise receive considerable business investment.”

The Reskilling Revolution 

WEF finds that nearly 50% of companies expect that automation will lead to some reduction in their full-time workforce by 2022, based on the job profiles of their employee base today. However, nearly a quarter expect automation to lead to the creation of new roles in the enterprise, and 38% of the businesses surveyed expect to extend their workforce to new productivity-enhancing roles. And this, indeed, is key to the robotic revolution and why so many companies are committed to investing in new technologies in the first place – because robotics, automation, machine learning, cloud computing, and big data analytics can enhance the productivity of the current workforce in the new digital economy and improve business performance.

In industries like manufacturing, these technologies provide seamless connections across production and distribution chains, streamlining the process of getting products from the assembly line into the hands of the customer. But it’s not just manufacturing – everything from healthcare to retail will benefit from these emerging and maturing technologies. And it’s not necessarily the case that robots and algorithms will replace the current workforce, either – rather, WEF says, they will “vastly improve” the productivity of existing jobs and lead to many new ones in the coming years. 

In the near future, workers are expected to do less physical work (as more and more of it is handled by robots) and less information collecting and data processing (as these, too, will be automated), freeing them up for new tasks. Though there will be more automatic real-time data feeds and monitoring that won’t require workers to enter and analyze data, there will also be more work at the other end of the spectrum, where real humans spend time making decisions based on the data collected, managing others, and applying expertise. Indeed, automation is more likely to augment the human workforce than replace it. 

The ability to digitize information and data is stimulating complete redesigns of end-to-end processes and customer experience strategies, and creating more efficient operations. Data analytics, indeed, is a key part of realizing the potential of all next-generation technology – including robotics and automation – enabling better real-time reaction to trends and to what customers want. 

Though there will inevitably be a decline in some roles as certain tasks within them become automated or redundant, in their place emerges a demand for new roles – though this does mean that the existing workforce will need to be retrained to update their skills. WEF says that among the range of roles that are set to experience increasing demand are software and applications developers, data analysts and scientists, and ecommerce and social media specialists – roles, the authors say, that are significantly based on and enhanced by the use of technology. Also expected to grow are roles that leverage distinctively “human” skills – those in customer service, sales and marketing, training and development, people and culture, organizational development, and innovation management. There will also be accelerating demand for wholly new specialist roles related to understanding and leveraging the latest emerging technologies – AI and machine learning specialists, big data specialists, process automation experts, security analysts, user experience and human-machine interaction designers, and robotics engineers.   

In short, the robotics revolution will spur a reskilling revolution – and businesses already seem to be on board with this idea. In one McKinsey study, 66% of respondents assigned a top-ten priority to addressing automation- and digitization-related skill gaps. 

(Image source: mckinsey.com)

Final Thoughts 

As we head towards 2020, robotics, automation and related technologies are becoming a prerequisite for any company that wishes to remain competitive. Businesses large and small are embracing automation technologies – from fully-fledged assembly line robots to customized call center chatbots – to help simplify business processes, improve productivity and deliver better customer experiences at scale. This trend is only going to accelerate, and though the rise of the robots may be a cause for concern for many in the labor market, the reality is that organizations won’t be able to solve all of their problems with automation alone. Rather, the robots are coming to augment the human workforce, not replace it. Job roles may change, and new skills may be required, but unless companies of all sizes start automating their processes, they will soon find themselves gobbled up by those that do.  

Vinnter serves as an enabler for developing new business and service strategies for traditional industries, as well as fresh start-ups. We help companies stay competitive through embedded software development, communications and connectivity, hardware design, cloud services and IoT platforms. Our skilled and experienced teams of developers, engineers and business consultants will help you redefine your organization for the digital age, and create new connected products and digital services that meet the evolving demands of your customers. Get in touch to find out more.